Monday, January 20, 2014

Social Media Marketing

by Spencer Wade
Social media marketing refers to the process of gaining website traffic or attention through social media sites. Social media marketing programs usually center on efforts to create content that attracts attention and encourages readers to share it with their social networks. A corporate message spreads from user to user and presumably resonates because it appears to come from a trusted, third-party source, as opposed to the brand or company itself. Hence, this form of marketing is driven by word-of-mouth, meaning it results in earned media rather than paid media.
Social media has become a platform that is easily accessible to anyone with internet access. Increased communication for organizations fosters brand awareness and often, improved customer service. Additionally, social media serves as a relatively inexpensive platform for organizations to implement marketing campaigns.
Social networking websites allow individuals to interact with one another and build relationships. When products or companies join those sites, people can interact with the product or company. That interaction feels personal to users because of their previous experiences with social networking site interactions.
Social networking sites and blogs allow individual followers to “retweet” or “repost” comments made by the product being promoted. By repeating the message, all of the user's connections are able to see it, therefore reaching more people. Social networking sites act as word of mouth: because information about the product is being put out there and repeated, more traffic is brought to the product or company.
Through social networking sites, products/companies can have conversations and interactions with individual followers. This personal interaction can instill a feeling of loyalty into followers and potential customers. Also, by choosing whom to follow on these sites, products can reach a very narrow target audience.
Social networking sites also include a vast amount of information about what products and services prospective clients might be interested in. Through the use of new Semantic Analysis technologies, marketers can detect buying signals, such as content shared by people and questions posted online. Understanding of buying signals can help sales people target relevant prospects and marketers run micro-targeted campaigns.
Mobile phone usage has also become beneficial for social media marketing. Today, many cell phones have social networking capabilities: individuals are notified of any happenings on social networking sites through their cell phones, in real time. This constant connection means products and companies can continually remind and update followers about their capabilities, uses, and importance. Because cell phones are connected to social networking sites, advertisements are always in sight. Also, many companies now put QR codes on products so individuals can access the company website or online services with their smartphones.
Facebook profiles are far more detailed than Twitter accounts. They allow a product to provide videos, photos, and longer descriptions. Videos can show when a product can be used as well as how to use it. These also can include testimonials as other followers can comment on the product pages for others to see. Facebook can link back to the product’s Twitter page as well as send out event reminders. Facebook promotes a product in real-time and brings customers in.
As marketers see more value in social media marketing, advertisers continue to increase sequential ad spend in social by 25%. Strategies to extend reach with Sponsored Stories and acquire new fans with Facebook ads contribute to an uptick in spending across the site. The study attributes 84% of “engagement,” or clicks, to Likes that link back to Facebook advertising. Today, brands increase their fan counts by an average of 9% monthly, roughly doubling their fan bases annually.

21st Century – Money Well Spent

by Spencer Wade
Spending on Internet advertising in 1996 totaled $301 million in the U.S. While significant compared to the zero dollars spent in 1994, the figure paled in comparison to the $175 billion spent on traditional advertising as a whole that year. As the number of Internet surfers continued to rise, however, interest in the Internet as a mass-media vehicle increased. Online advertising grew to an industry worth nearly $1 billion in 1997. The Internet became increasingly popular in the late 1990s, and the viability of the Internet as a marketing medium emerged as more than mere speculation. Millions of surfers logged on to the Web each day, and many businesses were determined to reach this new audience. Web sites emerged for companies in nearly every industry, ranging from household cleaning products and cosmetics to electronics and automobiles. At the same time, many firms realized that simply creating a Web site wasn’t enough to create a solid Internet presence; they also needed to drive traffic either to their sites or to their specific advertisements.
For example, drug company Bristol-Myers Squibb Co. launched an Internet marketing campaign designed to build brand awareness for Excedrin. For 30 days during the 1997 tax season, the firm proclaimed Excedrin to be the “tax headache medicine” on a variety of financial Web sites. To entice surfers to click on the advertisement, Bristol-Myers offered a free sample of Excedrin to anyone who entered their name and address. According to Business Week writer Linda Himelstein, “The response was as good as any elixir. In just one month, Bristol-Myers added 30,000 new names to its customer list—some 1,000 per day and triple the company’s best-case scenario. What’s more, the cost of obtaining those names was only half that of traditional marketing methods.”
Hoping for similar results, many traditional firms began incorporating the Internet into their existing marketing plans. Even technology industry giants like IBM Corp. and Microsoft Corp. began pouring millions of dollars into Internet marketing efforts. Many smaller firms, including Internet upstarts, turned to highly trafficked sites like Internet portal Yahoo!, paying for advertisements such as banner bars. In fact, Yahoo! was one of the few Internet-based firms actually able to earn a profit from online advertising. By developing technology that allowed it to track a visitor’s online activity and control which banner bars and button ads that visitor saw, Yahoo! was able to target its messages in a manner never before seen by the marketing industry. Yahoo! could also monitor the number of hits each advertisement received as a means of evaluating an ad’s effectiveness. This innovative technology, coupled with the site’s intense traffic levels, attracted dot-com upstarts hoping to reach as many Internet users as possible.
However, like most other ventures reliant on dot-com businesses, Yahoo! saw its customer base dwindle when the U.S. economy began to cool in 2000 and many dot-com firms were forced to tighten advertising budgets as they fought to stay afloat. Making matters worse, many industry analysts began to argue that online advertisements like banner bars were simply ineffective more often than not. Despite the dot-com fallout, though, the $8.2 billion online advertising industry did not disappear. The fact remained that millions of people were surfing the Internet on a regular basis, and businesses were not willing to turn a blind eye to this mass market.

A Computing History


by Spencer Wade
A computer is a general-purpose device that can be programmed to carry out a finite set of arithmetic or logical operations. Since a sequence of operations can be readily changed, the computer can solve more than one kind of problem. Conventionally, a computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logic operations, and a sequencing and control unit can change the order of operations based on stored information. Peripheral devices allow information to be retrieved from an external source, and the results of operations to be saved and retrieved.
The first electronic digital computers were developed between 1940 and 1945 in the United Kingdom and United States. Originally they were the size of a large room, consuming as much power as several hundred modern personal computers (PCs). In this era mechanical analog computers were used for military applications.
Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space. Simple computers are small enough to fit into mobile devices, and mobile computers can be powered by small batteries. Personal computers in their various forms are icons of the Information Age and are what most people think of as “computers”. However, the embedded computers found in many devices from MP3 players to fighter aircraft and from toys to industrial robots are the most numerous.
The history of the modern computer begins with two separate technologies, automated calculation and programmability. No single device can be identified as the earliest computer, partly because of the inconsistent application of that term. A few devices are worth mentioning, though. Some mechanical aids to computing were very successful and survived for centuries until the advent of the electronic calculator: the Sumerian abacus, designed around 2500 BC, a descendant of which won a speed competition against a modern desk calculating machine in Japan in 1946; the slide rule, invented in the 1620s, which was carried on five Apollo space missions, including to the Moon; and, arguably, the astrolabe and the Antikythera mechanism, an ancient astronomical computer built by the Greeks around 80 BC. The Greek mathematician Hero of Alexandria (c. 10–70 AD) built a mechanical theater that performed a play lasting 10 minutes, operated by a complex system of ropes and drums that might be considered a means of deciding which parts of the mechanism performed which actions and when. This is the essence of programmability.
Around the end of the 10th century, the French monk Gerbert d’Aurillac brought back from Spain the drawings of a machine invented by the Moors that answered either Yes or No to the questions it was asked. Again in the 13th century, the monks Albertus Magnus and Roger Bacon built talking androids without any further development (Albertus Magnus complained that he had wasted forty years of his life when Thomas Aquinas, terrified by his machine, destroyed it).
In 1642, the Renaissance saw the invention of the mechanical calculator, a device that could perform all four arithmetic operations without relying on human intelligence. The mechanical calculator was at the root of the development of computers in two separate ways. Initially, it was in trying to develop more powerful and more flexible calculators that the computer was first theorized by Charles Babbage and then developed. Secondly, development of a low-cost electronic calculator, successor to the mechanical calculator, resulted in the development by Intel of the first commercially available microprocessor integrated circuit.

Databases and DBMSs: Making Data Your Friend


What is a Database?
by Spencer Wade
The term database refers to any organized collection of data. This data is typically organized to model aspects of reality in a way that supports processes requiring this information. The term is correctly applied to the data and its supporting data structures, but not to the database management system, or DBMS. A database collection together with its DBMS is known as a database system.
The term database system implies that the data is managed to some level of quality, usually measured in terms of accuracy, availability, usability, and resilience. This kind of quality management is typically the work of a general-purpose database management system. A general-purpose DBMS is a complex software system that meets many usage requirements to properly maintain its often large and complex databases.
The use of client-server, real-time transactional systems in which multiple users have access to the data highlights the importance of a DBMS. In these systems, data is concurrently entered and queried in ways that preclude single-thread batch processing. Most of this requirement complexity can be found in personal, desktop-based database systems.
There are many well-known DBMSs available today, including Oracle, FoxPro, IBM DB2, MySQL, SQLite, Sybase, Linter, Microsoft Access, Microsoft SQL Server, and PostgreSQL. Databases are not usually portable across different DBMSs, but different DBMSs can interoperate to some degree by using standards like SQL and ODBC together to support a single application built over more than one database. A DBMS must also provide effective runtime execution so that it can properly support, in terms of performance, availability, and security, as many database end-users as needed.
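To make the point about the SQL and ODBC/JDBC standards concrete, here is a minimal, hypothetical Java sketch using the standard JDBC API. The customers table, its columns, and the connection details are invented for illustration; the idea is simply that the same standard SQL can be run against different DBMSs by changing only the connection URL and driver.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class PortableQuery {

        // Only the JDBC URL (and the driver on the classpath) changes between
        // DBMSs; the standard SQL below stays the same.
        public static void printActiveCustomers(String jdbcUrl, String user, String password)
                throws SQLException {
            String sql = "SELECT id, name FROM customers WHERE active = ?";
            try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
                 PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setBoolean(1, true);
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getLong("id") + " " + rs.getString("name"));
                    }
                }
            }
        }

        public static void main(String[] args) throws SQLException {
            // e.g. "jdbc:postgresql://localhost/shop" or "jdbc:mysql://localhost/shop"
            printActiveCustomers(args[0], args[1], args[2]);
        }
    }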
The classification of databases is directly related to their contents. Bibliographic databases, document-text databases, statistical databases, and multimedia objects databases are just a few examples of the different types. Databases can also be classified by their application area. Examples of this classification method include accounting, movies, banking, manufacturing, music compositions, and insurance. The term database may be narrowed to specify particular aspects of the organized collection of data, and may refer to the logical database, the physical database as data content in computer storage, or to many other sub-definitions.
Database Concept
The concept that led to the development of the database has been evolving since the 1960s. This was primarily due to the increasing difficulties in designing, building, and maintaining complex information systems. This was especially true when dealing with systems that have multiple concurrent end-users, and contain a large amount of diverse data. Database management systems have evolved as well in order to facilitate the effective handling of databases. DBMSs and databases are different entities, but they are inseparable. A database’s properties are determined by its supporting DBMS.
It is believed, though it may be argued, that a 1962 technical report was the first to use the term “data base”. In the intervening years, there have been enormous strides made in processing power and speed, computer memory and storage, and computer networking. This growth has been reflected in the size, capability, and performance of databases and their respective DBMS. For many years, it has been deemed unlikely that any complex information system could be built effectively without a proper database supported by a DBMS. Database usage has spread to such a degree that virtually every technology and product relies on databases and DBMS for its development and commercialization, and companies and organizations rely heavily on them for their operations as well.
There is no clear, accepted definition of DBMS, but it is widely accepted that a system must provide considerable functionality to qualify as one. Its supported data collection must also meet usability requirements to be considered a true database. This basically means that a database and its DBMS are loosely defined by a general set of requirements. All existing mature DBMS meet these requirements to a great extent, and less mature DBMS strive to meet them or are converging to meet them.
Evolution of Database and DBMS Technology
The definition of the term database, as was discussed earlier, coincided with the availability of direct-access storage from the mid-1960s onwards. It represented a fundamentally different approach from the tape-based systems of the past, and allowed shared interactive use rather than simple daily batch processing.
The earliest database systems were primarily concerned with efficiency, but developers already recognized that other important objectives existed. One of these objectives focused on making data independent of the logic of application programs, so that the data could be shared among different applications. This was a groundbreaking change in database development and usage.
Since the 1970s, there has been an exponential increase in capacity and speed of disk storage and main memory on computing platforms. Database technology has kept pace with this explosion, and by doing so has enabled the creation of ever larger databases and higher throughput volume. This has allowed the development of many of the applications we use on a daily basis; both in personal life and business.
The general-purpose database, in the beginning, was navigational. Applications would typically access the data by following pointers from one record to another. At this point in database development, there were two main data models in use: the hierarchical model, used by the IBM IMS system, and the CODASYL (network) model, implemented in products like IDMS. This remained the case until 1970.
It was then that Edgar F. Codd proposed the relational model. This model departed from the norm by insisting that applications should search for data by content rather than following links. This was necessary to allow the content of the database to evolve without constant rewriting of links and pointers. The relational model consists of ledger-style tables that each correspond to a different type of entity. Any new data may be freely inserted, deleted, and edited in these tables, and the DBMS is responsible for maintenance necessary to present a table view to the application/user.
The term relational comes from entities referencing other entities, whether in a one-to-many relationship, as in the hierarchical model, or a many-to-many relationship, as in the network model. A relational model can therefore express both navigational and hierarchical models as well as its own tabular model, which allows for pure or combined modeling as the specific application requires.
Relational models, in their earliest forms, did not make relationships between different entities explicit in the way users were accustomed to; instead, relationships were expressed as matching primary keys and foreign keys. These keys can be seen as pointers of a sort, stored in tabular form. Because keys rather than pointers were used, relations between entities were obscured in the way the data was presented, so the relational model was considered to emphasize search over navigation. It was deemed a good conceptual basis for a query language, but not for a navigational language.
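As a rough, hypothetical illustration of searching by content through keys rather than following stored pointers, here is a small in-memory Java sketch (not a DBMS). Two “tables” are held as records, and the rows belonging to a customer are found by matching the foreign key value, which is the one-to-many relationship described above.

    import java.util.List;

    public class RelationalSketch {

        // Two "tables" as plain records; customerId in Order is a foreign key.
        record Customer(long id, String name) {}
        record Order(long id, long customerId, double total) {}

        public static void main(String[] args) {
            List<Customer> customers = List.of(
                    new Customer(1, "Acme"), new Customer(2, "Globex"));
            List<Order> orders = List.of(
                    new Order(10, 1, 99.50), new Order(11, 1, 12.00), new Order(12, 2, 45.25));

            // Related rows are found by matching key values (content), not by
            // following a pointer from one record to the next.
            for (Customer c : customers) {
                double sum = orders.stream()
                        .filter(o -> o.customerId() == c.id())
                        .mapToDouble(Order::total)
                        .sum();
                System.out.println(c.name() + " order total: " + sum);
            }
        }
    }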
This gave rise to the development, in 1976, of the entity-relationship model. This model gained instant popularity for database design since it emphasized a more familiar description than the earlier relational model. In time, entity-relationship constructs were retrofitted as data modeling constructs for relational models, and the differences between the two became irrelevant.
Early relational system implementations lacked automated optimization of the mapping between conceptual elements and operations and their physical storage and processing counterparts. Their simplistic and literal implementations placed heavy demands on the limited processing resources of the time. It took the arrival of the mid-1980s, with its increases in computing power, for relational systems (DBMSs and applications) to be widely deployed. In the 1990s, relational systems became the dominant system used for large-scale data processing applications, and they still hold that lofty spot today. The dominant database language for the relational model is SQL, which has influenced the evolution of many other database languages.
The inflexibility of the relational model has increasingly been seen by users as a limitation when dealing with information that is richer or more varied than the traditional “ledger book” data of corporate information systems. This issue is most prevalent when modeling multimedia databases, molecular science databases, document databases, and engineering databases. The rigidity of the relational model becomes apparent when there is a need to represent data types other than text and text-like values. Examples of such unsupported data types include:
  •  Graphics – Pattern-matching and OCR
  • Multidimensional Constructs – 2D geographical, 3D geometrical, and multidimensional hypercube models.
  • XML – Hierarchical data modeling technology that, like HTML, evolved from SGML and is used for data interchange among dissimilar systems.
Object-oriented methodologies, which keep encapsulated data and processes together, exposed a more fundamental conceptual limitation: traditional data modeling constructs emphasize the total separation of data from processes, though modern DBMSs do allow limited modeling of behavior in the form of validation rules and stored procedures.
Attempts have been made to address this conceptual limitation. Movements under banners such as post-relational or NoSQL are prime examples. The development of the object database and the XML database were noteworthy steps in the right direction, but relational database vendors have combated this competition by extending the capabilities of their own products to support a wider variety of data types.
Final Thoughts
Database technology has grown from its archaic origins into an ever-changing field of complex technical innovation. There are breakthroughs being made every day that open up the possibilities offered by databases and DBMSs, and this will only expand with the influx of resources being funneled into development from all over the world. The next few years should be a very exciting time for database users. The sky is the limit for this technology moving forward.

Not Your Ordinary Cup of Coffee: Object-Oriented Design in Java

Object Oriented Design (OOD)
by Spencer Wade
Object-oriented design (OOD) principles make up the core of object-oriented programs and systems (OOPS). It is the process of planning a system of interacting objects for the purpose of solving a software problem. An object contains encapsulated data and procedures grouped together to represent an entity. The object interface, or how the object can be interacted with, is also defined. An object-oriented program is described by the interaction of these objects. Object-oriented design is the discipline of defining the objects and their interactions to solve a problem that was identified and documented during object-oriented analysis.
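As a minimal Java sketch of these ideas, using an invented Account example: the interface defines how the object can be interacted with, while the implementing class keeps its data encapsulated alongside the procedures that operate on it.

    // The interface is the object's contract with the outside world.
    interface Account {
        void deposit(double amount);
        double balance();
    }

    class SavingsAccount implements Account {
        private double balance;          // encapsulated state, not visible outside

        @Override
        public void deposit(double amount) {
            if (amount <= 0) {
                throw new IllegalArgumentException("amount must be positive");
            }
            balance += amount;           // data and behavior live together
        }

        @Override
        public double balance() {
            return balance;
        }
    }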
There are innumerable ways that programmers use object-oriented design in software development. Many of these programmers tend to have a very one-sided view of OOD, especially when working with Java. Programmers often chase design patterns like Singleton, Decorator, or Observer without giving enough attention to object-oriented analysis or to following the core principles of using OOD with Java. There are developers and programmers of all skill levels who have never learned the correct design principles, do not fully understand the benefits a particular design principle offers, or are unsure of how to properly apply these principles when coding.
The most important thing when working with OOD in Java is to strive for cohesion in any solution, code, or design. Taking the time to look over successful applications of OOD in Java coding can be very helpful, and examples of open source Java OOD can be found all over the Internet. Apache and Sun offer excellent views of how Java coding should be tackled using OOD.
These examples are helpful, as was stated before, but they can never replace the importance of real-world experience. This type of learning teaches not only the basics of successful design, but also gives users an idea of what happens when these design principles are violated. Mistakes teach valuable lessons in life and in programming, so novice designers and those with little experience will benefit greatly from seeing firsthand what works and what does not.
Principles of OOD in Java
The list that follows defines and describes some core principles that developers must keep in mind when working with OOD in Java coding. These principles represent some information that all programmers need to know to be successful when working in this medium. These principles are:
  • DRY: DRY, or “don’t repeat yourself,” means just what it says: never write duplicate code. Use abstraction to group common things together in one place. For example, if a hardcoded value is used more than once, consider making it a public final constant, and if a block of code appears in more than one place, try making it a separate method (a short sketch follows this list). This is a general rule of thumb, and, as with all things, there will be exceptions. The benefits of doing things this way become clear when it is time for code maintenance.
  • Encapsulate What Varies: Change is the only constant in software development. This truism leads to the next design principle: encapsulate what varies. Code that is expected, or suspected, of changing in the future should be encapsulated. Encapsulation of code allows for easier testing and maintenance. In Java coding, it is best to make variables and methods private by default and increase access step-by-step, from private to protected, only as needed. The Factory design pattern in Java applies this principle by encapsulating object creation code, which provides flexibility when new products are introduced: because the creation code is encapsulated, client code is not impacted by the change, and the programmer does not need to rewrite anything (see the factory sketch after this list).
  • Open Closed: Classes, functions, and methods should be open for extension (new functionality) and closed for modification. This principle keeps new requirements from forcing changes to code that is already functional and tested; new behavior is added through new code instead (sketched after this list).
  • Single Responsibility: This principle holds that a class should never have more than one reason to change, and should always handle a single piece of functionality. In Java coding, it is unwise to put more than one function into a class. Doing so couples the two functions together, so if one needs to be changed there is a good chance the other will be affected, which then requires another round of testing to safeguard against unforeseen problems cropping up in a production environment.
  • Dependency Injection or Inversion: Rather than creating its own dependencies, a class receives them from the outside; the framework provides the dependency, so that is not a concern for the programmer. The Spring framework achieves this beautifully. Any class that is injected by a DI framework is easy to test with mock objects, and easier to maintain, because the object creation code is centralized in the framework and the client code is entirely free of that clutter. There are multiple ways to implement dependency injection (a constructor-injection sketch follows this list).
  • Composition over Inheritance: Whenever possible, it is best to favor composition over inheritance. This point is highly debatable, but composition tends to allow more flexibility than inheritance. Composition gives the option of changing the behavior of a class at runtime by setting a property, and of using interfaces to compose a class with polymorphism. This provides the flexibility to swap implementations at any time (sketched after this list).
  • Liskov Substitution: This principle states that subtypes must be substitutable for their supertypes. It is closely related to both the single responsibility principle and the interface segregation principle discussed below. In order to follow the Liskov substitution principle, all derived classes and subclasses must enhance functionality, not reduce it.
  • Interface Segregation: The interface segregation principle states that a client should not be forced to implement an interface it does not need. The most common problem arises when an interface contains more than one function but the client only needs one of them. This is one of the reasons interface design is so difficult: once an interface is released, it cannot be changed without breaking all of its implementations. In Java, interfaces with only one function mean fewer methods to implement and an overall easier task to complete.
  • Program for Interface, not Implementation: Always program to an interface rather than to an implementation. This leads to more flexible code that can work with any new implementation of the interface. In Java, it is best to use interface types for variables, method return types, and method argument types (the composition sketch after this list also illustrates this).
  • Delegation: This principle means that a class should not try to accomplish every step itself; delegate everything possible to the respective class. The equals() and hashCode() methods in Java let users ask the class itself to compare two objects for equality (a short sketch follows this list). The greatest benefits of this principle are ease of behavior modification and reduced code duplication.
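The short sketches below are hypothetical illustrations of several of the principles above; all class and method names are invented for the purpose. First, DRY: a value that used to be hardcoded in several methods becomes a single constant, and a repeated block of arithmetic becomes a single method.

    public class InvoiceFormatter {

        // Previously hardcoded in several places; now changed in exactly one.
        public static final double TAX_RATE = 0.07;

        // The repeated "net plus tax" block is now one method that every caller reuses.
        public static double withTax(double net) {
            return net + net * TAX_RATE;
        }

        public static String formatLineItem(String name, double net) {
            return name + ": " + withTax(net);
        }

        public static String formatTotal(double net) {
            return "Total: " + withTax(net);
        }
    }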
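Next, encapsulate what varies, using the Factory pattern mentioned above: the object creation code that is likely to change sits in one place, so client code is untouched when a new product type is added.

    interface Shape {
        double area();
    }

    class Circle implements Shape {
        private final double radius;                 // private by default
        Circle(double radius) { this.radius = radius; }
        public double area() { return Math.PI * radius * radius; }
    }

    class Square implements Shape {
        private final double side;
        Square(double side) { this.side = side; }
        public double area() { return side * side; }
    }

    // The part that varies (which concrete class gets created) is encapsulated
    // here; callers never mention Circle or Square directly.
    class ShapeFactory {
        static Shape create(String kind, double size) {
            switch (kind) {
                case "circle": return new Circle(size);
                case "square": return new Square(size);
                default: throw new IllegalArgumentException("unknown shape: " + kind);
            }
        }
    }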
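For the open/closed principle: the existing, tested implementations are never edited; a new requirement is met by adding a new class behind the same interface.

    interface DiscountPolicy {
        double apply(double price);
    }

    // Existing, tested code: closed for modification.
    class NoDiscount implements DiscountPolicy {
        public double apply(double price) { return price; }
    }

    class SeasonalDiscount implements DiscountPolicy {
        public double apply(double price) { return price * 0.90; }
    }

    // A new requirement is handled by extension: a new class is added, and
    // nothing above needs to be touched or retested.
    class LoyaltyDiscount implements DiscountPolicy {
        public double apply(double price) { return price * 0.85; }
    }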
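For dependency injection, a bare constructor-injection sketch with no particular framework (Spring would normally do this wiring): the service receives its dependency instead of constructing it, which makes it easy to substitute a mock in a unit test.

    interface MailSender {
        void send(String to, String body);
    }

    class SmtpMailSender implements MailSender {
        public void send(String to, String body) {
            // real SMTP work would go here
            System.out.println("SMTP -> " + to + ": " + body);
        }
    }

    // The service never constructs its dependency; it receives it, so a test
    // can hand in a fake MailSender without touching this class.
    class OrderService {
        private final MailSender mailSender;

        OrderService(MailSender mailSender) {       // constructor injection
            this.mailSender = mailSender;
        }

        void confirm(String customerEmail) {
            mailSender.send(customerEmail, "Your order is confirmed.");
        }
    }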
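For composition over inheritance and programming to an interface: behavior is supplied as a composed strategy held in a field of the interface type, so it can be swapped at runtime rather than fixed by a class hierarchy.

    import java.util.List;
    import java.util.stream.Collectors;

    interface SortStrategy {
        List<String> sort(List<String> input);
    }

    class AlphabeticalSort implements SortStrategy {
        public List<String> sort(List<String> input) {
            return input.stream().sorted().collect(Collectors.toList());
        }
    }

    class LengthSort implements SortStrategy {
        public List<String> sort(List<String> input) {
            return input.stream()
                    .sorted((a, b) -> Integer.compare(a.length(), b.length()))
                    .collect(Collectors.toList());
        }
    }

    // Report is composed with a SortStrategy rather than inheriting from one;
    // the behavior can be changed at runtime through the setter.
    class Report {
        private SortStrategy strategy;               // interface type, not a concrete class

        Report(SortStrategy strategy) { this.strategy = strategy; }

        void setStrategy(SortStrategy strategy) { this.strategy = strategy; }

        List<String> render(List<String> lines) {
            return strategy.sort(lines);
        }
    }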
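Finally, for delegation: callers never compare the internals of two objects themselves; they ask the class to do it through equals() and hashCode(), which also keeps the objects well behaved in hash-based collections.

    import java.util.Objects;

    final class Isbn {
        private final String value;

        Isbn(String value) {
            this.value = Objects.requireNonNull(value);
        }

        // Callers delegate the comparison to the class itself...
        @Override
        public boolean equals(Object other) {
            if (this == other) return true;
            if (!(other instanceof Isbn)) return false;
            return value.equals(((Isbn) other).value);
        }

        // ...and hashCode stays consistent with equals, so Isbn works correctly
        // as a key in HashSet and HashMap without caller-side duplication.
        @Override
        public int hashCode() {
            return value.hashCode();
        }
    }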
Final Thoughts
The principles listed above will help users write flexible, higher-quality code by maximizing cohesion and minimizing coupling. As in all things, theory is only the first step. Developing the knowledge necessary to properly apply these principles takes real-world experience and trial and error, which gives the programmer the tools to recognize when a key principle is being violated and the flexibility of the code compromised. These principles are not foolproof, and like all things in life they are to be used as guides rather than followed blindly. It is beyond question, though, that when working with OOD in Java these core principles will allow developers to make fewer mistakes and get the utmost out of their coding work.

White Papers and Webinars: Free Information Fast

by Spencer Wade
White Paper: An authoritative report or guide helping readers to understand an issue, solve a problem, or make a decision. White papers are used in two main spheres: B2B marketing and government. Many more white papers are produced for B2B vendors today than are produced for government agencies.
Webinar: The term webinar is actually two terms combined: web conferencing and seminar. The term refers to any service that allows conferencing events to be shared with remote locations. The service is made possible by TCP/IP connections on the Internet. It allows real-time point-to-point communications as well as multicast communications from one sender to many receivers. Voice, text-based messages, and video chat can be shared simultaneously across geographically dispersed locations.
Technology and Users
Technology has become the driving force behind many of today’s fastest growing career fields. In all of its forms, and there are many, technology has changed the world. It has made its way into nearly every human endeavor, and, in most cases, drastically reduced the workload required of its operator. This is progress of the highest order, but it is not without its own problems. There is an enormous amount of learning needed to master technology. In fact, if one truly wants to become an expert then the learning never stops. Technology is constantly changing through modification and upgrades, so the only way to maintain a high skill level is to absorb as much information as possible every day.
The information necessary to stay on top of the technological game can be difficult to gather, digest, and use in a meaningful way. The Internet is vast and the information is found in many places. This makes it difficult and time consuming to bring it together, and more so to compile it in a meaningful way. Time is money in the modern world, and every second spent finding the right information equates to lost revenue.
The demands technology places upon its users in terms of time spent studying can be enormous. The whole experience can be overwhelming to many, but there is a light at the end of the proverbial tunnel. That light is given off by white papers and webinars. These two resources are invaluable to technology users. They save all the time spent searching for the right information, and cover everything one must know in order to get the most out of a specific technology.
The definitions for white paper and webinar were given at the beginning of this paper to help the reader understand these terms. These resources have become the backbone of many users' understanding of the technology they use in everyday life. Without this freely given information, there would be much more confusion concerning, and misuse of, technology. The information found within these resources is directly and specifically related to a given technology, and allows users to quickly and efficiently become familiar with the technology in question. This is a godsend to the millions of people who are required to use technology they do not fully understand.
White Papers
There are countless white papers available on the Internet today. These white papers are divided into three categories: backgrounder, numbered list, and problem solution. The definitions for these categories are listed below:
  • Backgrounder: Describes the technical and/or business benefits of a certain vendor offering. This can be a product, service, or methodology. This type of white paper is best used to supplement a product launch, argue a business case, or support a technical evaluation.
  • Numbered List: Presents a set of tips, questions, or points about a specific business issue. This type of white paper is best used to get attention with new or provocative views, or cast aspersions on competitors using fear, uncertainty, and doubt.
  • Problem/Solution: Recommends a new, improved solution to a nagging business problem. This type of white paper is best used to generate leads, build thought leadership, or inform and persuade stakeholders.
These papers can be combined in many ways to better accomplish the writer’s goals. There are, however, some caveats to combining them. For example, a numbered list may be combined with any other type, but it is unworkable to combine the detailed product information of a backgrounder with the industry-wide perspective of a problem/solution white paper.
Most B2B white papers argue that one particular technology, product, or method is superior for solving a specific problem. They may present research findings, list questions or tips about an issue, or highlight a particular product or service. These papers are marketing communications documents designed to promote the products or services from a company. They use selected facts and logical arguments to build a case favorable to the company publishing the document.
In June of 2010, Stephanie Tilton published an article that reaffirmed white papers as one of the most important tools companies have for accomplishing all of the previously stated white paper goals. Here is a sampling of the research from her article:
  •  70% of IT buyers used white papers to get information on enterprise technology solutions in the past three months.
  • 77% of respondents responsible for either making B2B technology purchases or influencing purchasing decisions read at least one white paper in the first six months of 2009, with 84% rating white papers as moderately to extremely influential when making final purchasing decisions, and 89% passing them along to others.
  • Trial software and white papers are the most utilized, along with being the most effective, forms of content for researching IT problems and solutions.
  • More than 76% of IT buyers use white papers for general education on a specific technology topic or issue.
  • Over 73% of IT buyers use white papers to investigate possible technology solutions for the business/technology need.
  • 68% of IT buyers use white papers to learn about a specific vendor and their solution technology.
  • 93% of IT buyers pass along up to half of the white papers they read/download.
  • 36% of IT buyers made a purchase as a result of reading a white paper.
  • 32% of IT buyers included a white paper in a business case to support a purchase.
The statistics listed in Ms. Tilton’s article give all the evidence needed for arguing the case for white papers. These documents are created with an ulterior motive, but they give anyone who uses them free information. That is priceless in technological fields. In fact, it is downright altruistic of the companies who spend the time to do the research and offer it freely to any and all who would use it.
Webinars
Webinars are inexpensive, beneficial tools that can be found online covering a broad range of topics related to technology and its applications. They allow their host to speak directly to an audience in real time. A webinar can be offered as a one-on-one session or given to an entire virtual classroom or conference. The number of viewers is secondary in importance to the overall use of time, the participation, and the feedback afterwards.
The number of webinars offered is growing by leaps and bounds. There is no end in sight to this unprecedented growth, and new uses for the medium are being found all the time. There are countless ways that these webinars can help users reach their goals. Below is a list of just a few of the ways in which webinars can accomplish this:
  • Raise awareness and ideas for products and services.
  • Increase productivity and connectivity with educational opportunities.
  • Build relationships and reputations based on real life interactions and experiences.
  • Develop high conversion rates benefitting income and profitability.
  • Build lists and create virtual audiences.
Webinars give the host immediate authority on their chosen topic. This is actually one of the main reasons people attend webinars. They are looking for new information and ideas from specifically targeted industry leaders. This is excellent for building lists of fans who are hungry for the knowledge offered as long as it is current, meaningful, and, above all else, valid.
Final Thoughts
These very different resources, white papers and webinars, offer users a direct line to all the knowledge they need to fully realize their potential in a technology-dominated world. The use of both will greatly improve a user’s understanding, efficiency, and productivity when working with technologies in their personal and professional lives.