Monday, January 20, 2014

Brand Spanking New Web Technologies

by Spencer Wade
Web Technology
The development of new web technologies is one of the reasons programming is such an exciting field. There are few things more pleasing than getting your hands on cutting-edge technology, and bending it to your purposes. New technology turns every day into Christmas for developers, and the gifts just keep getting better and better. This piece will discuss some of the technology that has been released to the programming community, and give some indication of the ways these breakthroughs are being used in the industry. Most of these are already available for commercial and private use, and they are definitely worth keeping an eye on in the coming year.
getUserMedia
There are many APIs out today that are mistakenly labeled “HTML5”. That is not the case with getUserMedia. It began its life as an HTML5 feature, but was later renamed and hived off into the W3C’s WebRTC suite of specifications. The getUserMedia API, or gUM, gives a web page access to a user’s microphone and camera. As part of the WebRTC suite, gUM is what enables peer-to-peer, in-browser video conferencing. Because gUM has plenty of uses beyond conferencing, it has been separated out from the rest of the suite.
Camera access was successfully implemented in the final build of Opera 12 for Android, in Opera Desktop, and in Google Chrome Canary. Opera and Chrome do not yet have microphone access, as that part of the specification is still being worked on, but a JavaScript snippet called the gUM Shield addresses the gap. Once video is streaming from the device, it can be made the source of a video element and positioned off-screen if necessary. From there it can be copied to a canvas and manipulated however the user requires. There are also tools available that copy gUM data into other formats for easier manipulation.
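To make that flow concrete, here is a minimal sketch of the pattern just described. It is written against the modern, promise-based navigator.mediaDevices.getUserMedia rather than the prefixed APIs browsers shipped at the time, and the element ids (“preview”, “snapshot”) and the captureFrame helper are purely illustrative:

const video = document.getElementById('preview');    // a <video autoplay> element
const canvas = document.getElementById('snapshot');  // a <canvas> used for pixel work

// Ask for the camera only; microphone support arrived later, as noted above.
navigator.mediaDevices.getUserMedia({ video: true, audio: false })
  .then((stream) => {
    video.srcObject = stream;  // make the live stream the source of the video element
    return video.play();
  })
  .catch((err) => console.error('Camera access was refused or failed:', err));

// Copy the current frame onto the canvas so it can be manipulated as raw pixels.
function captureFrame() {
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
}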
Giving web applications the same functionality as native applications is the primary mission of gUM, and it goes a long way toward accomplishing that lofty goal. Functionality is the real key in any new technology, and as it lands in production browsers we can expect an incredible number of web-based QR-code readers and augmented-reality apps to be developed.
SVG
Scalable Vector Graphics (SVG) is nothing new in the development industry. It has been supported for quite some time in Chrome, Firefox, Opera, and Safari, but it was not until Internet Explorer 9 added support that SVG could be relied on everywhere. Sadly, by that time it was largely overshadowed by HTML5 canvas, even though the two are, at heart, different tools for different tasks.
Canvas 2D is an excellent tool for quickly putting graphics on the screen, but all you can do is paint them. There is no record of what is where, or of what is layered above or below, as there is in Photoshop or similar tools. Any record of layers must be kept separately in JavaScript. With no DOM to keep in memory, the technology is extremely fast, which works wonders in video games where speed is essential for proper performance.
SVG is an excellent tool when a DOM is required, because it allows objects to be moved and animated independently with JavaScript. Since shapes and paths are described in markup, they can be styled with CSS. Text remains text in SVG, unlike canvas where it is flattened into pixels, so it stays mash-uppable, accessible, and indexable. That is excellent for modifying text, but the real standout feature of SVG is that it is vector-based: illustrations, graphs, and user-interface icons are just as crisp on a large-screen TV as they are on a mobile device, which is of the utmost importance in this Internet-everywhere age. SVG even allows media queries to be embedded inside the image, so nuances of shading and detail can be dropped when the graphic is rendered at a reduced size.
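For a rough illustration of what that means in practice, here is a small, hypothetical inline SVG. Its shapes are ordinary DOM nodes that can be styled (and animated) with CSS and JavaScript, its text stays real text, and an embedded media query hides the fine detail when the graphic is shown small:

<svg viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
  <style>
    .face   { fill: #f4c542; stroke: #333; }
    .detail { fill: none; stroke: #333; }
    /* Embedded media query: drop the fine detail when the SVG is rendered small */
    @media (max-width: 100px) {
      .detail { display: none; }
    }
  </style>
  <circle class="face" cx="50" cy="50" r="45"/>
  <path class="detail" d="M30 65 Q50 80 70 65"/>
  <text x="50" y="20" font-size="8" text-anchor="middle">Still real, selectable text</text>
</svg>

The class names and shapes are made up for the example; the point is that every element remains addressable markup rather than painted pixels.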
SVG has garnered support from all the latest browsers. It has great mobile support on all platforms with the lone exception of versions of Android older than 3.0. There are many online tutorials and articles available that go into much greater detail when discussing SVG and how it may be used. That is a bonus for all of the newbie developers who may not have had any exposure to SVG. It would be a great benefit to anyone starting out with SVG to take the time to visit these sites, and get acquainted with all the ins and outs of this tool.
WebGL
WebGL, the Web Graphics Library, is managed by the Khronos Group. It is used together with the HTML5 canvas element to produce 3D graphics in the browser. WebGL is notoriously difficult to learn and master because it is extremely low-level: it runs on the graphics processing unit, and it is essentially a JavaScript binding modeled on OpenGL ES, part of the long-established OpenGL family of APIs used by game developers. Those developers are a natural audience for the tool, since it brings OpenGL-style development to the web.
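To give a sense of just how low-level the API is, here is a minimal, hypothetical sketch that does nothing more than obtain a WebGL context from a canvas element and clear it to black; drawing even a single triangle additionally requires hand-written shader source, buffer uploads, and attribute plumbing:

const canvas = document.querySelector('canvas');
const gl = canvas.getContext('webgl');   // older demos used 'experimental-webgl'

if (!gl) {
  console.error('WebGL is not supported in this browser.');
} else {
  gl.clearColor(0.0, 0.0, 0.0, 1.0);  // opaque black
  gl.clear(gl.COLOR_BUFFER_BIT);      // clear the colour buffer of the drawing surface
}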
The difficulty in learning WebGL is a major hurdle for many users, but there are resources available to help with this learning curve. WebGL is not just for game developers either. There have been music videos, excellent graphics, and professional visualizations created with WebGL. Here are some excellent resources to use:
  • Raw WebGL 101
  • Learning WebGL
  • WebGL 101
WebGL can be found in almost all browsers and is just as prevalent on mobile operating systems. The one exception is Internet Explorer 10, which lacks it because Microsoft has declined to support it. That is an internal decision by Microsoft and should not be read as a failure of WebGL.
File APIs
File APIs give JavaScript access to files on the local system. FileReader is the most commonly used of them, and it is available in IE10, Chrome, Opera, and the Firefox platform preview. The File API provides a way to represent file objects in web applications, select them, and access their data. It is possible to load files into the browser and read information such as name, size, and file type without ever querying the server, and the files can be opened and manipulated as well. This makes browser-based applications far more interactive, and much more like their desktop counterparts.
The traditional image-upload dialog is fine, but it can be upgraded by letting users drag and drop files into the browser rather than navigating the file system. This allows you to start with a normal file input and progressively add enhancements: if the browser supports HTML5 drag and drop, the input can be replaced with a drag target for the image, and when an image is dragged into that target area the FileReader API can display a thumbnail of it (a small sketch of this flow follows the resource list below). There are also specifications for writing files and manipulating file systems, but as of today they are not usable because cross-browser support is insufficient. Here are some resources that may be of use:
  • The W3C File API
  • Exploring the FileSystem API
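As promised above, here is a small sketch of the drag-and-drop thumbnail flow; the “dropZone” and “thumbs” element ids are hypothetical, but the drag-and-drop and FileReader calls are the standard ones:

const dropZone = document.getElementById('dropZone');

dropZone.addEventListener('dragover', (e) => e.preventDefault()); // required to allow dropping
dropZone.addEventListener('drop', (e) => {
  e.preventDefault();
  for (const file of Array.from(e.dataTransfer.files)) {
    // Name, size, and type are available immediately, with no round trip to the server.
    console.log(file.name, file.size, file.type);
    if (!file.type.startsWith('image/')) continue;

    const reader = new FileReader();
    reader.onload = () => {
      const img = document.createElement('img');
      img.src = reader.result;  // data: URL of the dropped image
      img.width = 120;          // crude thumbnail sizing
      document.getElementById('thumbs').appendChild(img);
    };
    reader.readAsDataURL(file);
  }
});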
Final Thoughts
The technologies described here are only a small sampling of what is available to developers today. These tools represent the cutting edge of web technology, but in a field that changes this fast it is only reasonable to assume the next great tool is already coming down the pipeline. That is just one of the things that makes technology so exciting to work with. There is never a dull moment, and no tool is ever the final word. Things change. Adaptation and enthusiasm are still the best tools we humans have.

Prezi: Reinventing Web Presentation

by Spencer Wade
The world has changed for the better today, my friends. There is now a way to take all that old content you have lying around your site and turn it into something brand spanking new. It’s Prezi! An absolutely free site that gives users the ability to reformat content into just about anything they can dream of. Prezi has tools and templates galore to help you make something you can proudly show off on your site. It takes no time whatsoever to get acclimated to the Prezi control system, and before you know it you’ll be displaying content in ways you never imagined. From presentations to straight text, you will be hard pressed to find another site that gives as many options for refreshing content as Prezi. Try it out for yourself, for free, and I’m sure you’ll agree with me. Prezi is just what the site doctor ordered.

Programming: Abacus to Apple

What is Programming?
by Spencer Wade
Computer programming (often shortened to coding, scripting, or simply programming) is the process of designing, writing, testing, debugging, and maintaining the source code of computer programs. That source code is written in programming languages such as C++, C#, Smalltalk, Python, and Java. Programming’s true purpose is the creation of instructions that tell computers how to perform specific tasks and exhibit desired behaviors. Writing code requires expertise in many subjects, including knowledge of the application domain, algorithms, and formal logic.
A debate has long raged over the extent to which writing programs is an art, an engineering discipline, or a craft. The reality is that good programming is the measured application of all three, with the goal of producing an evolvable, efficient software solution. It is important to note that the criteria vary considerably when defining software as efficient or evolvable. The field differs from other technical professions in that programmers do not need a license or any other certification to call themselves programmers or software engineers. Because the discipline covers so many areas, which may or may not include critical applications, licensing remains an open question for the profession as a whole. In most cases the field is governed by the entities that require the programming, and in certain instances this can mean strict working environments; the US military, for example, relies on Ada toolchains such as AdaCore’s and requires a security clearance for much of its programming work. The debate is still ongoing in the US, but in some parts of the world portraying oneself as a “professional software engineer” without a license is illegal.
There is another debate going on in the field of programming that focuses on programming languages. It concerns the extent to which the language used to write a program affects the form the final program takes. This debate is similar to the one in linguistics and cognitive science surrounding the Sapir-Whorf Hypothesis. This hypothesis states that a spoken language’s nature influences the habitual thought of its speakers. In other words, different language patterns produce different patterns of thought. If true, the mechanisms of language condition the thoughts of its speakers, so representing the world perfectly through language becomes impossible.
The History of Programming
Simple arithmetic was the pinnacle of human computing for millennia. The abacus, invented sometime around 2500 BC, was the only mechanical computing device for thousands of years of human history, and it was not surpassed until the Antikythera mechanism appeared around 100 BC. That device tracked the lunar and solar cycles, in part so the Olympiad could be held at the same point in the calendar in different years. In 1206 AD, the Kurdish scientist Al-Jazari constructed automata that used pegs and cams to sequentially trigger levers; the levers in turn operated percussion instruments, causing a small mechanical drummer to play various rhythms and patterns.
In 1801, the Jacquard loom used a series of cards with holes punched in them to represent the pattern to follow when weaving cloth; its groundbreaking feature was that different cards produced different weaves. In the 1830s, punched cards were used to control a proposed machine called the Analytical Engine, and the very first computer program, written by Ada Lovelace for that engine, calculated a sequence of Bernoulli numbers. The Industrial Revolution accelerated the development of computing toward what we know today: numerical calculation, predetermined operations and output, and conceptually simple instructions for organization and input were all products of this mechanical boom in human history.
In the 1880s, Herman Hollerith invented a process by which data was recorded on a medium that a machine could then read. All prior machine-readable media, such as the earlier punched cards, had carried instructions to drive machines rather than data for them to process. Hollerith’s cards were different: keypunch, sorter, and tabulator unit record machines encoded information onto the cards and processed it, and these inventions formed the basis of the entire data-processing industry. In 1896, he founded the company that later became the core of IBM. A control panel added to his machine in 1906 allowed it to perform different tasks without being physically rebuilt, and by the middle of the 20th century several machines functioning as early computers had control panels that let them perform a sequence of operations. These were the first truly programmable machines.
The advent of von Neumann architecture allowed computer programs to be stored in computer memory. The earliest programs had to be crafted from the instructions, or elementary operations, of the particular machine, often written directly in binary notation, and different models of computer used different instructions to perform the same task. This was the case until assembly languages were developed, which let programmers specify each instruction in a text format, substituting abbreviations for each operation code and writing numbers and addresses in symbolic form. Assembly language is more convenient, less prone to human error, and faster to write than raw machine code, but because an assembly language is little more than a different notation for a machine language, any two machines with different instruction sets also have different assembly languages.
The first high-level programming language, FORTRAN, appeared in 1954. A high-level language is one that allows programs to be created from more abstract instructions; it let programmers specify a calculation by entering a formula directly into the code. The source code is converted into machine instructions by a program called a compiler, which translates the FORTRAN into machine language. Fittingly, the name FORTRAN stands for Formula Translation.
Many other languages followed, including COBOL, which was designed specifically for commercial programming. Yet most programs were still entered on punched cards or even paper tape. By the late 1960s, data storage devices and computer terminals had become inexpensive enough that programs could be typed directly into the computers, and text editors were developed that allowed changes to be made far more easily than with punched cards, where an error meant the card had to be discarded and a new one punched to replace it.
Through the years, computers have made exponential leaps in processing power, which has allowed for programming languages that are ever more abstracted from the underlying hardware. These modern languages include C++, Python, Visual Basic, SQL, Perl, Ruby, C#, Haskell, HTML, PHP, Java, and Objective-C, and there are literally dozens more in use today. High-level languages often come with greater overhead, but the increase in computer speed has made them far more usable and practical than in the past. Languages like these are typically easier to learn and let the programmer work more efficiently with less code overall. However, they remain impractical for programs that need low-level hardware access or where maximum processing speed is vital.
Computer programming is a popular career today, particularly in the developed world. Because programmers in the developed world are expensive, some programming has been outsourced to countries where labor costs are low, and these low-cost alternatives have caused instability in the profession in developed countries, just as happened with manufacturing.
Modern Programming
The field of programming in the modern world is a very complex and lucrative industry. There are technological breakthroughs happening all the time, so a programmer must be a student their entire career in order to maintain their skills at a high level. It takes serious dedication and energy to stay on top of an ever-changing field. This is imperative, however, if the programmer wants to stay relevant and educated on the latest languages and their uses.
The approach to development in the software industry is as varied as the languages used to write programs, but there are some fundamental properties every program must satisfy in order to be a successful venture. The following are among the most relevant:
  • Reliability: This indicates how often a program’s results are correct. Conceptual correctness of algorithms, logic errors, and programming mistakes can all affect this metric.
  • Usability: This refers to what is known as the “ergonomics” of a program: its overall ease of use for its intended purpose. Usability covers a wide range of textual, graphical, and hardware elements that improve the clarity, cohesiveness, and completeness of a program’s user interface.
  • Robustness: This is how well a program anticipates problems not due to programmer error. These issues can include incorrect or corrupt data, lack of necessary resources, and user error.
  • Maintainability: The ease with which a program can be modified by developers in order to improve or customize, fix bugs, or adapt it to new environments. This is where best practices during initial development make all the difference. The end user may never notice this property, but it can affect the fate of a program over the long term.
  • Portability: This is the range of hardware and operating system platforms on which the code can be interpreted and run. This depends greatly on the facilities provided by the platforms. These facilities include system resources, expected behavior of the hardware, and availability of platform specific compilers for the language of the code.
  • Efficiency: The amount of system resources a program uses is known as its efficiency. The less resource usage the better is the rule in computer programming.
Final Thoughts
The development of programming throughout human history has always been an exciting and rewarding pursuit. The rewards we as a species have reaped from this avenue of endeavor are exhilarating and life-changing. The concept, when realized fully in our time, is one of the most beautiful and meaningful creations of the human mind since the advent of language itself. The possibilities open to humanity thanks to this field are endless, and each new day brings another revelation to us all.

SEM: The New Way to Turn Dollars Into Customers

What is Search Engine Marketing?
by Spencer Wade
Search Engine Marketing (SEM) is a blanket term used to describe all forms of internet marketing. SEM concerns the promotion of websites by increasing their visibility in search engine results pages through optimization (on-page and off-page), and through paid placement, contextual advertising, paid inclusions, and other advertising avenues. SEM can be a broad term for a plethora of website marketing techniques. These include everything from search engine optimization where a higher search result is gained through adjustment or rewriting of content to pay-per-click marketing where the focus is solely on the individual paid components of the marketing strategy.
What is the Market for SEM?
The market for SEM is growing at an exponential rate; in the US alone, advertisers spent $13.5 billion on search engine marketing in 2008. Because the technology involved is so complex, a secondary market of SEM agencies has evolved to handle the workload, and many marketers who find the complexities of search engine marketing difficult to understand rely entirely on third-party agencies to manage their campaigns. Google AdWords, Microsoft adCenter, and Yahoo! Search Marketing are the largest vendors of SEM services, and SEM is expected to grow much faster than traditional advertising and even to outpace other channels of online marketing.
SEM History
In the late 1990s, the number of Internet sites increased dramatically, and search engines were developed to help users find specific information quickly. The firms developing search engine technology built business models to finance their services, and the field of SEM began to grow at a dramatic pace. As of 2007, pay-per-click programs were the primary revenue stream for these firms. Google came to dominate the market through its AdWords service, so in 2009 Microsoft and Yahoo announced their intention to join forces. Regulators approved the alliance in 2010, but Google has yet to be displaced as the preeminent service in SEM.
Search engine optimization professionals have offered new services year by year. Many of these services are geared toward helping clients understand and use the advertising avenues open to them through search engines, and new agencies have formed through the mergers of many marketing and advertising firms to better serve their clients in these areas. SEM, coined by Danny Sullivan in 2001, covers the spectrum of activities involved in SEO, submitting sites to directories, managing paid listings, and developing online marketing strategies for organizations, individuals, and businesses.
Metrics and Methods
There are many metrics and methods used to optimize websites through Search Engine Marketing. The following list includes many of these, and a brief description of each:
  • Keyword Research and Analysis: There are three steps in this process: (1) ensuring the site can be indexed by the search engines, (2) finding the most relevant and popular keywords for the site and its products, and (3) using those keywords in a way that generates and converts traffic.
  • Website Saturation and Popularity: A site’s presence in a given search engine can be analyzed through the number of its pages that are indexed (its saturation) and the number of backlinks it has (its popularity). This requires that the keywords people are searching for appear in the page’s content and that the page ranks highly enough for those keywords, and every search engine includes some form of link popularity in its ranking algorithms. Tools such as Link Popularity, Search Engine Saturation, Top 10 Google Analysis, and Marketleap’s Link Popularity measure various aspects of saturation and link popularity.
  • Back End Tools: Back-end tools in SEM include web analytics products and HTML validators that provide data on a site and its visitors and allow a site’s success to be measured accurately. They range from simple counters and log-file tools to more sophisticated tools based on page tagging, all of which deliver conversion-related information. Validators check the invisible parts of a website, highlighting potential problems and usability issues and helping to ensure the site meets W3C code standards.
  • WhoIs Tools: These tools reveal the owners of various websites, and provide information relating to copyright and trademark issues.
Paid Inclusion
The need for search engine developers to turn a profit was the impetus behind the advent of what is known as paid inclusion. This is where a company is charged a fee by the search engine developer for the inclusion of their site in results pages. Paid inclusion products, also known as sponsored listings, are provided by search engine companies like Google, Microsoft, and Yahoo.
Structured as both a filter against superfluous submissions and a revenue generator, the fee covers an annual subscription for one webpage that will automatically be catalogued regularly. Some companies have turned to a non-subscription based pay structure where listings are displayed permanently. A pay-per-click fee may apply in these instances. Search engines are all different. One may only offer paid inclusion, though this has been proven to be less successful, while others offer a mix of per-page and per-click fees with web crawling. Google does not allow webmasters to pay for listings, and advertisements are shown separately and labeled as such. The rise of paid inclusion has its detractors as well. Many believe that it causes results to be returned based more on the economic standing of the website’s interests, and less on the relevancy of the site to end-users.
The line between pay-per-click advertising and paid inclusion is debatable. There has been a move by some to insist that paid listings be labeled as advertisements. This move has been counteracted by others stating that since webmasters do not control the content of the listing, its ranking, or whether it is shown to any users it cannot be identified as an advertisement. The debate is still raging among SEM professionals. Yet, paid inclusion has advantages that cannot be denied. It allows site owners to specify schedules for crawling pages. Usually one has no control over when their page will be crawled, or even added to a search engine index. Paid inclusion is particularly useful when pages are generated dynamically and modified frequently.
Search engine optimization, of which paid inclusion is a part, allows firms to test different approaches to improve ranking, and see results within a couple of days instead of weeks or months. The knowledge gained from this experimentation can be used to optimize other pages without paying the search engine company for those as well.
SEO Comparison
Search engine marketing is the broader term and includes search engine optimization. SEM covers both paid search and organic search results: it uses paid advertising through services like AdWords and other pay-per-click programs, as well as techniques such as article submission, on top of the SEO work itself. Keywords are analyzed in both SEM and SEO, though not necessarily at the same time, and both must be monitored and updated frequently to reflect evolving best practices.
There are certain contexts under which SEM is used exclusively to mean pay-per-click advertising. This is particularly true in the commercial advertising and marketing communities. The wider search marketing community is engaged in other forms of SEM such as search engine optimization and search retargeting.
The field of SEM contains the rapidly growing environment of social media marketing (SMM). SMM exploits social media sites to convince consumers that one company’s products or services are valuable. The field of SMM has experienced a broad range of theoretical advancements in its short lifespan. This includes search engine marketing management (SEMM). This relates to activities including SEO, but focuses on return on investment (ROI) management instead of relevant traffic building. It also integrates organic SEO in an attempt to get top ranking in search engines without using paid services or pay-per-click SEO. This is the future of SEM in the social media dominated world we live in.
Conclusion
The broad spectrum of services and firms that offer SEM to their clients is growing every day. This is a direct reflection of the industry’s growth as a whole, and its importance to the marketing community in terms of dollars spent by clients. The next few years will see an explosion of new methods and technologies that will allow a company’s marketing dollars to go further, and give clients the means to achieve their marketing goals without relying on traditional marketing. The future is now in SEM, and the future looks bright indeed.

Networking: Connecting the World One PC at a Time

Networking: Connecting the World One PC at a Time
by Spencer Wade
The Internet is an amazing piece of human creativity and ingenuity. It allows computers to be interconnected, share data, and work in harmony with one another to accomplish tasks via a network. What is a network? That is a question with a somewhat complex answer. A network is, by definition, a group of two or more computer systems linked together. There are many types of computer networks. These include:
  • Local-Area Networks (LANs): The computers in a LAN network must be located close together geographically. Almost 100% of the time these computers are in the same building.
  • Metropolitan-Area Networks (MANs): This is a data network designed for a large city or town.
  • Home-Area Networks (HANs): Any network contained within a home that connects a user’s digital devices.
  • Campus-Area Networks (CANs): The computers are all located within a defined geographical location. This area constitutes the perimeter of a campus or military base.
  • Wide-Area Networks (WANs): Computers are located far apart geographically, and are connected by telephone lines or radio waves.
There are also characteristics of networks, along with the specific type, that are used to categorize different networks. These characteristics cover everything from how the network is constructed to how it interacts with the other systems in the network. These characteristics include:
  • Topology: This is the geometric arrangement of a computer system. Common forms of topology include a bus, a ring, and a star.
  • Architecture: There are two broad classifications of networks all computer systems are broken into. These classifications are peer-to-peer and client/server architecture.
  • Protocol: The defined set of rules and signals computers on a network use to communicate is the protocol of the network. There are many forms of protocols used by networks today, and these vary from network to network. Popular forms include Ethernet and Token-Ring Networks.
Network software is a general phrase for software designed to help set up, manage, and monitor computer networks. There are software products and applications available today that can manage and monitor networks of all sizes, from the largest enterprise network down to the smallest home setup.
A network computer is a generic term given to any computer with minimal memory, disk storage space, and processor power designed to connect to a network. The concept of a network computer is based on the assumption that many users who connect to a network do not need all the computer power normally associated with a personal computer.  The network servers supply the necessary computing power for these users.
The old concept of diskless workstations, computers with no local disk storage of their own, became the basis of network computers. These machines depend entirely on the network servers to store their data. Network computers go one step further than diskless workstations by minimizing the amount of processor power and memory they need to do their tasks. They are often called Net PCs, Internet appliances, or Internet boxes.
Network computers reduce the total cost of ownership for users. This is due to the fact that the machines are less expensive than their fully loaded counterparts. The savings in hardware alone can be extensive, but network computers also have another benefit. They can be administered and updated simultaneously from a central network server.
The information given concerning networks thus far has merely scratched the surface of what a network actually is. There is as much information about a specific network available to users as one could ever ask for, and more than one could ever hope to digest in a short period of time. There is also an abundance of terminology that goes with networking. For example, computers on a network are often referred to as nodes, and devices that allocate resources for a network are called servers. These are just two of the countless terms associated with computer networking.

Network Security In A Nutshell

Network Security In A Nutshell
by Spencer Wade
There are many questions one finds oneself asking when network security is the topic of discussion. The most obvious are what network security actually is, how it works, how it protects you, and what the benefits are for business. These seem straightforward and easy to answer, but a trusted IT partner should be consulted to answer them for your specific situation.
Most small and medium-sized businesses do not have the IT resources of larger companies, so their network security is not always sufficient to protect them from threats. Larger companies can absorb the sizeable budgets that strict security protocols demand, and this demand from major firms has had an excellent secondary effect on the industry: it has advanced the science of security and lowered the cost of technological defenses against outside threats. Small and medium-sized businesses can now afford the same level of security larger companies have come to expect over the years.
What Is Network Security?
The answer to the question, “what is network security?” is very simple. Network security is defined as any and all activities designed to protect your network. These activities protect the integrity, safety, reliability, and usability of your network and data. Security that effectively protects a network targets threats and eliminates them before they can enter your network and spread.
What Is Network Security and How Does It Protect You?
The next question to address is, “what are the threats to my network?” There are many answers to this question, and none of them are good for companies. Most threats spread over Internet connections, and since so much business today is conducted via the web, our networks are constantly exposed to attack. The most common of these threats include:
  • Viruses, Worms, and Trojan Horses
  • Data Interception and Theft
  • Denial of Service Issues
  • Zero-Day Attacks
  • Hackers
  • Spyware and Adware
  • Identity Theft
How Does Network Security Work?
Understanding network security means realizing that no single solution will ever protect against every variety of threat. Effective network security consists of multiple layers, so that if one layer fails another takes its place. Hardware and software are the tools used to achieve network security, and they must be updated and managed regularly to maintain a high level of protection for systems and data. New threats emerge continuously, and the security software must be able to counteract whatever it faces. Network security systems consist of many components, none more important than the software and its updates, and when these components work together they minimize maintenance and improve security. A list of some of these components follows:
  • Anti-Virus and Anti-Spyware
  • Firewall
  • Intrusion Prevention Systems (IPS)
  • Virtual Private Networks (VPNs)
What are the Business Benefits of Network Security?
There are many benefits to adopting a network security system. The company will be protected against disruption from outside sources, which keeps the workforce more productive. It also allows the company to meet mandatory regulatory compliance with state and federal agencies. Most importantly, it protects your customers’ data, safeguarding against legal action due to data theft. That kind of protection is priceless when you consider the possible damage to a company’s reputation.
How Security Pays Off
Businesses today require a network security system even if they do not rely heavily on the Internet. Partners, customers, and vendors will expect you to protect any and all information they share with you. The fact that network security has become a must for all businesses is not a bad thing when you look closely at the potential payoffs a company can expect to see. Here are just a few secured network benefits:
  • Customer Trust
  • Privacy Assurance
  • Encouraged Collaboration
Taking a strong stance on security assures customers that any sensitive information they share with you, from personal details to credit card numbers, is safe from access and exploitation. It fosters an atmosphere in which partners are comfortable sharing more sensitive information, such as sales forecasts or pre-release product plans, which can help you do business with them. And the same security that protects the information they share also lets your partners access your network securely, making greater collaboration possible.
Final Thoughts
Network security has become more than a luxury; it is essential to any company looking to do business in today’s globalized world. As a certified CompTIA Security+ IT professional, I can tell you that security is one of the fastest growing fields in IT. Security threats are increasing in number and severity, and there is an enormous gap between the number of IT professionals working in the field and the number that is needed. That means enormous opportunity both for IT professionals and for the companies looking to protect themselves from the threats they face. Even in the face of difficult economic times, companies are looking to increase their security budgets, which shows precisely why network security matters so much to both companies and IT professionals.

Google Analytics: The Gift That Keeps on Giving

Google Analytics: The Gift That Keeps on Giving
by Spencer Wade
Analytics is defined as the discovery and communication of patterns found in data. This is most valuable in areas saturated with recorded information, and relies on statistics, operations research, and computer programming to gauge performance. Google Analytics is the focus of this blog, and it favors data visualization to communicate its discovered insights.
The newest version of Google Analytics, released to users in phases from 2010 to 2011, is an excellent tool for analyzing data related to website productivity. It is made up of a plethora of cutting-edge features including an intelligence engine that is second to none, custom variables, expanded goals, mobile reporting, and many more. This blog includes some screenshots of the new and improved Google Analytics, and defines, to the best of its ability, the uses and benefits of the interface.
Navigation
The newest version of Google Analytics has adopted the look and feel of the Google navigation bar released in February of 2010. The new navigation bar focuses on the following selections:
1.)   Account Home: This is where all the accessible accounts are found.
2.)   Dashboards: This is the page containing all the dashboards available for a specific account.
3.)   My Site: All reports can be found on this page divided between reports and intelligence.
4.)   Custom Reports: This page allows creation and management of custom reports.
5.)   Account Manager: All accounts with user access are listed with links on this page.
6.)   Settings: Page containing all accounts with administrator access where settings may be changed.
Account Home
Google Analytics v4 changed the face of the Account Home page. It became a dashboard display, and the ability to see metrics on this page was lost. However, Google has replaced the metrics with links that can be used to access specific reports pertaining to a chosen profile. The icons on the page will link to the main reporting tabs: traffic sources, content, conversions, and visitors.
Improved, Multiple Dashboards
One of the best features of Google Analytics is the ability to create multiple dashboards. Each dashboard may contain a set of any graphs the user chooses. This feature is excellent for large organizations that have many employees because it allows them to use the tools for their own specific work-related needs. Dashboards can be set by hierarchy, department, interest, or any other rule the user deems appropriate. As can be seen in the screenshot below, Google’s naming convention has been adopted in Analytics, so all boxes are now called widgets. The widgets are significantly more customizable than in the past, and it is now possible to define which metric will be seen as well as which visualization you prefer.
The dashboard functionality is excellent, with one glaring exception: there is no button for adding reports to the dashboard. Traditionally, every report had a button that allowed it to be added, but Google has removed it for reasons only they understand, which makes adding reports to a dashboard more difficult than in previous versions.
[Screenshot: dashboard widgets in the new Google Analytics]
Report Nomenclature
Users of past versions of Google Analytics will notice that, along with the changes to the interface, the names of reports have changed. These names may sound a bit strange to those accustomed to older versions, but the changes were made to make them more accurate and intuitive. Here are a few examples:
  • Technology tab contains Network Properties and Browser Capabilities tabs.
  • Pages tab contains the Top Content tab.
  • Goals tab is now Conversions tab.
  • Engagement tab now contains Visit Duration and Page Depth tabs.
User Interface Improvements
In the screenshot below, the Google Analytics user interface can be seen, and the improvements made since the last version should be clear to previous users.
[Screenshot: the updated Google Analytics user interface]
Conclusion
The release of the updated and improved Google Analytics is incredibly significant news for users and for the industry in general. Google has raised the bar once again when it comes to usability and data visualization. There are, unfortunately, some issues that need to be addressed by Google as soon as possible, ranging from importing campaign data into Google Analytics to the usage of AdSense clicks as ecommerce transactions. Once these issues have been addressed, Google Analytics will be as close to a perfect system as can be built in today’s technological climate. In closing, Google has given users tools to make their lives easier, and for that it earns an excellent rating from this user.