On the Ecommerce Outtakes blog, we talk a lot about what not to do online. In fact, our main focus is to point out where websites go wrong—with the intent, of course, to help improve the e-commerce experience across the web. One trend we’ve been noticing a lot lately is a lack of good filtering and sorting options. It’s a widespread e-commerce epidemic, and it’s high time we cured it.
Archive for the ‘Web Design / Development’ Category
Page loading time is crucial to keeping visitors on your site and
maximizing conversions. Studies have been done that show the maximum
time people are willing to wait for a page to load is less
than 5 seconds. Make them wait more than that, and it’s game over.
They’ll hit their back button, never to return. It’s vitally important,
then, to make sure your web site is loading as fast as possible.
Sure, having a super-beefy server helps, but one important aspect of
having a fast loading website is reducing the size and number of your
page assets as much as possible. This can be a real challenge within the
current state of the Web. Web pages are becoming increasingly complex
globs of code that require a huge number of assets to display
and function properly. In addition to the plain old HTML, a ton of
scripts, stylesheets, and images have to be downloaded in the
background for the page to fully render in a browser.
“What’s the big deal?” I hear you ask. “All my visitors are on
broadband, and the js/css/images are only a few KB extra – hardly a drop
in the bucket!” Now, this may very well be true. However, the fact is
that the actual size of your files is only a small part of the overall
cost incurred on a page load. There is a much more subtle bottleneck
that has nothing to do with file size: The maximum concurrent connection limit.
This is a limit the browser enforces which dictates how many
connections can be open simultaneously to a single server. Even if
you’re on a super-fast connection, your browser will still limit the
maximum number of files you can download at one time. This number varies
from browser to browser, and may change slightly depending on connection
speed and web server configuration. The actual values for Internet
Explorer, Firefox, and Chrome are below:
- IE 7 and below: 2 – 4
- IE 8 and above: 6
- Firefox 2 and below: 2
- Firefox 3 and above: 6
- Chrome: 6
Combine and Minify
It can be helpful to think of the concurrent connection limit as the end
of a funnel that your page assets pour through. Naturally, the more
assets you have the longer it takes for them to get through the funnel.
Making your assets smaller helps them pour through faster, but still,
only so many can go through at once no matter how small they are. The
key is to combine them into as few files as possible, thereby reducing
the connection limit bottleneck. Making your files smaller AND combining
them is a win-win situation. Smaller files + fewer connections =
faster loading site.
To “minify” a file simply means to strip out all the “human readable”
parts of the file such as indentation, line breaks, comments,
extraneous whitespace, long variable names, etc. That is, all the stuff that
makes it easy for a human to read, but which a computer couldn't care less about.
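To make the idea concrete, here's a tiny before-and-after sketch in JavaScript. The function and its contents are made up purely for illustration:

// Readable source: indentation, comments, and descriptive names
// make it easy for a human to maintain.
function calculateCartTotal(cartItems) {
    var total = 0;
    for (var i = 0; i < cartItems.length; i++) {
        // add each item's price times its quantity
        total += cartItems[i].price * cartItems[i].quantity;
    }
    return total;
}

// The same function after minification: identical behavior, far fewer bytes.
function calculateCartTotal(t){for(var r=0,i=0;i<t.length;i++)r+=t[i].price*t[i].quantity;return r}

A typical build step would also concatenate all of your scripts into one minified file (and likewise for your stylesheets), so the browser spends one connection on them instead of a dozen.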
It’s pretty much common knowledge: Web developers hate SEO experts. In all fairness, however, the feeling is mutual. But there are some good reasons for this culture clash.
“Same Thing” Sickness
One thing that SEO's hate about web developers is the way they execute, or fail to carry out, a very specific request.
A case in point: An SEO asks a developer to create a 301 redirect between pages. The developer instead does a meta-redirect or a 302 redirect, claiming it's the "same thing".
From the developer’s perspective, it’s the same thing for the user, but from an SEO standpoint, it affects the search engine rankings.
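For what it's worth, the difference is only a line or two of server code. Here is a minimal sketch using a Node.js server purely for illustration; the URLs are placeholders, and whatever stack you actually run will have its own way of setting the status code:

// A 301 tells search engines the page has moved permanently, so the old
// URL's ranking signals are transferred to the new one.
var http = require('http');

http.createServer(function (req, res) {
  if (req.url === '/old-page') {
    res.writeHead(301, { 'Location': '/new-page' }); // permanent redirect
    res.end();
    return;
  }
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from the new page');
}).listen(8080);

// A 302 ("Found", i.e. temporary) would be res.writeHead(302, ...): the
// visitor still lands in the same place, but search engines keep treating
// the old URL as the real one, which is why it is not the "same thing" to an SEO.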
The Death of Optimization
Developer skills and SEO techniques go hand in hand, so if a developer fails to do their job, it doesn't matter what the SEO team does. Even with a copy of Google's secret algorithm in hand, the site won't rank if the site won't work.
A case in point: A client implements some redesign elements. Suddenly, traffic drops by 30%.
The problem: Many of the pages don’t load like they should and the ones that do load show 500 server errors. The developer failed to spot the errors during the development process.
The result: 3 weeks of seriously diminished traffic.
The “I Know SEO” Syndrome
This is a contagious disease among developers, and it can quickly spread from one developer to another. If you have ever heard a developer say something like "I'm pretty good at SEO," it can usually be translated as "I've read a little about SEO and therefore I pretty much know more than you do."
But wait a minute, SEO’s. You aren’t immune, either. There is a related syndrome called “I can code”.
A case in point: An SEO expert successfully builds a WordPress site and suddenly deems themselves a web developer.
The Real Problem
At the root of the culture clash between coders and SEO’s are their driving philosophies. Business classes that teach search engine optimization focus on uniqueness. After all, differentiating yourself from the competition is a good thing. On the other hand, computer science classes center on making everything the same. Each discipline takes a different approach to reaching the same result: stability and efficiency.
Like any other art form, web design is completely subjective. A web site might look like a thing of beauty to one person, and a complete mess to another person. There is, after all, no accounting for taste, and everyone's tastes are different. However, there's more to a web site's design than merely its appearance. A web site's design can have an enormous impact on conversions, and even the most subtle design decisions can have a big effect. For example, a user might be more inclined to click on a green "Buy Now!" button than a red one. Finding a good balance between a site that looks good and a site that performs well in terms of conversions can be a real challenge.
How then can something as subjective as web design be analyzed in an objective manner to find the most effective design? One widely used technique is A/B testing. In a nutshell, A/B testing sets up two or more groups: Group A will see one version of the site, while Group B will see another version. This way, various design elements can be tested and compared.
But is A/B testing really the best way to determine the most effective web design? Perhaps not. This excellent blog post by Steve Hanov suggests another method for finding the best design. Best of all, it's fully automated. Set it, forget it, and the page will "learn" which elements result in the most conversions.
In his post, Steve outlines the epsilon-greedy algorithm, a simple strategy for the multi-armed bandit problem. Given a set of variations for a particular page element, the algorithm can make an 'educated' decision on which element to show based on its past performance. The best performing page elements are displayed the most frequently.
The algorithm records the number of times a particular page element was displayed, and the number of times the element resulted in a conversion. The algorithm will also adapt to change: if a page element's conversions begin to decrease, it will start to adapt and display other variations. The best part of this is that you can set up different variations of page elements one time, and let the computer do the work of figuring out which variations are the most successful. Pretty neat stuff!
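To make the mechanics concrete, here's a minimal JavaScript sketch of an epsilon-greedy choice. This is an illustration of the general technique, not code taken from Steve's post; the data shape and the 10% exploration rate are assumptions.

// Each variation tracks how often it was shown and how often it converted.
// In a real setup these counts would come from your own storage.
var variations = [
  { id: 'green-button', shows: 120, conversions: 14 },
  { id: 'red-button',   shows: 115, conversions: 9 },
  { id: 'blue-button',  shows: 118, conversions: 11 }
];

var EPSILON = 0.1; // 10% of the time, explore a random variation

function chooseVariation(variations) {
  if (Math.random() < EPSILON) {
    // Explore: pick any variation at random, so newer or recently
    // changed variations still get a chance to prove themselves.
    return variations[Math.floor(Math.random() * variations.length)];
  }
  // Exploit: otherwise show the best-converting variation so far.
  return variations.reduce(function (best, v) {
    return (v.conversions / (v.shows || 1)) > (best.conversions / (best.shows || 1)) ? v : best;
  });
}

var chosen = chooseVariation(variations);
chosen.shows += 1; // record the display; bump `conversions` when the goal fires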
Armed with this knowledge, I set out to try a few experiments with it, the result of which is Robo_AB_Tester, a small PHP class library I created which implements the epsilon-greedy algorithm. You can give it a try here.
Robo_AB_Tester tries to abstract away as many implementation details as possible and create a simple interface that is, hopefully, easy to integrate into a PHP based website. Once it is set up, it will:
- Allow you to test multiple elements per page
- Allow you to specify any number (A/B/C/D/E…) of variations of each element.
- Detect on-page events for the tested elements (e.g. clicks, form submits, etc.)
- Handle all ajax communication between your web page and Robo_AB_Tester
- Keep track of how many times the elements were displayed
- Keep track of how many times a user interacted with the element
- Autonomously determine the best performing elements
For more details, see the demo page.
Rich snippets are all the rage these days. Ever since Google started
enhancing their search results with these extra tidbits of information,
everyone is rushing to update their web sites with the metadata to
enable them. So what is the benefit of having a “rich” search result for
your site? Good question. Other than giving the search engine user a
little bit of extra detail, I suppose there's also a subtle
psychological factor that kicks in. Someone might be more inclined to
click on a search engine result that has a 5 star rating and a friendly
face than one that doesn’t. Plus, they’re just plain cool. Who doesn’t
want to add bling to their search results? But this only scratches the
surface. There’s much much more to them than that.
Instant information aggregation: It’s only a matter of semantics
Rich Snippets, as Google calls them, are actually semantic markup. The
idea of marking up some sort of document with meta information for the
benefit of machines is not a new idea. Semantic markup is as old as
information technology itself. For example, a Word document contains
metadata about its author, and a digital photo contains metadata about
the camera it was taken with. You might, for instance, store your
digital snapshots in a photo archiving program which uses this semantic
data to filter your photos by date taken, lens type, flash used, etc.
So, in essence, metadata is data about data.
It should be clear, then, how this "data about data" can be extremely
useful to search engines. It can provide a search engine the ability to
derive a semantic meaning from a document's meta
information rather than having to rely purely on the abstract,
human-understandable concepts within the text of the document. Searches can
become less about keywords in text documents and more about
relationships between semantic data types.
To illustrate this point further, consider the following search: Find
all restaurants with a 3.5 star or better rating on the Las Vegas strip
that specialize in Italian OR Mexican cuisine AND are open after 11 PM
on Sunday nights AND do NOT require reservations. On the
semantic web, rather than a list of links to restaurant web sites that
may or may not match your given criteria, you might get a list of
“restaurant result objects” that DO match exactly
that criteria and never even have to visit the restaurant’s web site.
This is where the real power of semantic data lies: instant information aggregation.
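As a rough JavaScript sketch of the idea (the data shape here is invented for illustration, not any real semantic-web format), a query over structured restaurant objects might look like this:

// Hypothetical structured data that a semantic search could work against.
var restaurants = [
  { name: 'Trattoria Roma', cuisine: 'Italian', rating: 4.2, closesAt: 24, reservationsRequired: false },
  { name: 'Casa Verde',     cuisine: 'Mexican', rating: 3.0, closesAt: 22, reservationsRequired: false },
  { name: 'Il Forno',       cuisine: 'Italian', rating: 4.8, closesAt: 23, reservationsRequired: true }
];

// "3.5 stars or better, Italian OR Mexican, open after 11 PM, no reservations required"
var matches = restaurants.filter(function (r) {
  return r.rating >= 3.5 &&
         (r.cuisine === 'Italian' || r.cuisine === 'Mexican') &&
         r.closesAt > 23 &&            // still open after 11 PM
         !r.reservationsRequired;
});

// matches now contains only the restaurants that satisfy every criterion,
// with no need to visit each restaurant's web site and read the text.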
This “semantic web”, also, is not a new idea. In fact, Tim Berners-Lee
himself envisioned the world wide web as a kind of “Semantic Network
Model” and even the earliest HTML specifications included the concept of
meta tags, which you are undoubtedly familiar with. Later iterations,
such as XHTML, took this idea a step further. Most notable is the RDFa
specification, which has been around for quite some time.
dynoTable: A JQuery plugin for creating editable tables
A while back I was working on a project that required the GUI
to allow the user to dynamically add, remove and rearrange various form
fields contained in table rows. The tricky part was that the UI needed
to have this functionality for several different types of elements
across several different forms. For instance, one set of fields was for
adding and removing specifications to a product while another set of
fields was for adding images to a product. Thus, I needed a solution
that would be flexible enough to work across virtually any type of form.
Naturally, I turned to JQuery. I first took a look around within
JQuery’s plugin ecosystem to see if perhaps there was already a plugin
that might do the job. While I did find a few different plugins for
adding and removing form elements, none of them did exactly what I
needed, specifically re-arranging items… So, I was left with either
trying to hack the functionality into an existing plugin or rolling up my
sleeves and writing my own. I chose the latter option, since JQuery's
excellent extension mechanism makes writing plugins a fairly
straightforward process. The result is the plugin below, which I call dynoTable.
What the plugin does
DynoTable makes an html table editable. With it you can:
- Add rows
- Remove rows
- Clone rows
- Click and drag to re-arrange rows (if you have JQuery UI included on your page)
Getting started with dynoTable is a snap. First make sure you have
JQuery, and the dynoTable plugin, included in your page.
Track any client-side event with Google Analytics
By: Bob Tantlinger
I've recently been doing some work integrating social media events, such
as Facebook likes, with Google Analytics and was pleased to find that
Google gives you a deep level of control over what you can track. It
occurred to me that since a social media "event" is not really much
different than any other client-side event, why not use Google Analytics
to keep tabs on any event the visitor might trigger?
With just a few lines of code, you can take your analytics a step
further and get some fine grained details about not only your visitors,
but their interaction with your web site. Using the techniques I show
below you can answer questions such as:
- Did the user scroll a section of your page into view?
- Did the user start filling out a form?
- Did the user encounter an error while interacting with your site?
- Did the visitor move their mouse over a particular page element?
These are just a few examples off the top of my head for how this could be
useful, but you get the point. The sky is the limit on what you can track.
Get Tracking with _trackEvent
So, let’s dig in with a quick and dirty example that shows how to detect
if a user mouses over a specific image on your page. To get started, you'll need:
- A Google Analytics account (obviously)
- The Google tracking code installed in your site's head
- JQuery included in your page
When you include Google's tracking code in your HTML, it brings in a
global variable named _gat
(the Google Analytics tracker). Using this variable, we have a handle by
which we can get all trackers that have been included on the page. Using
the tracker objects, we can push arbitrary events onto the _gaq
(the Google Analytics queue) to be tracked. The events can be anything;
their meaning is entirely up to you.
After events have been pushed onto the queue, you can
monitor them under the "Events" section in your Google Analytics
account. (If you're the pointy-haired type, it's probably a neat idea to set
up goals for your events!)
So, the steps thus far are:
- Decide what arbitrary events you want to track
- Get a handle on all trackers included on the current page with _gat
- Use the tracker to send an event to GA.
In our example, we will present the user with some images of food and
ask which is their favorite. We want to know when a user mouses over an
image, what type of image it was, and which food they select. With this
in mind, we might write some code such as the sketch below (take note of the comments).
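Here is a minimal sketch of that idea using jQuery and the classic _gaq event-tracking API. The image class, the IDs, and the category/action names are placeholders, and it assumes the standard asynchronous tracking snippet (which defines _gaq) and JQuery are already on the page.

// Fire a GA event the first time a visitor mouses over one of the food images.
$(function () {
  $('img.food-choice').one('mouseover', function () {
    // _trackEvent takes a category, an action, and an optional label
    _gaq.push(['_trackEvent', 'Food Poll', 'mouseover', this.id]);
  });

  // Fire another event when the visitor clicks their favorite food.
  $('img.food-choice').on('click', function () {
    _gaq.push(['_trackEvent', 'Food Poll', 'favorite-selected', this.id]);
  });
});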
By: Bob Tantlinger
Recently I was tasked with logging social media interaction on a site
utilizing the “buttons” (what do you call those anyway) of Twitter,
Facebook, Google+, LinkedIn, and Pinterest.
We wanted to be able to record not only when a social media button was
clicked, but when an actual share, like, or whatever took place. In
other words, we needed to know that the user actually did the share.
Nothing very difficult. Most of the big players in social media have
handy APIs that let you subscribe to events they fire off when a share
takes place, which makes this fairly straightforward. In a perfect
world it WOULD be easy, but there’s -always- a monkey wrench lurking
around the corner ready to ruin your day. In this case the monkey wrench
was a royal "Pin in the Ass." I am referring, of course, to Pinterest.
Pinterest is the newest social media fad, so their button is popping up
all over the place at an alarming rate. Everyone is rushing to get their
images pinned to the world's biggest pin board. But there's a problem.
While Pinterest's "Pin It" button works fine, they offer no official API,
so unlike the other social media services, there’s not much you can do
with the Pin It button. You can stick it on your site, and that’s it.
You cannot track events, such as when a “pin” occurs, or even when
someone simply clicks on the darn thing.
The good news is that Pinterest is working on an API, which should
hopefully be ready soon. Parts of it are apparently in "Read Only" mode: http://tijn.bo.lt/pinterest-api
Sadly, until then, the best you can hope for is a hack like the one I
will document below.
Bending Pinterest to your will (Almost)
When you include the Pinterest button on your page the way they want you
to, you add their button script and a simple link where you want the button to show up:
<a href="http://pinterest.com/pin/create/button/" class="pin-it-button" count-layout="horizontal"><img border="0" src="//assets.pinterest.com/images/PinExt.png" title="Pin It" /></a>
Pinterest's script then takes the simple link, removes it from your DOM, and replaces it with an
IFRAME (an embedded html document right in your page where the button
goes). So the Pin It button is not actually a button. Rather, it's a
small html file loaded from Pinterest’s CDN embedded in your page. The
transformed code looks like this:
<iframe scrolling="no" frameborder="0" src="http://pinit-cdn.pinterest.com/pinit.html?url=http%3A%2F%2Fmysite.com&media=http%3A%2F%2Fmysite.com%2Fpic.jpg&description=Neat+Pic&layout=vertical" style="border: medium none; width: 43px; height: 58px;"></iframe>
Because they put it in an IFRAME, it’s like putting a brick wall around
the button. The IFRAME is pointing to
http://pinit-cdn.pinterest.com/pinit.html, which is obviously different
than your domain… Thus, you run up against the browser’s same origin
policy (a security measure browsers implement which ensures scripts from
two different domains cannot interact with each other). So, I was
stuck. I could not get through the IFRAME brick wall, so I decided to go
around it completely.
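One way to go around the IFRAME (a sketch of a common workaround, and only an assumption about the general direction rather than the exact code) is to skip Pinterest's script entirely: render your own Pin It link, open the pin-creation URL in a popup yourself, and attach your own click handler so the click can be recorded, for example with the _trackEvent technique from the previous post.

// Assumes jQuery and the GA async snippet are on the page. The selector,
// page URL, image URL, and description are placeholders; the pin-creation
// URL is the same one used in the href of Pinterest's own button markup above.
$('a.my-pin-it').on('click', function (e) {
  e.preventDefault();

  var pinUrl = 'http://pinterest.com/pin/create/button/' +
    '?url=' + encodeURIComponent('http://mysite.com') +
    '&media=' + encodeURIComponent('http://mysite.com/pic.jpg') +
    '&description=' + encodeURIComponent('Neat Pic');

  // The link now lives in your own DOM instead of behind the IFRAME wall,
  // so you can record the click however you like (here, as a GA event).
  if (window._gaq) {
    _gaq.push(['_trackEvent', 'Social', 'pinterest-click', location.href]);
  }

  // Open the pin dialog in a small popup, much like the official button does.
  window.open(pinUrl, 'pinterest', 'width=750,height=350');
});

The obvious caveat: this records that the button was clicked, not that a pin was actually completed; confirming the pin itself still has to wait for a real Pinterest API.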
GoDaddy might not be as familiar a name as Google to ordinary internet users, but most webmasters have, of course, heard of it. GoDaddy is currently one of the leaders in the web hosting industry, providing various related services, such as website hosting, domain registration, dedicated servers, email plans, etc. Although dominating the market is not something GoDaddy has achieved, it might very well be on their minds.
It has been reported recently that Google and GoDaddy have entered a form of partnership around GoDaddy's "WebSite Tonight" feature. This service is a powerful tool that allows users to create a website pretty quickly by using one of the available pre-designed templates, making it look almost "professionally designed".
Google's share of WebSite Tonight is offering various add-ons, widgets, and tools that might be useful for a website owner and/or visitor. These include a customizable search bar, Google Webmaster Tools, SEO-checking tools, and more. Submitting the website to Google is also made easier, helping webmasters appear in the listings of the world's leading search engine quickly. Some tools will be available during the website building process; others are incorporated into the website's control panel.
Well, in the past, most rapid drops in a website's search engine rankings were caused by off-site factors: bad links, too many links too fast, too many exact-match anchor text links, too many footer or site-wide links, etc. Recently we are seeing more keyword-specific or page-level penalties which are turning out to be caused by on-site factors. Google is starting to look carefully at websites. They are becoming picky about internal link structure and placement of navigation, content quality and placement, and general on-site over-optimization is becoming a BAD thing.
We were trying to sort out why a particular website we were working on dropped hard in the rankings. The back link profile was not bad, and the links were not built overnight. The website was old enough (it dated to 2006) and had been growing organically for several years. We tweaked a few things on site and off site to no avail. We decided to try something sort of outside of the box: we blocked Googlebot. Why block Googlebot? Well, it permitted us to see if the problem was off-site or on-site. Lo and behold, within 2 weeks we had lost a ton of long-tail traffic, but we recovered all the keyword search results that had been dropping. This showed us that the back links alone were fine, and strong enough to carry the website in the SERPs without any content.
Now we know we have some on-site work to do.