Those of us in the marketing industry who freely admit we don't have all the answers, can't precisely predict marketing results, and don't know our customers well enough to anticipate how they will respond to certain marketing stimuli have my respect and my empathy. Sure, best practices, predictive analytics and experience-based heuristics lead us in the right direction, but let's be honest: no one really knows whether the "Buy" button should be yellow or green…until we actually know.
That’s where testing comes in.
Whether going the A/B split route or trying your hand at multivariate testing, there are certain hurdles along the way. These impediments cause marketers to perform faulty tests, repeatedly arrive at inconclusive results and run with bad information. Here are five reasons your testing might need a little help.
1. Small Sample Size
You remember statistics classes. I remember statistics classes. Chances are neither of us found them to be much fun.
Stats sticklers and numbers nerds will go off about the critical nature of statistical significance, standard deviation, linear regression and all sorts of other fun phrases that are conversation killers at parties. However, this stuff is actually important. Before you deploy a test, you must have a good idea about what kind of audience size is necessary to give you a valid result.
If your test centers on landing page performance, make sure you have enough traffic. If you are running an email test, make sure your list and the number of people who actually see the test is sizable enough to know when you can claim a winner. Many testing platforms will do the math for you, but here are a couple of resources to help calculate sample size before hitting the "go" button on your next marketing test.
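If you'd rather do the math yourself, the standard normal-approximation formula for comparing two conversion rates only needs the Python standard library. This is a minimal sketch (the function name and example rates are my own, not from any particular platform):

```python
import math
from statistics import NormalDist


def sample_size_per_variant(p1, p2, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant for a two-proportion z-test.

    p1: baseline conversion rate, p2: the rate you hope the variant hits.
    alpha: two-sided significance level; power: probability of detecting
    a real difference of that size.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2  # pooled rate under the null hypothesis
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)


# Detecting a lift from a 3% to a 4% conversion rate takes roughly
# 5,300 visitors per variant at the usual 95% confidence / 80% power.
n = sample_size_per_variant(0.03, 0.04)
```

Notice how fast the requirement grows as the expected lift shrinks: halve the difference you want to detect and the required sample roughly quadruples, which is exactly why low-traffic pages struggle to produce conclusive tests.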
2. No Hypothesis
Are you trying only to improve a metric like conversion rate, click rate or sales volume? Or are you trying to understand customer behavior? Many marketers opt for the former approach, and really miss the boat.
Embrace the scientific method. Craft a hypothesis based on how you think your audience will respond to a specific stimulus. As opposed to creating a test based on “I think we can increase click rate if we alter [X]”, develop a test based on statements like, “we can eliminate friction/doubt/dissonance if we alter [Y].” Your aim should not necessarily be to improve a metric or KPI. The goal should be to change customer behavior which in turn improves a metric or KPI.
For more on this, here is an excellent article about creating an A/B test hypothesis.
3. Too Many Variables
I mentioned multivariate testing earlier, but given the number of factors involved, a true multivariate test can take months. The combinations multiply quickly: three headlines, two hero images and two button treatments already yield 12 variations, each needing its own statistically valid sample. I believe every organization should adopt a culture of testing, but finding your perfect combination of elements, no matter the medium, takes time.
Programs with substantial audience exposure over a short period of time can run a multivariate test. A better approach for programs with smaller sample sizes is to roll out a series of A/B splits, which allow results to build upon one another in an iterative fashion. Each result informs the next test.
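Each A/B split in that series still needs a significance check before you "run with" the result and design the next test. A minimal sketch of a two-sided, two-proportion z-test, using only the standard library (function name and figures are illustrative, not from a specific testing platform):

```python
from statistics import NormalDist


def ab_test_p_value(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided two-proportion z-test: how likely is a difference this
    large if A and B actually convert at the same underlying rate?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pool the rates under the null hypothesis of "no real difference"
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = (p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))


# Variant A: 120/4000 (3.0%), Variant B: 156/4000 (3.9%)
p = ab_test_p_value(120, 4000, 156, 4000)
significant = p < 0.05  # claim a winner only below the 5% threshold
```

The key discipline is deciding the threshold (and the sample size) before the test runs; peeking at the p-value daily and stopping the moment it dips under 0.05 is one of the classic ways marketers end up running with bad information.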
4. Uninvolved, Expiring Audience
This issue applies primarily to email testing. If your current email program suffers from lagging open, click and click-conversion metrics, a simple tweak here or there may not resuscitate a sleepy audience. The best testing environments involve an audience that has momentum (i.e. new or loyal customers) or where momentum can be manufactured quickly (i.e. via advertising or awareness tactics). An A/B split test that can bring Lazarus back from the dead is the equivalent of a biblical miracle.
5. Cracked Foundations
Sometimes a test can produce lackluster results due to issues outside testing variables. We can assume that one little element alteration will solve our problems, when in reality, the floor is rumbling under our feet.
If the premise of your marketing offering lacks promise, or the page layout is devoid of harmony and simple usability principles, or the audience simply cannot access your content due to technical factors (poor site speed, deliverability, browser compatibility, etc.), focus on those primary issues first. Outstanding results from a well-constructed marketing test are like window dressing; the foundation takes priority.
Online Marketing Testing Platforms
Now that we have identified potential testing weaknesses, are you looking for a reliable testing platform? Here are a few to consider:
Google Content Experiments (available within Google Analytics)
Visual Website Optimizer