A/B testing: a smarter marketing strategy

One of the most critical questions a marketer has to answer is what makes customers take action. What makes someone open a marketing email, click through a website and ultimately make a purchase? Rather than just guessing and hoping for the best, smart companies use what’s known as A/B or split testing to find out exactly what drives conversions in their marketing campaigns.

What is A/B testing?

When you run an A/B test, you’re comparing two different versions of a campaign — whether it’s a marketing email, a banner ad or just a website page — to see which one is more effective with your target audience. Mohita Nagpal, a marketing specialist and author of a Visual Website Optimizer (VWO) blog post about A/B testing, compared the process to a scientific experiment that requires rigorous testing of a hypothesis.

“Do some background research by understanding your visitors’ behavior using Google Analytics or any other analytics tools,” Nagpal said. “The next step is to construct a hypothesis. An example could be, ‘Adding more links in the footer will reduce the bounce rate.’ Then, test out the hypothesis [by comparing] the original version against this new version without the footer.”

In an infographic accompanying her VWO blog post, Nagpal outlined a few basic steps to running a split test:

  • Make a plan. Determine your goal, such as improving conversion rates or getting more repeat purchases.
  • Pick a variable. Based on your research, choose an element of the site or campaign’s A version to alter in the B version.
  • Run your test. Roll out the two different versions to your test groups for a period of up to two months and collect data on how many users took action. (One simple way to split users between versions is sketched after this list.)
  • Analyze the results. If you found low conversions on one or both versions, determine which element — copy, calls to action, images, etc. — may have caused friction or prevented users from following through. This is the element you will need to adjust when you run the final campaign. You should also look at your test as a whole to make sure your results are sound. A poorly constructed test or one with too many variables may produce a misleading outcome.
  • Implement changes, then repeat the test. Running the test again in a few months will either prove that your changes worked, or show that there was another factor affecting your initial results.
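To make the “run your test” step concrete, here is a minimal sketch of one common way to split traffic: hash each visitor’s ID so the same person always lands in the same group across repeat visits. The function and experiment names here are illustrative, not taken from any particular testing tool.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "footer-links-test") -> str:
    """Deterministically bucket a user into variant A or B.

    Hashing the user ID (salted with the experiment name) gives a
    roughly 50/50 split that stays stable across repeat visits, so
    the same visitor always sees the same version.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Example: route an incoming visitor.
print(assign_variant("visitor-12345"))  # "A" or "B", same answer every time
```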

Mistakes to avoid

Split testing can be a very helpful tool, but if you don’t use it properly, you may end up with results that are way off base. For instance, some marketers make the mistake of making versions A and B too different from each other. If you really want to drill down on the specific factors that lead to higher conversion rates, you should only test one element at a time, said Anil Kaul, CEO of intelligent analytics company Absolutdata. Yes, it will take longer, but you’ll get a clearer, more useful data set that can better inform future campaigns.

“If you change your subject line and at the same time you change your CTA [call to action], it’s difficult to determine which one of the parameters contributed to the most conversions,” Kaul told Business News Daily. “By testing one parameter, you get a clear picture of the changes you need to make and which one would be the most optimized [version].”
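One simple way to enforce Kaul’s one-parameter rule in practice is to define the B version as a copy of A with a single field changed, then check that nothing else differs before launch. The field names and copy below are purely illustrative:

```python
# Two versions of the same email that differ in exactly one parameter
# (the subject line). Everything else is held constant, so any change
# in conversions can be attributed to that one element.
VARIANT_A = {
    "subject": "Your free e-book is waiting",
    "cta": "Download now",
    "hero_image": "ebook-cover.png",
}
VARIANT_B = {
    **VARIANT_A,                                # copy every field from A...
    "subject": "Grab your free e-book today",  # ...then change only one
}

# Sanity check before launch: the variants differ in exactly one field.
changed = [k for k in VARIANT_A if VARIANT_A[k] != VARIANT_B[k]]
assert changed == ["subject"], f"Test changes more than one element: {changed}"
```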

You should also make sure you’re running your test long enough to get useful results. Kaul noted that, to get an accurate reflection of what will happen when you launch the final campaign, a good A/B test should run for at least seven days. Most times, one week is long enough to reach 95 percent statistical significance, but if you haven’t reached that point, continue to run the test until you do.

“One should only be certain whether option A is better than option B when a certain level of statistical significance has been achieved,” Kaul said. “Testing can only prove to be impactful when you stick to the numbers.”

“Stopping a test too early will distort the results, and decisions based on incomplete data are almost always bound to fail,” Nagpal added.

For more information on statistical significance and how to calculate it, visit this blog post on HubSpot.
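As a rough illustration of what those calculators do, here is a sketch of the standard two-proportion z-test behind that 95 percent threshold, using only Python’s standard library. In practice, tools like VWO, Google Analytics and the calculators HubSpot links to handle this for you; the numbers below are made up for the example.

```python
import math

def ab_significance(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test: return the approximate confidence level
    that variants A and B truly convert at different rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no real difference).
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    # Two-sided p-value from the normal CDF, via the error function.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return 1 - p_value  # e.g., 0.96 means roughly 96% confidence

# 1,000 visitors per version: A converted 100 times, B converted 130 times.
confidence = ab_significance(100, 1000, 130, 1000)
print(f"{confidence:.1%} confident the difference is real")
print("Ship it" if confidence >= 0.95 else "Keep the test running")
```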

Tips for split-testing success

Ready to run your A/B test? Here are a few tips to make your experiment go smoothly.

Get the designs right. No matter what variables you’re testing with your experiment, it’s important to make sure both versions have seamless, visually appealing designs. This will rule out overall design as a factor and make sure that end users are focused on the elements you want to test, such as different images or ad copy.

“Make sure that your versions are mobile-ready, functional and provide a great user experience,” said Leeyen Rogers, vice president of marketing at online form builder JotForm. “Buttons should be the right size and easily clickable from any device. Fine print and all text should be readable and the design should make the call to action clear.”

Eliminate “noise.” Nagpal said that “noise” is any outside influence that skews your data. For instance, say your company launches a campaign for a free e-book that receives 1,000 downloads in a month, but one-third of those leads came from a group of students at a certain university. Since these students were not part of your core target audience, they are “noise,” she said.
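In code, eliminating noise usually just means filtering leads down to your target audience before computing any rates. Here is a hedged sketch of Nagpal’s e-book example, with hypothetical record fields and source names:

```python
# Hypothetical lead records for the e-book campaign described above.
leads = [
    {"email": "a@example.com", "source": "newsletter",       "converted": True},
    {"email": "b@uni.edu",     "source": "university-forum", "converted": True},
    {"email": "c@example.com", "source": "homepage-banner",  "converted": False},
    # ... more records
]

# Sources that match the campaign's actual target audience.
TARGET_SOURCES = {"newsletter", "homepage-banner"}

# Drop the "noise" (leads from outside the target audience)
# before computing the conversion rate.
clean = [lead for lead in leads if lead["source"] in TARGET_SOURCES]
rate = sum(lead["converted"] for lead in clean) / len(clean)
print(f"Conversion rate without noise: {rate:.1%}")
```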