Split Testing Blog

Why A/B Testing Could Decide a Startup's Success (or Failure)

By Hossein Tayebi | Apr 8, 2014

"is this going to take off?" That's the million (or maybe billion) dollar question every startup is trying to answer, hopefully a positive one.

No entrepreneur or startup wants to spend months creating customer personas, gathering participants and conducting expensive surveys to define a product that may very well fail. Startups often work on an innovative product and are short on resources until they can prove the product is promising enough to attract investors. Putting a lot of time into upfront planning and design of something that may not work is a waste of valuable resources. Eric Ries, pioneer of the Lean Startup movement, advises startups to minimize the total time taken through the product development loop. He wants startups to build a Minimum Viable Product (MVP) and then test the idea before moving forward.

Split testing, or A/B testing, can help you avoid heavy upfront planning costs and test your idea at each step as you take your prototype or Minimum Viable Product to a fully functional product.

What Is A/B Testing, Anyway?

A/B testing, or split testing, means offering your customers two (or more) alternatives and analyzing which one they prefer. You can then make the popular alternative a permanent feature of your product. The concept is most widely used by usability engineers to test websites and software interfaces, but it is equally applicable to physical products.

For example, let's say you want to create a lunch box for kids and are not sure whether a picture of Donald Duck or one of Pokémon would attract more kids. In this case, you release two batches of lunch boxes into the market, one with a picture of Donald Duck and another with a picture of Pokémon. The results, everything else being equal, will tell you which option the target age group prefers and make your choice easy.
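To make the mechanics concrete, here is a minimal sketch in Python of how an online split test is commonly wired up: each visitor is deterministically assigned to one variant (so they see the same version every time) and impressions and conversions are tallied per variant. The experiment name, function names and bucketing scheme are illustrative assumptions, not taken from any particular tool.

```python
import hashlib

VARIANTS = ["A", "B"]  # e.g. Donald Duck artwork vs. Pokémon artwork

def assign_variant(user_id: str, experiment: str = "lunchbox-art") -> str:
    """Deterministically bucket a user so they always see the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

# Tally how many people saw each variant and how many went on to buy.
results = {v: {"shown": 0, "converted": 0} for v in VARIANTS}

def record_impression(user_id: str) -> None:
    results[assign_variant(user_id)]["shown"] += 1

def record_conversion(user_id: str) -> None:
    results[assign_variant(user_id)]["converted"] += 1
```

Because the assignment is a pure function of the user and experiment IDs, no extra storage is needed to keep a returning visitor in the same bucket.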

Why Should You A/B Test?

Failing to test before you mass-produce can lead to heavy losses. Tropicana, America's premium orange juice brand, found this out the hard way – losing around $33 million along with eight months of design and development work. In 2009, Tropicana redesigned its juice boxes, tweaking the orange color and changing the box dimensions and cap with a view to increasing sales. The redesign, however, drew poor feedback from customers, some of them even calling it outright ugly.

Had Tropicana run a controlled A/B test before going all out, it could very well have avoided the direct product losses and the time and resources spent mass-producing something customers didn't like.

Facebook, on the other hand, runs A/B tests regularly to develop new features or improve existing ones. One such test aimed to reduce the account deactivation rate; you might have come across it if you ever tried to leave Facebook. Users were offered one of two interfaces. The existing interface simply asked users whether they wanted to deactivate their account and had them pick a reason for leaving from a few possible options. The alternate design appealed to users' emotions, showing pictures of the friends they would be leaving behind if they deactivated their account.

Facebook found that when presented with pictures of friends, many users changed their decision to deactivate the account. This new design reduced the deactivation rate by nearly seven percent.

For website owners, conversion rate is a key metric. It is important for them to get visitors to sign up for a newsletter, Like their Facebook page, or take whatever other action they want them to. For this to happen, the webpage's design and flow are critical: something as simple as the choice and placement of an image can affect the actions users take and hence the outcomes. A/B testing lets entrepreneurs present different pages to their site visitors and then select the design that gets the most conversions. According to the Conversion Rate Optimization Report by RedEye, more than fifty percent of companies agree that A/B testing or split testing is helpful in making website-related decisions.
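As a rough sketch of how you might decide whether a difference in conversion rate reflects a real preference or just noise, here is a two-proportion z-test in plain Python (standard library only). The visitor and conversion counts are made up purely for illustration.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, p_value) for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided test
    return z, p_value

# Hypothetical results: variant A converted 120 of 2,400 visitors, variant B 156 of 2,400.
z, p = two_proportion_z_test(120, 2400, 156, 2400)
print(f"z = {z:.2f}, p = {p:.3f}")  # here p is below 0.05, so the lift is unlikely to be chance
```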

Why Is A/B Testing Important for Startups?

The lean startup methodology focuses on what customers want rather than on what entrepreneurs think is an excellent idea for a product. This method of running a business helps ensure that you give people what they want, greatly improving the odds of product success. In a Harvard Business Review article, Why the Lean Start-Up Changes Everything, Steve Blank advises startups to commit major resources only to activities that will lead to a successful product. At the heart of this approach is defining your hypothesis and testing it on real customers.

A/B testing helps you test that hypothesis. Depending on the results, you can then test a new hypothesis, and keep iterating until you find something customers actually like and want (or perhaps need). A/B testing therefore lets you test your ideas while avoiding the loss of resources in the early days of your startup, when every resource is crucial to long-term success.
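Before testing a hypothesis, it also helps to know roughly how many visitors each variant needs before the result means anything. Here is a minimal sketch using the standard normal-approximation sample-size formula; the baseline rate and minimum lift are illustrative assumptions, not recommendations.

```python
from statistics import NormalDist

def sample_size_per_variant(baseline, min_lift, alpha=0.05, power=0.8):
    """Visitors needed per variant to detect an absolute lift of `min_lift`
    over a `baseline` conversion rate (normal-approximation formula)."""
    p1, p2 = baseline, baseline + min_lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold
    z_beta = NormalDist().inv_cdf(power)            # statistical power
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# Hypothetical: 5% baseline conversion, hoping to detect a one-point lift to 6%.
print(sample_size_per_variant(0.05, 0.01))  # roughly 8,000 visitors per variant
```

The smaller the effect you want to detect, the more traffic you need, which is one reason early-stage tests should compare meaningfully different variants rather than tiny tweaks.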

Conclusion

If you are planning to launch a product or service, following the lean startup model is definitely a prudent option. Even if you don’t embrace the methodology wholeheartedly, A/B testing your ideas can save you a lot of angst and loss. With A/B testing, you don’t need to leave things to chance or bank solely on the opinions of a few team members. Instead, you can rely on real customers' preferences to create a product shaped by their feedback.

A/B testing may very well make or break your product, and hence the future prospects of your company.

About The Author

Hossein Tayebi is a software engineer, web development enthusiast and an entrepreneur based in Melbourne, Australia. You can get in touch with him via Twitter or his blog.

Comments (3)

  • Tristan Kromer (@TriKro) Apr 15, 2014 04:45 AM

    I would agree that A/B testing is important to a startup but I think you're unintentionally overstating the usefulness to early stages. A/B testing tells you which version is "better." It doesn't tell you if the whole idea is dumb. Most startups move too quickly to A/B testing, testing things like the color of the button and what picture they should use as their hero shot when the principal hypothesis hasn't been tested in isolation. In your examples, you pick optimizations of orange juice conversion and preventing people from deleting their accounts. In these cases the startup needs to forget about asking, "Which container is better?" and ask, "Does anyone want orange juice?" or better yet, "What's wrong with all the OJ out there already that we're going to radically change?" A/B testing can't provide you with a baseline and never ever ever answers the most important question for a startup, "Why are my customers doing that?"

  • Hossein Tayebi Apr 16, 2014 01:47 AM

    Tristan, I agree with you that changing the colour of a button will hardly help a startup validate their basic assumptions. Having said that, I do see A/B testing as more than micro-refinements. In our case, we were trying to find a niche sector in which customers are open to trying online mediums as a way of connecting to service providers, with a few candidates in mind. We broke the service providers into various sectors and started A/B (/C/D...) testing potential customers. Each of these versions was different to a great extent. While this was not our only channel, split testing was indeed one of the major means we used to get customer feedback pretty early in the process and validate our guesses. Yes, we still went "out of the building" and talked to customers, but our A/B tests were always an essential part of the build-measure-learn loop. This practice may go beyond A/B testing in the strict sense, but at the end of the day we were dividing our audience into separate groups and measuring their response to the different variants presented to them (apple juice or OJ).

  • Reya@iuniunstudio Apr 24, 2014 02:50 AM

    I agree A/B testing is important at the initial stage of product and service development, but I think using the term "A/B testing" both before development starts AND during development may confuse stakeholders about the different processes and purposes of testing at those two stages, especially when they have a limited budget. I think that's also why Tristan finds the article overstating the importance of A/B testing. It's important to explain the different purposes of the two. I believe the A/B testing you mean BEFORE development is what marketing calls "focus group testing", often done by the R&D departments in big organisations: you have a group of participants choose among a few options you place in front of them at the same time, and you get a rough idea of which of those options people favour. Then there is the "usability test", which would be the "build-measure-learn" part you mentioned; it happens during and until the end of development, making sure the product works and gives the results the stakeholder was hoping for. This is done with each participant individually, and the point is also to see whether they find it easy to use and whether the whole experience is enjoyable enough for repeat use. The usability test is equally important, if not more important, than focus group testing. Stakeholders often mistake the usability test for focus group testing, when the two serve different purposes and are done differently to gather different sorts of data. I find that explaining the difference helps clear their minds, and they'll be more likely to pay for both kinds of tests.


ABOUT THE BLOG

In this blog, you will find useful information on A/B testing, split testing and conversion rate optimization in general, and on how you can improve or kickstart your business using these techniques. Our posts are free and we are always interested in hearing from you.