I would agree that A/B testing is important to a startup, but I think you're unintentionally overstating its usefulness in the early stages. A/B testing tells you which version is "better." It doesn't tell you if the whole idea is dumb. Most startups move to A/B testing too quickly, testing things like the color of the button and which picture to use as their hero shot when the principal hypothesis hasn't been tested in isolation. In your examples, you pick optimizations of orange juice conversion and of preventing people from deleting their accounts. In those cases the startup needs to forget about asking, "Which container is better?" and ask, "Does anyone want orange juice?" or, better yet, "What's wrong with all the OJ already out there that we're going to radically change?" A/B testing can't provide you with a baseline, and it never, ever answers the most important question for a startup: "Why are my customers doing that?"
Tristan, I agree with you that changing the colour of a button will hardly help a startup validate its basic assumptions. Having said that, I do see A/B testing as more than micro-refinements. In our case, we were trying to find a niche sector in which customers are open to trying online mediums as a way of connecting with service providers, with a few candidates in mind. We broke the service providers into various sectors and started A/B (/C/D...) testing potential customers. Each of these versions differed to a great extent. While this was not our only channel, split testing was indeed one of the major means we used to get customer feedback early in the process and validate our guesses. Yes, we still went "out of the building" and talked to customers, but our A/B tests were always an essential part of the build-measure-learn loop. This practice may go beyond A/B testing in the narrow sense, but at the end of the day we were dividing our audience into separate groups and measuring their response to the different variants presented to them (apple juice or OJ).
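For anyone curious what "measuring their response to different variants" looks like in practice, here is a minimal sketch of comparing two variants with a two-proportion z-test. Everything here is my own illustration, not from the original discussion: the function names and the conversion counts are made up, and real tests should also consider sample-size planning and multiple-comparison corrections when running A/B/C/D variants.

```python
import math

def conversion_rate(conversions, visitors):
    """Fraction of visitors who converted for one variant."""
    return conversions / visitors

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing conversion rates of variants A and B.

    Returns (z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via the error function).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts for two landing-page variants (one per sector):
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=90, n_b=2350)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value only says one variant outperforms the other; as Tristan points out above, it still can't tell you why, which is where the customer conversations come in.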
I agree A/B testing is important at the initial stage of product and service development, but I think using the term "A/B testing" both before development starts and during development may confuse stakeholders about the different processes and purposes of testing at each stage, especially when they have a limited budget. I think that's also why Tristan found the article overstating the importance of A/B testing. It's important to explain the two different purposes. I believe the "A/B testing" you mean before development is what marketing calls "focus group testing", often done by the R&D departments in big organisations: you have a group of participants choose among a few options you place in front of them at the same time, and you get a rough idea of which option people favour. Then there is the "usability test", which would be the "build-measure-learn" part you mentioned. It happens during and until the end of development, making sure the product works and delivers the result the stakeholder was hoping for. It is done with each participant individually, and the point is also to see whether they find it easy to use and whether the whole experience is enjoyable enough for return use. The usability test is equally important as, if not more important than, focus group testing. Stakeholders often mistake the usability test for focus group testing, when the two serve different purposes and are run differently to gather different sorts of data. I find that explaining the difference helps clear things up, and stakeholders become more likely to pay for both kinds of tests.