In the last chapter, I covered the design of Bingo Card Creator’s experiment with Dropbox-style two-sided referral incentives. The experiment was a crushing failure.

Why write about it, then? Because I think we have a tendency to highlight our successes, in life and in the industry, and ignore the fact that every epic journey forward is the result of a lot of false starts, backtracking, and days spent stuck in the mud cursing the heavens and wondering what possessed us to pick this road rather than the other.

The Stats

As you can see from the previous chapter, invitations required users to go to a distinct page on the site. Despite keeping that page visible (and promoted to users hitting their trial limitations) for over six months, only about 3,200 free trial visitors ever ended up seeing it. (For comparison, that was out of substantially north of 64,000 trials over the period.)

They sent a total of roughly 200 invitations that were actually accepted (I didn’t track non-accepted ones, d’oh). Of these, a grand total of 2 invitees ended up buying the software, for $60 of revenue, which means I probably earned less than $10 an hour for my engineering effort in running the experiment.
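
If you want to check the funnel arithmetic, here’s a quick back-of-envelope sketch in Python. All of the counts come from the paragraphs above; the last line simply inverts the sub-$10/hour observation into a floor on the hours I must have spent:

```python
# Back-of-envelope funnel math for the referral experiment.
# All figures are from the text; 64,000 trials is a lower bound
# ("substantially north of").

trials = 64_000            # free trials over the period (lower bound)
saw_invite_page = 3_200    # trial users who ever saw the invitation page
accepted_invites = 200     # invitations actually accepted (unaccepted weren't tracked)
purchases = 2              # accepted invitees who bought
revenue = 60.0             # dollars of resulting revenue

print(f"Saw the invite page:   {saw_invite_page / trials:.1%} of trials")
print(f"Acceptances per viewer: {accepted_invites / saw_invite_page:.1%}")
print(f"Invitee purchase rate:  {purchases / accepted_invites:.1%}")

# "Less than $10 an hour" implies at least this much engineering time:
print(f"Implied effort floor:   {revenue / 10:.0f} hours")
```

That prints a 5.0% page-view rate, a 6.2% acceptance rate, a 1.0% invitee purchase rate, and an implied floor of 6 hours of work: a leaky funnel at every single stage.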

What Went Wrong?

First, while companies (like Dropbox) that optimize heavily for viral referrals make that piece a front-and-center call-to-action during their onboarding process, BCC was very diffident about it. I only surfaced the call-to-action to a tiny percentage of users (those hitting the trial limitations), and even then it was a secondary option next to the larger CTA to pay for the software. This meant that less than 5% of trial users ever saw the feature to begin with.

The user experience of the page was, to be honest, rather poor. For a non-technical customer base like mine, requiring folks to type in email addresses was a substantial hurdle to successfully inviting friends. Many other companies use considerably more sophisticated ways to engineer product virality, such as slurping in address books or leaning heavily on Facebook integration. I avoided these, partially out of lack of time to implement them and partially because I’m not nearly as aggressive as some companies with regards to the spam-early-spam-often approach to social software distribution.

Was That A Loss For The Business?

Part of the discipline of experimentation is that you’re committing to a portfolio strategy in terms of directions for the business. Sometimes experiments don’t pan out. In the six months this experiment was running, I did probably a dozen things of comparable effort. Many of them failed, though I can’t remember any that failed quite so flamboyantly. Some succeeded. At the end of the day, sales went up over the interval, just as they’ve grown at a reasonable clip every year I’ve been in business.

Just as a rule of thumb: roughly 75% of A/B tests, the easiest sort of experiment to measure, produce a “null result”: we can’t be statistically confident that there is a difference between A and B. Of the remainder, about half produce a statistically confident result which says that the new version is worse than the old version. So only about one in eight—one in eight!—A/B tests actually succeeds in moving the needle in the right direction. And yet, in aggregate, that is astoundingly valuable for the business. So you need to see that experimentation through, even on bad days.
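
To make “astoundingly valuable in aggregate” concrete, here’s a hypothetical sketch. The testing cadence (two per month) and the per-win lift (5%) are illustrative assumptions of mine, not BCC’s numbers:

```python
# Rough expected outcomes for a year of A/B testing, using the
# rule-of-thumb rates above (75% null, 12.5% worse, 12.5% better).
# The test cadence and per-win lift are assumptions for illustration.

null_rate, worse_rate, better_rate = 0.75, 0.125, 0.125
tests_per_year = 24  # assumption: a cadence of two tests per month

print(f"Expected null results: {tests_per_year * null_rate:.0f}")
print(f"Expected regressions:  {tests_per_year * worse_rate:.0f}")
print(f"Expected clear wins:   {tests_per_year * better_rate:.0f}")

# Losers and nulls get rolled back, so only the wins stick. Assuming a
# modest 5% conversion lift per winning test, the wins compound:
lift_per_win = 1.05
wins = tests_per_year * better_rate
print(f"Compounded lift from wins alone: {lift_per_win ** wins - 1:.0%}")
```

Under those assumptions you eat 21 failures to get 3 wins, and those 3 wins alone compound to roughly a 16% lift in conversion for the year. The failures are simply the price of admission.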

And, make no mistake, you’ll have bad days. Every consulting client I’ve ever worked with has made significant bets which didn’t pan out. Sometimes those bets were relatively cheap, such as totally reworking a sales page, which might involve one man-week of effort. Sometimes they were expensive, such as entire new products (representing hundreds of thousands of dollars of work) which failed in the marketplace.

Part of the beauty of A/B testing is that it decreases the cost of each test, allowing us to run more tests, which allows us to fail more often than we did before (and, as a happy side effect, succeed more often, too). If you don’t understand this going in, you’re likely to become discouraged early, both in A/B testing and in related disciplines like conversion optimization and the other DevMarketing approaches you could be starting on. So understand that you’re going to fail. Remember the failures. Learn from them. Maybe even write about them, occasionally. They’re the table stakes for success.
