For the last five years or so I’ve invested heavily in conversion optimization, principally through A/B testing of marketing copy and customer UX. I started the way many A/B testing newbies do: by blindly tweaking button colors and calls to action, often aping ones I saw reported as successful by other businesses. Sometimes it worked and sometimes it didn’t. I got results worth writing home about frequently enough to bootstrap into a career doing this for consulting clients. Eventually, I became slightly more sophisticated about my testing.

Testing The Right Thing

The big realization that I’ve come to over time was recently crystallized for me in a talk by Jason Cohen: test hypotheses about your customers rather than hypotheses about tactical improvements to the business, such as button copy. A hypothesis about a customer is, for example, that customers are non-consumers in your product space due to the perceived switching costs of moving from pen-and-paper solutions to software. You could address this hypothesis by, for example, including messaging like “Get Started in 60 Seconds!” on pages attempting to solicit a conversion. Absent that hypothesis, simply taking a random walk through the universe of all possible H2s is unlikely to be the best possible use of your time.
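
To make that concrete, here is a minimal sketch of what wiring up such a test might look like, assuming a home-grown split-testing setup rather than any particular tool; the variant names, headlines, and bucketing helper are all hypothetical.

```python
import hashlib

# Hypothetical copy for one experiment testing the switching-cost
# hypothesis; neither headline is from a real site.
VARIANTS = {
    "control": "Appointment Scheduling Software for Salons",
    "low-switching-cost": "Get Started in 60 Seconds!",
}

def assign_variant(visitor_id: str, experiment: str = "h2-switching-cost") -> str:
    """Deterministically bucket a visitor so they always see the same copy."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    keys = sorted(VARIANTS)
    return keys[int(digest, 16) % len(keys)]

headline = VARIANTS[assign_variant("visitor-12345")]
print(headline)
```

The mechanics matter far less than the fact that the experiment exists to confirm or refute a specific belief about the customer.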


Big Tests Versus Small Tests

There’s a split among conversion optimization practitioners between folks who advocate testing big, sweeping changes (full-site redesigns, significant functional differences, revamped shopping carts, etc.) and those who advocate focusing on smaller tests (headline copy, etc.). The big-changes camp argues that sweeping changes are more likely to get big results (25%+ lifts in conversions, say), and that even if one racks up 5% here and 2% there, those aren’t likely to be “real” results, damn what the numbers say, because of… the principle of charity precludes me from caricaturing their point of view here, so let’s leave it at “diverse causes.”
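
For what it’s worth, those small wins compound multiplicatively rather than vanishing, which a couple of lines of arithmetic make clear (the 5% and 2% figures are the illustrative ones from above, not measured results):

```python
# Small wins compound multiplicatively: a 5% lift followed by a 2% lift
# is a 7.1% improvement over the original baseline.
lifts = [0.05, 0.02]
combined = 1.0
for lift in lifts:
    combined *= 1 + lift
print(f"Combined lift: {combined - 1:.1%}")  # Combined lift: 7.1%
```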

I generally self-identify as someone who runs more, cheaper tests rather than fewer, higher-impact ones. This is partially because I have both seen and personally collected substantial evidence that cheap tests can sometimes be quite high impact (you really don’t want to know how often the headline on the Pricing page matters more than, say, 10 man-years of effort on product development, as it would be depressing). It is also partially for reasons specific to the way I run my business: with sharply limited time budgets for testing, I was able to sustain weekly testing for years only because I was routinely running tests which required less than an hour of implementation a week. (Ironically, that constraint is perhaps even more common among some of the larger users of conversion optimization, for organizational reasons which make it very difficult for them to get engineering buy-in to a testing strategy.)

The Single Biggest Conversion Optimization Mistake

Far and away the biggest problem with conversions at most savvy software companies is that optimizing them is always “the first thing to do… next week.” Pluck your low-hanging fruit early (probably as soon as you have about 3,000 visitors a month to your website) and get the feeling of wins under your belt. This will help you carry forward the momentum to continue doing conversion optimization as one of your strategic activities. You’ll never stop doing it, just like you won’t ever stop improving your product or stop talking to customers. Some weeks more, some weeks less, but these are things you’ll be committed to for the long haul.

These chapters give you some ideas of where to find low-hanging fruit, and a few fun stories to inspire you about more sophisticated things you could potentially try. Don’t be discouraged if they don’t all work out: in my experience, roughly three-quarters of experiments fail to move the needle at all, and half of the remainder move it in the wrong direction. That last experiment out of eight pays for all of the effort, though. (This is another reason why I skew towards doing small experiments frequently rather than large experiments infrequently: a run of large investments which went south would be discouraging for me.)
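
To see why that one winner in eight pays for everything, run the arithmetic; the hit rates come from the paragraph above, while the per-win lift and testing cadence are assumptions for illustration:

```python
# Hit rates from the text: roughly three-quarters of experiments do
# nothing, and half of the remainder move the needle the wrong way,
# leaving about one experiment in eight as a genuine win.
p_win = (1 - 0.75) * 0.5             # = 0.125, i.e. 1 test in 8

# Losing variants never ship, so a failed test costs the hour it took
# to build rather than conversions. Assumptions, not from the text:
# each win is worth a 5% lift, and you run ~50 cheap tests a year.
tests_per_year, lift_per_win = 50, 0.05
expected_wins = tests_per_year * p_win                # 6.25
compounded = (1 + lift_per_win) ** expected_wins - 1  # ~36%
print(f"~{expected_wins:.1f} wins/year, compounding to a {compounded:.0%} lift")
```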
