It’s a known fact: proper conversion rate optimization requires testing. You can’t simply assume that the changes you’re about to make will actually work. Why not? That would be like declaring a patient cured of Ebola without ever testing to confirm he or she is, in fact, cured.
And before you even start testing, you have to conduct conversion research to develop a hypothesis. Otherwise, how in the world will you know what to test?
But first let’s get on the same page as to what a hypothesis is.
Here is a definition I like to use:
“A hypothesis is a proposed statement made on the basis of limited evidence that can be proved or disproved and is used as a starting point for further investigation.”
Every conversion optimization test is based on a hypothesis. Whether a test wins or loses, you’re validating a hypothesis.
How to Formulate Your Hypothesis
I like to think of hypothesis testing as validated learning. Learning leads to insights, which lead to better hypotheses, which in turn produce better results for us.
Therefore, the better your hypothesis, the higher the chances that a treatment will work and result in an uplift in conversions.
So how do you formulate a hypothesis?
With a hypothesis you’re matching identified problems (based on conversion research) with identified solutions while specifying the desired outcome.
For example, let’s say that during our conversion research we identified a problem: “Visitors aren’t completing the sign-up form. There are too many fields and it’s not clear which fields are required to submit the form. Users also don’t know what happens after the form is submitted.”
Proposed solution: “Let’s re-design the form so it’s easier to fill out. We’ll remove unnecessary fields and make it a two-step process. All required fields will be clearly marked, and we’ll include microcopy to improve clarity. Upon submission of the form, we’ll show a confirmation page letting users know what’s next.”
Hypothesis: “By improving the sign-up form to reduce friction and improve its overall presentation, people will better understand the purpose of the form, which will result in an increase in the number of warm leads.”
Does this make sense? Nothing mystical or complicated. The whole point of the hypothesis is that we understand:
- which problem we’re solving
- what the solution is
- which metric we’re trying to improve with this test
A simple formula to use is this: “By [implementing the proposed solution] we will fix [the identified problem], which will improve [the metric you expect to change].”
You don’t need to use this exact sentence structure, but your statement should include a description of the problem, the solution, and what you expect to change.
All hypotheses should come from the results of conversion research: heuristic analysis, qualitative research, and quantitative research.
With a hypothesis created, you’re ready to start testing. But let’s first go over some things you should keep in mind as you’re testing the hypothesis.
Start Testing ASAP
While this may seem like a “no-brainer,” it’s something a lot of people miss. Even if your conversion research isn’t 100% complete, put up a simple test that might help you gain some insight and/or validate your hypothesis.
This is helpful because by the time you’ve completed your research, you’ll already have some actual test results to analyze and draw insights from.
Always Keep a Test Running
Every day without a test is a wasted day. It’s understandable if you don’t always have meaningful, well-researched design treatments waiting to be tested. While you’re preparing for the next serious test to go live, use the time in between to test simple things like call-to-action copy, button sizes and placements, and other changes that are easy to set up.
Test a Single Issue at a Time
More important than uplifts is learning. When you’re creating your treatments, it’s best to address one issue at a time. This does not mean a single change; a treatment can include 5-8 changes. By addressing one issue at a time, I’m referring to a category of issues.
Create a test where you address price sensitivity. Another test could address security concerns. Another could focus on form optimization. Yet another could minimize distraction by simplifying the page.
If your test wins, you know what did it. If it loses, you learn that the issue you were addressing isn’t really important on this particular page.
Go For Small Gains Too
Small gains are okay too. Just as compound interest earns returns on previous returns, small conversion gains compound as the funnel, channel, or page becomes more optimized. If you increase your conversion rate just 5% each month, that’s roughly 80% growth over a year.
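The compounding claim is easy to verify yourself. A minimal sketch, assuming a 5% relative improvement each month for twelve months:

```python
# Compound a 5% relative conversion-rate gain over 12 months.
monthly_gain = 0.05
annual_growth = (1 + monthly_gain) ** 12 - 1  # 1.05^12 - 1 ≈ 0.796
print(f"{annual_growth:.0%}")  # → 80%
```

The key point is that each month's gain applies on top of the previous months' gains, not the original baseline.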
Also, if you have large volumes of traffic, small gains can translate into a lot of absolute dollars.
Think about this: you’re running a test but only getting a 2.1% uplift. Crappy, right? But hold on: this site does on average 250k transactions per month, each worth between $5 and $10. Adding roughly 2% to 250k is an additional 5,000 transactions per month. That’s 60,000 transactions over a year and, at $5 each, at least a $300,000 increase in revenue. Pretty good pile of cash, right?
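The back-of-the-envelope math above can be reproduced in a few lines, using the rounded 2% figure and the $5 lower bound from the example:

```python
# Revenue impact of a small uplift at high volume.
monthly_transactions = 250_000
uplift = 0.02          # the ~2% lift, as rounded in the text
min_value = 5          # each transaction worth at least $5

extra_per_month = monthly_transactions * uplift   # 5,000 extra transactions
extra_per_year = extra_per_month * 12             # 60,000 over a year
added_revenue = extra_per_year * min_value        # at least $300,000
print(extra_per_year, added_revenue)
```

Swap in your own traffic and order-value numbers to see whether a “small” lift is actually small for your business.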
End Tests Early by Declaring “No Difference”
This is not to be confused with calling a winner early. Never call a winner early. But there are times when you should end a test early: when you can see the result will be “no difference” or a very minor uplift (typically less than 1%), or when the test would take a very long time to reach statistical significance.
Time is money, so if you have run a test for two or more weeks and:
- no statistical significance has been achieved (and sample size too small)
- treatment is just barely beating out control variation (less than 1%)
- it would take many more weeks to reach desired sample size
Then it’s probably a better idea to stop the test and run another test that has a better chance of a significant uplift.
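To see why small uplifts take so long to confirm, it helps to estimate the sample size a test needs. This is a rough sketch using the standard normal-approximation formula for comparing two proportions at 95% confidence and 80% power; the baseline rate and lift values are hypothetical, not from the text:

```python
from math import sqrt

def required_sample_size(baseline_rate, relative_lift,
                         z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a given
    relative lift (normal approximation, 95% confidence / 80% power)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2) + 1

# Hypothetical example: a 3% baseline conversion rate.
small = required_sample_size(0.03, 0.01)  # detect a 1% relative lift
big = required_sample_size(0.03, 0.10)    # detect a 10% relative lift
print(small, big)
```

Detecting a 1% lift here requires millions of visitors per variant, roughly 100x more traffic than detecting a 10% lift, which is why a sub-1% result on a modest-traffic page is usually a signal to stop and test something else.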
Of course, as I mentioned above, small gains add up, so if your treatment is beating control by 5%, it might be worth waiting. But typically, if a test would take more than a month to run, a 5% lift isn’t good enough to justify the time invested.
The whole point here is that time is money, and you can’t afford to waste it on a test that isn’t going to net you significant gains.
Many CRO consultants jump right in and test without first conducting conversion research. This is a rookie mistake; avoid it by doing the research first. You need to find out where the problems are so you can develop a sound hypothesis to test. Conversion research consists of heuristic analysis, speaking with site visitors and customers, and spending time looking at data in Google Analytics.
Will this research take time? You bet. But better to know what you’re testing than to guess at what to test.