Ask the Expert: Stephen Pavlovich from Conversion.com joins us to discuss CRO strategy
The live portion of this Q&A has closed. Please read all the amazing content and answers Stephen has provided below.
Respond to this post with questions, or to just bounce ideas around and get to know Stephen Pavlovich! You can post a question here through Wednesday, Aug. 31, and Stephen will answer your questions Thursday, Sept. 1. Plus, one lucky participant in this discussion will win a free 30-minute consultation from Stephen himself!
Stephen Pavlovich is the CEO of Conversion.com, the UK’s largest conversion optimization agency. Working with clients like Citrix and the Guardian, they’ve optimized websites and apps, and tested pricing, functionality and new products. Learn more at http://conversion.com. You can also find Stephen in the Optiverse under the username @pav.
If you are interested in being featured as an expert for a specific topic, please email email@example.com.
If you have any ideas for future Ask the Expert themes, please post them in the Community Ideas board here.
Is there anything you tell hesitant potential clients about Conversion Optimization that makes them less nervous and more willing to take a chance with testing?
We are currently looking at implementing an address lookup service (PCA Predict) on our cart, but leadership wants to be sure that it will not adversely affect conversion rates before going full-steam ahead. Our initial Optimizely tests were inconclusive. Do you have any suggestions on how to tackle this?
Q: Is there anything you tell hesitant potential clients about Conversion Optimization that makes them less nervous and more willing to take a chance with testing?
A: It depends why they're nervous. Is it because of the cost, fear of the unknown, or concerns about development or load time? (The same as with conversion optimisation, understanding a person's motivations and objections is the first step to affecting their behaviour.)
In general, people get that "a-ha!" moment with conversion optimisation when they see the impact across the business.
What would a 10% increase mean to them? For a start, higher revenue and (even) higher profit. Then what? Can they use this profit to acquire more traffic, or open up new marketing channels? Can they take the insight and create better adverts? Can they even test pricing and product changes? (The answer is yes to all.)
Ultimately, they can always "test" testing – they may be willing to run experiments, knowing that the worst-case is limited to a test's duration, and the best-case is transformational.
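The point above about profit rising faster than revenue is worth making concrete. A minimal sketch with entirely illustrative numbers (none of these figures come from the thread): because fixed costs don't grow when conversion rate does, a 10% uplift in conversions can mean a much larger percentage uplift in profit.

```python
# Illustrative numbers only: why a 10% conversion uplift can raise
# profit by far more than 10% when fixed costs stay flat.
visitors = 100_000
conversion_rate = 0.02   # 2% baseline
order_value = 50.0       # average order value
fixed_costs = 60_000.0   # costs that don't scale with conversions

def profit(cr):
    revenue = visitors * cr * order_value
    return revenue - fixed_costs

base = profit(conversion_rate)            # 100,000 revenue -> 40,000 profit
lifted = profit(conversion_rate * 1.10)   # 110,000 revenue -> 50,000 profit
profit_uplift = (lifted - base) / base    # a 10% CR lift -> 25% profit lift
```

With these (hypothetical) numbers, a 10% conversion uplift yields a 25% profit uplift — the leverage that makes the business case for testing.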
Q: We are currently looking at implementing an address lookup service (PCA Predict) on our cart, but leadership wants to be sure that it will not adversely affect conversion rates before going full-steam ahead. Our initial Optimizely tests were inconclusive. Do you have any suggestions on how to tackle this?
What do you mean by "inconclusive"? eg did you run a test for 2+ weeks and see no detectable difference over 1000s or at least 100s of transactions? If so, you should be fine to launch. If not, it may be worth rerunning the test.
Ideally, you'd also analyse user behaviour through the checkout: did the addition of PCA Predict affect completion of that step/page? (If you have a tool like ClickTale, it can help massively with funnel and form analysis, as well as watching sessions of users actually interacting with the field.)
Q: In your experience, what's an acceptable statistical significance percentage? Have you ever found yourself in a situation where a low rate was enough to prove a point?
We work to >95% significance (whilst also setting minimums for test duration and number of conversions).
Using a lower % obviously increases the risk of a false positive, and it needs to be justified. That can include:
1. Statistically significant on micro conversions. eg a new product page design might increase the add-to-cart % significantly, but there isn't enough traffic to see significance through to purchase.
2. Testing to avoid CR decreases. eg after 4 weeks and 1000s of transactions, the uplift for a new landing page isn't statistically significant – but customer and internal feedback is positive.
What's the scenario you're looking at?
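The >95% threshold discussed above can be checked with a standard two-proportion z-test. A minimal sketch (the visitor and conversion counts are made up for illustration, not taken from any test in this thread):

```python
from math import sqrt, erf

def ab_test_significance(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    conv_a/n_a: conversions and visitors in the control.
    conv_b/n_b: conversions and visitors in the variation.
    Returns (z, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical example: 1,000 visitors per arm, 10% vs 14% conversion
z, p = ab_test_significance(100, 1000, 140, 1000)
significant = p < 0.05  # i.e. significant at the >95% level
```

Note this is the fixed-horizon test: as mentioned above, it only applies if you set the duration and minimum conversion counts in advance rather than stopping the moment p dips below 0.05.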
Q: What would differentiate an optimizer enough that a company like conversion.com would visa-sponsor a candidate? To that extent, what has been your most successful approach to segmenting users?
We're actually a sponsor already. To get a visa, we'd need to justify that the person has the skillset and experience that we can't find within the UK.
For an optimiser, that would mean showing that they excel in at least one – and ideally 2+ – areas of conversion optimisation. That might mean they're awesome at conversion strategy, analytics, copywriting, UX, psychology, etc – or some combination.
Outside of that skillset, we look for two main characteristics: an analytical and creative mindset, and a genuine passion for conversion.
Q: We have a number of commercial revenue streams where stakeholders are risk averse and/or there is a culture of sign off from multiple stakeholders before a test can go live. What is your advice on how to devise a test strategy and approach testing given these challenges? I want to build a strong collaborative testing culture which will demonstrate ROI of a CRO programme, but currently we seem to be chipping away at the smaller issues successfully rather than tackling ones which can bring about improvement on a global scale.
Great question. This applies to 99% of businesses – most are throttled by their (in)ability to test, and the ones that can overcome it will scale exponentially compared to their competition.
At Conversion.com, we've worked with a lot of companies who are risk averse – whether at an individual, company or even industry level (eg if they're in a heavily regulated sector like finance).
There are multiple possible solutions for this:
1. Start small and scale.
A risk-averse company culture can dictate your testing strategy. So rather than starting with the biggest opportunities, you may have to start with a test that can get signed off quickly.
Normally that means it's small enough that it either doesn't require sign-off, or it's easy enough for people to approve. (The challenge is that it'll still need to be impactful if it works – you want to go to them afterwards and say, "If this is what we can do with one quick test, imagine what we could do if...")
eg we work with a large poker website ($###m in annual revenue). They were so risk-averse, the only test we could do at first was to remove content above their registration form. It was fairly easy to get sign-off (we weren't adding any new content), and we could see that this content was distracting users.
Luckily for us, this test increased their conversion rate by >10%. It was a huge lift from a simple change ($m uplift). More importantly, it allowed us to get buy-in from the business to scale up our testing.
2. Start a collaborative process.
If you're struggling to get your tests signed off, look to your stakeholders for ideas. The question we like to ask is, "If you could test anything on our website, what would it be?"
It gets people thinking in the right way. Most people will have some frustrations or questions or "what if"s about their website – and if they can articulate them, it gives you an opportunity to get buy-in to the testing program.
3. Ask forgiveness rather than permission.
This is obviously a last resort if you're brave enough. If a test fails, the impact is limited to the duration of the experiment. Whereas if it's successful, its impact is indefinite – and it can lead to much bigger opportunities. (Note that we can't do this as an agency!)