How do you prioritize what to test first?
We have compiled a very long list of test ideas and elements to optimize on our site. Now, my question is: where do I start? Can you all share best practices for how to efficiently prioritize test ideas?
@JohnH - This is a common area for discussion and many people will have their own ways of doing things. I have a very simple strategy for prioritising my tests.
I add all my test ideas to a spreadsheet and then add two new columns: the first called 'Impact', the second 'Ease'.
I then add a score out of ten for each. If an idea has the potential for a big impact on conversions, it gets a score closer to 10 for Impact. In the next column we score how easy it is to implement out of 10, so a simple change gets a 9-10.
In the final column you simply add the two scores together to create a total score. You can then sort this descending to give you a priority list.
This means that you can pick up those easy-to-implement big wins early and then move on to the harder-to-implement or lower-impact ideas later.
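The spreadsheet approach above can be sketched in a few lines of code. This is a minimal illustration, not a real tool; the test ideas and scores below are made up:

```python
# Impact + Ease scoring: each idea gets two scores out of 10,
# the totals are summed, and the list is sorted descending.
ideas = [
    {"idea": "Simplify checkout form", "impact": 9, "ease": 4},
    {"idea": "Change CTA button copy", "impact": 5, "ease": 10},
    {"idea": "Remove sidebar on landing page", "impact": 7, "ease": 8},
]

for i in ideas:
    i["total"] = i["impact"] + i["ease"]

# Highest combined score first = your priority list.
ideas.sort(key=lambda i: i["total"], reverse=True)

for i in ideas:
    print(f'{i["total"]:>2}  {i["idea"]}')
```

Ties at the top (two ideas scoring 15 here) are exactly where the extra factors discussed later in this thread, like risk, become useful.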
Hope that helps.
To build upon adzed's good advice, check out this post from Chris Goward on using the PIE framework (Potential, Importance, Ease) to prioritize your testing.
I think you'll find it helpful.
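For reference, PIE scores each test target area from 1-10 on Potential, Importance, and Ease, then averages the three. A quick sketch, with hypothetical pages and scores:

```python
# PIE framework: average of Potential, Importance, and Ease (each 1-10).
def pie_score(potential, importance, ease):
    return (potential + importance + ease) / 3

# Hypothetical target areas for illustration only.
pages = [
    ("Homepage", pie_score(6, 9, 7)),
    ("Checkout funnel", pie_score(8, 10, 4)),
    ("Landing page A", pie_score(9, 6, 8)),
]

# Highest average score first.
for name, score in sorted(pages, key=lambda p: p[1], reverse=True):
    print(f"{score:.1f}  {name}")
```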
To add some detail to the good replies you've already gotten, let's look at factors to consider when prioritizing tests.
Ease of Implementation vs. Anticipated Impact
As David and Keith responded earlier, assigning values to each experiment idea for ease of implementation and anticipated impact on your conversion goal is a best practice.
For veteran testers, I like to recommend assigning quantitative values in your frameworks; this means a test that you may currently define in rough terms ('medium effort, high impact') in an ideal world becomes:
Effort: 7 hours of front end development and 2hrs from project management
Impact: 3-9 percent change in add-to-cart with potential revenue per visitor gain of $3.35.
However, if you're just starting out in testing, it can be difficult to know just how long it will take to implement certain kinds of tests, and even more difficult to estimate what a given test's impact will be on your conversion goal.
Knowing that it can be difficult to estimate effort and impact, consider analyzing your tests using these three factors to help you prioritize them:
1. Categorizing tests by strategic theme
2. Mapping tests by site topography
3. Scoring tests by perceived 'fidelity to proven principles'
You've identified many test ideas; can you organize them into themes?
Breaking out tests into broad strategic themes can help refine your thinking by applying category-specific rationale; different categories of tests have specific considerations that will shape their impact. Some themes tend to offer a better effort-to-value ratio, especially early on, and are worth focusing on.
Ultimately, you'll need to create and prioritize your own strategic themes that are highly relevant to your business, but these three examples are ways to categorize some common, high ROI tests.
- 'Balancing Content vs. Distraction' - many great tests fall into the category of striking a balance between minimalism and functionality. I've found that simply removing content, which is very easy to do using Optimizely, can be high ROI. The rule of thumb here is that the fewer the available options, the more likely the remaining content will be interacted with. Of course, this has to be balanced with the need to provide a differentiated experience and helpful functionality, but you need to test to determine your optimal balance.
- 'Mapping Business Priority with Visual Priority' - Do the actions you want users to take (your CTAs to purchase, fill out a form, download, search, etc.) have prominence in layout and visual style that corresponds to your business priority? Try highlighting experiments that increase the prioritization of actions key to your business.
- 'Image, Text, Layout, Colors' - These tests require very few resources to execute using Optimizely, but can often take several iterations or an increasing level of test 'drama' to yield significant results. To help you prioritize these tests, consider that making more dramatic changes, and testing a greater combination of factors, can often yield correspondingly greater impact.
Now that you've categorized your tests by strategic theme, try mapping out the topography of your site, by business need. Topography, unlike geography, maps elevation contours on top of 2D borders. Imagine that your site's content is a country; the most important locations, like your product pages and homepage, are mountains. Focus on mountains, not valleys.
- Common Flows: If you're an e-commerce platform, do users most commonly arrive via a CPC ad to a landing page, then move down to your checkout funnel? Does another portion of users arrive at your homepage from social media and then conduct a search? If you're a lead-gen site, does traffic arriving at a product page generally download a PDF and leave? Try to understand what the most common flows are, and tailor your tests to streamline them. High impact flows are mountains!
- Bottlenecks: You've probably already identified bottlenecks you want to address, as you've already brainstormed a list of tests. Using analytics, are you able to see what the most severe bottlenecks are? Is it bounce rate on a landing page? Checkout funnel abandonment? You'll want to focus on addressing these bottlenecks first.
Consider how well your tests align to proven principles, two of which I mention below. This can be helpful in understanding how likely a given test is to deliver results. There are plenty more principles to absorb in the case studies section and best practices section of our blog, and in the academy.
- Global vs. Local Maximum: Generally speaking, many tests that target only a very specific element or hypothesis can underperform. Why is this? Simply put, it's because you're optimizing for local vs. global maxima. In English, this means you're focusing on one small piece (endlessly making small tweaks to headline text) of a big puzzle (there are 45 different options to select from below that headline and users are overwhelmed). There are many ways to overcome this, including avoiding 'Meek Tweaks' (tests that make only subtle changes) and testing more than one element at a time and rapidly iterating. In conclusion, make bolder changes to achieve bold results.
- Iteration vs. Projects: Some tests may be worth prioritizing simply because they're more likely to be worth continuous testing as opposed to tweaking once and moving on. An example would be prioritizing changes in how you present your value proposition messaging throughout your funnel vs. testing changes to your site color scheme, because you're more likely to want to continue to explore your messaging over the long term (in a changing competitive environment and environment of changing user behavior by device) vs. continually changing the color scheme of your site. The exception to this would be if you're in a time sensitive project, like a redesign - in this case it would be worth prioritizing testing factors for your redesign, because you won't get a chance to later after you've republished a new site! In general though, favor themes that lead you to iteration over projects, as you'll learn and develop more.
Finally, now that your tests have been given more nuanced analysis, you can conduct your prioritization exercise!
Remember that you should revisit your prioritization as you continue to test, because as you learn more and develop your experience your ability to effectively estimate value and impact will improve.
Above all, keep in mind that no amount of analysis and homework can replace the knowledge you'll get by starting to test and iterate. 'Paralysis by Analysis' is a very painful and real possibility for many organizations, and you do not want to fall into that common trap.
Move fast, and don't be afraid to fail; as John Wooden said, "Failure isn't fatal. Failure to change is."
For your development, check out resources in the Community, Knowledge Base, and Academy for technical acumen and winning test examples that will help you get better at estimating test effort and impact.
Hopefully these ideas help you on your optimization journey!!
Strategic Optimization Consultant
@JohnH We take Chris Goward's PIE one step further to include risk, becoming PIRE. What do we mean by risk? For us, we work toward very tight growth goals, and as such we factor in the potential risk associated with running a test (long enough to reach significance) with the idea that we may sacrifice some form of efficiency OR user experience to test our wild hypotheses. Our risk factor acts as a negative variable and helps us better vet test rankings that often share a similar PIE score.
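A minimal sketch of the PIRE idea as described: a standard PIE average with Risk subtracted as a negative variable, breaking ties between ideas with similar PIE scores. The scale and weighting below are assumptions for illustration, not nolanmargo's actual formula:

```python
# PIRE sketch: PIE average minus a risk penalty (assumed scale 0-3).
def pire_score(potential, importance, ease, risk):
    pie = (potential + importance + ease) / 3
    return pie - risk  # risk penalizes disruptive or slow-to-significance tests

# Two ideas with identical PIE scores, separated only by risk:
a = pire_score(8, 8, 8, risk=1)  # (24 / 3) - 1 = 7.0
b = pire_score(8, 8, 8, risk=3)  # (24 / 3) - 3 = 5.0
```

The lower-risk idea wins the tie, which matches the intent of using risk to vet rankings that share a similar PIE score.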
Hey @nolanmargo . It's great to hear you're modifying the standard PIE framework to fit your environment. It sounds like you're taking Risk to mean that a more dramatic test variation would have a higher Risk rating. Am I following? Or, does it mean a test target area (or page) that is more critical to the business would have a higher Risk rating?
If the latter, then I'd wonder if Risk doesn't simply counter-balance Importance? If the former, then you're using PIE a little differently than we do at WiderFunnel, using it to prioritize test ideas rather than test target areas.
Interesting addition. Thanks for sharing!
Though, I'd probably call it PIER myself. Rolls off the tongue a little better.
Learn more: http://www.widerfunnel.com/blog