Build Your Organization’s Optimization Culture
I’m Amy Herbertson, a Strategy and Technical Support Representative at Optimizely. I'm hosting an online workshop called “Build Your Organization’s Optimization Culture” as part of our hands-on Optimizely Workshop series. Feel free to post any of your questions here.
When we talk about “Optimization Culture”, we are talking about a data-driven approach to decision-making, fueled by fearless experimentation.
Today, we covered:
- How to identify the maturity level of your Optimization program
- How to set up cultural processes and procedures
- How to build an optimization team
- How to demonstrate the value of your testing program
This guided, interactive learning prompts you to reflect on the current state of your Optimization program: where you’re strong and where you’d like to develop. Ultimately, the goal of this workshop is to bring together everything you’ve learned throughout the workshop series and to answer the question of how to incorporate optimization into your company DNA.
The first step is to identify the current maturity level of your Optimization program.
How to identify the maturity level of your Optimization program
If you’re relatively new to Optimizely, you may be in the interested stage. You may be defining team roles and forming ideation methods. At this point, you may or may not have a defined testing cycle. Most of the tests you’re running are relatively simple and you’re making decisions on basic win/lose/inconclusive results.
And then in the invested stage, you might start to define process: a prioritized roadmap, hypothesis creation, knowledge sharing, and integration with other analytics for powerful interpretation of results.
As you move into the integrated stage, all those processes allow for deeper, more insightful management. Your testing program may start to partner strongly with product development to validate new features before release.
And finally, at the strongest level of Optimization maturity, your program is ingrained. You might be testing across business units, and at this point you have significant buy-in from executives. You’ve hit the pinnacle of data-driven experimentation!
Stop for a second and try to identify where your team lies. What steps can you take to advance towards the next stage?
How to set up cultural processes and procedures
There are opportunities to provide structured socialization of your testing culture in 4 key areas: Ideation, Planning and Prioritization, Development and QA, and Results and Iteration. Below are some examples of different processes and procedures to try.
Ideation:
In this stage, you’re sourcing test ideas from your analytics, competitive audits, colleagues, etc.
- Submission forms: An idea submission form is exactly what it sounds like: a form or quick survey employees can access to submit their test ideas. If you’d like to get started right away, check out our pre-made form, which is easy to download or duplicate to your Google Drive account.
- Contests: Some bigger teams have great success with actual cash/award prizes given to those who submit the most (or best) ideas. For more inspiration, check out Pauline’s (Hotwire) story about how she sources ideas from her company, enabling her to run upwards of 120 tests a year!
- Brainstorms: We have tons of collateral in our Knowledge Base and Blog about this. Here are two cherry-picked favorites: Use Analytics Data to Inform Your Hypothesis and 21 Ideas to brainstorm your best A/B test yet.
Planning and Prioritization:
This is the stage where you’ll want to nail down the purpose of your testing program, prioritization criteria for tests, and roles and responsibilities.
- Testing Charter: In this central document, you can outline your team's process (ideation, prioritization and planning, development, results and iteration), team member responsibilities, and the associated templates and goals you play by. Download our template here.
- Testing Backlog: At any given time, we suggest you keep 30 tests in your testing backlog. Our strongest testing clients have a constant flow of high-value tests. You'll want to prioritize these tests and create your testing schedule. For more on strategies you can use, whether you’re a small team or a large team with many dependencies, read this Knowledge Base article.
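One lightweight way to prioritize a backlog is a scoring model such as PIE (Potential, Importance, Ease). The ratings and ideas below are hypothetical; this is a minimal Python sketch of the approach, not an official Optimizely method:

```python
# Illustrative PIE-style backlog prioritization: rate each idea 1-10 on
# Potential (expected lift), Importance (value of the page/audience),
# and Ease (effort to build), then rank by the average score.

def pie_score(potential, importance, ease):
    """Average of the three 1-10 ratings; higher scores run sooner."""
    return (potential + importance + ease) / 3

backlog = [
    {"idea": "Simplify checkout form", "potential": 8, "importance": 9, "ease": 4},
    {"idea": "New homepage hero copy", "potential": 5, "importance": 7, "ease": 9},
    {"idea": "Reorder nav links",      "potential": 3, "importance": 4, "ease": 8},
]

# Highest-scoring ideas float to the top of the testing schedule.
ranked = sorted(
    backlog,
    key=lambda t: pie_score(t["potential"], t["importance"], t["ease"]),
    reverse=True,
)
for t in ranked:
    print(t["idea"], round(pie_score(t["potential"], t["importance"], t["ease"]), 1))
```

However you weight the dimensions, the point is to make prioritization explicit and repeatable rather than a matter of whoever argues loudest.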
Development & QA:
This is the step where your team builds, QAs and launches individual experiments. It’s an opportunity to generate excitement and investment.
- Quality Assurance Testing: This is an opportunity to get people talking about your tests! It can be helpful to bring in random team members, much like user testing, to validate that variations are functioning as expected before publishing live.
- Test launch announcements: Announce the hypothesis of every test and what variations you’re running against the original.
Results and Iteration:
In this stage, you’re analyzing the results of your experiment and sharing what you’ve learned from winner, loser, and inconclusive tests. This is also a good time to review your experiment backlog and upcoming tests.
- Socialized Results: Though sometimes painful, a definite hallmark of the most successful testing cultures is ownership of sharing all results, whether they're positive, negative, or inconclusive. A good testing program (running lots of tests) will, on average, see about 20% wins. Therefore, it’s imperative that all your tests (win or lose) teach you something valuable.
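To put that 20% figure in perspective, here's a quick back-of-envelope calculation (the annual test volume is a hypothetical example, and the math assumes each test's outcome is independent):

```python
# With a ~20% win rate, program-level expectations follow directly:
# a higher test volume means more winners per year, and the odds of
# seeing at least one winner climb very quickly.

win_rate = 0.20
tests_per_year = 50

expected_wins = win_rate * tests_per_year                      # ~10 winners a year
p_at_least_one_win = 1 - (1 - win_rate) ** tests_per_year      # virtually certain

print(expected_wins)
print(p_at_least_one_win)
```

The flip side of the same math: roughly 80% of your tests will lose or be inconclusive, which is exactly why socializing those results matters.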
- Accessible records: As your culture grows and testing happens across the organization, you’ll want a central source of truth where learnings from past tests are easily accessible.
How to build an Optimization team
There is no one way to define what a testing team looks like; it’s different across companies. Depending on your company size and maturity level, you may have a center of excellence, a large team, or perhaps you run the program on your own.
There are 4 core functions someone should cover, regardless of team composition.
You need someone to drive: Ideation, Implementation, Interpretation of Results, and Communications.
How to demonstrate the value of your testing program
When communicating ROI, you’ll want to quantify the overall value of your program, not just the individual experiments considered “winners.” Your experiment goals should connect to Optimization goals (perhaps for the quarter), which link up with business unit goals and support strategic company initiatives and metrics. For more on this topic, check out Khattabb Khan’s brilliant interactive learning session: Define Testing Goals that influence conversions.
It’s tempting to think that only winning tests are valuable, but that is definitely not the case. Win, lose, or inconclusive, your conclusions tell you something valuable about your customers. This is especially true if you go into a test with a strong hypothesis.
There are immediate ROI calculations for testing losing variations, calculated from:
- a) time saved on developer resources that would have built out the redesign and pushed it to production
- b) revenue saved by not pushing the losing redesign to all users
- c) revenue saved by limiting the time the failing redesign was live, using the Stats Engine to detect statistical significance
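The three components above can be sketched as a back-of-envelope calculation. All of the inputs here (hourly rate, revenue figures, days, traffic share) are hypothetical placeholders, not Optimizely-provided figures:

```python
# Illustrative ROI of a *losing* test, combining the three savings
# described above. Every input value is a made-up placeholder.

def losing_test_savings(dev_hours_saved, dev_hourly_rate,
                        daily_revenue_loss_if_shipped,
                        days_until_next_review,
                        test_days_saved_by_stats_engine,
                        traffic_share_in_test):
    # (a) developer time not spent finishing and shipping the redesign
    dev_savings = dev_hours_saved * dev_hourly_rate
    # (b) loss avoided by never rolling the losing variation out to all users
    #     (it would have run until the next review cycle caught it)
    rollout_savings = daily_revenue_loss_if_shipped * days_until_next_review
    # (c) loss avoided by stopping the test early at statistical significance,
    #     scaled by the share of traffic that saw the losing variation
    early_stop_savings = (daily_revenue_loss_if_shipped * traffic_share_in_test
                          * test_days_saved_by_stats_engine)
    return dev_savings + rollout_savings + early_stop_savings

# e.g. 40 dev hours at $100/hr, $2,000/day at risk, 90 days to the next
# review, test stopped 10 days early, 50% of traffic in the variation:
print(losing_test_savings(40, 100, 2000, 90, 10, 0.5))  # 194000.0
```

Even rough numbers like these make a losing test easy to defend in a quarterly review: the program paid for itself by what it prevented.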
Over time, by taking a scientific approach to your tests (jumping back and forth between being a scientist and a marketer), you’ll set up more effective and efficient tests that raise the bottom line.
That’s it, folks! I hope this helped get your creative juices flowing!
What are your next steps for building your Organization’s Optimization culture? Let us know in the comments!
Here's the recording:
Additional Resources Mentioned in Interactive Workshop:
Here are the slides I shared:
For those who tuned into this interactive learning session, we gave an example of socializing test launch announcements in a fun and engaging way by encouraging your colleagues to vote on which variation would win!
The results of the real life, anonymous test we showed you are in!
Original Variation: Cartoon images
Variation 1: Replacing with images of real people
As it turned out, the test was INCONCLUSIVE. This was surprising for our team. We are re-evaluating our hypothesis for the future and asking ourselves what this might mean about our customers. Our initial hypothesis was that real images would lend credibility to a financial services company. The question that came next: are our users satisfied with avatar personalities on this platform? If so, why would that be? Perhaps it suggests that the anonymity this social lending platform allows is appealing. The question lends itself to further investigation.
(I can now announce that the test ran on www.puddle.com, a social lending website based on trust. Users can contribute to or borrow money from a pool of trusted acquaintances without a credit score.)
Would love to hear any thoughts on what might be an interesting follow-up experiment or hypothesis...
Have a great day!