What is your testing documentation structure?

adzeds 07-17-14
CRO is one of those areas that is still relatively new, which means that universal best practices for what to document before, during, and after testing have yet to be established.

 

I was wondering what approaches other people take to documenting their testing efforts?

 

I imagine that everyone is doing something different, but it may turn out that we are all doing much the same when it comes to documenting our tests and reporting the data from them.

 

If we all share what we are doing then it may inspire others to adopt similar approaches, or we may be able to come up with a 'best practice' strategy right here in Optiverse that we can then preach to others. (Optimizely could feature it on their blog?)

 

Look forward to seeing your views on this.

David Shaw
Level 11

MartijnSch 07-18-14
 

Re: What is your testing documentation structure?

Great start to what could be a really helpful thread. Let me kick off with the data we save in our documentation. It looks like a ton of data in this list, but I think we save about the least amount of data we can while still being able to make sense of it.

 

  • Name
  • Project: we work on multiple projects but keep 1 document for all of our tests.
  • Template: the 'template' on a project for example blog posts or category pages.
  • Status: Idea, Running, Finished.
  • URL: the URL to our backend where we can set up or edit the A/B test.
  • Conversion Goals: the metrics we want to focus on for this test.
  • CR% Baseline
  • Visitors Needed: calculated by the sample size calculator based on the CR% Baseline (see the sketch after this list).
  • Visitors Total: the total number of visitors we let the test run on.
  • Days: the number of days the experiment ran.
  • Start date
  • End date
  • Winner: which variant won the test: original, variant 1 or 2.
  • Screenshots: of the original, variant 1, and possibly variant 2.
  • Hypothesis
  • Result: a short explanation on what the result was and if any other metrics changed.
  • Explanation: if it is not directly clear from the screenshots, a short explanation of what we've changed on the page.
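
Since the "Visitors Needed" field comes out of a sample size calculator, here is a minimal sketch of that calculation in Python, assuming a standard two-proportion z-test at 95% confidence and 80% power (the exact assumptions behind any given calculator may differ):

    import math

    def visitors_needed(baseline_cr, relative_lift, alpha_z=1.96, power_z=0.84):
        """Approximate sample size per variation for a two-proportion z-test.
        alpha_z=1.96 is two-sided 95% confidence; power_z=0.84 is 80% power."""
        p1 = baseline_cr
        p2 = baseline_cr * (1 + relative_lift)
        p_bar = (p1 + p2) / 2
        numerator = (alpha_z * math.sqrt(2 * p_bar * (1 - p_bar))
                     + power_z * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        return math.ceil(numerator / (p2 - p1) ** 2)

    # e.g. a 3% CR% Baseline and a hoped-for 10% relative lift:
    print(visitors_needed(0.03, 0.10))  # ~53,000 visitors per variation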

If you have any questions, please let me know - happy to answer them.

adzeds 07-21-14
 

Re: What is your testing documentation structure?

Looks like a similar philosophy to what I am doing.

I will post up my structure this afternoon for comparison.
David Shaw
Level 11
MJBeisch 07-21-14
 

Re: What is your testing documentation structure?

My company uses Salesforce for project management, client communication, case documentation, etc., so it was fairly easy to incorporate our testing documentation into the already established processes once we started offering optimization services. Each experiment gets set up as a case with several data points attached to it. On a macro level, those data points can then be used to generate dashboards (graphs) that provide useful information about the department as a whole. This information goes into Salesforce (a rough code sketch of the record follows the field list):

 

Experiment ID - the Optimizely experiment ID

Results URL - the sharable link to the experiment results page

Screenshots - screenshots of each variation

Experiment Status - progress stage of the experiment

  • Setup - experiment is being built
  • QA - experiment is being proofed internally
  • Approval - experiment is being proofed by client
  • Waiting - experiment is approved and ready but not on yet for some reason (waiting on a different experiment to conclude, having to coordinate activation with a code deployment, etc.)
  • Active - experiment is running
  • Temporary Implementation - experiment is complete and winning variation is getting 100% traffic while implementation work is being done

Start Time - date and time when the experiment was activated

End Time - date and time when the experiment was turned off

Conclusion - variation performance at the time the experiment was stopped, determined by goal analysis and confidence levels

  • Win - a variation performed well enough over baseline that the change was implemented
  • Loss - all variations performed significantly worse than the baseline
  • Neutral - no significant performance difference between baseline and variations

Case Study Viable - whether the experiment could be used in a case study
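
For anyone who wants to mirror this record outside of Salesforce, here is a rough sketch of the same fields as a plain Python data structure. It's a minimal sketch only; the field names and status values come straight from the list above, and nothing here is Salesforce-specific:

    from dataclasses import dataclass, field
    from datetime import datetime
    from enum import Enum
    from typing import List, Optional

    class Status(Enum):
        SETUP = "Setup"
        QA = "QA"
        APPROVAL = "Approval"
        WAITING = "Waiting"
        ACTIVE = "Active"
        TEMPORARY_IMPLEMENTATION = "Temporary Implementation"

    class Conclusion(Enum):
        WIN = "Win"
        LOSS = "Loss"
        NEUTRAL = "Neutral"

    @dataclass
    class ExperimentCase:
        experiment_id: str                   # the Optimizely experiment ID
        results_url: str                     # sharable results-page link
        status: Status
        screenshots: List[str] = field(default_factory=list)  # one per variation
        start_time: Optional[datetime] = None   # set when activated
        end_time: Optional[datetime] = None     # set when turned off
        conclusion: Optional[Conclusion] = None  # Win / Loss / Neutral
        case_study_viable: bool = False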

 

I also have 2 documents that I have built and continue to update as we refine our processes.

 

The 5 "W"s

A basic experiment outline, essentially a tech spec. It organizes the experiment setup and coordination between team members.

  • Who - audience and traffic allocation setup
  • What - text + image description of desired variations
  • When - expected turn-on date of experiment (AKA setup deadline)
  • Where - URL targeting
  • Why - brief description of purpose behind the experiment

The Optimization Cycle

The core analysis and reporting/summary document, based on the scientific method.

  • Question - what the experiment is trying to achieve, formulated as a question, e.g. "How can we collect more email addresses via newsletter signups?"
  • Hypothesis - theory of current user behavior as it relates to the Question, e.g. "The current newsletter signup field is glossed over by visitors because it is low on the page and does not stand out. Only a visitor actively looking for it would find it. The signup presentation needs to be more aggressive."
  • Prediction - propose a solution to be tested, e.g. "Adding a newsletter signup dialog to every point-of-entry page will draw attention to the signup. This may have the side effect of increasing site abandonment, because dialogs are seen as intrusive. Attempt to mitigate this abandonment by only showing the dialog once per site visit."
  • Testing - description of the test performed, e.g. "A/B split test for all site visitors. A: 50% do not see the dialog. B: 50% see the dialog."
  • Analysis - report the results of testing and recommend next steps, e.g. "Experiment ran for 2 weeks. Version A generated 196 signups. Version B generated 676 signups, over a 300% improvement in signup quantity. Version B also maintained relative parity in revenue generation, actually seeing a 0.5% increase. However, Version B resulted in a 1.6% reduction in order conversion."

The actual documentation is much longer and more thorough; the examples above are simply to give an idea of the voice/tone used.

Matt Beischel - E-Commerce Optimization Specialist CohereOne

Level 2
adzeds 07-21-14
 

Re: What is your testing documentation structure?

Thanks for the information Matt.

Looks like there is some crossover between all the structures so far, which is great to see.
David Shaw
Level 11
adzeds 07-21-14
 

Re: What is your testing documentation structure?

I have always looked to keep all testing documentation very simple and as short as possible, because I believe it is more important to spend your time analysing data and building test variations; this is where you will find your improvements in the long run. I do, however, believe that correct documentation plays a vital role in building a successful CRO team/project.

 

I currently have 2 documents per test and 1 master document for overseeing projects.

 

The master document is simply a spreadsheet with the following columns:

Test ID,  Project,  Date Started,  Date Completed,  Hypothesis True/False,  Metric Monitored,  Impact,  Notes,  Implemented?

 

This document gives me a snapshot of what is going on and what happened at any given time.
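
Purely as an illustration, here is a minimal sketch that writes a master sheet with those columns as a CSV using Python; the example row is invented, just to show the shape:

    import csv

    COLUMNS = ["Test ID", "Project", "Date Started", "Date Completed",
               "Hypothesis True/False", "Metric Monitored", "Impact",
               "Notes", "Implemented?"]

    with open("master_test_log.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(COLUMNS)
        # A purely hypothetical example test:
        writer.writerow(["T-042", "Checkout", "2014-07-01", "2014-07-14",
                         "True", "Add-to-basket clicks", "+4.2%",
                         "Green CTA beat red", "Yes"])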

 

Then for each test I currently have two documents: one for pre-test (the hypothesis document) and one for post-test (the test report). However, in bringing up this topic on Optiverse I have realised that I do not need two separate documents, as they can be combined into a single document that simply gets updated at the end of the test.

 

Hypothesis Document

- Test ID: Assign an ID to the test to make it easier to locate documentation and to reference the test internally.

- Test Name: Give the test a name to make it easier to reference.

- Project: Which project does this test belong to? Many people may not need this if working on a small site, but if you work on large sites with many metrics you can use projects to separate your tests.

- Variations: The number of variations in the test.

- Targeting: What pages/sections of the website are to be affected by this testing. 

- Hypothesis: What are you looking to test? This statement should read like a change and a question in one, e.g. "Will changing the CTA button background from red to green increase add-to-basket clicks, as the button will have a better contrast with the page?"

- Screenshots: Screenshots of all the variations and the original in this test.

 

Test Report Document

- Test ID: As above

- Test Name: As above

- Hypothesis: As above

- Was The Hypothesis True?: Did the test prove the hypothesis to be true or false? A short explanation of the result.

- Test Result Data: The data from the test. Often this is just a screenshot of the results page or of our reporting in Excel.

- Lessons Learned: Did we learn anything new about our website or business from this test? If we record it here then others can see it when looking back through tests.

- Future Testing: Are there any further tests that need to be run on the back of this test? Recording them here means that when we go back through old tests we can check that we followed up on each of them.

I can see some good crossover with our documents, which is great. It also appears that we all agree that keeping the documentation side of CRO light but informative works well. I always say that your documentation should be strong enough that if someone new joins your team you can simply give them access to your test docs and they can see what you have tested, how it went, and what you learned, without hours of having to talk them through it all.

 

Very strong topic so far!

David Shaw
Level 11

Re: What is your testing documentation structure?

Thanks for asking this - I am developing the testing documentation for the company I work for, and this looks very useful. Thanks to everyone who has shared details of their documentation practices. I'll have to remember to come back and share anything I find useful while setting this up.
Stephen Hamilton
Digital Marketing Manager - QIS Packaging
www.qispackaging.com.au
adzeds 07-28-14
 

Re: What is your testing documentation structure?

I am going to try to combine all of the responses we get in this thread into a sort of 'best practice' guide that Optimizely can share with their users.

I was thinking of generating some template files and a guide. Do people think this would be useful to the community?
David Shaw
Level 11

Re: What is your testing documentation structure?

Yep, sounds useful.

Stephen Hamilton
Digital Marketing Manager - QIS Packaging
www.qispackaging.com.au
charles 08-08-14
 

Re: What is your testing documentation structure?

We've been working on an approach which complements @adzeds's, @MartijnSch's, and @MJBeisch's comprehensive documentation.

 

One of the challenges we face with constantly optimizing is "keeping all of the balls in the air." When we get slammed, even good documentation can get ignored or forgotten. So, to try to prevent this from happening, we're creating simple scripts/programs to (for example) notify us when an A/B test is complete, or when something goes out of bounds (a rough sketch follows the list below).

 

We still have a ways to go, but here are some insights which could be helpful to others:

(Note: generally it's useful to do things manually for a while. Automating too soon can prevent valuable learning from taking place.)

- Try to get data out of silos (or programs which don't play well with others) so it is accessible from other systems.

- Documents can be like programs: autofill as much data as possible (i.e. quantitative data from reporting), and automate any prompts/reminders required to get qualitative data or analyze results.

- Write a script for any task which will be done more than once, and automate it. (See the note above.)

- Version control (and back up) all the things. It can be useful (and enlightening) to have a snapshot of previous work at any given moment in time.
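
To make the notification idea concrete, here is a minimal sketch of a completion-notifier script. get_experiment() is a hypothetical stub (in practice it would pull from your results API or reporting database), and the webhook URL is a placeholder:

    import json
    import urllib.request

    WEBHOOK_URL = "https://hooks.example.com/cro-alerts"  # placeholder

    def get_experiment(experiment_id):
        """Hypothetical stub: return current visitors and the sample size
        the test needs. Wire this to your results API or database."""
        return {"id": experiment_id, "visitors": 51200, "visitors_needed": 50000}

    def notify(text):
        """POST a JSON message to a chat/webhook endpoint."""
        req = urllib.request.Request(
            WEBHOOK_URL,
            data=json.dumps({"text": text}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)

    # Run from cron; it pings you the moment a test has enough traffic.
    exp = get_experiment("exp-123")
    if exp["visitors"] >= exp["visitors_needed"]:
        notify("experiment %s has reached its sample size - time to analyze" % exp["id"])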

 

I would be keen to know what things you guys are automating/scripting to save time, reduce errors, etc.

 

-charles

 

 

Level 2
Jacob 08-08-14
 

Re: What is your testing documentation structure?

Interesting thread. We are currently developing a new documentation structure as well, mainly based on the ideas behind the Javelin Experiment Board (https://experiments.javelin.com/). This has worked pretty well for us so far, since our CRO approach is built on the Lean Startup methodology, i.e. Build, Measure, Learn.

 

 

- Jacob

 

Big Orange Button

Level 2
ben 08-13-14
 

Re: What is your testing documentation structure?

Great ideas, and I will definitely take some of them on board. At my company we use Jira and Confluence as our project sharing software, so I set up some templates. These not only have much of the info you have mentioned, but we have also added other info like QA checklists. So far it seems to be working well.
Ben Cole
Optimization Co-ordinator at Widerfunnel

ben
Level 2

Re: What is your testing documentation structure?

To chime in with my 2 cents here... I've always thought the keys in this area are:

1. Test history/documentation needs to be easily accessible to all stakeholders.
2. It needs to be extremely easy to use.
3. It needs to be easily searchable.
4. It needs to be highly consistent.
5. It should show as much as it tells.

To this end I set up an internal marketing experiments blog at our company that tags posts by test topic as well as by test results. The blog titles include Month/Yr time stamps. The posts themselves follow a predictable format and include large visuals of the test recipes and usually a results matrix.

Periodically I circulate an email to relevant parties announcing recent test activity and back-linking everything to the blog. The system gets pretty rave reviews from stakeholders, and it's now pretty rare to get a question about what we tested way back when and what the results were. Time commitment is about 20-30 minutes per test.
Amanda 09-22-14
 

Re: What is your testing documentation structure?

@alphanumerritt - I love this idea! It's a great way to easily share the tests and allow your internal team to visualize the experiments. I am looping our Strategic Optimization Consultants (@ryanlillis, @Hudson, and @wlittlewood) into this discussion as well, since I am sure they will enjoy it.

 

It would be amazing if you could share some of the stories you've posted on your internal blog with the Optiverse as well. We have a board dedicated to testing ideas and successes, and I know everyone would be excited to read about your experiences. :)

Optimizely
HazjierP 09-22-14
 

Re: What is your testing documentation structure?

Hi everyone,

I think this is a really great discussion to have and one that is critical to building a scalable testing organization.

We have discussed various versions of testing roadmaps (generally OneDrive, Google Docs, or Excel spreadsheets) with clients and have noticed a few key success factors:

  1. Keep it simple:
    The fewer the questions in the roadmap, the greater the likelihood that team members will actually use it. The biggest risk with testing roadmaps is that they become too ambitious and get abandoned.

  2. Make it accessible:
    Everyone on the team should have access to the roadmap and use it as their primary working document. If people are not using your testing roadmap as the main place to enter, schedule, and analyze experiments then it will often falter.

  3. Automate, automate, automate:
    You can cut down on fields by building in formulas. A simple example: instead of marking experiments as completed, I usually only ask for start and end dates, and then the model calculates the status of the experiment based on the current date (see the sketch below).
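
As a sketch of that third point, the status derivation boils down to a date comparison; in a spreadsheet it would be a nested IF formula, and in Python it might look like this (the status names are just an assumption):

    from datetime import date

    def experiment_status(start, end, today=None):
        """Derive experiment status from dates instead of maintaining it by hand."""
        today = today or date.today()
        if start is None or today < start:
            return "Planned"
        if end is None or today <= end:
            return "Running"
        return "Completed"

    print(experiment_status(date(2014, 9, 1), date(2014, 9, 14),
                            today=date(2014, 9, 22)))  # -> Completed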


With that said, I think there are a number of fields that any good Experiment Roadmap could include:

Designing Experiments:

  • Experiment Name: Use the same name as you would in the Optimizely Editor, to keep it simple to find experiments. Also, add a hyperlink to the Experiment Name cell which links to your Optimizely experiment. This way, you can click on any experiment name and jump right into results or editing it.
  • Experiment Description: Describe the variations and underlying reasoning.
  • Experiment Hypothesis: Describe why you are running the experiment. For example: We believe that a large portion of the user base is presently not converting because their underlying concerns regarding quality of product and service are not answered; therefore, by building a clearer value proposition and highlighting it at the top of the home page, we believe that more users will see the value of our products and services, agree with our underlying reasons, and convert.
  • Experiment Essentials: Targeted Pages, Domains, Devices, Segments
  • Effort and Impact: 1-5 values for the difficulty of implementing the test within the Optimizely Editor and the Impact you expect to generate

Analyzing Experiments:

  • Baseline %: Primary goal, and current baseline
  • Target %: Minimum desired outcome
  • Variations: # of variations
  • Start and End Date: Actual start and end date of experiment
  • Sample Size: What sample the experiment needs to reach significant results. 
  • Traffic Allocation: The traffic on the tested pages and the % you are allocating to the experiment.

Reviewing Experiments:

  • Result: % result of winning variation, or best performing losing variation
  • Implemented: Is the test live on the site
  • Hypothesis Review: How do the test results relate to your original hypothesis
  • Learnings: What have you learned from this experiment that you can apply to future content / experiments?
  • Surprises: Has any aspect of your experiment outcome surprised you? Are there any areas you want to research in the future?


These are a lot of fields. But briefly, I think any good experiment roadmap should succeed at the following:

  1. Have a good overview: you should know how the experiments in your roadmap relate to the ones in the editor
  2. Clearly describe hypotheses: testing without hypotheses rarely has impact
  3. Prioritize experiments: we prefer Effort versus Impact, but there are other equally valid approaches
  4. Track baselines and necessary sample sizes: this is absolutely critical. Many experiments will never reach statistical validity, and you can spot them before running the experiment. Suppose you are making a microscopic design change that, best case, lifts your results by 2% relative (so from 3% to 3.06%); with a sample size of 10,000 visitors per day, it would take almost 7 months to reach statistical significance (see the sketch below). Check your traffic, your baselines, and your sample sizes before running tests. Ask yourself if you are running the best version of your experiment or if you are merely testing for testing's sake.
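
To make that arithmetic concrete, here is a rough sketch that reproduces the "almost 7 months" figure, assuming a one-tailed two-proportion test at 95% confidence and 80% power with traffic split evenly across two variations:

    import math

    def days_to_significance(baseline, relative_lift, visitors_per_day,
                             alpha_z=1.645, power_z=0.84):
        """Days until a two-variation test reaches significance.
        alpha_z=1.645 is one-tailed 95% confidence; power_z=0.84 is 80% power."""
        p1 = baseline
        p2 = baseline * (1 + relative_lift)
        p_bar = (p1 + p2) / 2
        numerator = (alpha_z * math.sqrt(2 * p_bar * (1 - p_bar))
                     + power_z * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        n_per_variation = numerator / (p2 - p1) ** 2
        return math.ceil(2 * n_per_variation / visitors_per_day)

    # 3% baseline, 2% relative lift (3.00% -> 3.06%), 10,000 visitors/day:
    print(days_to_significance(0.03, 0.02, 10000))  # ~202 days, about 6.7 months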


Happy testing!

 

-Hazjier

Optimizely
ShaneHale 07-29-15
 

Re: Webinar Question for Shane Hale of DMV.org

Hello David... excellent question.


We rely heavily on Asana (https://asana.com/product), a web-based task management system, to manage our testing queue. This allows us to document each test hypothesis, go back and forth on design, run QA, and link to Optimizely experiments/results. It truly is a home base for the team. It looks something like this:

 

[Screenshot: the team's Asana testing queue]

 

As we build, develop, test, and report on an experiment, it moves up the ladder. Once complete, the task is closed out and the results are recorded in Jive, an internal social network we use to publish our findings to the entire company.

 

Does this help?

Amanda 10-09-15
 

Re: What is your testing documentation structure?

I still love this thread so much -- it's very valuable! As @charles and @HazjierP mentioned, the documentation process will be much easier if you can automate portions of the flow. I wanted to surface a recent conversation in the Community that discusses project management tools that integrate with Optimizely. What do you think of these tools? Are there any project management tools or internal systems that you've built or seen that automate the documentation process? Tell us here.

Optimizely