Upcoming webinar on how to get more from your Optimizely results. What do you want to learn?

by Optimizely, August 13, 2014 (edited October 1, 2014)

Analyzing and interpreting your A/B testing results can be a daunting task. We’ve recruited experts Hudson Arnold (Optimizely’s Optimization Strategist) and Arthur Suermondt (Optimizely’s Results Engineer) to host a webinar and help you turn your data into action. Specifically, they'll provide a product walkthrough of the new Results Page and demonstrate best practices for analyzing results to inform an iterative testing strategy. 

 

Your feedback will help ensure that the content provided is beneficial. What we'd like to know is:

 

  • What is your biggest struggle when analyzing your test results?
  • What results page feature would you like to cover in more detail?
  • What are you hoping to get out of this webinar? 

Check out the slides, presentation, and follow-up discussion here.

 

Let me know if you have any questions! 



Comments
by
August 13, 2014

Looks like a great idea for a webinar.

 

I think one of the things I would look to put across in the webinar is the difference between a test that is 'winning' and a test that has 'won'.

 

I know the Optimizely results page highlights a variation, labels it as 'winning' (making it appear as though it can never be beaten), and offers the 'Launch' button. I would be worried that some people see that and assume they have a conclusive result just because a variation is 'winning'.

 

Maybe you could offer people some assistance on how they can validate whether a 'winning' test can be declared a 'winner': statistical significance, adequate sample size, etc.
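As a rough illustration of that 'winning' vs. 'won' distinction (this is only a textbook sketch, not how Optimizely's results engine actually computes significance), a two-proportion z-test checks whether a variation's lift over the original is statistically significant at a chosen confidence level:

```python
# Illustrative only: a classic two-proportion z-test for variation B vs. original A.
# Inputs are conversion counts and visitor counts; alpha=0.05 means 95% confidence.
from math import sqrt, erf

def z_test_significant(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Return (p_value, significant) comparing variation B against original A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_value, p_value < alpha

# Example: original converts 200/4000 visitors, variation converts 250/4000
p, sig = z_test_significant(200, 4000, 250, 4000)
```

A variation can show a lift ('winning') long before the p-value crosses the threshold; only once it does, with an adequate sample, would you call it a 'winner'.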

 

 

Level 11
by ben
August 14, 2014

I would like to hear more about sample size for determining a winning variation. 
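(A back-of-the-envelope way to think about this question, using the standard formula for comparing two proportions rather than anything Optimizely-specific: the smaller the lift you want to detect, the more visitors each variation needs.)

```python
# Illustrative only: approximate visitors needed per variation to detect a
# given relative lift over a baseline conversion rate, at roughly 95%
# confidence (z_alpha = 1.96) and 80% power (z_beta = 0.84).
from math import ceil

def sample_size_per_variation(baseline, min_lift, z_alpha=1.96, z_beta=0.84):
    """Visitors needed per variation to detect `min_lift` (relative) over `baseline`."""
    p1 = baseline
    p2 = baseline * (1 + min_lift)
    n = ((z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))) / (p2 - p1) ** 2
    return ceil(n)

# e.g. a 5% baseline conversion rate, hoping to detect a 20% relative lift
n = sample_size_per_variation(0.05, 0.20)
```

This comes out to several thousand visitors per variation, which is why short-lived campaign tests are hard to call with confidence.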

 

Also, I would like to hear about tests that have a short life span, like a redirect test on an email campaign blast that redirects to two different landing pages. Is there an automatic feature in Optimizely to switch over to the winning variation at 3 a.m.?

 

Also, I would like to hear about the effect of changing the timescale of test results and how this affects determining a winner.

ben
Level 2
by cfuda
August 21, 2014

Great webinar idea! :)

I definitely would like to know your typical steps in analyzing test results. It's always good to compare how I do these steps vs. how you would.

I would really appreciate if you showed some case studies of websites with lower volumes of traffic and not just large enterprise clients.

Also, I'd like to see some case studies of interpretations and applications of the results. As we know, sometimes tests don't turn out the way we expect them to. I'd like to see how people interpreted this unexpected data and applied it, either directly on the website or by facilitating the creation of a new test.

Thank you so much.

 

Level 1
by
August 21, 2014 (edited August 21, 2014)

Mostly, I'm looking forward to having the latest layout and features explained in detail. Our testing over the years has vacillated between periods of intense SO testing and periods of User Testing. This product has changed often during that time.

Although I love the progress being made, coming back from a small hiatus to a changed interface can be a little daunting.

 

As for things I'd like to cover: 

In the last year, I've made several usability suggestions/requests. Many have appeared in this latest iteration. 

One that still causes me to struggle is the color consistency of Original vs. Variation in the results charts. I find that from test to test, the two basic colors (orange and blue) seem to trade places, and I'm often interpreting the results as the opposite of their actual fail/win status.

 

I actually had the same issue with Google Analytics when I set up the segments to evaluate Optimizely test variations. Ultimately, I discovered that it was simply the order in which I added segments that determined the assigned color.

 

Is there a setting or method for determining how the variations will be assigned colors?

 

Level 7
by nabha
August 28, 2014

Like others, my questions are about sample size: when a test has really won (and how far away it might be from winning). Actually, those might be the points that you are already planning to cover!

 

"Iteration strategy" sounds like a good idea as well. I have a lot of potential tests to run but no clear sense of which to run first, whether I should test large changes on a small percentage of traffic first, etc.

Level 2
by Optimizely
August 28, 2014 (edited August 28, 2014)

Hi All, 

 

Thank you for your thoughtful input!

 

We're excited to share some best practices for getting better insights from your experiment results, and I'm happy to report that we'll be addressing most of the topics you've requested in this forum.

 

Some of the suggestions you've made that we'll address in the webinar:

 

  • "I'm looking forward to having the latest layout and features explained in detail"
  • "I definitely would like to know your typical steps in analyzing test results. It's always good to compare how I do these steps vs. how you would."
  • "I would like to hear about the effect of changing the timescale of test results and how this affects determining a winner."
  • "I think one of the things that I would look to put across in the webinar is the difference between a 'winning' test and a test that has 'won'."

We're going to use a sample data set as a case study, giving us an applied environment in which to explore how best to use the results page features (some of which are completely brand-new) and to draw more fundamental, more actionable insights for your testing program.

 

We appreciate your participation in the Optiverse community, and couldn't be more excited to incorporate your voice. 

 

Looking forward to connecting (Next week! Get it on your calendars!),

 

Hudson 

Strategic Consultant

Optimizely

Optimizely
by pribeiro
September 2, 2014

Hi, 

 

I work in a company with start-up DNA, which means that everything runs super fast here.

 

I would like to know how I can make the most of the results in terms of speed: interpret fast and share the essential results faster.

 

Looking forward to this webinar too,

 

Pedro Ribeiro

Level 1