What's your advice for web/mobile optimization newcomers?

by Optimizely, June 16, 2014 - edited October 8, 2015

What do you wish you knew when you were first getting started with web or mobile optimization? Imagine your friend is just beginning their journey with Optimizely. What would you tell them?

 

Update: we turned all the amazing advice from this thread into an ebook called the "Optimization Survival Guide." You can download it here.

Comments
by SpencerPad Level 2
June 16, 2014

What I would have liked to know when I began A/B testing is that smaller tests are easier to manage, easier to set up, and easier to draw conclusions from.

When I began, I ran some aggressive tests. We would run multi-page, multivariate tests, which seemed great in principle but inevitably led to difficulties. As a new tester, setting up a multivariate test that ran through a funnel proved difficult. Often a click goal or some other tracking we later realized we needed had not been implemented, and it was much harder to discern a result from the test data. Because the test had so many factors and so much depth, the data was not easy to draw a conclusion from.

What I learned is that it's easier to break tests down into the smallest possible pieces. They run faster with less traffic, the results are much clearer, and the implementation takes less time and leaves a smaller margin for error.

 

Don't make a test more complex unless there is a very specific need for it.

Simple tests can still be complex enough:

 

[Screenshot: optimizely screenshot.jpg]

by CROmetrics Level 2
June 16, 2014

If someone had told me the importance of copy and messaging, that would have been a really good way to get started with optimization and get some wins. Everyone seems to want to try red vs. green buttons because they read about it in a blog post somewhere, but it's really about trying to help site visitors solve their problems or find what they're looking for, and that is often accomplished with messaging, starting with the headline.

by jemblin Level 2
June 16, 2014

I think the key to a good optimization strategy is to first understand the business you are optimizing for, so if someone can run you through the brand, the competitive landscape, and how they position themselves before you start, that is really beneficial. I say this because it is something I wish I had known to include: one of the first sets of tests I recommended went against everything the brand wanted from their website, even though we felt we could increase conversions and revenue by implementing them!

by
June 17, 2014

Talk to your visitors


In the world of CRO, everyone has their own opinions about what to test, what works, and what does not. Management usually have a vision for their website that they think will help them hit all those targets scribbled on the whiteboards of their meeting rooms. In reality, however, we are all wrong. The people who truly know what needs to be improved on your website are your real-world visitors, the ones looking to achieve THEIR goals on your website, not the goals laid down by management.

By engaging with your visitors you can gain insightful feedback about their pain points, whether that's issues with the website, a lack of understanding about what you offer, or questions left unanswered in your copy.

Tools such as Qualaroo let you start obtaining this feedback in a matter of minutes, allowing you to prioritise your testing and begin generating hypotheses that solve real customer problems rather than perceived problems discussed in boardrooms.

Always start your CRO efforts by working out how you can make your website better for your visitors, not for your boss. This will undoubtedly keep the boss happier further down the line, when more visitors are achieving their goals on the website and generating more revenue!

A good CRO will put the visitor first!

 

[Screenshot: qualaroo-screenshot.png]

by ptalari3
June 17, 2014
An area you can improve is the test management side of the site. We would love the ability to have someone on our team manage and report on the tests while others simply view the reports.
by zemaniac
June 17, 2014 - edited June 17, 2014

I think a concept that an A/B testing newbie must understand (and internalize) is the "Chance to beat baseline". Without that, they'd better do something else and save their time.

It is very important, especially if statistics is not exactly your field of expertise. Focus on the error interval more than on the result itself: that's what every kind of science is about. A/B testing is not coin-flipping.

This friend of mine needs to understand how to read the results before even starting to think about the experiment. We gather data to analyze it, not just to show it off. Wait for your "Chance to beat baseline" to stabilize, and never stop an experiment the moment you hit significance at the 5% level.
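
To make that concrete: the thread never spells out how Optimizely derives its "Chance to beat baseline" number, but one common way to estimate that kind of probability is Bayesian: sample each arm's conversion rate from its posterior and count how often the variation wins. A minimal sketch, with made-up counts:

```python
# A minimal sketch, with made-up counts, of one common Bayesian way to
# estimate a "chance to beat baseline". (How Optimizely computes its own
# number isn't specified in this thread; this is just the general idea.)
import numpy as np

rng = np.random.default_rng(42)

def chance_to_beat_baseline(conv_a, n_a, conv_b, n_b, draws=100_000):
    """P(variation rate > baseline rate) under uniform Beta(1, 1) priors."""
    rate_a = rng.beta(1 + conv_a, 1 + n_a - conv_a, draws)
    rate_b = rng.beta(1 + conv_b, 1 + n_b - conv_b, draws)
    return (rate_b > rate_a).mean()

# Hypothetical: 120/2400 conversions on the baseline, 141/2400 on the variation.
print(chance_to_beat_baseline(120, 2400, 141, 2400))  # ~0.9 - promising, not proof
```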

 

[Screenshot: baseline.PNG]

by MrBennyBees
June 17, 2014

Here's a pretty simple piece of advice, albeit a bit obvious (though often overlooked!).

Understand what you're measuring, why you're measuring it, and how to interpret the results correctly. A/B testing software is quite powerful and fascinating to use, but the "cool" factor of making changes and testing variations wears off when you don't have actionable goals and measurement tools in place to gauge performance.

 

Also, knowing your own traffic is important for interpreting the results. Does your page get 10 hits a day? 100? 1,000? This kind of scale matters significantly: if you don't have many visitors, you may have to run tests for a longer period of time. If you have thousands of visitors per day, proceed with caution, as your tests can have significant effects (positive AND negative) on your revenue, browsing experience, and overall performance. I would suggest getting your feet wet with a small, low-risk test, then graduating to a "big picture perspective" and making attempts to really move the needle on e-book downloads, product sales and revenue, or whatever you're looking to increase.

by timgregory
June 17, 2014

My number 1 tip... get the organisation to buy into a culture of formal testing and experimentation.

Everything else you do will be that much easier.

by Rhys
June 17, 2014

My tip - which I only started following about a month in - would be, assuming you are an ecommerce site, to get a good handle on your page conversion metrics and set up a sample size calculator spreadsheet (use Evan Miller's website for the calculations) so you know how long you need to run tests and how you are going to call your winners.

I did this by running A/A tests on our main funnel pages for several weeks (alongside some early tests) so we could capture good conversion stats for the main transactions we would use to decide winners. I then recorded these page by page, with the corresponding baseline conversion percentage and the sample size needed to call a winner for a given relative lift in conversion, using the Minimum Detectable Effect (again, check Evan Miller's pages for the numbers).

This has given us a good basis for a sound testing methodology that we can defend when asked to justify our winning/losing tests. This approach also allowed us to create a calendar of tests, so we know reasonably well when tests will start and finish, which makes planning and scheduling much easier.
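
For anyone who wants that spreadsheet logic in code form, here is a rough Python stand-in for the calculation Rhys describes, using the standard two-proportion sample-size formula (the same inputs Evan Miller's calculator asks for). The baseline, MDE, and power numbers below are illustrative assumptions, not figures from the post:

```python
# Rough stand-in for the sample-size spreadsheet described above, using the
# standard two-proportion formula. Baseline, MDE, alpha, and power values
# are illustrative assumptions.
from scipy.stats import norm

def sample_size_per_branch(baseline, rel_mde, alpha=0.05, power=0.80):
    """Visitors needed in EACH branch to detect a relative lift of rel_mde."""
    p1 = baseline
    p2 = baseline * (1 + rel_mde)            # conversion rate if the MDE is real
    z_a = norm.ppf(1 - alpha / 2)            # two-sided significance threshold
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    n = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
         + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2 / (p2 - p1) ** 2
    return int(n) + 1

# e.g. a page converting at 5%, looking for a 10% relative lift:
print(sample_size_per_branch(0.05, 0.10))  # roughly 31,000 visitors per branch
```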

 

Good luck with your testing.

 

by dfoley
June 17, 2014

Always run your experiment for a full week, even if you see a winner sooner. You need a full week's worth of data to really show correct results.

by Kiosk Level 1
June 17, 2014

Make sure to set up experiments with multiple goals. You need more than just engagement to understand the changes in user behavior from your test.

[Screenshot: Screen Shot 2014-06-17 at 9.18.40 AM.png]

by kbiglione
June 17, 2014
Be patient! Especially for a new site with relatively low traffic. Avoid the temptation to declare winners early. It can take a while to get to a statistically significant result.
by gbesen
June 17, 2014
Don't just A/B test; run an A/A/B/N test. Meaning: always include a variation that is identical to the control, to catch false positives and false negatives. This is easier than calculating a sample size for each goal, since goals vary a lot in conversion rate. When you have a lot of goals to track, you need to be sure you aren't getting false positives or negatives on each of them. Don't be fooled by the confidence level indicator.
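
A quick simulation makes the false-positive point vivid: run many A/A "tests" (two identical variations) through a naive 95%-confidence z-test and a winner gets declared anyway about one time in twenty. Traffic and conversion numbers here are assumptions for illustration:

```python
# Simulating A/A tests: both arms share the same true conversion rate, yet a
# naive 95%-confidence z-test still "finds" a winner ~5% of the time.
# Traffic and rate numbers are assumptions for illustration.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
p, n, trials = 0.05, 10_000, 5_000        # same true rate in both arms

false_positives = 0
for _ in range(trials):
    a = rng.binomial(n, p)                # conversions in arm A
    b = rng.binomial(n, p)                # conversions in identical arm B
    pooled = (a + b) / (2 * n)
    se = (2 * pooled * (1 - pooled) / n) ** 0.5
    if abs(a - b) / n / se > norm.ppf(0.975):
        false_positives += 1

print(false_positives / trials)           # ~0.05: 1 in 20 A/A tests "wins"
```
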
by eamonhoolihan Level 2
June 17, 2014

Do your homework before deciding what you'll test next. Don't just rely on gut instinct, although this is a valid way to add to your pool of test ideas. Use analytics, user testing, heatmaps and survey tools to understand your visitors and wisely prioritise tests; otherwise you'll not see enough winning variations. A high ratio of winning variations improves ROI from your testing programme and keeps clients motivated.

by crmoptimizely
June 17, 2014

Set up as many goals as you need - don't be shy. :) Also, if the same link/button appears in two places on the same page, set up a different goal for each, so you can see which button is driving what.

[Screenshot: Screen Shot 2014-06-17 at 12.23.49 PM.png]

by Adomatica
June 17, 2014

Start with small tests. Some will be winners; some will be losers. The important part is to just get started and learn the tool(s). Expose as much of your organization to testing as you can - get more buy-in by running tests that others suggest. Before starting a test, think about what you will do with the result. Actionable tests trump interesting tests.

by mikechild
June 17, 2014

My beginner tip is to create experiments that test a written hypothesis. Your hypothesis should be able to answer the question "Why did we run this experiment?" Even if your hypothesis turns out to be wrong, you can and should learn from it.

by Optimizely
June 17, 2014

All of these tips are so helpful! In fact, I'd love for some folks to elaborate further on their advice so our friend can hit the ground running with her first A/B test.

 

@CROmetrics - Your comment about copy/messaging being crucial for A/B testing is really interesting. Can you highlight a specific win or example of a project that you worked on that had impactful results? I’d love to hear more about a piece of copy that worked well vs. a piece that did not. 

@timgregory  - You’re right! Getting the organization to buy into the testing culture is huge! Do you have any tips or stories that you can share about how you did this within your organization?

@crmoptimizely  and @Kiosk  - Goals are definitely important and necessary for gathering actionable insight. Can you provide some additional information on how you choose the right goals for your test?

by SlamMan
June 17, 2014 - edited June 17, 2014

Test everything you can. You'll be surprised at what gives you a lift and what actually hurts your conversions when you thought it would help. I can't tell you how many times I've been disappointed by a test that I thought was going to have a huge impact and surprised by a small test. In my ecommerce experience, eliminating distraction is the key.

 

[Screenshot: Screen Shot 2014-06-17 at 3.29.00 PM.png]

by Optimizely
June 17, 2014 - edited June 17, 2014

Awesome, @SlamMan! Can you tell us what you removed specifically to eliminate distraction? Did you remove banners within the checkout flow or simplify the product descriptions?

by SlamMan
June 17, 2014

Hi @Amanda. On that test I was inspired by Amazon's final checkout page, which has little to nothing on it to distract you from completing checkout. On the winning version I eliminated the menu bar, the search box, and a couple of other links from the header, and I completely removed the footer from the bottom. As you can see, the conversion lift was fairly significant.

by crmoptimizely
June 17, 2014

Amanda - goals are basically anything that is clickable on the page. In our case, we may have one or two buttons/links that we want the user to click (the call to action). However, there are also many other links that users can click, which shows that users are engaged.

Basically, have primary goals (e.g. the call to action) and also goals for engagement, which may give you insight into what exactly users are interested in before they click the call to action.

by CROmetrics Level 2
June 17, 2014

@Amanda The one that comes to mind is an old client, Schedulicity. At the time, they were trying to attract more service professionals to their platform, and the headline read "Finally, a Better Way to Schedule!". That was not all that compelling, so after trying a number of different ideas we settled on "End Scheduling Hassles", which produced a pretty significant conversion rate increase, because the hassles are the real pain point for a hair stylist, and Schedulicity's solution addressed them directly.

by Khattaab
June 17, 2014

An effective testing program takes a 360-degree view of your user touch points; consider both quantitative and qualitative data. The quantitative inputs from your analytics dashboard will tell you where to test, but offer limited context about why users are behaving as they are. Qualitative inputs such as heat maps, surveys, user testing, product roadmaps, and competitive analysis bring the voice of the customer to life.

by vincentbarr
June 17, 2014

Get started. 

by siherron Level 2
June 18, 2014

Not so much a piece of advice, more like a few tips. :)

 

  • Question! Tell me what you think about me.

A/B testing is easiest when you have a question you want answered. A good question, when answered, will likely spawn other questions, and that's what you want: iteration on a test. Iteration provides more value than one-off tactical experiments.

Where do you get your questions from, though? Start by looking at your analytics: look at the highest-trafficked pages, the highest exit pages, bounces, and then split these down by traffic source. If that makes you nervous, just ask your visitors. Several folks in the comments above have mentioned tools like Qualaroo, but there are stacks of free survey tools, and it's pretty straightforward to design a timed 'pop-in' or lightbox in Optimizely and only show it to a sub-segment of your audience... the tools are right in front of you. :)

 

  • Go big or go home.

It's definitely not always the case, but if it's your first time testing, you need to know that the bigger and more grandiose the changes you make to your site/page, the more likely you are to come out with some kind of learning or insight about your audience.

 

  • Failure is not an option...

Er, actually it is. Even with statistically tied results, a test designer's nemesis, you're learning something, right? And let's be honest: no matter how much of an "expert" you think you are, it's unlikely you're going to get it right 100% of the time. Test, learn, rinse, repeat.

 

...holy crap, I nearly forgot!!!!

 

Measurement. This rides side by side with my first point. Have clear goals and measure. Aim at solid conversion goals that matter to whatever it is you're doing on your website... sounds like a given; it's not. There are tons of well-known brands out there who don't know why they have a website.

 

Obligatory random picture so that I can get an additional entry... BOOM!

 

[Image: Elephants and Rainbows: Sentiment Analysis]

 

by timgregory
June 18, 2014

@AmandaS We're still in the very early days of our transition towards an optimization culture, but what I did was run about 5 quick tests in the first 2 weeks of getting Optimizely set up, and then gave a presentation to all the technical and product teams showing what worked, what didn't, what insights were gained, and some ideas for future tests leading on from the initial ones.

It got a lot of people excited about the possibilities, and I overheard one of the developers saying to another, "You see, I told you that the fixed bar was a good idea!" So it was clear that the team understood they could test things in future, rather than simply trying to convince others that they knew the best way to do something.

 

by MeganBush Level 2
June 18, 2014

 

Expect to spend a lot of time analyzing the results to form new hypotheses and assumptions about your customers. A test win or loss can tell you a lot more than just "this feature works" or "this feature doesn't", and it is only worth the effort if you learn something new about your customers.

by keith_lovgren Level 2
June 18, 2014

When you’re new to conversion rate testing the best piece of advice I can give you is to have patience – with the process, the results, the platform you’re using, but most of all, patience with yourself.

 

The pressure will seem enormous at times.  Most likely though, despite what it may seem, it’s you who is creating most of this stress.

 

No doubt you’ve heard and read stories about fantastic gains that have come about from conversion rate optimization. Many of them are true.

 

What you probably haven’t read about, are the CRO tests that caused conversion rates to drop.  Tests that despite strong research, hypothesis, and implementation cause conversion rates to plummet.

 

This happens.

 

At Opticon, the Optimizely conversion conference, the very respected Hiten Shah told the audience that, in his experience, only one in five tests produces meaningful gains.

 

As long as you have meaningful takeaways after the test is complete you’ll make your business stronger.

 

So stay with it and have patience. Your business/career/client will be better for it.

 

Different forms of conversion rate testing have been going on for hundreds of years.

 

From open markets, where peddlers would place colorful fruit closest to the pathway and turnips in the back because they found the fruit would cause pedestrians to stop and browse.

 

To department and grocery stores that change layout and product placement on a weekly basis to generate as much revenue per hour as possible.

 

But in the online world, conversion rate optimization is still in its infancy. Take a look at this snippet from Brian Massey's excellent blog:

 

 “I did a quick survey of sites selling plastic surgery and cosmetic surgery who are spending at least $500 per month on search advertising.

 

Of 2,958 domains, only 33 had some form of split testing software installed, such as Optimizely. That's just 1.1% of these domains. Furthermore, we know that some portion of those testing are not actually using the software they have installed."

 

So stay with it, my friend. You're on an exciting journey.

 

The commenters in this thread have all offered you excellent advice. Take a moment and bookmark it for later.

 

For great offline reading, I recommend "You Should Test That" by Chris Goward. Don't worry, it's not stuffy or boring, but it does contain all the nuts and bolts of a good testing program. And it will get you excited.

 

[Image: you-should-test-that.jpg]

 

 

So have at it and happy testing!

by drieggs
June 18, 2014

Don't forget that all other variables must be controlled!

Try to make the test groups as random as possible. If you think about it, with good testing all results are good: even if it turns out that your customers don't like the new feature you are testing, you have still learned something very valuable and can use that in your next build cycle. So the worst thing you can do is gather testing information that isn't accurate. It can prevent a team from pivoting, or cause them to pivot too early.

One of the largest wastes of time in testing is figuring out down the line that one of your tests was actually skewed by erroneous variables. The team becomes discouraged, loses faith in the testing system, and potentially has to throw away a lot of work. Don't forget the basics of testing! I have a background in psychology from college, and I found reviewing the basics of what makes a valid and reliable test extremely helpful while devising how to do split and cohort testing for my product.

[Image: third-variable.jpg]
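
On the randomization point: split-testing tools typically keep assignment both random across users and sticky per user by hashing a stable visitor id together with the experiment name. The exact scheme any given tool uses (Optimizely included) is an implementation detail; this sketch just shows the idea:

```python
# Sketch of deterministic, sticky random assignment: hash a stable visitor id
# with the experiment name into a bucket. (The exact scheme real tools use is
# an implementation detail; this illustrates the principle.)
import hashlib

def assign_variation(user_id: str, experiment: str, variations: list[str]) -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variations[int(digest, 16) % len(variations)]  # uniform bucket

# The same visitor always lands in the same group, so returning users don't
# hop between variations and contaminate the results.
print(assign_variation("visitor-1138", "homepage-headline", ["control", "variant"]))
```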

Iterate the build-measure-learn feedback loop as frequently as possible!

You can alleviate the dangers of a poorly implemented test by testing frequently. Also, the more you test, the better you understand the whole process, and the more you will learn about what your customer wants. Because understanding what the customer wants is paramount in creating a product, you will bring more value to the project the more frequently you complete the build-measure-learn loop.

 

**A lot of extremely valuable information on testing can be gained from Eric Ries' "The Lean Startup." If you haven't read it already, I would highly recommend reading it.

by Optimizely
June 18, 2014 - edited June 18, 2014

If I could go back in time and schedule a meeting with myself as I planned my first web optimization experiment, I would say something like this:

 

START SIMPLE!

BUILD A FOUNDATION FOR LONG-TERM SUCCESS

 

For a little background, the first test I planned was a multivariate test (3x2, 8 variations) across a multi-page experiment (landing page, homepage, interstitial zip code input, coupon redemption page), with a scheduled iteration at the midpoint of a 16-week seasonal promotional campaign supported by hundreds of thousands of dollars of media in market. I used a multivariate regression analysis to determine which of the factors we tested was the most impactful.

 

Are you still with me???!?!

 

That makes two of us - please feel free to go tell that company's CMO how brilliant I am! 

 

That test was the first this company had ever run, and it was doomed to fail due to three crucial factors:

  1. It was scheduled as a project, not a process.
  2. The tight expected margins of the promotional campaign (the company was already on the fence about the ROI of this kind of campaign) meant we started our testing journey at the bottom of a tall cliff named 'ROI'.
  3. The complicated build and tightly defined timeline set an inappropriate expectation that testing is a slow, expensive process that takes months to effect change.

Each of them could have been addressed by starting simply, with a vision towards the long term. 

 

As my father would say,

"I Just Don't Want You to Make The Same Mistakes I Did!"

by joaocorreia
June 19, 2014 - edited June 19, 2014

Don't stop tests as soon as they are statistically significant; if you do, you are probably making a Type I or Type II error.

Type I - Saying there is an effect when there is not.
Type II - Saying there is no effect when there is.

 

Set your testing program up for success:

 

  • Step 1 - Define a primary goal: a click on a button, a view of a certain page, etc.

  • Step 2 - Benchmark your primary goal's CTR or CVR using Optimizely by running a test with no variation.

  • Step 3 - All your testing should revolve around the MDE, the Minimum Detectable Effect. The MDE is the effect you expect the variant you are about to test to produce.

  • Step 4 - Determine the required sample size using Evan Miller's calculator. Choose your statistical power and significance level (how sure you want to be) and input your baseline CTR/CVR and MDE.

    You now have the required sample size per branch. For 2 branches (control and variation), multiply the number by two. That's the number of visitors you need. If you stop the test before you reach the sample size, you have a higher chance of making a Type I or Type II error.

  • Step 5 - Ask yourself: do I really expect this MDE from this variation? Can I get this volume of traffic on the page I'm testing?

    YES: Go ahead and test. Stop the test only when you reach the required sample size. If you see you didn't reach the MDE but are close and results are consistent, re-calculate the sample size and extend your test. If you failed to reach the MDE, that's fine: the variant is not causing the expected effect - or it is, but the effect is too small to be detectable.

    NO: You don't have enough traffic for that MDE. Re-think the test and make a bigger change in the variation, one you would expect to cause a higher MDE. Re-calculate the required sample size. Do you have the traffic now? Iterate; no matter how much traffic you have, you can always test.

 

If you really want to be even more rigorous, run the same test 3 times. Were the results consistent? You've got yourself a win.
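
The warning against stopping at the first significant result is easy to verify with a simulation: "peek" at an A/A test after every batch of visitors and stop as soon as a z-test crosses 95% confidence, and the nominal 5% false-positive rate inflates several-fold. All numbers here are assumed for illustration:

```python
# Simulating the "peeking" problem: check an A/A test every 1,000 visitors per
# arm and stop at the first significant result. Twenty peeks inflate the 5%
# false-positive rate to roughly 20-25%. Numbers are assumed for illustration.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
p, step, checks, trials = 0.05, 1_000, 20, 2_000
z_crit = norm.ppf(0.975)

stopped_early = 0
for _ in range(trials):
    a = b = n = 0
    for _ in range(checks):
        a += rng.binomial(step, p)        # new conversions in each arm
        b += rng.binomial(step, p)
        n += step
        pooled = (a + b) / (2 * n)
        se = (2 * pooled * (1 - pooled) / n) ** 0.5
        if abs(a - b) / n / se > z_crit:  # "significant" - stop and celebrate
            stopped_early += 1
            break

print(stopped_early / trials)             # well above 0.05: peeking misleads
```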

by SarahV
June 20, 2014

It is important to start with small tests, with a limited number of versions and a strong hypothesis. The most success I have seen is with imagery and pricing. It is not so bad to get null results; it just means the customer did not care either way.

by marknewcomer Level 1
June 23, 2014

Start simple. There is often pressure to make tests too complex too fast. Prove you can do a simple test and create your process from there. Then, with a working process, you can begin to build out a testing roadmap that gets more complex and more ambitious over time (channels, messages, etc.).

 

 

by jbaldwin
June 24, 2014

Start by identifying the core purpose of your website. In my opinion, there are three types: 1. Informational (think Wikipedia); 2. Entertainment (think YouTube or pbskids.org); 3. Lead Generation/eCommerce (think Amazon or any SaaS company out there).

 

Next, start testing clusters of variables on your most trafficked pages. As you begin to effect change and learn which elements, manipulated in certain ways, have an effect on your visitors' browsing habits, whittle your tests down to single-factor tests so you can gain greater clarity into which specific elements have the greatest impact on encouraging the kind of behavior that affects your bottom line.

by timatblend
June 26, 2014

Start small. That's my advice. 

 

Change one CTA button. Maybe it says "Go".

 

Why not try "Download"?

 

Test it and you'll get your answer.

 

Personally, little tweaks like that have helped the companies our agency works for increase conversion rates significantly. It's not about making extravagant design and copy changes; it's the little things. And the best thing you can do is just start testing. Don't hesitate and overthink it, just do it.

 

Experiment. Fail. Learn. Repeat.  

by webdesigner007
June 26, 2014

Don't expect that every test will increase revenue or conversions.

Finding out what the users don't like is just as important as finding out what they like better.

 

If a variation starts to lose, don't panic and switch it off before it has had a chance to run its course and before you have enough data to make an informed decision.

by cro-nerd
June 26, 2014

 

Often, we tend to micromanage with several optimization tests and miss the basics of optimization strategy. What works well with Optimizely is how easy it makes setting up the experimentation process. So I see this tool as a workspace rather than a process-flow tool.

 

I would love to see the tool evolve into an optimization strategy tool - where one can document all the ideas, prioritize them based on ROI for the business, move them into the experimentation workspace, do historical or segment-wise comparisons of what works and what doesn't, and eventually monitor ROI across various tests.

 

by davidgarfield
June 26, 2014

Something interesting came up with our email marketer as we were split testing features on landing pages, and it's a good best practice to keep in mind when reading your results/stats within Optimizely.

If you are starting a new split test on a page that had a decent amount of traffic prior to the test (or you have a business website where repeat visits make up a large portion of your traffic), consider that it can take a week or so before your split test results are free of bias.

Why? Well, if you think about it: if a regular visitor has already seen a page, and on their return they see one of the two versions you've just started testing, a few things can skew the results. One example: they've already seen the content and may have already performed the call to action you are testing (yet that visit now counts as a negative in your totals, even though it shouldn't be counted at all).

There are other similar variances that can occur with return visitors being counted as new/unique visitors, which means that on the whole you should allow your statistics to normalize over a few weeks before determining a winner, so you have a sufficient mix of genuinely new visits versus return visitors that are counted as 'new' outside of Optimizely's awareness.

 

Hope that helps some fellow marketers out there. Also, I'll look darn cool in the office with those headphones, and everyone here will praise Optimizely for being so cool for giving them to me. ;)


by ShawnFD
June 26, 2014

I've learned a few things since I began testing a few months ago:

 

  1. Less is more.
    My first test tried different variations of a block of text on the homepage. I figured that since the homepage was heavily trafficked, the test would fly by. So I figured I'd try two different key words, bolded vs. unbolded, and exclamation point vs. period. Of course, that meant 8 variations (each set of key words unbolded with a period, each set bolded with a period, each set unbolded with an exclamation, etc.). I started the test and watched the results roll in, and I was happy. After a few days, I saw that the "Chance to beat original" wasn't getting very high very quickly for any of the variations, so I looked up a calculator to see about how long it would take. Turns out the test would take about two years to complete (see the back-of-envelope sketch after this list). OK... maybe I was a bit too optimistic. The truth is that most tests should be standard A/B tests, and you should simply iterate from there. Don't necessarily test a bunch of small changes on a page; test one version vs. a drastically different version, and then iterate from there. You'll get actionable results more quickly.
  2. Just because Optimizely says the test is done doesn't mean it's done.
    I did an A/B test on our homepage and let it run over the weekend. I checked in on Sunday, and yay! It already had a conclusive, green-lit result: my revised homepage had a 98% chance to beat the original, with a 60% lift, after two days of testing and 7,000 participants! When I got into work on Monday, however, it had dropped to a 93% chance to beat (no longer green-lit) and a 40% lift. OK, so I figured I should let it keep going. Well, it's still going today, and while the revised homepage is still winning, it's down to an 89% chance to beat and a 19% lift. The trick is that just because Optimizely gives you the green light, that doesn't mean the test is conclusive. If you have different conversion rates on different days of the week, it might be useful to run every test for a full week, regardless of whether Optimizely gives you its seal of approval.

    Here's my graph for that test:

    [Screenshot: HomepageRevision.PNG]
  3. Make sure you aren't mixing tests.
    By all means, run multiple tests at once to maximize your efficiency, but make sure you aren't running two tests on the same customer paths/funnels. If you're running a homepage test, for instance, that pretty much precludes any other tests. But if you're running a test on a payment page, you can run it for one product, and then run a different test on a different product's flow.
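
The back-of-envelope check that would have flagged that two-year test up front is a one-liner: visitors needed per variation (from a sample-size calculator), times the number of variations, divided by daily traffic on the page. All inputs below are hypothetical:

```python
# Hypothetical back-of-envelope test-duration check. Multiple comparisons
# across many variations make the true requirement even larger, so treat
# this as a lower bound.
def test_duration_days(n_per_variation: int, variations: int, daily_visitors: int) -> float:
    return n_per_variation * variations / daily_visitors

# 31,000 visitors per arm (from a sample-size calculator), 2,000 visitors/day:
print(test_duration_days(31_000, 8, 2_000))  # 124.0 days for 8 variations...
print(test_duration_days(31_000, 2, 2_000))  # ...31.0 for a plain A/B test
```
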
by brycehays
June 26, 2014

Always run your experiments to statistical significance (95%+ confidence) over the course of at least 1 week, to ensure results translate to production.

 

[Screenshot: Screen Shot 2014-06-26 at 1.09.38 PM.png]

by YT_CR
June 26, 2014

A few pointers I've learned over the years:

  • Make sure you don't test multiple factors at once - for example, changing a button's copy and its creative treatment - these should be tested independently.
  • If you get results that you didn't expect, dig around to see if there were any factors you did not account for (for example, another group within the company added a promo offer during the test).
  • Even if you get results that disprove your hypothesis, don't get discouraged; even a negative outcome is a learning.
  • Test, test, test before launching an experiment!
by JPVaughan
June 26, 2014

Don't conclude too early. Make sure you have statistical confidence by using a calculator, and double-check!

 

[Screenshot: Screenshot 2014-06-26 20.09.48.png]

by dmleong
June 26, 2014

I would have loved to have scoped out from the start why we were testing and what the real goal of testing was. If users sign up too quickly, they have a greater chance of becoming inactive users; but if they spend more time reading about our product, they are more likely to upgrade later on. A real conversion for us changed from "sign-ups" to "upgrades".

by Betabrand
June 26, 2014

Make sure to pay attention to your margins of error. If they overlap (or are even close), the statistical significance of your test is suspect, even if it shows a 99% "chance to beat baseline".

 

For instance, the 2nd variation below has a 99% chance to beat the baseline, but if you look at the margins of error after the Conversion Rate column, they overlap. That is, the 'winning' conversion rate minus its margin of error (57.63 - 0.47 = 57.16) is lower than the 'losing' conversion rate plus its margin of error (56.84 + 0.48 = 57.32):

 

[Screenshot: Screen Shot 2014-06-06 at 2.48.57 PM.png]

 

The test above was run for weeks and had thousands of visitors.

 

Now, margins of error can mean a variety of things depending on how they are defined, but a rule of thumb is that if they overlap (or, conservatively, even if there is less than a full margin of error between them), you may not have statistical significance in your test, regardless of a high "chance to beat baseline" or a variation marked as a winner.
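
That overlap check is easy to script using the numbers quoted above (how Optimizely derived those margins isn't specified in the post; this just takes each reported rate plus or minus its margin and tests for overlap):

```python
# Betabrand's rule of thumb, using the figures quoted in the post: take each
# reported conversion rate +/- its margin of error and check for overlap.
def intervals_overlap(rate_a, moe_a, rate_b, moe_b):
    lo = max(rate_a - moe_a, rate_b - moe_b)   # highest lower bound
    hi = min(rate_a + moe_a, rate_b + moe_b)   # lowest upper bound
    return lo <= hi

# 'Losing' variation: 56.84 +/- 0.48; 'winning' variation: 57.63 +/- 0.47.
print(intervals_overlap(56.84, 0.48, 57.63, 0.47))  # True: be suspicious
```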

by rockumizely
June 26, 2014

I wish I had known that changing the traffic throttles during a live test would mess up the data/reporting. We originally thought we could change the traffic throttles in any test and the data/reporting would not be affected. It turns out we need to stop the test, copy it, then start it again with the new traffic throttles.

 

Ideally it would be nice to allow for traffic throttle changes without affecting the data or starting a new test.

 

[Screenshot: optimizely traffic allocation.jpg]

by laurenwebjet
June 26, 2014

The best tip I can give is to check what your competitors are doing differently from you, then trial that. For example, if they have the same functionality but a different layout, trial that layout on your site.

 

Keep your eye on others who are doing well and see if your customers respond to similar sorts of colours, layouts and content (while obviously keeping within the look and feel of your brand).

 

Also, ensure tests run for at least a couple of weeks - we've seen tests that started off positive and even became statistically significant, but then reversed and ended up negative.

by chrissonn
June 26, 2014

Advice: Better idea generation for tests.

 

You might think that the Optimizely users in a business know enough about digital and their product to have all the ideas about what to test (and most of those ideas will be awesome, don't get me wrong).

 

But you would be missing out on many different perspectives:

  • Web developers will have a view of what they feel would work better in the online purchase path.
  • Product managers will have ideas focused on how to validate what will make their product better.
  • Sales people will have ideas around what features will sell the product better, or sell more.
  • Designers will have ideas on how layouts and colour can influence the experience.
  • Content writers will have ideas on what text should be put forward, above the fold, or highlighted.
  • Interns will have new ideas not biased by working for too long for the same business.
  • Senior management will want to test ways forward to influence their vision of where the business should go towards.
  • The list can go on, based on where you work and who you involve.

 

The main lesson from this is that you, as the Optimizely platform user/owner, don't need to come up with all the ideas; you can get them from all over your business, and turn it into a competition.

Gamify the experience, and all of the above people will be queuing up to give you ideas to improve your website and your business.

by ReneeDoegar
June 27, 2014

If I were speaking to someone just starting out with mobile-optimised testing, I would suggest two things:

 

1) Understand your visitor behaviour before you begin. How many visits do you usually get on your pages? What is the standard conversion rate on your page? It is far better to go into a test knowing roughly how long it should take to get the results you need, and what your untested control pages were doing before you started.

 

2) Learn the nuances of statistical significance. When I was very new to testing, I thought 20 conversions sounded like it beat 8 conversions. Optimizely is great at supplying the statistical likelihood of a winning test for you, but if you don't have its assessment to hand, or you are testing something outside that system, there is a helpful online A/B testing tool that I always have bookmarked. It also helps to send the link around to educate potentially confused colleagues (who don't understand why something that appears to be a winning test may not yet be one).

 

I use this one:

 

http://www.usereffect.com/split-test-calculator

[Screenshot: Screen shot 2014-06-27 at 10.10.19.png]
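
To put numbers on the "20 vs. 8 conversions" intuition, here is a two-proportion z-test sketch. The visitor counts are assumptions (the post only gives conversion counts), and they show how fragile a thin result is: shifting a single conversion between arms moves the call from "significant" to "not yet":

```python
# Two-proportion z-test for the "20 vs. 8 conversions" example. Visitor
# counts are assumed; the post only mentions conversion totals.
from scipy.stats import norm

def p_value(conv_b, n_b, conv_a, n_a):
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = abs(conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - norm.cdf(z))               # two-sided p-value

print(p_value(20, 500, 8, 500))  # ~0.02: nominally significant...
print(p_value(19, 500, 9, 500))  # ~0.06: ...one conversion flips the call
```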

 

As a final point, I think it's useful to join communities of other testers where you can find them, and learn from other people's successes.

by Christoph
June 27, 2014

Always wait for A/B test results to be statistically significant - results can change over time, and there is nothing worse than making the wrong decision just because you're too impatient. Optimizely has this very convenient feature built into the test results - just use it!

 

[Screenshot: TestOptimizely.jpg]
