
Reviewing Optimizely experiments, process, and code approvals

cmore 09-02-15


Hi Optimizely users!

 

Many website and engineering teams have a strict process for making changes to websites. How do you handle your Optimizely code, given that it bypasses all of that existing process?

 

For example, here's how some website teams handle changes:

 

1) A new page or a change to an existing page is requested

2) File a ticket

3) Get a designer/developer assigned

4) Create wireframes/UX/mockups or a working HTML demo of the new page

5) Get copy/design approved

6) Developer creates final product

7) Owner does functional testing

8) QA does regression, functional and quality testing

9) Page change is launched

10) Ticket closed

 

Given that Optimizely injects JavaScript directly into a page, can change pretty much anything on that page, and doesn't have the checks and balances of the example process above, how do you ensure engineering and product teams are:

 

* Informed on upcoming tests on their platform

* Comfortable with the test itself

* Confident the code is not going to break the experience, even for edge cases

* Sure the code is of good enough quality for testing and doesn't have obvious flaws

 

Have you found a good process or tool to ensure the code we are injecting into websites isn't going to break or degrade the experience outside of what is being tested?

 

Who has ownership over whether a test is going to be run on a website?

 

Thanks!

Chris


Hudson 09-16-15
 

Re: Reviewing Optimizely experiments, process, and code approvals

Hi Chris, 

 

Thanks for reaching out to the Optiverse Community. 

 

You're exactly right to be curious about the proper governance to use when running an optimization program 'on top' of your 'normal' site environment. 

 

What's more, you started framing the answer yourself in your question!

(Socrates must have been onto something with his method...)

 

As you said,

 

"Given that Optimizely injects JavaScript directly into a page and can change pretty much anything on a page and doesn't have the checks/balances of the above example process, how to do ensure engineering and product teams are:

 

* Informed on upcoming tests on their platform

* Comfortable with the test itself

* Confident the code is not going to break the experience, even for edge cases

* Sure the code is of good enough quality for testing and doesn't have obvious flaws"

 

In short, as you were getting at, the answer is to:

  • Ensure the optimization program run through Optimizely has the right checks and balances: establish a process, with documentation and agreement from the relevant stakeholders, that mitigates risk while still allowing for the efficiency and flexibility that testing with Optimizely can provide. 
  • Bring those stakeholders (Product & Engineering, in your case it seems) into the fold, to help you better articulate what the right checks and balances are given the dynamics of optimization you mentioned ('can change pretty much anything on a page'). 

 

I think you even mentioned the right checks and balances to incorporate:

"...

5) Get copy/design approved

...

7) Owner does functional testing

8) QA does regression, functional and quality testing

..."

 

Further, by using rigorous test-scoping documentation and always socializing the tests you plan to run, you can keep quality high and mitigate organizational misunderstanding. 

 

RE: Who should own?

 

The answer varies, particularly by organization size and structure. Often there's a key team member with a cross-disciplinary skill set (marketing, analytics, UX, front-end development, etc.) who can own all stages of the testing program themselves - this is especially common in 'younger' testing orgs that have yet to devote whole teams to the process. 

 

Larger organizations have representatives from multiple teams all contributing ideas, with design and development executing tests, analytics reporting and declaring winners, and a program manager overseeing the process and providing input. 

 

The best way really depends - how is your org currently structured? 

 

Let us know how this response sounds - would you like more detail?

 

Cheers,

Hudson

 

Strategy Consultant 

Optimizely

cmore 09-16-15
 

Re: Reviewing Optimizely experiments, process, and code approvals

Thanks for the reply, Hudson - it's well aligned with what we are doing now.

 

Do you have any experience using specific tools and processes for code reviews? We have been copying the code out of Optimizely and pasting it into a gist (http://gist.github.com/), CC'ing specific web developers on the code, getting feedback, refactoring the code, and finally getting approval on it. Then we copy/paste the code from the gist back into Optimizely, do a final functional test, and kick it off.

 

I wish there were better integration between Optimizely and a code review/comments/history platform, or that the ability was built into Optimizely. Being able to have a conversation and change history on code or experiment settings would really help keep everything in context.
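
For anyone doing the same dance, the gist-creation step can at least be scripted against GitHub's gist API. Here's a rough sketch - it assumes a recent version of Node (with built-in fetch) and a personal access token with the gist scope in GITHUB_TOKEN, and the experiment name and file paths are made-up examples:

    // post-gist.js - push local Optimizely variation code into a
    // private gist so developers can review and comment on it.
    const fs = require('fs');

    async function postGist(experimentName, filePaths) {
      const files = {};
      for (const path of filePaths) {
        // The gist API takes a map of file name -> { content }.
        files[path.split('/').pop()] = {
          content: fs.readFileSync(path, 'utf8'),
        };
      }
      const res = await fetch('https://api.github.com/gists', {
        method: 'POST',
        headers: {
          'Authorization': 'token ' + process.env.GITHUB_TOKEN,
          'Accept': 'application/vnd.github+json',
          'User-Agent': 'optimizely-review-script',
        },
        body: JSON.stringify({
          description: 'Optimizely code review: ' + experimentName,
          public: false,
          files: files,
        }),
      });
      if (!res.ok) throw new Error('Gist creation failed: ' + res.status);
      const gist = await res.json();
      console.log('Review at: ' + gist.html_url);
    }

    // Hypothetical layout: one file per variation, plus shared CSS.
    postGist('homepage-hero-test', [
      'experiments/homepage-hero/variation-1.js',
      'experiments/homepage-hero/global.css',
    ]).catch(console.error);

Each edit to a gist creates a revision, which covers the change-history part; copying the approved code back into Optimizely is still manual, though.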

 

Thanks,

Chris

Pauline 09-16-15
 

Re: Reviewing Optimizely experiments, process, and code approvals

Here is how it works for us:

 

Implementation process

- Optimizely tests are part of our sprint planning and belong to each team's backlog.

- The same engineers who write "real" code write the Optimizely experiments

- Same QA process for Optimizely tests as for any other stories

- Start Optimizely tests with only a small % of traffic and monitor results (across different browsers and traffic sources) for a couple of days before switching to 100%

 

Communication with outside stakeholders

- The product manager is responsible for contacting the stakeholders directly before a test starts (more as an FYI than a request for permission)

- A wiki page lists all the tests that are running, so anyone who is not sure why the banner is green instead of blue can check whether it is a running test

- We send bi-weekly updates about new tests to a large distribution list to make sure everyone is aware of what is running

 

I agree it would be good to have a discussion forum within Optimizely, plus a history of edits/changes!

 

Hope it helps

Product Manager
MartijnSch 09-20-15
 

Re: Reviewing Optimizely experiments, process, and code approvals

Hey Chris,

Cool question - let's see if I can give some clarification on how we run this at The Next Web. The CRO team is responsible for the tests we run on TNW, so that also makes them the owner in case something breaks because of a test (this rarely happens). To make sure it doesn't, for very complex tests we ask our (front-end) developers to look over the code to see if anything could be coded better.

They'll also make sure that the variation code is using the right selectors and that those selectors are as specific as possible. In the past we've used more general selectors for our tests, but sometimes you don't realize that elements on other parts of the site match those selectors as well.
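
To make that concrete, here's a rough, made-up example of the kind of thing those reviews catch - it assumes jQuery is available in the variation code as $ (as it usually is in Optimizely's editor), and the selectors are hypothetical:

    // Too general: this would also restyle "Sign up" buttons in the
    // footer, in modals, and anywhere else the class is reused:
    //   $('.btn-primary').text('Start your free trial');

    // Better: scope to the one element under test, and guard so the
    // variation silently does nothing if the page structure changes.
    var $cta = $('#homepage-hero .btn-primary');
    if ($cta.length === 1) {
      $cta.text('Start your free trial');
    } else if (window.console) {
      // Log the miss so broken targeting gets noticed during QA.
      console.warn('Variation skipped: hero CTA not found');
    }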

As all the code we write is temporary anyway (max 8 days), we just don't want it to break; beyond that, it can be ugly if needed, as long as it won't slow down the site. If a variation proves to be the winner, we integrate it directly into the code base, at which point the code gets completely rewritten anyway.

Hope this gives some insights,

Martijn.
vkartha 09-21-15
 

Re: Reviewing Optimizely experiments, process, and code approvals

The overall approach seems consistent with what we have in our organization as well. We follow this approach:
1. Approve A/B test candidates with a broader audience
2. Add approved tests to a Kanban board
3. Set up the experiment in a QA environment
4. QA reviews and approves the test
5. Migrate to production

Since we don't have an optimization team per se, we have set up a cross-functional five-member A/B Kanban team:
1. Scrum Master - drives the Kanban process
2. Dev Lead - certifies the Optimizely code changes to ensure no negative impact on production or existing functionality
3. QA - dedicated QA to test all experiments
4. Product Manager - prioritizes and approves A/B test candidates before bringing them to the team, so this team only reviews approved tests
5. Optimizely expert - a developer who sets up tests in QA and production, and helps monitor results

The key for us was to ensure that we identified the right resources for each of the above roles - people who understand the underlying application. This helps us reduce unnecessary churn when testing medium-to-complex experiments.
juliofarfan 09-25-15
 

Re: Reviewing Optimizely experiments, process, and code approvals

This is the checklist we run on every test: 

  1. The A/B test developer configures the test. 
  2. The test is set up so it is visible only to people with the test cookie (see the snippet after this list).
  3. The test URL is shared with the people involved in the test (the URL looks like mydomain.com/?optimizely_xYOUREXPERIMENTID=1).
  4. The test is approved by the people involved.
  5. An email announcing the test's activation is sent to the people involved.
  6. The involved people test again, now on the live site.
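
For step 2, the test cookie is just a cookie that only our team sets, matched by a cookie-based audience condition on the experiment. A minimal sketch (the cookie name is our own convention, not anything Optimizely requires):

    // Run once in the browser console (or from a bookmarklet) on any
    // machine that should see draft tests; "qa_tests" is an arbitrary
    // name that the experiment's audience condition matches against.
    document.cookie = 'qa_tests=true; path=/; max-age=' + 60 * 60 * 24 * 30;

    // Anyone else can still preview a specific variation with the
    // query parameter from step 3:
    //   http://mydomain.com/?optimizely_xYOUREXPERIMENTID=1
    // where the value (1) selects which variation to force.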