Large Data Discrepancies, Revenue Tracking and Potential Bugs
Is anyone else having issues with Optimizely revenue tracking? I have two separate issues for two separate clients. First: I'm currently running an A/B test for an ecommerce client, and I'm seeing some large data discrepancies.
While Google Analytics and Optimizely record things differently, the metrics we track should move in the same direction: if my variation's conversion rate or avg. revenue per visitor increases in Optimizely, I should see the same directional change in Google Analytics.
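To make that concrete, here's the sanity check I'm doing, as a rough sketch. All the row figures are made up for illustration and just constructed to reproduce the roughly -10% vs. +1.5% split I describe below:

```typescript
// Sanity check: even if the absolute numbers differ between tools, the
// *sign* of the lift should agree. All figures below are illustrative.
interface VariationStats {
  visitors: number; // unique visitors bucketed into the variation
  revenue: number;  // total revenue attributed to the variation
}

// Relative lift of the variation over the original in revenue per visitor.
function rpvLift(original: VariationStats, variation: VariationStats): number {
  const rpvOriginal = original.revenue / original.visitors;
  const rpvVariation = variation.revenue / variation.visitors;
  return rpvVariation / rpvOriginal - 1;
}

const optimizelyLift = rpvLift(
  { visitors: 10000, revenue: 250000 },
  { visitors: 10000, revenue: 225000 },
);
const gaLift = rpvLift(
  { visitors: 8700, revenue: 185000 },
  { visitors: 8700, revenue: 187775 },
);
console.log(optimizelyLift.toFixed(3)); // -0.100
console.log(gaLift.toFixed(3));         //  0.015 -- opposite signs
```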
For this client, Optimizely is reporting that the variation's avg. revenue per visitor decreased 10% and its conversion rate decreased 6%. Google Analytics, on the other hand, is reporting a ~1.5% increase in avg. revenue per unique visitor and a slight uptick in conversion rate. In total, I'm seeing a revenue discrepancy of over ~$80k between Optimizely and Google Analytics (Optimizely is reporting 35% more revenue and 15% more unique visitors).

There is also an "unknown" mystery browser being included in my test results. It has recorded $1,500 in revenue for the original but nothing for the variation, with ~200 visits to each, even though I'm only targeting modern desktop browsers. Why is Optimizely including "unknown" browsers in test results when I've specified that the test should only run on certain browsers (Chrome, Firefox, IE9-11, Safari)? I've run dozens of A/B tests and I've never seen a discrepancy this large. The test is still running and has ~10,000 visitors in each variation.
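The only workaround I can think of for the "unknown" browser problem is to gate bucketing on the user agent myself, e.g. via a custom audience condition. A rough sketch of the kind of check I mean (the function name and regexes are mine, not anything from Optimizely, and UA sniffing is famously unreliable):

```typescript
// Sketch of a user-agent gate that could back a custom audience condition,
// keeping "unknown" browsers out of the test. The regexes are illustrative
// and not exhaustive.
function isTargetedDesktopBrowser(ua: string): boolean {
  const isChrome = /Chrome\/\d+/.test(ua) && !/Edge|OPR/.test(ua);
  const isFirefox = /Firefox\/\d+/.test(ua);
  const isIE9to11 = /MSIE (9|10)\./.test(ua) || /Trident\/.*rv:11\./.test(ua);
  const isSafari = /Safari\/\d+/.test(ua) && !/Chrome/.test(ua);
  const isMobile = /Mobi|Android|iPhone|iPad/i.test(ua);
  return (isChrome || isFirefox || isIE9to11 || isSafari) && !isMobile;
}

// In the browser, this would be evaluated against the live user agent:
console.log(isTargetedDesktopBrowser(navigator.userAgent));
```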
Second: while doing QA on a different client's ecommerce site to get them set up for the first time, I was creating fake transactions on a staging server. Every fourth or fifth transaction on the variation was not recording revenue in the Optimizely interface, even though I saw the tracking beacon fire each time in my debugger. There's no reason Optimizely shouldn't have captured these transactions in my test environment. I thought I was going crazy, so I reset the test data and ran through the process again, with the same result.
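For context, the revenue call on the confirmation page is the standard classic-API pattern, roughly like this (the event name and order variable are placeholders for what's actually on the page; note that revenue goes over in cents):

```typescript
// Roughly what the confirmation-page tracking call looks like, in the
// classic Optimizely API style. "purchase" and orderTotal are placeholders
// for the real event name and the order value the checkout page exposes.
const orderTotal = 49.99; // illustrative order value in dollars

const w = window as any;           // the Optimizely snippet defines window.optimizely
w.optimizely = w.optimizely || [];
w.optimizely.push(['trackEvent', 'purchase', {
  revenue: Math.round(orderTotal * 100), // Optimizely expects cents
}]);
```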
I'm very suspicious of the quality of the data I'm getting and wanted to see if anyone else has run into these issues or has found a way to reconcile the numbers between the two tools.
I've noticed that these discrepancies tend to show up when tests get large amounts of mobile traffic, so I'm wondering if there's an issue in that segment of users.
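If anyone wants to test that theory, the way I'd approach it is to reconcile the two tools segment by segment rather than in aggregate, something like this against CSV exports from both (a sketch; the field names are mine, not either tool's export schema):

```typescript
// Sketch: reconcile the two tools per segment (desktop vs. mobile, browser,
// etc.) instead of in aggregate. Rows would come from each tool's CSV
// export; the field names here are made up for illustration.
interface SegmentRow {
  segment: string;  // e.g. "desktop", "mobile", "tablet"
  visitors: number;
  revenue: number;
}

function rpvBySegment(rows: SegmentRow[]): Map<string, number> {
  return new Map(rows.map(r => [r.segment, r.revenue / r.visitors]));
}

// Flag segments where revenue per visitor disagrees by more than `tol`.
function flagDivergentSegments(
  optimizelyRows: SegmentRow[],
  gaRows: SegmentRow[],
  tol = 0.05,
): string[] {
  const opt = rpvBySegment(optimizelyRows);
  const ga = rpvBySegment(gaRows);
  return [...opt.keys()].filter(seg => {
    const gaRpv = ga.get(seg);
    return gaRpv !== undefined && Math.abs(opt.get(seg)! / gaRpv - 1) > tol;
  });
}
```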