Known Issue: Mobile results are over counting visitors
We just became aware of two issues related to the way Optimizely calculates results for iOS experiments.
- In some cases, iOS experiments are currently over-reporting the number of visitors in an experiment.
- In a very few cases, iOS experiments may be over-reporting the number of conversions for an experiment.
We are releasing an updated SDK to resolve this problem, and we will email you as soon as it’s available. In the meantime and in the interest of transparency, we have included below a detailed account of the issues and how we plan to prevent this from happening again. Please subscribe to this post to keep updated with the status of this issue.
Issue 1 - Over-reporting the number of visitors in iOS experiments
Optimizely should only count as visitors those users who have actually seen the change you've made in your experiment. As of Optimizely SDK version 1.0.58, released on Nov 17, 2014, Optimizely's iOS SDK treats all users who open your app as visitors to your active experiments. For example, if you're running an experiment on the third screen of your app but a user never reaches that screen, they will still be counted as a visitor in your experiment.
Issue 2 - Overcounting conversions with custom events
Custom events implemented with Optimizely's trackEvent method should only report a conversion if someone has actually seen the change that you made to your experiment. We found an issue where a custom event will still be reported as a conversion even if a user has not seen a change made in your experiment.
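As a rough sketch of the intended behavior, a custom conversion event should only fire once the user has actually reached the screen where the variation is shown. The trackEvent call below is the SDK method named above; the hasSeenVariation flag and CheckoutViewController are illustrative names we've made up for this example, not part of the SDK:

#import <UIKit/UIKit.h>
#import "Optimizely.h"

@interface CheckoutViewController : UIViewController
// Illustrative flag: has this user actually been exposed to the variation?
@property (nonatomic, assign) BOOL hasSeenVariation;
@end

@implementation CheckoutViewController

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    // The variation lives on this screen, so record exposure here.
    self.hasSeenVariation = YES;
}

- (IBAction)purchaseTapped:(id)sender {
    // Only report the conversion if the user saw the change.
    if (self.hasSeenVariation) {
        [Optimizely trackEvent:@"purchase_completed"];
    }
}

@end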
Impact of these two issues on your results
The net impact of these two issues is that Optimizely has been more conservative in declaring a winner or loser and in declaring a test statistically significant. Because these bugs dilute the measured conversion rates of all variations equally, they make it harder to detect a statistically significant difference. For example, if a variation truly converted 20 of 100 exposed visitors (20%) but 100 non-exposed users were also counted, the measured rate would drop to 10%, shrinking the apparent difference between variations. If an experiment has already shown statistically significant results, the winner or loser will not change. In some cases, however, Optimizely was overly conservative, and tests that did not declare a winner or loser may actually have had one.
Session and Retention Goals
Session and retention goals were introduced as a beta feature starting in Optimizely’s iOS SDK version 1.0.58. The declarations regarding a winner or loser are statistically invalid for session and retention goals, and we will be fixing them in our next release.
We are actively working on these two solutions:
- Investigating retroactively updating existing experiments that were affected
- Creating new APIs in the SDK that will make it crystal clear to developers exactly when a user is placed into an experiment
We sincerely apologize for these issues and inaccuracies. At Optimizely we take them very seriously, and we are working hard to repair and maintain the integrity of your results.
If you have any questions, comments, or concerns, please reply to this post or send us a private message. You can also email our Senior Product Manager of Mobile at suneet@optimizely.com.
Re: Known Issue: Mobile results are over counting visitors
- In some cases, iOS experiments are over-reporting the number of visitors in an experiment.
- In very few cases, iOS experiments may be over-reporting the number of conversions for an experiment.
The updated SDK will post a notification when a user is actually placed into an experiment, which you can observe like this:

[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(experimentDidGetViewed:) name:OptimizelyExperimentViewedNotification object:nil];
- Investigating retroactively updating existing experiments that were affected
- Creating new APIs in the SDK that will make it crystal clear to developers exactly when a user is placed into an experiment