How do you test the impact of a new website feature (on business metrics) which requires a user to opt into using it? I’m assuming that A/B testing isn’t an option here due to potential bias in results. Thanks!
It’s hard to say without knowing more about the feature, but could you have users opt in and then randomize among them: in one arm they get the new feature, and in the other they get nothing?
This is genius
This doesn’t solve the biggest problem: those who opt in could be enormously different from those who don’t.
Let’s ask Snapchat with that UI update 😬
Are you trying to estimate the effect of rolling this out as mandatory for all users (average treatment effect) or how users who opt in would respond (local average treatment effect)? If the former, you have to deal with the fact that people who opt in are fundamentally different from those who don’t (selection bias). You could do some kind of population adjustment with a propensity score, but you’re in trouble if selection is mostly driven by unobservables. If the latter, just take those who opt in and run the standard A/B test design on those users. This assumes there is no test/control interference between opt-in and non-opt-in users, like network effects.
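To make the population-adjustment idea concrete, here’s a minimal sketch of inverse-propensity weighting (IPW) on synthetic data. Everything here is a hypothetical illustration, not the poster’s setup: the segment mix (30% “engaged” users), opt-in propensities (0.8 vs 0.2), and per-segment lifts (+5 vs +1) are all assumed numbers, and the true propensities are treated as known rather than estimated.

```python
import random

random.seed(0)

# Synthetic population (all numbers assumed): "engaged" users (30% of the
# base) opt in 80% of the time and would see a +5 point lift; "casual"
# users opt in 20% of the time and would see a +1 point lift.
def simulate(n=100_000):
    rows = []
    for _ in range(n):
        engaged = random.random() < 0.3
        p_opt_in = 0.8 if engaged else 0.2   # propensity to opt in
        opted_in = random.random() < p_opt_in
        lift = (5.0 if engaged else 1.0) if opted_in else 0.0
        rows.append((p_opt_in, opted_in, lift))
    return rows

rows = simulate()
opted = [(p, l) for p, o, l in rows if o]

# Naive estimate: average lift among opt-in users.
# Selection-biased, because engaged users are over-represented among opt-ins.
naive = sum(l for _, l in opted) / len(opted)

# IPW estimate of the population-average lift if everyone were treated:
# weight each opt-in user by 1 / propensity so the reweighted sample
# matches the full population mix.
ipw = sum(l / p for p, l in opted) / len(rows)

print(f"naive opt-in average:    {naive:.2f}")
print(f"IPW population estimate: {ipw:.2f}")  # true value is 0.3*5 + 0.7*1 = 2.2
```

In practice you don’t know the propensities and would have to estimate them from observed covariates (e.g. with a logistic regression), which is exactly where the “unobservables” caveat above bites: if opt-in is driven by things you can’t measure, no reweighting fixes it.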
This is roughly right
@e12, @RandForest Any good sources to read around experiment design and proper evaluation?
Thanks for all the responses everyone! Great ideas! To add some more details, I’m trying to answer 2 questions:
1. How much lift does this feature cause? (Opt-in is voluntary.)
2. When rolled out site-wide, what net lift could we expect? (Opt-in still voluntary.)
Summarizing my understanding from the discussion above: for #2 we could just do a usual A/B test, and site-wide impact = % of users opted in × lift per opted-in user (both come from the test). For #1, however, I think a pre-post on opted-in users is better. And there are no network effects. Does that sound right?
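The arithmetic for #2 is just a back-of-envelope product. A sketch with hypothetical placeholder numbers (in practice both inputs come from the A/B test, as the poster says):

```python
# Site-wide impact = % of users opted in * lift per opted-in user.
# Both values below are assumed placeholders, not real measurements.
opt_in_rate = 0.25        # fraction of exposed users who opt in (assumed)
lift_per_opted_in = 0.08  # e.g. +8% conversion among opt-in users (assumed)

site_wide_lift = opt_in_rate * lift_per_opted_in
print(f"expected site-wide lift: {site_wide_lift:.1%}")  # 2.0%
```

Note this dilution is also why a tiny opt-in rate makes the overall A/B readout so noisy: a large per-user effect shrinks by the opt-in fraction before it shows up in the site-wide metric.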
Yeah, sounds good. If it ends up being too much effort, #1 doesn’t sound worth answering as long as the lift per opted-in user from #2 is high enough.
Do a normal A/B test, where B is just the opt-in experience. If like 1% of people opt in then you’re probably screwed, but if a decent amount do opt in, then you can get a directional sense of how things will trend after a full rollout. Besides, wouldn’t that be the experience if you do decide to launch it? Or at launch will there be no opt-in?
To be sure, if the final rollout will ALWAYS require opt-in, then your experiment is really straightforward.