@daniel Did you A/B test the A/B testing feature before rolling out?
@manuel i did not!!
@daniel was wondering about the way it presents the improvement from A to B. For instance, we had a change that is a 20% improvement, but it says it is "slightly better" and shows it as something like a 1.7% increase (i.e., 1.7 percentage points out of a possible 100% conversion, even though B is 20% better than its counterpart).
@thinktapwork oh I see. The difference is in absolute percent points right now, I might want to change that
@daniel I think so. I'm used to seeing it relative in other tools. But so far looks really nice and is much better than calculating manually.
Once you've got it all worked out would love to be able to add additional samples (if we're testing 4 scenarios, for instance).
@thinktapwork relative makes much more sense! I'll add that to the list, thanks!
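For anyone following along, the numbers in the thread are consistent with a conversion rate going from roughly 8.5% to 10.2% (the rates here are illustrative, not from the actual test). A quick sketch of the two ways to report that improvement:

```python
# Hypothetical conversion rates for the two variants
rate_a = 0.085  # variant A converts at 8.5%
rate_b = 0.102  # variant B converts at 10.2%

# Absolute difference in percentage points (what the UI shows today)
absolute_pp = (rate_b - rate_a) * 100

# Relative lift as a percent of A's rate (what other tools typically show)
relative_lift = (rate_b - rate_a) / rate_a * 100

print(f"{absolute_pp:.1f} percentage points")        # 1.7
print(f"{relative_lift:.1f}% relative improvement")  # 20.0
```

Same underlying data, but the relative figure matches the intuition that B is "20% better" than A.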
A/B testing is live right now, but we're labeling it beta: the UI is not as refined as we'd like it. Please give copious feedback!