A/B Testing

A/B testing compares two versions of a webpage, email campaign, or an aspect of a scenario to evaluate which performs better. By showing the different variants to your customers, you can determine which version is the most effective, backed by data. Exponea offers A/B testing in scenarios, web layers, and experiments. A customer is assigned a variant instantly when they reach an A/B split node in a scenario or when they match the conditions for displaying a web layer, experiment, or email campaign.

There are two approaches to A/B tests: automatic distribution and custom distribution. With automatic distribution, Exponea determines the more effective variant and automatically runs it for most of the audience; with custom distribution, you manually specify the probability with which each variant is shown.

Automatic winner distribution

📘

Automatic winner distribution is available in scenarios and email campaigns.

Automatic winner distribution tests your A/B test variants on a small share of your target audience, determines which variant is more effective, and then automatically sends only the winning variant to the rest of the audience.

You can use this, for example, to easily test different subject lines or CTA buttons in your emails, or in any other communication channel, including SMS, retargeting, and webhooks.

🚧

If the success rate of the two variants is equivalent, the remainder of the campaign will be sent in the original ratio.

The winning variant is chosen based on the metric and timeframe (usually hours) selected in the Winner determination settings. The metric can be conversion rate, click rate, open rate, or any custom metric (event). The winning variant is visible in the interface after the initial test.
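
To make the selection concrete, the following minimal TypeScript sketch picks a winner by conversion rate after the test window. It is an illustration only, not Exponea's implementation; the `VariantStats` shape and `pickWinner` helper are hypothetical.

```typescript
// Illustrative only: picks the winner of the initial test phase by a single metric.
interface VariantStats {
  name: string;
  sent: number;        // messages delivered during the test window
  conversions: number; // events counted for the chosen metric (e.g. conversions)
}

function pickWinner(variants: VariantStats[]): VariantStats | null {
  const rate = (v: VariantStats) => (v.sent > 0 ? v.conversions / v.sent : 0);
  const sorted = [...variants].sort((a, b) => rate(b) - rate(a));
  // If the top rates are equivalent, there is no winner and the remainder
  // of the campaign keeps the original ratio.
  if (sorted.length < 2 || rate(sorted[0]) === rate(sorted[1])) return null;
  return sorted[0];
}

// Example: variant B wins on conversion rate after the test window.
console.log(pickWinner([
  { name: "A", sent: 1000, conversions: 40 },
  { name: "B", sent: 1000, conversions: 55 },
]));
```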

Automatic traffic distribution

📘

Automatic traffic distribution is available in experiments and web layers.

Automatic traffic distribution tests your variants on a share of your users and recognizes which variant achieves the goal better. It then shows the preferable variant to most of the audience. However, it continues to test the other variant on a small share of users, and if that variant starts to perform better, the distribution is automatically re-evaluated and the other variant can become the preferred one.
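
Conceptually, this works like a bandit-style allocation: most traffic goes to the currently better variant while a small exploration share keeps testing the others. The sketch below illustrates the idea in TypeScript; the 10% exploration share and the `Arm` bookkeeping are assumptions, not Exponea's actual logic.

```typescript
// Illustrative bandit-style allocation sketch.
interface Arm {
  name: string;
  shows: number;
  goals: number; // how many times the chosen goal was reached
}

function chooseVariant(arms: Arm[], explorationShare = 0.1): Arm {
  const rate = (a: Arm) => (a.shows > 0 ? a.goals / a.shows : 0);
  const best = arms.reduce((a, b) => (rate(b) > rate(a) ? b : a));
  if (Math.random() < explorationShare) {
    // Keep testing the other variants on a small share of users so the
    // distribution can be re-evaluated if one starts performing better.
    const others = arms.filter(a => a !== best);
    return others[Math.floor(Math.random() * others.length)] ?? best;
  }
  return best;
}

const arms: Arm[] = [
  { name: "A", shows: 500, goals: 30 },
  { name: "B", shows: 500, goals: 45 },
];
console.log(chooseVariant(arms).name); // usually "B", occasionally "A" for exploration
```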

Custom traffic distribution

📘

Custom traffic distribution is available in scenarios, web layers, experiments, and email campaigns.

Custom traffic distribution allows you to manually specify what percentage of your total audience each variant will be shown to. To evaluate the success of your A/B test, you can either use the auto-evaluation dashboard or do it manually with the A/B test basic evaluation guide.
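
The sketch below shows how a fixed percentage split can be applied to incoming customers. The variant names and percentages are just an example; in practice the split is configured in the Exponea interface rather than in code.

```typescript
// Minimal sketch of a custom traffic split with hypothetical variant names and weights.
const distribution: Record<string, number> = { "Variant A": 45, "Variant B": 45, "Control": 10 };

function assignVariant(split: Record<string, number>): string {
  const total = Object.values(split).reduce((sum, weight) => sum + weight, 0);
  let roll = Math.random() * total;
  for (const [variant, weight] of Object.entries(split)) {
    roll -= weight;
    if (roll <= 0) return variant;
  }
  return Object.keys(split)[0]; // fallback for floating-point edge cases
}

console.log(assignVariant(distribution));
```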

Calculating statistical significance

You can calculate the performance and statistical significance of your A/B test with our A/B Test Significance Calculator.
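For reference, a significance check on two conversion rates typically boils down to a two-proportion z-test, as in the sketch below. This mirrors what such a calculator usually computes; it is not the calculator's actual code.

```typescript
// Two-proportion z-test sketch for comparing conversion rates of two variants.
function zTest(convA: number, nA: number, convB: number, nB: number): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const pooled = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  return (pB - pA) / se; // |z| > 1.96 is roughly significant at the 95% level
}

// Example: 40/1000 vs. 55/1000 conversions gives z β‰ˆ 1.58, not yet significant.
console.log(zTest(40, 1000, 55, 1000));
```
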

Control group in web layers

By default, the control group does not show any variant. If any custom conditions are specified in the JS code, it is important to create a custom control group with the same conditions, even though it does not show an actual web layer. Otherwise, the compared groups would not be homogeneous and the evaluation would be inaccurate.
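
The sketch below illustrates the idea: every group, including the control group, evaluates the same custom condition, but the control group renders nothing. The `isReturningVisitor` and `showWebLayer` helpers are hypothetical stand-ins for your own condition and rendering code, not Exponea APIs.

```typescript
// Hypothetical helpers standing in for your real condition and rendering code.
const isReturningVisitor = (): boolean => document.cookie.includes("returning=1");
const showWebLayer = (html: string): void => {
  const layer = document.createElement("div");
  layer.innerHTML = html;
  document.body.appendChild(layer);
};

function runVariant(variant: "A" | "B" | "control"): void {
  // Identical custom condition for every group, including the control group.
  if (!isReturningVisitor()) return;

  if (variant === "A") showWebLayer("<div>10% off today!</div>");
  if (variant === "B") showWebLayer("<div>Free shipping today!</div>");
  // "control": the condition was evaluated, but no web layer is rendered,
  // so this group stays directly comparable to A and B.
}
```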

Auto-evaluation dashboard

A/B tests also include an auto-evaluation dashboard. The report covers events that fall within the evaluation window, and the uplifts compare each variant against the average performance of all variants in the sample.

Technical notes about A/B tests

🚧

You will not be able to edit any variant that has already been visited by a customer.

  • When going through the same A/B split in a scenario, a customer is assigned the same variant as before (see the sketch after this list).
  • In web layers, each customer's assigned variant is recorded in the "a/b test" event, and they will see the same variant in subsequent visits.
  • A customer can be assigned to multiple variants if they visit the website from different browsers or devices. In that case, the system does not know it is the same customer until they are identified (for example, through a login) and can therefore assign a different variant on each visit.
  • For correct A/B test evaluation, all groups should have identical conditions except for the variant.
  • Web layers offer automatic A/B testing, which prefers the best-performing variant based on the chosen goal.
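
As an illustration of the "same customer, same variant" behavior noted above, a stable assignment can be derived by hashing a customer identifier together with the test identifier. This is only a sketch of the idea; Exponea records the actual assignment in the "a/b test" event, and the hashing below is not its real mechanism.

```typescript
// Illustrative only: deterministic variant assignment from customer ID + test ID.
function stableVariant(customerId: string, testId: string, variants: string[]): string {
  let hash = 0;
  for (const ch of customerId + ":" + testId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return variants[hash % variants.length];
}

// The same identified customer always gets the same variant for a given test...
console.log(stableVariant("customer-123", "banner-test", ["A", "B"]));
// ...but an anonymous cookie on another device hashes differently, which is why
// unidentified visitors can end up in different variants.
console.log(stableVariant("anonymous-cookie-789", "banner-test", ["A", "B"]));
```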

📘

A/B test event attributes

You can read about a/b test event attributes in the System events article.
