A/B Testing

A/B testing compares two or more versions of a webpage, an email campaign, or an aspect of a scenario to evaluate which performs best. By showing the different variants to your customers, you can determine which version is the most effective, backed by data. Exponea offers A/B testing in scenarios, email campaigns, web layers, and experiments. A customer is assigned a variant instantly when they reach an A/B split node in a scenario or when they match the conditions for displaying a web layer, experiment, or email campaign.

Using A/B testing in Exponea

There are two types of A/B tests: automatic winner/traffic distribution and custom distribution. You can either let Exponea determine the more effective variant, which is then automatically shown to most of the audience, or manually specify the probability with which each variant occurs.

Automatic winner distribution

📘

Automatic winner distribution is available in scenarios and email campaigns.

Automatic winner distribution tests your A/B test variants on just a small share of your target audience, determines which variant is more effective, and then automatically sends only the winning variant to the rest of the audience.

You can use this, for example, to easily test different subject lines or CTA buttons in your emails or in any other communication channel, including SMS, retargeting, and webhooks.

🚧

If the success rate of the two variants is equivalent, the remainder of the campaign will be sent in the original ratio.

The winning variant is chosen based on a metric and timeframe (usually hours) selected in the Winner determination. The metric can be conversion rate, click rate, open rate, or any custom metric (event). The winning variant is visible in the interface after the initial test.
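To make the mechanism concrete, here is a minimal sketch of how a winner could be determined after the test window. This is illustrative only, not Exponea's internal implementation; the variant names and numbers are made up, and the metric used is open rate.

```python
# Illustrative sketch of winner determination (not Exponea's actual code).
# After the test window, the variant with the highest value of the chosen
# metric (here, open rate) is sent to the rest of the audience.

test_results = {
    # variant name -> (opens, recipients in the test share)
    "Variant A": (180, 1000),
    "Variant B": (230, 1000),
}

def open_rate(successes, total):
    return successes / total if total else 0.0

winner = max(test_results, key=lambda v: open_rate(*test_results[v]))
print(f"Winning variant: {winner}")  # the remaining audience receives this variant
```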

Automatic traffic distribution

📘

Automatic traffic distribution is available in experiments and web layers.

Automatic traffic distribution tests your variants on a share of your users and recognizes which variant achieves the goal better. It then shows the preferable variant to most of the audience. However, it continues to test the other variant on a small share of users, and if that variant starts to perform better, the distribution is automatically re-evaluated and the other variant can become the preferable one.
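Exponea does not publish the exact algorithm behind automatic traffic distribution, but the behavior described above resembles an epsilon-greedy multi-armed bandit: most traffic goes to the currently best-performing variant, while a small share keeps testing the others. A minimal sketch of that idea, with made-up counts and an assumed 10% exploration share:

```python
import random

# Hypothetical running counts per variant (illustrative numbers).
stats = {
    "A": {"conversions": 40, "impressions": 800},
    "B": {"conversions": 55, "impressions": 800},
}

EXPLORE_SHARE = 0.1  # assumed small share of users kept for continued testing

def pick_variant():
    if random.random() < EXPLORE_SHARE:
        # Keep testing all variants on a small share of users.
        return random.choice(list(stats))
    # Otherwise show the variant with the best conversion rate so far.
    return max(stats, key=lambda v: stats[v]["conversions"] / stats[v]["impressions"])
```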

Custom traffic distribution

📘

Custom traffic distribution is available in scenarios, web layers, experiments, and email campaigns.

Custom traffic distribution allows you to manually specify what percentage of your total audience each variant will be shown to. To evaluate the success of your A/B test, you can either use the auto-evaluation dashboard or do it manually following the A/B test basic evaluation guide.
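Conceptually, custom traffic distribution is a weighted random assignment. A minimal sketch, assuming a hypothetical 70/20/10 split you might configure:

```python
import random

# Hypothetical split: variant name -> share of the audience.
distribution = {"Variant A": 0.7, "Variant B": 0.2, "Control": 0.1}

def assign_variant():
    variants = list(distribution)
    weights = list(distribution.values())
    # Each customer is assigned one variant with the configured probability.
    return random.choices(variants, weights=weights)[0]
```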

Calculating statistical significance

You can calculate the performance and statistical significance of your A/B test with our A/B Test Significance Calculator.
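If you prefer to check significance yourself, a standard two-sided two-proportion z-test gives a comparable result. A minimal sketch with illustrative numbers (not the calculator's internal code):

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test on conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative numbers only: 120/2000 vs. 150/2000 conversions.
z, p = two_proportion_z_test(120, 2000, 150, 2000)
print(f"z = {z:.2f}, p = {p:.3f}")  # significant at 95% confidence if p < 0.05
```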

Control group in web layers

By default, the control group does not show any variant. If any custom conditions are specified in the JS code, it is important to create a custom control group that applies the same conditions even though it does not show an actual web layer. Otherwise, the compared groups would not be homogeneous and the evaluation would be inaccurate.
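As a small illustration of why this matters, the sketch below uses a hypothetical `meets_conditions` check standing in for your custom JS conditions (not an Exponea API): both the variant group and the control group are drawn only from customers who pass the same conditions, so they stay comparable even though the control group shows nothing.

```python
import random

def meets_conditions(customer):
    # Hypothetical stand-in for the custom conditions in your JS code,
    # e.g. "cart value above 50 and not logged in".
    return customer.get("cart_value", 0) > 50 and not customer.get("logged_in", False)

def assign_group(customer):
    if not meets_conditions(customer):
        return None  # not part of the test at all
    # Both groups pass the same conditions, so the comparison stays fair;
    # the control group simply shows no web layer.
    return "variant" if random.random() < 0.9 else "control"
```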

Auto-evaluation dashboard

A/B tests also include an auto-evaluation dashboard. The report contains the events that fall within the evaluation window, and the uplifts compare each variant to the average performance of all variants in the sample.
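As a small worked example of the uplift figure, assuming uplift is the relative difference against the average of all variants (illustrative numbers only):

```python
# Illustrative conversion rates observed for each variant within the
# evaluation window.
rates = {"Variant A": 0.040, "Variant B": 0.050}

average = sum(rates.values()) / len(rates)             # 0.045
uplifts = {v: (r - average) / average for v, r in rates.items()}
print(uplifts)  # roughly -11.1% for Variant A and +11.1% for Variant B
```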

Technical notes about A/B tests

🚧

You will not be able to edit any variant that has already been visited by a customer.

  • When going through the same A/B split in the scenario, a customer is assigned the same variant as before.
  • In web layers, each customer's assigned variant is recorded in the “a/b test” event, and they will keep the same variant in subsequent visits.
  • A customer can be assigned to multiple variants if they visit the website from different browsers or devices. In that case, the system does not recognize them as the same customer until they are identified (for example, through a login), and can therefore assign a different variant on each visit.
  • For correct A/B test evaluation, all groups should be identical in conditions except for the variant.
  • Web layers offer automatic A/B testing, which automatically prefers the best-performing variant based on the chosen goal.

📘

A/B test event attributes

You can read about the a/b test event attributes in the System events article.

Example use cases for A/B testing

Time of sending

Identifying the best day and time to send campaigns can significantly boost your open rates; when tested, it can increase open rates by up to 20%. If you’re sending at 7 AM each morning, so are many other brands. What happens when you send at lunchtime, or after the working day? The ideal send time varies for each brand, but it is a very impactful element to test.

With Exponea, you can personalize email send times for every individual customer with just one click. You can optimize the predictions for open rates or click-through rates, according to your goals.

Ideal frequency

The number of messages you send to customers is also important. In Exponea, you can easily set up frequency management policies that determine how many messages a customer from a particular segment can receive, so you avoid spamming and overwhelming your customers. Test various frequency policies and see what works best for your brand. Remember, less is sometimes more!

Improving email open rates

The factors that have an impact on open rates are quite limited, so they are a good place to start. In essence, there are just four things to test with regard to open rates:

  • Subject lines
  • Pre-headers
  • Sender name
  • Time of sending

Subject lines:

A good subject line can increase opens by as much as 15% to 30%. The role of the subject line is crucial: it has to grab recipients’ attention in a very busy inbox where many other brands are all crying out “Open me.” For this reason, subject lines are not subtle. In addition to grabbing attention, the subject line needs to communicate the number one, stand-out reason why your customers should open this email.

Pre-headers:

Because most of us view emails on our mobile devices, the pre-header has become a very important part of email anatomy. In the inbox view, it sits under the subject line and acts almost like a secondary subject line. It can serve two distinct purposes.

1. It can boost the impact of the subject line with an additional supporting message.

Subject Line: “Footwear lovers, get ready”

Pre-header: “We’ve added over 100 gorgeous new shoes!”

2. It can add a secondary reason to open the email.

Subject Line: “Footwear lovers, get ready”

Pre-header: “Plus, check out these statement dresses!”

If you have more than one content piece in the email, the second option can capture the interest of more recipients (which will drive more opens).

Sender Name:

Slight amendments to the sender name (but not the email address) can be very significant. The sender name is normally your brand name; when tested, however, certain variants of the brand name can drive up open rates. Make sure you don’t confuse recipients, but variants of the sender name worth testing include:

“Brand X – New in”

“Brand X – Exclusives”

“Brand X – The Big Sale”

“Brand X – The Style Edit”

Improving content

Beyond optimizing open rates, a good testing plan will also move on to the content of the campaign. There are many more factors to test here, but it is possible to increase click activity by 25% once you learn how best to present your content. Some of the most impactful tests are:

  • Buttons (CTAs) versus no buttons
  • Links versus buttons
  • “Shop new” versus other CTA vocabulary
  • Price versus no price
  • Stylized banners versus product photography
  • Images with supporting text versus image only
  • More products versus fewer products
  • Recommendations versus no recommendations
  • Using abandoned cart items in your newsletter versus not using them
  • Browsed item category “New in” versus no browsed item category content
  • Clearance items versus no clearance items (people still love bargains!)

Improving conversion rates

Here are some ideas you can try to improve your conversion rates.

  • Landing on the product page versus landing on the category page (please note the actual product clicked needs to be at the top left of the category view page)
  • Adding delivery options on the product pages versus not adding delivery options on the product pages
  • Category pages sorted by new in
  • Category pages sorted by best sellers

A/B Testing Best Practices

  • Make sure you’re following best practices regarding email deliverability. There won’t be much to test if your emails don’t make it to the inbox.
  • Test one email element at a time. Testing subject lines AND the use of pre-headers at the same time will not tell you which test element had the most impact.
  • However, it is possible to test several concepts for a single test element. For example, with subject lines you can test several at once; the same goes for CTA language.
  • Be careful with winning subject lines, pre-headers and even sender names in the long run. Overuse of these tends to erode performance over time.
  • When setting up your A/B testing in Exponea, you need to set the functionality to focus on one metric only in order to identify the most successful campaign. For subject lines, that should be open rates. For campaign content ideas, that is most likely to be click-through rate, though it could also be conversion rate. For example, in one test of "price shown versus price not shown", showing the price generated fewer clicks but higher conversions. Testing results will vary for each brand, so test for your own results to ensure the best outcome.
  • Allow a window of time for the test recipients to open and engage with the email before deploying the most successful campaign to the hold-out group. The longer the window, the better the odds that the campaign you deploy is actually the best-performing one. Our recommendation is a 4-hour wait.
  • To gain confidence in testing and the benefits it can bring, start with simple tests before doing more ambitious ones. The tests for opens are a great place to start.
  • Plan your content tests in advance so your content team has time to produce the test variants you want to run.
