Predictions templates

The predictions module provides the following pre-defined templates.

  • Purchase prediction
  • Churn prediction
  • In-session prediction
  • Optimal send time prediction

Purchase prediction

The purpose of Purchase prediction is to identify customers with the highest probability of making a purchase and to adjust the marketing budget across different segments accordingly, increasing overall marketing performance.

Steps to execute:

  • Create a prediction model
  • Evaluate the model’s performance
  • Create segmentation based on the model
  • Create an AB test to test out different approaches for each of the segments
  • Evaluate and run

Create a prediction model

To create the prediction, go to Analyses > Predictions > + New prediction > Purchase prediction.

This pre-defined template only requires that you specify the time frame for which you want to predict future purchases and the event that represents a purchase in your project. In the example screenshot below, we chose one month and the purchase event. This means that the algorithm will consider the last two months: the month before last to generate the features and the last month to generate the targets.
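
To make the feature/target split concrete, here is a minimal sketch of how such windows could be derived from the chosen time frame. This is illustrative only; the platform computes the windows internally, and the function name and 30-day approximation of a month are assumptions:

```python
from datetime import datetime, timedelta

def training_windows(now: datetime, horizon_days: int = 30):
    """Illustrative split of the recent past into the two windows the
    template uses: the earlier one for features, the later for targets."""
    target_start = now - timedelta(days=horizon_days)            # last month -> targets
    feature_start = target_start - timedelta(days=horizon_days)  # month before last -> features
    return (feature_start, target_start), (target_start, now)

feature_window, target_window = training_windows(datetime(2024, 3, 1))
print("feature window:", feature_window[0].date(), "->", feature_window[1].date())
print("target window: ", target_window[0].date(), "->", target_window[1].date())
```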

Once you are finished, click Save and then Start. This launches the calculation. You can track its progress in the Results tab.

The process usually takes between 20 minutes and a few hours, depending on the amount of data it needs to process in a given project.

Check the model’s performance

Before using a model, you always need to evaluate its performance. As explained in the Model quality evaluation section, you can base your evaluation on several different metrics. We suggest using AUC for this particular model. If the performance seems satisfactory, continue to the next step. If not, no strong predictions could be derived from the available training data; in that case, try a different time frame or a custom template where you can choose the features yourself.

We also recommend going through the resulting decision tree and checking the result of the individual nodes. This can give you not only business insights into the probabilities of different groups of customers taking certain actions but also a way of verifying your model. If a certain result seems improbable, you have probably fallen into the overfitting trap and need to adjust the model.

Create segmentation based on the model

The result of Purchase prediction is stored for each customer in a customer property named after the prediction: in this case, Purchase prediction [1 month]. The prediction is recalculated with the most up-to-date data every time its value is requested (in the same way as aggregates or segmentations). The result is a value between 0 and 1 which expresses the probability that the customer will make a purchase in the next month.

To narrow down the data points, create a segmentation that divides customers into a number of segments that you can use later. Your chosen segments will vary according to your needs. In the example below, we created the following segments:

  • High probability
  • Medium probability
  • Low probability

If the customers are distributed too unevenly among the segments to be useful for the campaign, adjust the segmentation so that the segments are more even. To do so, set at least 2 thresholds which divide the customers as in the picture above.

The best way to find thresholds that produce usable results is to create a simple report with the prediction in the rows and grouping set to none. The report has two metrics, both a count of customers, but one modified as ‘Column total %’ and the other as ‘Running total %’.

From the report table, we can identify the thresholds and, based on our preferences, choose either equally large segments or only a certain percentage of customers in the highest segment.
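
If you prefer to double-check the thresholds outside the report, the same calculation is easy to reproduce on exported scores. A minimal sketch, where `scores` stands in for one exported prediction value per customer (all names here are illustrative):

```python
import numpy as np

# Hypothetical input: one predicted purchase probability per customer.
scores = np.random.beta(2, 5, size=10_000)  # stand-in for exported prediction values

# Mirror the report: bucket the scores, then compute 'Column total %'
# and 'Running total %' per bucket.
counts, edges = np.histogram(scores, bins=20, range=(0.0, 1.0))
column_total_pct = counts / counts.sum() * 100
running_total_pct = column_total_pct.cumsum()

# Pick thresholds so that roughly a third of customers falls into each segment.
low_cut = edges[1:][running_total_pct >= 100 / 3][0]
mid_cut = edges[1:][running_total_pct >= 200 / 3][0]
print(f"Low probability:    score <  {low_cut:.2f}")
print(f"Medium probability: {low_cut:.2f} <= score < {mid_cut:.2f}")
print(f"High probability:   score >= {mid_cut:.2f}")
```

Running total % tells you what share of customers sits at or below each score bucket, which is exactly what you need to cut the population into evenly sized segments.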

After saving the segmentation we can continue to the next step.

Create an AB test

After the last step, you know which segment each customer belongs to, but you do not know the most effective approach for each of them. To find out, create an AB test, dividing customers into 4 groups:

  • Control group
  • Small spend
  • Medium spend
  • High spend

Every group will be connected to a specific retargeting audience using the retargeting node (its budget will be set in the ads manager). After every retargeting node, an add event node should be appended for evaluation purposes. For Purchase prediction, the event should consist of the following 4 properties:

  • Campaign_name - the name of the current campaign (should be unique)
  • Prediction_name - the name of the prediction that was used
  • Prediction_value - current predicted value for the customer
  • Variant - AB test variant

As a result, you will have 12 segments (3 prediction segments x 4 audiences). Next, you should evaluate what works best for each.
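
For illustration, the sketch below builds the add event node's payload for one customer, using the four properties above. The helper functions and the deterministic hash-based split are stand-ins for the scenario's own AB split node, not the platform's API:

```python
import hashlib
import json

def ab_variant(customer_id: str,
               variants=("Control group", "Small spend", "Medium spend", "High spend")) -> str:
    """Deterministically assign a customer to one of the AB test groups
    by hashing their id (a stand-in for the scenario's AB split node)."""
    bucket = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16) % len(variants)
    return variants[bucket]

def evaluation_event(customer_id: str, prediction_value: float) -> dict:
    """Build the event tracked after the retargeting node, carrying the
    four properties used later for evaluation."""
    return {
        "campaign_name": "spring_retargeting",              # should be unique per campaign
        "prediction_name": "Purchase prediction [1 month]",
        "prediction_value": prediction_value,               # current predicted value
        "variant": ab_variant(customer_id),
    }

print(json.dumps(evaluation_event("cust-42", 0.81), indent=2))
```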

Evaluate and run for all customers

After running the scenario for a number of days you can evaluate the following:

  1. Does the model have an impact on the effectiveness of the campaign? This should be done by comparing the performance of the prediction segments with the control group.
  2. Identify what works for each segment. You can do this by comparing the uplift in the performance of a particular segment and the amount of money that was spent to achieve it.

For example, from the screenshot below we can conclude that the model gave us insights into the behavior of different groups of customers and the amount that should be spent on them during the campaign. Firstly, few resources should be used for the High spend segment, as the amount spent does not seem to have a significant influence: the group is easy to activate from the outset. Secondly, most of the budget should be spent on the Medium spend segment, because increasing the amount seems to significantly increase the likelihood of a purchase. Lastly, the Low spend segment should not be invested in, because increased investment does not translate into more purchases.

To understand whether the difference between a particular variant and the control group is statistically significant, we recommend using our Bayesian calculator.
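
The comparison behind such a calculator is typically a Beta-Binomial model. A minimal sketch of the standard approach (not necessarily the calculator's exact implementation), estimating the probability that a variant converts better than the control group:

```python
import numpy as np

def prob_variant_beats_control(conv_c, n_c, conv_v, n_v, samples=100_000, seed=0):
    """Monte Carlo estimate of P(variant rate > control rate) under
    independent Beta(1, 1) priors on the two conversion rates."""
    rng = np.random.default_rng(seed)
    control = rng.beta(1 + conv_c, 1 + n_c - conv_c, samples)
    variant = rng.beta(1 + conv_v, 1 + n_v - conv_v, samples)
    return (variant > control).mean()

# Example: 120/4000 conversions in control vs. 165/4100 in a spend variant.
print(f"P(variant > control) = {prob_variant_beats_control(120, 4000, 165, 4100):.3f}")
```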

Based on the evaluation, the final adjustment of the campaign can be made so that real-time predictions will drive the optimal budget planning for retargeting.

Churn prediction

The purpose of the “Churn prediction” is to proactively identify customers who are likely to churn (stop using your product) and to win them over before they do so. Specifically, the prediction determines whether a customer who bought something during a defined previous period is likely to purchase something in the defined future period as well. If a customer is unlikely to make a purchase in the defined future timeframe, they are deemed to have a high probability of churning.

Steps to execute:

  • Create a prediction model
  • Evaluate the model’s performance
  • Create segmentation based on the model
  • Create an AB test to figure out what works best for every segment
  • Evaluate and run

Create a prediction model

To create the prediction, go to Analyses > Predictions > + New prediction > Churn prediction.

This pre-defined template only requires that you specify the time frame for which you want to predict future purchases and the event that represents a purchase in your project. In the example screenshot below, we chose one month and the purchase event. This means that the algorithm will consider the last two months: the month before last to generate the features and the last month to generate the targets.

Once you are finished, click Save and then Start. This launches the calculation. You can track its progress in the Results tab.

The process usually takes between 20 minutes and a few hours, depending on the amount of data it needs to process in a given project.

Check its performance

Before using a model, you always need to evaluate its performance. As explained in the Model quality evaluation section, you can base your evaluation on several different metrics. We suggest using AUC for this particular model. If the performance seems satisfactory, continue to the next step. If not, no strong predictions could be derived from the available training data; in that case, try a different time frame or a custom template where you can choose the features yourself.

We also recommend going through the resulting decision tree and checking the result of the individual nodes. This can give you not only business insights into the probabilities of different groups of customers taking certain actions but also a way of verifying your model. If a certain result seems improbable, you have probably fallen into the overfitting trap and need to adjust the model.

The result of Churn prediction is stored for each customer in a customer property named after the prediction: in this case, Churn prediction [1 month]. The prediction is recalculated with the most up-to-date data every time its value is requested (in the same way as aggregates or segmentations). The result is a value between 0 and 1 which expresses the probability that the customer will churn, i.e. not make a purchase in the next month.

Create an AB test

After the last step, you know the probability that a given customer will not make another purchase. The next step is to identify the customers whom you consider to have a high probability of churn. What counts as a high probability varies from prediction to prediction, depending on how the probability is distributed. Therefore, it is recommended that you check the distribution in a simple report with the prediction in the rows and the number of customers (usually count(customer)) as the metric.
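
Outside the report, the same distribution check can be reproduced on exported scores. A short sketch, where `churn_scores` stands in for one exported churn probability per customer (an illustrative name, not a platform field):

```python
import numpy as np

# Hypothetical input: exported churn probabilities, one per customer.
churn_scores = np.random.beta(2, 2, size=10_000)

# Inspect the upper tail of the distribution to choose a cutoff:
for pct in (50, 75, 90, 95):
    print(f"{pct}th percentile: {np.percentile(churn_scores, pct):.2f}")

# If 0.8 turns out to capture a workable audience, use it as the cutoff.
high_churn = churn_scores > 0.8
print(f"customers above 0.8: {high_churn.sum()} ({high_churn.mean():.1%})")
```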

Once you have chosen the users whom you consider very likely to churn, you need to AB test whether incentivizing them to make another purchase increases the likelihood of them actually doing so. To do this, take the probable churners and divide them into two groups:

  • Variant
  • Control group

You will try to incentivize the Variant group to make a purchase while the control group will not be targeted with any campaigns. This will allow you to compare whether your purchase-incentivizing campaign had any effect.

The example screenshot below shows how the scenario is likely to look once you are finished. In the example, the scenario only considers customers who were active (made a purchase) in the last 30 days and who have a very high churn probability (in this case, above 0.8). The AB test then splits the remaining customers into the Variant and Control groups. To evaluate the performance of the model, we also track a sample of customers (10k) with a lower churn probability so that we can compare their future churn probability.

As a result, we can create a segmentation with 3 segments that will help us evaluate the campaign’s effectiveness.
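
Expressed as plain code, the scenario's routing logic could look like the following sketch; the field names, the 50/50 split, and the simplification that all lower-risk customers are sampled are illustrative assumptions:

```python
from datetime import datetime, timedelta
import random

def route_customer(customer: dict, now: datetime) -> str:
    """Mirror the example scenario: recent purchasers with churn
    probability above 0.8 enter the AB test; lower-risk customers
    feed the evaluation sample (in practice, only ~10k of them)."""
    recently_active = customer["last_purchase"] >= now - timedelta(days=30)
    if not recently_active:
        return "ignored"
    if customer["churn_probability"] > 0.8:
        return random.choice(["Variant", "Control group"])  # 50/50 AB split
    return "evaluation sample"

now = datetime(2024, 3, 1)
customer = {"last_purchase": now - timedelta(days=5), "churn_probability": 0.9}
print(route_customer(customer, now))
```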

Evaluate and run for all relevant customers

After running the scenario for a number of days you can evaluate the following:

  1. Does the model have an impact on the effectiveness of the campaign? Evaluate this by comparing the Control group with the lower churn probability customers tracked via the added prediction event.
  2. Does the incentive work? You can find this out by comparing the Control group with the Variant.

To understand whether the difference in the AB test between the variant and the control group is statistically significant, we recommend using our Bayesian calculator.

Based on the evaluation, the final adjustment of the campaign can be made so that real-time predictions will win over and incentivize some of the previously probable churners to make a purchase.

In-session prediction

The purpose of the In-session prediction is to identify customers who are very likely to make a purchase during their ongoing session and to target them immediately with a campaign/web layer that nudges them to complete the purchase.

Steps to execute:

  • Create a prediction model
  • Evaluate the model’s performance
  • Create segmentation based on the model
  • Create an AB test to figure out what works best for every segment
  • Evaluate and run

Create a prediction model

To create the prediction, go to Analyses > Predictions > + New prediction > In-session prediction.

This pre-defined template only requires that you specify the time frame for which you want to predict future purchases and the event that represents a purchase in your project. In the example screenshot below, we chose one month and the purchase event. This means that the algorithm will consider the last two months: the month before last to generate the features and the last month to generate the targets.

Once you are finished, click Save and then Start. This launches the calculation. You can track its progress in the Results tab.

The process usually takes between 20 minutes and a few hours, depending on the amount of data it needs to process in a given project.

Check its performance

Before using a model, you always need to evaluate its performance. As explained in the Model quality evaluation section, you can base your evaluation on several different metrics. We suggest using AUC for this particular model. If the performance seems satisfactory, continue to the next step. If not, no strong predictions could be derived from the available training data; in that case, try a different time frame or a custom template where you can choose the features yourself.

We also recommend going through the resulting decision tree and checking the result of the individual nodes. This can give you not only business insights into the probabilities of different groups of customers taking certain actions but also a way of verifying your model. If a certain result seems improbable, you have probably fallen into the overfitting trap and need to adjust the model.

Create segmentation based on the model

The result of In-session prediction is stored for each customer in a customer property named after the prediction: in this case, In-session prediction [1 month]. The prediction is recalculated with the most up-to-date data every time its value is requested (in the same way as aggregates or segmentations). The result is a value between 0 and 1 which expresses the probability that the customer will make a purchase in the next month.

To narrow down the range into manageable buckets, create a segmentation that divides customers into a number of segments that you can use later. Your chosen segments will vary according to your needs. In the example below, we created the following segments:

  • High probability
  • Medium probability
  • Low probability

If the customers are distributed too unevenly among the segments to be useful for the campaign, adjust the segmentation so that the segments are more even. To do so, set at least 2 thresholds which divide the customers as in the picture above.

The best way to find thresholds that produce usable results is to create a simple report with the prediction in the rows and grouping set to none. The report has two metrics, both a count of customers, but one modified as ‘Column total %’ and the other as ‘Running total %’.

From the report table, we can read off the thresholds based on our preferences: aiming either for equally large segments or for only a certain percentage of customers in the highest segment.

After saving the segmentation we can continue to the next step.

Create an AB test

After the last step, you know which segment each customer belongs to, but you do not know the most effective approach for each of them. To find out, create an AB test, dividing customers into 2 groups:

  • Variant
  • Control group

You will show a web layer/campaign to the Variant group while the control group will not be targeted with anything. This will allow you to compare whether your purchase-incentivizing campaign had any effect.

There are multiple types of web layers which you can use. Whichever you decide on, you should create 3 (or more) alternates of it - one for each prediction segment, as in the screenshot below.

You can choose the segment for the web layer in Settings > Audience > Customer filter. Then you specify the desired segment from the In-session prediction. In the screenshot below, we chose the high probability segment for this particular web layer.

As a result, in this example, we would have 6 segments (3 prediction segments x 2 variants) based on which it will be possible to evaluate what works best for each segment.

📘

You can be very creative about the type of web layer you choose. However, its message should incentivize a quick purchase.

Evaluate and run for all relevant customers

After running the scenario for a number of days you can evaluate the following:

  1. Does the model have an impact on the effectiveness of the campaign? Evaluate this by comparing the control group with the customers in the lower probability segments, using the added prediction event.
  2. What works for each segment? You can find this out by comparing the uplift in performance between the Variant and the Control group in each probability segment. For example, in the screenshot below, the high-probability segment does not need any special discounts because there is no uplift in conversion. However, the discounts seem to be achieving their purpose for the medium and low probability segments. To decide whether you want to use the discount for these segments, calculate your resulting profit margins.

To understand whether the difference in the AB test between the variant and the control group is statistically significant, we recommend using our bayesian calculator.

Based on the evaluation, the final adjustment of the campaign can be made so that real-time predictions will decide whether a web layer is displayed to a certain customer.

Optimal send time

Based on your customers' past behavior, the optimal send time feature allows you to send out campaign emails automatically to individual customers at a time, rounded to an hour, when they are most likely to click/open your email.

In order for the feature to work correctly, you should have already run numerous campaigns, so that multiple opened and clicked values in the status attribute of the campaign event have been collected for most of your customers. To read about the difference between the two values, go to the System events article.

Optimal send time as a prediction

You can calculate optimal send time using the Optimal send time prediction template.

As with other predictions, the output will be stored as an attribute for every customer and can be used later in reports or scenarios. You need to specify information in the following three steps:

  1. How much data should be considered?

Select a timeframe in the past from which all campaign events relevant for the analysis will be retrieved. The longer the timeframe you specify, the more accurate the result will be.

  2. What should be the default send time?

Choose the default time for sending the email to customers for whom a personalized send time cannot be calculated due to a lack of collected data, e.g. your usual send time. A value of 0 corresponds to midnight.

  3. What campaign events and attributes should be considered?

Specify which events and attributes should be used in the analysis. Fill in this information according to the screenshot below so that the optimal send time is calculated from the times at which your customers ‘open’ and then ‘click’ your emails.
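
Conceptually, the prediction picks the hour at which each customer engaged most often within the selected timeframe and falls back to the default send time where there is too little data. A rough sketch of that idea (the platform's actual model may be more sophisticated, and the function name is illustrative):

```python
from collections import Counter
from datetime import datetime

def optimal_send_hour(engagement_times: list[datetime], default_hour: int = 9) -> int:
    """Return the hour (0-23) at which the customer opened/clicked most
    often; fall back to the default send time if no data was collected."""
    if not engagement_times:
        return default_hour  # e.g. your usual send time; 0 would mean midnight
    hours = Counter(t.hour for t in engagement_times)
    return hours.most_common(1)[0][0]

opens = [datetime(2024, 3, d, h) for d, h in [(1, 20), (3, 20), (5, 8), (9, 20)]]
print(optimal_send_hour(opens))  # -> 20, this customer's most common hour
```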

After you click Save and Start, the calculation takes only a few seconds. You can check the result immediately in any of your customers’ profiles, or you can create a report to see the optimal time values. The Results tab is not populated for this type of prediction.

Optimal send time as an integrated feature in Scenarios and Campaigns

Using the Optimal send time prediction template, which runs a model that populates customer profiles with an attribute indicating the best email time for each customer, is not the only way to use optimal send time. The feature is also available directly in Scenarios and Email campaigns.

While the logic for calculating the optimal time is the same in both options, the difference is that the option in the scenario is picked automatically by the scenario, whereas the customer property has to be added to the scenario manually. You can read more about the functionality in Scenarios and in Email campaigns.
