Running an A/B Test


A/B Tests enable data scientists to test multiple deployed models simultaneously and determine which one performs best.

To run an A/B Test, visit the “Deployed Models” page (shown below). To get to this page, refer to these instructions.

[Image: Deployed Models page]

Now select two or more successfully deployed models, then click the “Run A/B Test” button in the top right-hand corner.

This opens the “A/B Testing Run” page (shown below).

[Image: A/B Testing Run page]

This page enables you to run an A/B Test on two or more deployed models.

  1. Specify a name for the A/B Test

  2. Specify a routing strategy by entering the weights for each model in the test

    • The weight of a model indicates the probability that an incoming request is routed to that model.

    • Weights must add up to 100; the UI enforces this automatically.

    • For example, if there are two models in your test with weights of 80 and 20, the first model receives approximately 80% of requests and the second approximately 20%. In other words, out of every 100 requests, roughly 80 go to the first model and 20 to the second (see the sketch after this list).
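The routing strategy described above amounts to weighted random selection. The sketch below illustrates the idea in Python; the model names (`model_a`, `model_b`) and the `pick_model` helper are hypothetical and only meant to show how weights translate into request distribution, not how the product implements routing internally.

```python
import random

# A minimal sketch of weighted routing, assuming two hypothetical
# deployed models with weights 80 and 20.
def pick_model(weights: dict) -> str:
    """Return a model name chosen with probability proportional to its weight."""
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names], k=1)[0]

weights = {"model_a": 80, "model_b": 20}

# Simulate 100 requests: roughly 80 land on model_a and 20 on model_b.
counts = {name: 0 for name in weights}
for _ in range(100):
    counts[pick_model(weights)] += 1

print(counts)  # e.g. {'model_a': 83, 'model_b': 17}
```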
