How to change the A/B test distribution?

Modified on Mon, 10 Nov at 12:24 PM


Running and adjusting A/B tests in Pathmonk is a key part of ongoing optimization. These tests let you compare how different personalization setups perform and make confident, data-driven decisions based on real visitor behavior.


Read more about how to check your Pathmonk results. 


1. Access your current A/B test



Go to the Accelerate Growth section (top menu) and open Performance. Here you can:

  • View your current A/B test

  • Check how long it has been running

  • See your existing traffic split


From this same screen, you can also stop a test or create a new one when needed.




2. Stopping an A/B test


If you decide to stop a test, click Stop on the Current Performance Test.

If you don't create a new A/B test, Pathmonk will run by default on 95% of your traffic to maximize exposure.




3. Creating a new A/B test



To launch a new test, click New A/B Test, then follow these steps:

  1. Name your test

  2. Add notes (use this field to record what you're testing and why)

  3. Confirm the start date 

  4. Set your traffic distribution. Select the percentage of visitors who will see Pathmonk’s microexperiences. The remaining visitors will automatically become your control group.


Your traffic allocation determines how much of your audience is exposed to personalization versus your standard site experience.
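To see what a given split means in absolute numbers, here is a small back-of-the-envelope sketch (plain arithmetic, not a Pathmonk feature or API); the traffic figures are illustrative assumptions:

```python
# Illustrative arithmetic: for a given monthly traffic volume and a
# chosen split, how many visitors land in each group?
def split_traffic(monthly_visitors: int, personalized_pct: float) -> tuple[int, int]:
    """Return (personalized, control) visitor counts for a percentage split."""
    personalized = round(monthly_visitors * personalized_pct / 100)
    return personalized, monthly_visitors - personalized

# An 80/20 split on a hypothetical 10,000 monthly visitors:
print(split_traffic(10_000, 80))  # → (8000, 2000)
```

The same call with a 95/5 split on 1,000 visitors yields only 50 control visitors, which is why small control groups on low-traffic pages accumulate data slowly.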




Best practices for reliable results


  • Let your test run for at least 1–2 months.
    This gives Pathmonk enough data to reach statistical significance and ensures results are stable rather than influenced by short-term fluctuations.

  • Choose your split wisely.
    Depending on traffic volume, you might adjust your test to 75/25, 60/40, or similar ratios.

    • On high-traffic pages, a 50/50 split gives fast, reliable comparisons.

    • On lower-traffic pages, a small control group (e.g., 5%) might not gather enough data to produce statistically valid results.

  • Avoid frequent changes.
    Stopping and restarting tests too often resets data collection and can lead to misleading conclusions.
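To put rough numbers behind the split and duration advice above, here is a sketch using the standard two-proportion sample-size approximation. The conversion rates and z-values are illustrative assumptions, not Pathmonk defaults:

```python
import math

# Rough two-proportion sample-size estimate: how many visitors does
# EACH group need before a lift between two conversion rates becomes
# detectable at ~95% confidence and ~80% power?
def visitors_per_group(base_rate: float, expected_rate: float,
                       z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    variance = base_rate * (1 - base_rate) + expected_rate * (1 - expected_rate)
    effect = (expected_rate - base_rate) ** 2
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect)

# Detecting a lift from a hypothetical 3.0% to 3.9% conversion rate
# requires several thousand visitors per group:
n = visitors_per_group(0.03, 0.039)
# With a 95/5 split, the 5% control group only reaches n visitors
# after roughly n / 0.05 total visitors have passed through the test.
```

This is why a near-even split converges quickly on high-traffic pages, while a tiny control group on a low-traffic page may never reach statistical validity.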

