What is a Pathmonk A/B test?

An A/B test is an experimentation method that compares two versions of the same element (version A and version B) to determine which one performs better. Users are randomly split into two groups, and each group is shown one of the two versions. Key metrics such as conversions, clicks, or opens are then measured to identify which version delivers better results.


How A/B testing works in Pathmonk


In Pathmonk, A/B testing is based on splitting your website traffic into two groups:

  • A percentage of visitors who see Pathmonk's microexperiences

  • A percentage of visitors who do not see them (the control group)


You can configure this split according to your needs. For example:

  • 50% see Pathmonk and 50% do not (commonly used at the beginning to get a fair comparison with equal traffic).

  • Up to 95% see Pathmonk and 5% do not (often used once Pathmonk has already proven effective, just to maintain a small control group).


For the visitors in the Pathmonk-enabled group, microexperiences are displayed to guide them through the conversion journey and increase the likelihood that they convert.


Learn more about how to check your A/B testing results.




⚠️ Important: Pathmonk tracks 100% of your traffic


One question we often get is: “If microexperiences are only shown to 50% of my traffic, why does Pathmonk track all of it?” or “Why does my subscription need to cover all my traffic?”


The answer is simple: Pathmonk needs to analyze all visitor behavior, not just the group that sees microexperiences.


Our technology studies every interaction on your site to understand patterns, intent signals, and buying journeys. This full-picture data is what allows the AI to:

  • Identify which visitors are ready to convert

  • Detect what behaviors lead to drop-offs

  • Optimize microexperiences based on real behavior, not a limited sample


The microexperiences are what change, not the tracking. Pathmonk’s tracking script runs across your entire website to ensure the data is complete and statistically valid. If we only analyzed the portion of traffic that sees Pathmonk, the results would be biased and incomplete.


That’s why your plan is based on total traffic: the quality of personalization depends on analyzing all visitors, even those who don’t see microexperiences.


When a visitor converts, Pathmonk records whether that conversion came from someone who saw Pathmonk or from someone in the control group. This allows you to clearly compare performance between both groups.
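
To make that comparison concrete, here is a minimal sketch of how conversion rates and the relative uplift between the two groups can be computed. This is not Pathmonk's reporting code, and the numbers are illustrative placeholders only; in practice you read these figures from your Pathmonk reports.

```typescript
// Minimal sketch: comparing conversion rates between the Pathmonk group
// and the control group. The input numbers are placeholders, not real data.

interface GroupResults {
  visitors: number;     // visitors assigned to this group
  conversions: number;  // conversions recorded for this group
}

function conversionRate(group: GroupResults): number {
  return group.conversions / group.visitors;
}

function relativeUplift(pathmonk: GroupResults, control: GroupResults): number {
  const treated = conversionRate(pathmonk);
  const baseline = conversionRate(control);
  // Relative uplift: how much higher the Pathmonk group converts vs. control.
  return (treated - baseline) / baseline;
}

// Example with illustrative numbers only:
const pathmonkGroup: GroupResults = { visitors: 5_000, conversions: 250 }; // 5.0%
const controlGroup: GroupResults  = { visitors: 5_000, conversions: 200 }; // 4.0%

console.log(`Pathmonk group:  ${(conversionRate(pathmonkGroup) * 100).toFixed(1)}%`);
console.log(`Control group:   ${(conversionRate(controlGroup) * 100).toFixed(1)}%`);
console.log(`Relative uplift: ${(relativeUplift(pathmonkGroup, controlGroup) * 100).toFixed(1)}%`);
```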





How Pathmonk ensures 100% random assignment


As soon as a new visitor enters your website, Pathmonk first assigns the user to a group (with or without Pathmonk) before showing any microexperiences or making any visual changes.


This means:

  • No Pathmonk elements or microexperiences are shown before the user is randomly assigned.

  • The assignment is based solely on the traffic split configuration (e.g., 50/50, 95/5), not on user behavior or characteristics.

  • The process guarantees that the split between groups is 100% random and unbiased.


Once the visitor has been assigned to a group:

  • If they are in the Pathmonk group, they will see the microexperiences during their journey.

  • If they are in the control group, they will not see any Pathmonk microexperiences and will browse the site as usual.


This ensures that the only difference between the two groups is the presence of Pathmonk microexperiences, which is essential for statistical confidence. You can then compare the performance of both groups in the Buying Journey report to confirm the uplift in conversions that Pathmonk contributes.
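
For illustration only, the sketch below shows how a behavior-independent random split of this kind can work in principle: the visitor is assigned before anything is rendered, the draw depends only on the configured split, and the assignment is persisted so the visitor stays in the same group across pages. The storage key and the showMicroexperiences() call are hypothetical names for this example; this is not Pathmonk's internal code.

```typescript
// Minimal sketch (not Pathmonk's internal code): randomly assigning a visitor
// to the "pathmonk" or "control" group before anything is rendered, based only
// on the configured traffic split (e.g., 50/50 or 95/5).

type Group = "pathmonk" | "control";

const STORAGE_KEY = "ab_group"; // hypothetical key, used only for this example

function assignGroup(pathmonkShare: number): Group {
  // Reuse an existing assignment so the visitor stays in the same group
  // across pages and sessions.
  const existing = localStorage.getItem(STORAGE_KEY) as Group | null;
  if (existing === "pathmonk" || existing === "control") {
    return existing;
  }

  // The draw depends only on the configured split, never on visitor
  // behavior or characteristics, so the assignment stays unbiased.
  const group: Group = Math.random() < pathmonkShare ? "pathmonk" : "control";
  localStorage.setItem(STORAGE_KEY, group);
  return group;
}

// Example with a 95/5 split: microexperiences are only rendered
// after (and depending on) the assignment.
const group = assignGroup(0.95);
if (group === "pathmonk") {
  // showMicroexperiences(); // hypothetical call, shown only to illustrate ordering
}
```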




Recommendations for Pathmonk's A/B tests


To ensure reliable results:

  • Run each A/B test for at least 1–2 months before making major changes, especially at the beginning.

  • Once the data confirms that Pathmonk is increasing conversions, you can raise the percentage of visitors who see Pathmonk to maximize the impact on your overall conversions.

  • Continue monitoring each A/B test over time. The ideal test duration depends on your traffic volume:

    • Higher traffic = faster sample collection and quicker, more robust insights.

    • Lower traffic = longer test duration needed to reach statistical confidence (see the sketch below).
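
As a rough way to reason about how long a test might need to run, the sketch below applies the standard two-proportion sample-size approximation (95% confidence, 80% power). This is a generic statistical rule of thumb, not a Pathmonk formula, and the conversion rates and traffic figures are illustrative assumptions only.

```typescript
// Rough sketch: estimating how many visitors per group are needed to detect
// a given conversion uplift, using the standard two-proportion sample-size
// approximation (95% confidence, 80% power). Generic statistics, not a
// Pathmonk-specific formula; the input rates below are illustrative only.

function visitorsPerGroup(baselineRate: number, expectedRate: number): number {
  const zAlpha = 1.96; // two-sided 95% confidence
  const zBeta = 0.84;  // 80% power
  const variance =
    baselineRate * (1 - baselineRate) + expectedRate * (1 - expectedRate);
  const effect = expectedRate - baselineRate;
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / effect ** 2);
}

// Example: baseline 4% conversion, hoping to detect an uplift to 5%.
const perGroup = visitorsPerGroup(0.04, 0.05);
console.log(`~${perGroup} visitors needed in each group`);

// With 10,000 visitors/month on a 50/50 split (5,000 per group per month),
// dividing perGroup by 5,000 gives a rough test duration in months.
console.log(`~${(perGroup / 5_000).toFixed(1)} months at 10,000 visitors/month`);
```

With these illustrative numbers, the estimate lands close to the 1–2 month guideline above; higher traffic shortens the required duration, and lower traffic lengthens it.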
