
A/B Test Significance Calculator (Free + How to Read Your Results)

Paste in your numbers and know in seconds whether your test result is real — or just noise.

Enter your test data

Control (A) — Original

Variant (B) — Your change

Results

Enter data on the left to see results

Control CR

Variant CR

Relative Uplift

Confidence

Z-score

P-value

How to read these results

Confidence level

How unlikely your result would be if there were no real difference — it equals 1 minus the p-value. 95%+ is the minimum threshold; aim for 99%+ for high-stakes decisions like pricing changes.

Relative uplift

How much better the variant performs vs. control, as a % of control. A 5% CR → 5.5% CR is a 10% relative uplift — not a 0.5% uplift.

P-value

Probability of seeing this result by chance if there were no real difference. Below 0.05 = significant. This tool uses a two-tailed test.
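The metrics above come from a standard two-proportion z-test with a pooled conversion rate. Here is a minimal Python sketch of that calculation — the function name and structure are our own, and the calculator's internals may differ:

```python
from math import erf, sqrt

def ab_test(visitors_a, conv_a, visitors_b, conv_b):
    """Pooled two-proportion z-test, two-tailed (illustrative sketch)."""
    p_a = conv_a / visitors_a                  # control CR
    p_b = conv_b / visitors_b                  # variant CR
    # Pooled rate under the null hypothesis of "no real difference"
    p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    uplift = (p_b - p_a) / p_a                 # relative uplift
    return p_a, p_b, uplift, z, p_value
```

For example, 500/10,000 conversions for control vs. 560/10,000 for the variant gives exactly a 12% relative uplift, a z-score near 1.89, and a p-value just above 0.05 — close, but not yet significant.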

How to use this calculator

Four steps from your analytics dashboard to a confident decision.

1

Pull your data

Open your analytics tool and note visitors and conversions for the test period — separately for control and variant.

2

Enter control (A)

Control is your original version. Enter the total visitors and how many completed your goal (purchase, signup, click, etc.).

3

Enter variant (B)

Variant is the version you changed. Same fields — visitors who saw it and conversions it produced.

4

Read the verdict

95% confidence or above means you can safely declare a winner. Below 95%, collect more data before deciding.

Frequently asked questions

What p-value means my A/B test is significant?

A p-value below 0.05 means there is less than a 5% probability the result is due to chance — this is the 95% confidence threshold used as the industry standard. For higher-stakes decisions (large campaigns, pricing changes) use a 0.01 threshold (99% confidence).

Should I use a one-tailed or two-tailed test?

This calculator uses a two-tailed test, which checks for a difference in either direction. Two-tailed is the correct default for A/B testing because your variant could perform better or worse than control. Only use one-tailed if you have a strong, pre-registered hypothesis about direction.
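For the same z-score, the one-tailed p-value is exactly half the two-tailed one — which is why switching to one-tailed after seeing results inflates apparent significance. A small illustrative helper (our own, not part of this tool):

```python
from math import erf, sqrt

def p_values(z):
    """Two-tailed vs. one-tailed p-value for the same z-score
    (one-tailed assumes the effect is in the pre-registered direction)."""
    tail = 1 - 0.5 * (1 + erf(abs(z) / sqrt(2)))  # one upper-tail area
    return 2 * tail, tail                          # (two-tailed, one-tailed)
```

At z = 1.96 the two-tailed p-value is about 0.05 while the one-tailed is about 0.025 — a borderline two-tailed result looks "clearly significant" one-tailed, which is exactly the temptation to avoid.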

How many conversions do I need for a valid A/B test?

As a rule of thumb, aim for at least 100 conversions per variant before reading results. For very low conversion rates (under 1%), target 200–300 per variant. Use the sample size calculator to plan exact numbers before you start.
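To see where such targets come from, here is the standard two-proportion sample-size approximation (95% confidence, 80% power, two-tailed) as a rough Python sketch — this is our own illustration, not this site's sample size calculator:

```python
from math import ceil

def sample_size(base_rate, rel_mde, z_alpha=1.96, z_beta=0.84):
    """Rough visitors-per-variant estimate for detecting a relative
    lift of rel_mde on base_rate (standard normal approximation)."""
    d = base_rate * rel_mde                # absolute lift to detect
    p_bar = base_rate * (1 + rel_mde / 2)  # average of the two rates
    n = 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / d ** 2
    return ceil(n)
```

For example, detecting a 10% relative lift on a 5% base rate needs roughly 31,000 visitors per variant — about 1,500+ conversions each, which is why the 100-conversion rule of thumb is a floor, not a target.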

What does relative uplift mean?

Relative uplift is how much better the variant performed compared to the control, expressed as a percentage of the control rate. If control converts at 5% and variant at 5.5%, the relative uplift is 10% — not 0.5%. Always report relative uplift, not absolute difference.

Stop guessing.
Start testing.

Your first experiment — A/B test, popup, form, or heatmap — can be live in under 2 minutes. No developers, no contracts, no risk.

Free plan forever
Setup in 2 minutes
No developers needed