
A/B Test (Split Test)

Comparing two variants of an ad, page, or offer head-to-head, with traffic split between them, then picking the winner statistically.

An A/B test is a controlled experiment where you serve two (or more) variants of something to comparable audiences and measure which one performs better on a defined metric. The point isn't 'which variant feels better' — it's whether the difference is large enough to be statistically real, not noise.
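On web and app surfaces, "comparable audiences" usually means deterministic bucketing: hash a stable user ID so each visitor always lands in the same variant. A minimal sketch in Python; the function name and the 50/50 split are illustrative, not from any particular testing tool:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user into a variant.

    Hashing user_id together with the experiment name gives a stable,
    roughly uniform split: the same user always sees the same variant,
    and different experiments split independently of each other.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The assignment is stable across calls, sessions, and servers.
assert assign_variant("user-42", "headline-test") == assign_variant("user-42", "headline-test")
```

Hash-based assignment beats re-randomizing on every page load: a user who flips between variants mid-session would otherwise contaminate both buckets.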

How to run one properly

  • Change one variable at a time. Two changes at once and you can't tell which caused the lift.
  • Define success in advance: which metric, what threshold, how long the test runs. Otherwise you'll cherry-pick.
  • Run until you reach statistical significance, usually 95% confidence. A 5% lift on 200 conversions per variant is indistinguishable from noise; the same lift typically needs several thousand conversions per variant before it clears that bar (see the significance check sketched after this list).
  • Match audiences. Don't compare a 1% lookalike vs a 10% lookalike — that's two changes.
  • For ad creative: run at least 7 days, often 14, so you cover full weekday/weekend cycles.
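The 95% bar above is typically checked with a two-proportion z-test. A minimal sketch using only the Python standard library; the traffic numbers are invented for illustration:

```python
from math import erf, sqrt

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for 'do variants A and B convert at different rates?'"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via the error function: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical example: 200 vs 210 conversions on 10,000 impressions each.
p = two_proportion_p_value(200, 10_000, 210, 10_000)
print(f"p = {p:.3f}")  # ~0.62, nowhere near the 0.05 needed for 95% confidence
```

A p-value below 0.05 is what "95% confidence" means in practice; the example shows that a 5% lift at this volume doesn't come close.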

Sample-size reality

Most small-account A/B tests are statistically meaningless because the sample is too small. If you're getting 30 conversions a week per variant, you'll need months to detect anything below a roughly 30% lift; the sketch below shows the arithmetic. The honest answer is often: don't run a formal test; run more variants and let the algorithm reallocate budget. DCO (dynamic creative optimization) is the lower-volume substitute for a proper A/B test.
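To see where "months" comes from, here is the standard sample-size approximation for a two-proportion test, sketched in Python. The 2% baseline conversion rate, 80% power, and the helper name are assumptions for illustration:

```python
from math import sqrt

def weeks_to_detect(baseline_rate: float, relative_lift: float,
                    conversions_per_week: float,
                    z_alpha: float = 1.96, z_beta: float = 0.84) -> float:
    """Weeks of traffic per variant to detect a relative lift
    at 95% confidence (z_alpha) with 80% power (z_beta)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    # Visitors needed per variant (standard two-proportion approximation).
    n = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    conversions_needed = n * p1
    return conversions_needed / conversions_per_week

# Assumed: 2% baseline conversion rate, 30 conversions a week per variant.
for lift in (0.30, 0.20, 0.10):
    print(f"{lift:.0%} lift: ~{weeks_to_detect(0.02, lift, 30):.0f} weeks per variant")
# Prints roughly 7, 14, and 54 weeks respectively.
```

Under these assumptions a 20% lift takes about 14 weeks per variant and a 10% lift about a year, which is why reallocating budget across many variants usually beats a formal test at low volume.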

Skip the math. Let an agent watch your numbers.

nordenagent runs Meta Ads, analytics, and self-marketing posts with this stuff already wired up. You approve, we ship.