
Getting A/B Testing Right

A/B testing (also called split testing) is a structured way to compare two versions of something (a landing page, an email subject line, an ad creative, or a cloud computing sign-up flow, for example) to see which one drives more of the outcome you care about (conversions, revenue, sign-ups, etc.). This guide shows you exactly how to plan, run, and scale A/B tests the right way, avoid common pitfalls, and turn “random acts of optimisation” into a repeatable growth engine.

What is A/B Testing?

A/B testing is a controlled marketing experiment: you show two versions of a thing, A (the control) and B (the challenger), to similarly composed audiences at the same time, and measure which one performs better on a primary metric (e.g., conversions, CTR, revenue per visitor).

You then run the test long enough to reach statistical confidence and make a decision: ship B (if it wins), keep A (if B loses), or iterate (if inconclusive).
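To make “similarly composed audiences” concrete, here is a minimal sketch of deterministic, hash-based variant assignment. Everything in it (the function name, the 50/50 split, the identifiers) is an illustrative assumption rather than any particular tool’s API; the point is that hashing the visitor ID together with the experiment name keeps each visitor in the same variant across repeat visits.

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically bucket a visitor into 'A' (control) or 'B' (challenger).

    Hashing visitor_id together with the experiment name keeps assignment
    stable across visits and independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "A" if bucket < split else "B"

# Example: the same visitor always lands in the same variant of this experiment.
print(assign_variant("visitor-123", "homepage-hero-test"))
```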

Why A/B Testing Matters

Core Concepts & Definitions

When (and When Not) to Use A/B Tests

Use A/B testing when:

Avoid or delay A/B testing when:

The 9-Step A/B Testing Framework


1. Discover & Prioritise Opportunities

Mine insights from:
Prioritisation frameworks (a toy scoring sketch follows the list):

PIE (Potential, Importance, Ease) – Inputs: expected uplift, traffic value, dev/design effort. When to use: quick triage across many ideas.

ICE (Impact, Confidence, Effort) – Inputs: business impact, evidence quality, effort. When to use: roadmap debates.

PXL – Inputs: a detailed checklist on specificity, evidence strength, and proximity to conversion. When to use: mature programs.
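Frameworks like PIE and ICE boil down to rating each idea on a small number of criteria and ranking by the combined score. Below is a toy sketch of that arithmetic; the ideas and ratings are made up purely for illustration, and most teams do this in a spreadsheet rather than code.

```python
# Minimal PIE scoring sketch: rate each idea 1-10 on Potential, Importance,
# and Ease, average the ratings, and rank the backlog.
# (The ideas and ratings below are illustrative only.)
ideas = {
    "Shorter sign-up form": {"potential": 8, "importance": 9, "ease": 6},
    "New hero headline":    {"potential": 6, "importance": 7, "ease": 9},
    "Pricing page FAQ":     {"potential": 5, "importance": 6, "ease": 8},
}

scored = sorted(
    ((sum(r.values()) / len(r), name) for name, r in ideas.items()),
    reverse=True,
)
for score, name in scored:
    print(f"{score:.1f}  {name}")
```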

2. Define the Experiment

3. Estimate Sample Size & Runtime

Practical rule of thumb: run in full weekly cycles (e.g., 14 or 21 days) to capture weekday/weekend behaviour.
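Before launch, you can turn a baseline conversion rate and a minimum detectable effect (MDE) into a per-variant sample size with a standard two-proportion power calculation. The sketch below uses the statsmodels library with assumed inputs (5% baseline conversion, 10% relative MDE, 95% confidence, 80% power); divide the total sample by expected daily traffic, then round up to full weekly cycles as described above.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Assumed inputs (illustrative): 5% baseline conversion, 10% relative lift (MDE).
baseline = 0.05
mde_relative = 0.10
target = baseline * (1 + mde_relative)

effect = proportion_effectsize(baseline, target)  # Cohen's h for two proportions
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, ratio=1.0,
    alternative="two-sided",
)
print(f"~{n_per_variant:,.0f} visitors per variant")
# Runtime ~ (2 * n_per_variant) / expected daily traffic,
# rounded up to full weekly cycles.
```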

4. Design Variants the Right Way

5. QA Before Launch

6. Launch & Randomise Fairly

7. Run to Completion

8. Analyse & Decide

9. Roll Out, Monitor & Iterate

What to Test: High-Impact Ideas for Web, Email, and Ads

Website / Landing Pages

Email

Paid Ads (Search & Social)

Statistics Without the Jargon

You don’t need a PhD, just a few working rules.
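For example, one common end-of-test significance check is a two-proportion z-test. The sketch below uses statsmodels with made-up conversion counts; in practice, also report confidence intervals and respect the pre-registered sample size rather than peeking.

```python
from statsmodels.stats.proportion import proportions_ztest, proportion_confint

# Illustrative counts: conversions and visitors for A (control) and B (challenger).
conversions = [250, 290]
visitors = [5000, 5000]

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"p-value: {p_value:.4f}")  # below 0.05 => statistically significant at 95%

# Per-variant conversion rates with 95% confidence intervals.
for label, c, n in zip("AB", conversions, visitors):
    low, high = proportion_confint(c, n, alpha=0.05)
    print(f"{label}: {c / n:.2%} (95% CI {low:.2%} to {high:.2%})")
```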

Tooling & Implementation Tips

Governance, Ethics & SEO Safeguards

Common Pitfalls (and How to Avoid Them)

1. Testing trivial tweaks (colour micro-changes) on low traffic → no learnings.

2. Stopping early on a spike.

3. Multiple changes per variant without clarity on what drove the result.

4. Dirty data: Duplicate events, bot traffic, internal visits.

5. SRM (sample ratio mismatch): the observed traffic split deviates from the planned ratio, which usually points to broken randomisation or tracking (a quick check is sketched after this list).

6. Declaring victory on a micro-metric (CTR) that doesn’t move revenue.

7. No post-deployment verification.
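For pitfall 5, a sample ratio mismatch is easy to screen for with a chi-square goodness-of-fit test on the observed traffic split. The sketch below uses SciPy with made-up counts for a planned 50/50 split; a very small p-value suggests the randomisation or tracking is broken and the results shouldn’t be trusted.

```python
from scipy.stats import chisquare

# Illustrative observed assignment counts for a planned 50/50 split.
observed = [10_210, 9_640]
expected = [sum(observed) / 2] * 2

stat, p_value = chisquare(observed, f_exp=expected)
if p_value < 0.001:
    print(f"Possible SRM (p = {p_value:.5f}): investigate before trusting results.")
else:
    print(f"Split looks consistent with 50/50 (p = {p_value:.3f}).")
```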

Troubleshooting: Why Your Tests Aren’t “Winning”

Jargon Buster

Multivariate testing – Also called multi-variable testing, it is the method of testing different versions of multiple variables on your website at the same time.

Call to action – A prompt on your website that guides the visitor to take the next action. Examples: “Buy Now”, “Read More”, “Click Here”.

Landing page – A page that a visitor lands on by clicking a link from a search result, ad, or email, generally created specifically for a marketing campaign.

FAQ

How long should an A/B test run?
As long as needed to reach the pre-calculated sample size and cover at least one full weekly cycle (preferably two). Don’t stop early because a dashboard turned green.

Can I test more than one change at a time?
That’s multivariate testing. It’s powerful but requires high traffic to isolate interactions. For most teams, serial single-change A/B tests are faster and clearer.

What if my test is inconclusive?
Refine your hypothesis, increase the MDE, or target a higher-intent audience (e.g., cart visitors). “No difference” is still a useful learning.

How does A/B testing relate to personalisation?
They’re complementary. Use A/B testing to identify broadly better experiences; use personalisation to tailor for specific segments once you know the winning patterns.

What should I test first?
Fix clarity and friction closest to conversion: a clearer value proposition, a stronger CTA, or reducing form fields.