A/B testing in marketing is a controlled comparison of two versions (A and B) of a marketing asset to see which performs better on a single, predefined metric, such as click-through rate or conversion rate. In practice this means showing variant A to one group and variant B to another, then measuring which produces more of the desired outcome. This basic definition follows standard experimentation guidance (see CXL).
Use A/B testing when you want a clear, quantitative answer about a single change: a headline, an email subject line, CTA wording or colour, or a simple layout tweak on a pricing or landing page. A/B tests are best for single-variable decisions where you can isolate cause and effect.
Contrast this with multivariate testing (multiple elements varied together) and qualitative research. Multivariate tests try many combinations but need far more traffic to be reliable. Qualitative UX research explains "why" but won't give the statistical answer A/B testing provides; combine methods for better hypotheses (NNGroup).
Practical examples: subject-line A/B tests for email campaigns, two landing-page variants for lead capture, or swapping two pricing-display layouts to see which drives purchase intent (HubSpot guides show similar use cases).
Use this repeatable framework when planning tests or briefing a freelancer. Keep the brief short, specific and measurable.
Write a one-line hypothesis: "If we [change X], then [metric Y] will [increase/decrease] because [reason]." Base X and the reason on analytics or qualitative research — CXL recommends forming hypotheses from data or user research first.
Keep changes minimal. Test one primary change per experiment so you can attribute impact. NNGroup warns that large, compound changes reduce learnings; if you must test a bigger redesign, treat it as a different experiment with a clear rollout plan.
Pick one primary metric (conversion rate, click-through rate, revenue per visitor) and 1–2 guardrail metrics (bounce rate, average order value). Use an online sample-size calculator to determine minimum visitors per variant and include that figure in the brief — HubSpot's procedural guidance recommends predefining sample and duration.
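As a sketch of the sample-size step, the standard two-proportion formula behind most online calculators can be computed directly. The baseline rate and minimum detectable effect below are illustrative placeholders, not recommendations:

```python
import math

def sample_size_per_variant(baseline_rate, mde_relative):
    """Minimum visitors per variant for a two-proportion test
    (normal approximation, two-sided 5% significance, 80% power)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde_relative)  # rate the variant would need to hit
    z_alpha = 1.96    # z-score for two-sided alpha = 0.05
    z_beta = 0.8416   # z-score for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Example: 5% baseline conversion, hoping to detect a 20% relative lift
n = sample_size_per_variant(0.05, 0.20)
print(n)  # thousands of visitors per variant, even for a sizeable lift
```

Note how quickly the required sample grows as the detectable effect shrinks; halving the relative lift roughly quadruples the visitors needed, which is why the brief should state the minimum improvement worth detecting.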
Keep targeting stable, avoid making site or campaign changes mid-test, and honour the pre-specified duration. Decide in advance whether you'll use a fixed-duration or power-based stopping rule — CXL best practices emphasise avoiding early stopping.
Analyse against your pre-specified thresholds (e.g., a 95% confidence level, p < 0.05, plus a practical minimum improvement). If results are inconclusive, document what you learned and outline the next hypothesis rather than treating the test as a failure.
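For the analysis step, a minimal pooled two-proportion z-test shows how such a threshold is applied in practice. The visitor and conversion counts below are made up for illustration; a real analysis should also check the guardrail metrics:

```python
import math

def two_proportion_p_value(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided p-value for the difference between two conversion rates
    (pooled two-proportion z-test, normal approximation)."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Convert |z| to a two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Illustrative counts: A converts 400/8000 (5.0%), B converts 480/8000 (6.0%)
p = two_proportion_p_value(400, 8000, 480, 8000)
print(p < 0.05)  # True: this example lift clears the 95% threshold
```

If p is below 0.05 but the observed lift is smaller than the practical minimum improvement you pre-specified, the test is still a "no-ship": statistical and practical significance are separate gates.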
Choose tools by channel: your email platform's built-in split testing for subject lines, and a dedicated web experimentation tool for landing-page and on-site tests.
Important tool change: Google Optimize was sunset in 2023. If your workflow relied on it, plan to migrate to a current experimentation tool and revalidate any existing experiments (see Google support note).
Practical email tip: test subject lines against a holdout or sample, then let the tool select a winner and roll out the winner to the remainder of the list to balance learning and performance.
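That holdout flow can be sketched in a few lines. The list, the 10% sample fraction, and the open rates below are all illustrative placeholders; in practice your email tool measures opens and automates the rollout:

```python
import random

def pick_winner(recipients, sample_fraction=0.10):
    """Split off a test sample, A/B the subject line on it,
    and return the winner plus the remainder for rollout."""
    random.seed(42)          # deterministic shuffle, for illustration only
    pool = recipients[:]
    random.shuffle(pool)
    cut = int(len(pool) * sample_fraction)
    sample, remainder = pool[:cut], pool[cut:]
    half = len(sample) // 2
    group_a, group_b = sample[:half], sample[half:]
    # Placeholder open rates; a real tool measures these after sending
    open_rates = {"A": 0.21, "B": 0.27}
    winner = "A" if open_rates["A"] >= open_rates["B"] else "B"
    return winner, group_a, group_b, remainder

recipients = [f"user{i}@example.com" for i in range(1000)]
winner, a, b, rest = pick_winner(recipients)
# Winning subject line is then sent to the 90% remainder of the list
```

The design choice to balance is sample size versus upside: a bigger holdout gives a more reliable winner, but fewer recipients get the best-performing subject line.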
Further reading on email best practice and conversion-focused campaigns is available in Swaplance's guide to leveraging email marketing for B2B lead generation.
Top pitfalls to avoid: stopping tests early, testing several changes at once so you can't attribute impact, ignoring the pre-calculated sample size, and changing targeting or campaigns mid-test.
Ethical notes: respect user consent, privacy laws (GDPR) and avoid manipulative experiments that degrade user trust. NNGroup warns against using testing to exploit users; keep tests aligned with good UX.
When you're ready to outsource, post a Swaplance brief that mirrors the test-brief template but adds clear deliverables and acceptance criteria. A concise Swaplance job description should include the hypothesis, primary metric, target segment, expected sample size or duration, access/tracking requirements and the required deliverables.
Ask candidates for a short plan (1–2 pages) that covers experiment design, sample-size approach, tracking plan and timeline. Prefer freelancers who reference past tests or offer a small paid pilot if you're uncertain.
Specify deliverables and acceptance criteria upfront: a test set-up and QA checklist, raw and summarised results with statistical notes, a one-page recommendation on rollout, and any A/B test artefacts (variants and tracking snippets). For help with proposals, see Swaplance's guide on crafting a winning freelance proposal.