Module 8
6 min read

A/B Test Design

Run simple experiments to test what works

What You'll Learn

  • What A/B testing is
  • How to set up a test
  • Sample size basics
  • Common mistakes to avoid

What is A/B Testing?

Simple definition: Show version A to some users, version B to others. See which wins!

Example:

  • Version A: Blue "Buy Now" button
  • Version B: Red "Buy Now" button
  • Which gets more clicks?

Common uses:

  • Website buttons and colors
  • Email subject lines
  • Product features
  • Pricing options

Why Random Assignment?

The key rule: Randomly assign users to A or B

Why this matters:

āŒ Bad: Show A on Monday, B on Tuesday (Monday people might be different from Tuesday people!)

āœ“ Good: Randomly split Monday visitors into A and B (Now groups are the same!)

Bottom line: Random = fair comparison
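
Want to see this in code? Here's a minimal Python sketch of random assignment. It assumes each visitor has a user ID (the IDs below are made up); hashing the ID instead of flipping a coin on every visit keeps the split roughly 50/50 and makes sure a returning visitor always sees the same version.

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Assign a visitor to group 'A' or 'B' based on a hash of their ID.

    Hashing (instead of a fresh coin flip on every visit) keeps the split
    roughly 50/50 AND shows the same visitor the same version every time.
    """
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Hypothetical visitor IDs, just to show the split
for uid in ["user_101", "user_102", "user_103", "user_104"]:
    print(uid, "->", assign_variant(uid))
```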

Simple A/B Test Steps

Step 1: Pick what to test. Example: button color.

Step 2: Decide what success means. Example: more clicks.

Step 3: Calculate how many people you need. Use an online calculator (see below!).

Step 4: Randomly show A or B to users

Step 5: Wait until you have enough data

Step 6: Check if B is significantly better (p < 0.05)

Step 7: Pick the winner!

Sample Size - How Many People?

Don't guess! Use a calculator:

  • optimizely.com/sample-size-calculator
  • evanmiller.org/ab-testing/sample-size.html

You need to know:

  1. Current rate: 10% of visitors click now
  2. Target improvement: you want 12% (a 2 percentage point increase)
  3. Confidence level: use 95% (the standard)

The calculator tells you: you need about 3,000 people per group

Pro tip: More people = more reliable results!
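
Prefer to compute the sample size yourself? Here's a sketch using the statsmodels Python library. It assumes the numbers above (10% baseline, 12% target, 95% confidence) plus 80% power, which isn't stated above but is the usual calculator default; because calculators differ slightly in their assumptions, the number you get here won't exactly match the 3,000 figure.

```python
# Sample-size sketch with statsmodels.
# Assumptions: 10% baseline, 12% target, alpha = 0.05 (95% confidence),
# and 80% power (not stated above, but the usual calculator default).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.10   # current click rate
target_rate = 0.12     # the rate you hope to reach

effect_size = proportion_effectsize(target_rate, baseline_rate)
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,              # 95% confidence
    power=0.80,              # 80% chance of detecting a real lift
    ratio=1.0,               # equal group sizes
    alternative="two-sided",
)
print(f"Need about {round(n_per_group)} people per group")
```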

Common Beginner Mistakes

Mistake 1: Stopping too early
❌ "We have 100 clicks, let's check!"
✓ Wait for your calculated sample size!

Mistake 2: Testing during different times
❌ Version A on Monday, B on Friday
✓ Run both versions at the same time!

Mistake 3: Changing things mid-test
❌ Tweaking version B while the test runs
✓ No changes once the test starts!

Mistake 4: Not being random
❌ "Show A to new users, B to returning users"
✓ Randomly assign everyone!

Mistake 5: Testing too many things at once
✓ Start with ONE change at a time!

Real Example

Scenario: E-commerce checkout button

Current: Green button → 15% checkout
Test: Orange button → ???

Setup:

  • Use calculator: Need 8,500 people per version
  • Run until each version reaches 8,500 visitors (about 2 weeks at roughly 1,200 visitors/day)
  • Track: Checkout completion rate

Result:

  • Green: 15.0% (8,500 people)
  • Orange: 16.5% (8,500 people)
  • p-value: 0.02

Conclusion: Orange wins! (p = 0.02 < 0.05, so the difference is significant.) Switch to the orange button!
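
If you want to check the numbers yourself, here's a sketch of the significance test in Python with statsmodels. The raw checkout counts aren't given above, so the counts below are estimated from the stated rates (15.0% and 16.5% of 8,500 visitors each), and the exact p-value depends on the true counts.

```python
# Two-proportion z-test sketch for the checkout-button example.
# Counts are assumed from the stated rates, since the raw numbers
# aren't given: ~15.0% and ~16.5% of 8,500 visitors each.
from statsmodels.stats.proportion import proportions_ztest

checkouts = [1275, 1403]   # green, orange (estimated counts)
visitors = [8500, 8500]    # people shown each version

z_stat, p_value = proportions_ztest(checkouts, visitors, alternative="two-sided")
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")

if p_value < 0.05:
    print("Significant difference -- pick the winner!")
else:
    print("Not significant -- stick with the current button.")
```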

Next Steps

Learn about Metric Selection!

Tip: Good experiment design prevents analysis headaches!