Run controlled experiments on your AI assistant’s conversation strategies. Test different greetings, tones, upselling approaches, and more to find what converts best.
A/B Testing is available on Pro Plus.

How It Works

  1. Create a test — define two variants (A and B) with different settings
  2. Traffic is split automatically — each visitor is assigned one variant and sees it consistently across sessions (see the hashing sketch after this list)
  3. Track results — compare conversion rates, average order values, and engagement metrics
  4. Pick a winner — end the test and apply the winning variant
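
The cross-session consistency in step 2 is typically achieved by hashing a stable visitor ID instead of randomizing on every session. Below is a minimal Python sketch of that idea; the visitor ID, test ID, and split value are hypothetical, not the product's actual API.

```python
import hashlib

def assign_variant(visitor_id: str, test_id: str, split_a: float = 0.5) -> str:
    """Deterministically map a visitor to variant A or B.

    The same visitor_id + test_id always hashes to the same bucket,
    so a returning visitor sees the same variant across sessions.
    """
    digest = hashlib.sha256(f"{test_id}:{visitor_id}".encode()).hexdigest()
    # Map the first 8 hex digits to a number in [0, 1).
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "A" if bucket < split_a else "B"

# Hypothetical usage: a 50/50 split on a greeting test.
print(assign_variant("visitor-123", "greeting-march-2026"))  # same result every call
```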

Test Types

Type        What You’re Testing
Greeting    Different opening messages
Tone        Friendly vs. professional communication style
Upselling   Subtle vs. proactive accessory suggestions
Custom      Any other conversation variable

Creating a Test

Field          Description
Name           Internal name (e.g. “Greeting Test - March 2026”)
Type           The aspect of the conversation you’re testing
Variant A      Control — your current approach
Variant B      Challenger — the new approach to test
Traffic Split  Percentage of traffic seeing each variant (default 50/50)
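
The fields above map naturally onto a small test definition. Here is a hypothetical sketch of one in Python; the field names mirror the table, but the schema itself is an assumption, not the product's actual API.

```python
from dataclasses import dataclass

@dataclass
class ABTest:
    """Hypothetical test definition mirroring the fields above."""
    name: str             # internal name, e.g. "Greeting Test - March 2026"
    test_type: str        # "greeting", "tone", "upselling", or "custom"
    variant_a: str        # control: your current approach
    variant_b: str        # challenger: the new approach to test
    split_a: float = 0.5  # fraction of traffic seeing variant A

test = ABTest(
    name="Greeting Test - March 2026",
    test_type="greeting",
    variant_a="Hi! How can I help you today?",
    variant_b="Welcome back! Looking for anything in particular?",
)
```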

Results

Each test tracks:
  • Impressions — Conversations per variant
  • Conversions — Bookings per variant
  • Conversion Rate — Conversions as a percentage of impressions, per variant
  • Statistical Confidence — Whether the observed difference is meaningful or just noise (see the sketch after this list)
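
Statistical confidence for a conversion-rate comparison is commonly computed with a two-proportion z-test. Here is a minimal standard-library sketch; the impression and conversion counts below are made up.

```python
import math

def z_test(conv_a: int, imp_a: int, conv_b: int, imp_b: int) -> float:
    """Two-proportion z-test: returns the z-score for the difference
    in conversion rates between variants A and B."""
    p_a, p_b = conv_a / imp_a, conv_b / imp_b
    p_pool = (conv_a + conv_b) / (imp_a + imp_b)          # pooled rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / imp_a + 1 / imp_b))
    return (p_b - p_a) / se

# Hypothetical counts: 120/2000 vs. 150/2000 conversions.
z = z_test(120, 2000, 150, 2000)
print(f"z = {z:.2f}")  # |z| > 1.96 roughly corresponds to 95% confidence
```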

Best Practices

  • Test one thing at a time — changing the greeting AND the tone at once makes it impossible to know which change drove the result
  • Run tests for at least 2 weeks — small sample sizes give misleading results (see the sample-size sketch after this list)
  • Don’t peek and stop early — let the test reach statistical significance
  • Document your learnings — build institutional knowledge about what your customers respond to
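
The two-week guideline is really a proxy for sample size. Before launching, you can estimate how many impressions each variant needs using the standard two-proportion sample-size formula; the baseline rate and target lift below are illustrative.

```python
import math

def required_impressions(base_rate: float, lift: float,
                         z_alpha: float = 1.96, z_power: float = 0.84) -> int:
    """Rough per-variant sample size for detecting a relative lift
    at 95% confidence and 80% power (standard z approximations)."""
    p1 = base_rate
    p2 = base_rate * (1 + lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Hypothetical: 6% baseline conversion, hoping to detect a 20% relative lift.
print(required_impressions(0.06, 0.20))  # impressions needed per variant
```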