Run controlled experiments on your AI assistant’s conversation strategies. Test different greetings, tones, upselling approaches, and more to find what converts best.
A/B Testing is available on the Pro Plus plan.
How It Works
- Create a test — define two variants (A and B) with different settings
- Traffic is split automatically — each visitor is assigned one variant and keeps seeing it across sessions (one common way to do this is sketched after this list)
- Track results — compare conversion rates, average order values, and engagement metrics
- Pick a winner — end the test and apply the winning variant
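The docs above don't specify how the split is implemented. A common way to guarantee that a visitor always sees the same variant is to hash a stable visitor ID into a bucket, so no per-visitor state needs to be stored. The sketch below assumes this approach; the function names and the FNV-1a hash are illustrative, not part of the product.

```typescript
// Illustrative sketch only: the platform does not document its splitting
// algorithm. Hashing a stable visitor ID keeps a visitor on the same
// variant in every session without any server-side state.

function hashToUnitInterval(visitorId: string, testId: string): number {
  // Simple FNV-1a hash, mapped to [0, 1). Any stable hash works here.
  let hash = 2166136261;
  const input = `${testId}:${visitorId}`;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 16777619);
  }
  return (hash >>> 0) / 4294967296;
}

function assignVariant(visitorId: string, testId: string, splitA = 0.5): "A" | "B" {
  // The same visitor always hashes to the same bucket, so the assignment
  // is stable across sessions and devices that share the visitor ID.
  return hashToUnitInterval(visitorId, testId) < splitA ? "A" : "B";
}

// Example: a 50/50 split
console.log(assignVariant("visitor-123", "greeting-test-march"));
```

Including the test ID in the hash input keeps assignments independent between tests, so a visitor who lands in variant A of one test is not systematically pushed into variant A of the next.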
Test Types
| Type | What You’re Testing |
|---|---|
| Greeting | Different opening messages |
| Tone | Friendly vs professional communication style |
| Upselling | Subtle vs proactive accessory suggestions |
| Custom | Any other conversation variable |
Creating a Test
| Field | Description |
|---|---|
| Name | Internal name (e.g. “Greeting Test - March 2026”) |
| Type | What aspect you’re testing |
| Variant A | Control — your current approach |
| Variant B | Challenger — the new approach to test |
| Traffic Split | Percentage of traffic seeing each variant (default 50/50) |
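Tests are created through the dashboard rather than code, but it can help to see the fields as a single structure. The sketch below simply restates the table as a TypeScript object; the `AbTest` type, its field names, and the sample variant texts are hypothetical, not a documented API.

```typescript
// Illustrative only: tests are created in the dashboard, not via code.
// This sketch restates the fields above as a typed object.

type TestType = "greeting" | "tone" | "upselling" | "custom";

interface AbTest {
  name: string;          // internal name
  type: TestType;        // aspect being tested
  variantA: string;      // control: your current approach
  variantB: string;      // challenger: the new approach to test
  trafficSplitA: number; // share of traffic seeing variant A (0 to 1)
}

const greetingTest: AbTest = {
  name: "Greeting Test - March 2026",
  type: "greeting",
  variantA: "Hi! How can I help you today?",
  variantB: "Welcome back! Looking for anything in particular?",
  trafficSplitA: 0.5, // default 50/50 split
};
```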
Results
Each test tracks:
- Impressions — Conversations per variant
- Conversions — Bookings per variant
- Conversion Rate — Conversions divided by impressions, so the two variants can be compared directly
- Statistical Confidence — Whether the observed difference is likely real rather than random noise (see the sketch below)
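The product reports statistical confidence without documenting the method it uses. A standard way to judge whether a difference in conversion rates is meaningful is a two-proportion z-test, sketched below under that assumption; the figures in the example are made up.

```typescript
// Background sketch: a two-proportion z-test on conversion rates.
// The platform does not document which test it uses, so treat this as
// general background rather than its actual method.

function twoProportionZTest(
  conversionsA: number, impressionsA: number,
  conversionsB: number, impressionsB: number,
): { z: number; pValue: number } {
  const pA = conversionsA / impressionsA;
  const pB = conversionsB / impressionsB;
  // Pooled conversion rate under the null hypothesis (no real difference)
  const pooled = (conversionsA + conversionsB) / (impressionsA + impressionsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / impressionsA + 1 / impressionsB));
  const z = (pB - pA) / se;
  // Two-sided p-value from the normal approximation
  const pValue = 2 * (1 - normalCdf(Math.abs(z)));
  return { z, pValue };
}

function normalCdf(x: number): number {
  // Abramowitz & Stegun approximation of the standard normal CDF (x >= 0)
  const t = 1 / (1 + 0.2316419 * x);
  const d = Math.exp(-x * x / 2) / Math.sqrt(2 * Math.PI);
  const poly = t * (0.31938153 + t * (-0.356563782 + t * (1.781477937 +
    t * (-1.821255978 + t * 1.330274429))));
  return 1 - d * poly;
}

// Example: 40 bookings from 1,000 conversations vs 55 from 1,000
const result = twoProportionZTest(40, 1000, 55, 1000);
console.log(result.pValue < 0.05 ? "Significant at 95% confidence" : "Not yet significant");
```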
Best Practices
- Test one thing at a time — changing the greeting and the tone in the same test makes it impossible to tell which change drove the result
- Run tests for at least 2 weeks — small sample sizes give misleading results (the sample-size sketch below shows why)
- Don't peek and stop early — let the test reach statistical significance before calling a winner
- Document your learnings — build institutional knowledge about what your customers respond to
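To get a feel for why short tests mislead, the sketch below uses the standard sample-size formula for comparing two proportions. The 95% confidence and 80% power targets, and the 4% baseline rate in the example, are assumptions chosen purely for illustration.

```typescript
// Rough sample-size sketch: how many conversations per variant are needed
// to detect a given absolute lift in conversion rate at 95% confidence and
// 80% power. The baseline and lift in the example are made-up numbers.

function sampleSizePerVariant(baselineRate: number, absoluteLift: number): number {
  const zAlpha = 1.96; // 95% confidence, two-sided
  const zBeta = 0.84;  // 80% power
  const p1 = baselineRate;
  const p2 = baselineRate + absoluteLift;
  const pBar = (p1 + p2) / 2;
  const numerator =
    zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
    zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil((numerator ** 2) / (absoluteLift ** 2));
}

// Example: 4% baseline conversion, hoping to detect a 1-point lift to 5%
console.log(sampleSizePerVariant(0.04, 0.01)); // roughly 6,700 conversations per variant
```

If the assistant only handles a few hundred conversations a week, a meaningful test can easily need more than the two-week minimum, which is why stopping at the first promising number is risky.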