Glossary · CSAT
What is CSAT?
CSAT (Customer Satisfaction Score) measures how satisfied a customer is with a specific interaction, product, or overall experience. It is typically collected via post-interaction surveys that ask customers to rate their satisfaction on a 1–5 or 1–10 scale, and it is reported as the percentage of respondents rating 4 or 5 (or 9 or 10) — the “satisfied” or “highly satisfied” group. CSAT is tactical (it measures individual interaction quality), in contrast to NPS, which is strategic (brand loyalty). AI-powered contact centers now measure CSAT on 100% of interactions through predictive analysis instead of relying on the traditional 5–15% survey response rate.
CSAT formula
Standard survey-based CSAT:
CSAT = (Number of "satisfied" responses / Total responses) × 100
Where “satisfied” is typically a 4 or 5 on a 5-point scale, or 9 or 10 on a 10-point scale.
Some organizations use a straight average:
Mean CSAT = Sum of all ratings / Number of responses
Both are valid; stick with one methodology to enable time-series comparison.
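Both formulas above can be computed in a few lines. The ratings below are hypothetical sample data on a 5-point scale:

```python
# Both CSAT methodologies computed from the same hypothetical ratings.
ratings = [5, 4, 3, 5, 2, 4, 5, 1, 4, 5]

satisfied = sum(1 for r in ratings if r >= 4)      # count of 4s and 5s
csat_percent = satisfied / len(ratings) * 100      # % satisfied
mean_csat = sum(ratings) / len(ratings)            # straight average

print(f"CSAT: {csat_percent:.0f}%")    # 7 of 10 satisfied -> 70%
print(f"Mean: {mean_csat:.1f}/5")      # 38 / 10 -> 3.8
```

Note the two methods can diverge: a batch of 3s drags the mean down without moving the percent-satisfied figure at all, which is one reason to pick a methodology and keep it.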
Typical CSAT questions
Transactional (post-interaction):
- “How satisfied were you with the support you received today?”
- “Rate your recent experience with our team.”
Relationship (periodic):
- “Overall, how satisfied are you with [Company]?”
- “How likely are you to recommend us?” (this is actually NPS)
Specific:
- “How satisfied were you with the product?”
- “How satisfied are you with the delivery experience?”
One question, answered immediately after the event, yields the best data.
Why CSAT matters
- Customer retention — unhappy customers leave. CSAT predicts churn.
- Revenue — satisfied customers spend more, refer more, renew more.
- Operational quality — low CSAT surfaces systemic issues.
- Agent performance — CSAT segmented by agent identifies coaching needs.
- Product feedback — CSAT by product category flags quality issues.
CSAT benchmarks
Typical CSAT by industry (percent satisfied, 4+ on a 5-point scale):
| Industry | Typical CSAT |
|---|---|
| Software / SaaS | 85–90% |
| Retail / ecommerce | 80–90% |
| Hospitality | 85–95% |
| Financial services | 80–88% |
| Healthcare | 75–85% |
| Telecom / cable | 65–75% (historically lower) |
| Government | 65–78% |
| Utilities | 70–80% |
Best-in-class organizations in each category exceed these benchmarks. Sustained CSAT above 90% in any industry is a real competitive moat.
The survey response problem
Traditional survey-based CSAT has structural problems:
- Response rate is 5–15% on average. The 85–95% who don’t respond are silent.
- Responders are not random. Extremes respond (very happy and very unhappy); the middle stays silent.
- Survey fatigue. Response rates decline over time.
- Recency bias. Customers remember recent experiences more vividly.
- Channel bias. Email surveys skew older; in-app surveys skew younger; phone IVR surveys skew older + less tech-savvy.
The result is CSAT based on a biased minority voice.
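The size of that bias is easy to underestimate. The simulation below uses one simple bias model — satisfied customers respond more often than dissatisfied ones — with made-up response rates, purely to illustrate how a low-response survey can overstate satisfaction:

```python
import random

random.seed(42)

# Hypothetical population of 10,000 interactions with a 76% true satisfaction rate.
TRUE_RATE = 0.76
population = [random.random() < TRUE_RATE for _ in range(10_000)]

# Assumed bias: satisfied customers answer 15% of the time, dissatisfied only 5%.
responses = [s for s in population if random.random() < (0.15 if s else 0.05)]

true_csat = sum(population) / len(population) * 100
survey_csat = sum(responses) / len(responses) * 100
print(f"True CSAT:   {true_csat:.1f}%")
print(f"Survey CSAT: {survey_csat:.1f}%")  # inflated well above the true rate
```

With those assumed response rates, the survey figure lands around 90% against a true rate near 76% — the biased minority voice in action.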
AI-based CSAT (predictive CSAT)
Modern AI-powered contact centers measure CSAT on 100% of interactions without surveys. Signal inputs:
- Sentiment analysis — tone of voice, language patterns
- Frustration detection — specific markers (sighs, interruptions, complaint language)
- Satisfaction language — thank-you phrases, positive affirmations
- Interaction flow — escalations, callbacks, hold time
- Resolution language — “that solves my problem” vs. “I’m still confused”
The AI produces a predictive CSAT score for every interaction that correlates with survey CSAT where surveys are available. Coverage jumps from 10% to 100%.
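Conceptually, a predictive CSAT model combines those signal inputs into a single score. The toy sketch below uses a hand-weighted linear model with illustrative feature names and weights — it is not DialPhone's actual model, which would be trained on survey-labeled interactions:

```python
# Toy predictive-CSAT scorer: weighted interaction signals mapped to 0-100.
# Feature names and weights are illustrative assumptions only.

def predict_csat(signals: dict) -> float:
    weights = {
        "sentiment":            40.0,  # -1..1 tone score from sentiment analysis
        "frustration_events":   -8.0,  # sighs, interruptions, complaint markers
        "satisfaction_phrases":  5.0,  # thank-yous, positive affirmations
        "escalated":           -15.0,  # escalation is a strong negative signal
        "resolved":             20.0,  # "that solves my problem"
    }
    score = 50.0  # neutral baseline
    for name, weight in weights.items():
        score += weight * signals.get(name, 0)
    return max(0.0, min(100.0, score))  # clamp to the 0-100 range

call = {"sentiment": 0.6, "frustration_events": 1,
        "satisfaction_phrases": 2, "escalated": 0, "resolved": 1}
print(predict_csat(call))  # 50 + 24 - 8 + 10 + 20 = 96.0
```

A production model would learn these weights from interactions that do have survey responses, then apply them to the ones that don't.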
DialPhone’s AI Interaction Analytics delivers predictive CSAT on every interaction across voice and all digital channels.
Acting on CSAT data
The data is useful only if you act on it:
- Low-CSAT calls flagged for supervisor review within minutes, not days
- Callbacks scheduled for unhappy customers before they churn
- Trend analysis — which issue types drive lowest CSAT?
- Agent coaching — which reps have consistently low CSAT and why?
- Feedback loops to product — which product/feature issues show up in low-CSAT calls?
- Real-time intervention — AI flags mid-call frustration so supervisors can barge in
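The first two items above reduce to a simple alerting pass over scored interactions: anything under a CSAT threshold gets routed to supervisor review. The records, field names, and threshold below are hypothetical:

```python
# Route low-scoring interactions to a supervisor review queue.
# Data, field names, and threshold are hypothetical.
REVIEW_THRESHOLD = 60

interactions = [
    {"call_id": "c-101", "agent": "rivera", "predicted_csat": 91},
    {"call_id": "c-102", "agent": "chen",   "predicted_csat": 42},
    {"call_id": "c-103", "agent": "rivera", "predicted_csat": 58},
]

review_queue = [i for i in interactions if i["predicted_csat"] < REVIEW_THRESHOLD]
for call in review_queue:
    print(f"Flag {call['call_id']} (agent {call['agent']}): "
          f"CSAT {call['predicted_csat']} - schedule callback")
```

The point is latency: because every interaction is scored, this pass can run continuously rather than waiting days for survey returns.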
CSAT vs. NPS vs. CES
Three common customer metrics:
| Metric | Question | Measures | Best for |
|---|---|---|---|
| CSAT | How satisfied were you with this? | Specific interaction quality | Transactional, tactical |
| NPS | Would you recommend us? | Overall relationship / loyalty | Strategic, brand-level |
| CES (Customer Effort Score) | How easy was it? | Friction in the experience | Self-service, support optimization |
Use CSAT for interaction-level measurement. Use NPS for relationship-level. Use CES when investigating friction.
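The three metrics are computed differently as well. The sketch below shows the standard formulas on hypothetical survey data — note that NPS is a net score (promoters minus detractors), not a percentage of satisfied responses:

```python
# The three metrics side by side, from hypothetical survey responses.
csat_ratings = [5, 4, 2, 5, 4]        # 1-5 satisfaction
nps_ratings  = [10, 9, 7, 3, 10, 8]   # 0-10 likelihood to recommend
ces_ratings  = [6, 7, 5, 7]           # 1-7 ease of resolution

# CSAT: % rating 4 or 5.
csat = sum(r >= 4 for r in csat_ratings) / len(csat_ratings) * 100

# NPS: % promoters (9-10) minus % detractors (0-6); 7-8 are passives.
promoters = sum(r >= 9 for r in nps_ratings)
detractors = sum(r <= 6 for r in nps_ratings)
nps = (promoters - detractors) / len(nps_ratings) * 100

# CES: usually reported as a straight average.
ces = sum(ces_ratings) / len(ces_ratings)

print(f"CSAT {csat:.0f}% | NPS {nps:.0f} | CES {ces:.2f}/7")
```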
Segmenting CSAT
Average CSAT hides problems. Segment by:
- Agent — identify coaching and recognition targets
- Issue type — find categorical satisfaction gaps
- Channel — does voice outperform chat?
- Product or feature — what’s driving low satisfaction?
- Customer segment — new customers often have different CSAT patterns
- Time of day or shift — catch staffing quality variance
- Region — regional cultural differences in rating
- Interaction duration — how does CSAT vary with handle and hold time?
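Segmentation itself is just a group-by over scored interactions. The sketch below, on hypothetical records, also shows why averages hide problems: both agents average the same, while the channel split reveals a clear chat gap.

```python
from collections import defaultdict

# Hypothetical scored interactions.
records = [
    {"agent": "rivera", "channel": "voice", "csat": 88},
    {"agent": "rivera", "channel": "chat",  "csat": 72},
    {"agent": "chen",   "channel": "voice", "csat": 95},
    {"agent": "chen",   "channel": "chat",  "csat": 65},
]

def segment_mean(records, key):
    """Mean CSAT per value of the given segmentation key."""
    buckets = defaultdict(list)
    for r in records:
        buckets[r[key]].append(r["csat"])
    return {k: sum(v) / len(v) for k, v in buckets.items()}

print(segment_mean(records, "agent"))    # {'rivera': 80.0, 'chen': 80.0}
print(segment_mean(records, "channel"))  # {'voice': 91.5, 'chat': 68.5}
```

The same function works for any of the segments listed above — just add the field to the records and change the key.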
DialPhone CSAT features
- Predictive CSAT on 100% of interactions via AI Interaction Analytics — no survey bias
- Post-interaction survey automation — SMS or email surveys sent automatically after calls
- Real-time frustration detection — supervisor alerts during live calls
- CSAT segmentation — by agent, issue, channel, customer segment
- Root-cause CSAT analysis — automatic correlation with call drivers
- Integration with CRM — CSAT scores written to Salesforce / HubSpot contact records
Example
A 250-agent B2B SaaS customer success operation had 12% survey response rate and reported CSAT of 89%. After deploying DialPhone Professional with AI Interaction Analytics:
- Predictive CSAT measured on 100% of interactions showed actual CSAT of 76%
- The 88% who never responded were disproportionately frustrated
- Root-cause analysis identified 3 onboarding failures causing 40% of low-CSAT calls
- Fixing those 3 issues lifted measured CSAT from 76% to 84% in 90 days
- Churn in the affected customer segment dropped 18% year-over-year
The survey-only approach was masking a real problem. Measuring 100% uncovered it.