AI is built into DialPhone across calls, messages, meetings, and the contact center. Our responsibility is to use AI in ways that improve customer outcomes without compromising privacy, fairness, or customer control.

Customer data privacy
Customer conversations are never used to train shared foundation models. Fine-tuning is opt-in and keeps each workspace's data isolated.
Human oversight
AI recommendations are reviewable by humans. Agents approve auto-drafted SMS; clinicians approve AI-summarized notes. No irreversible AI-only decisions in critical workflows.
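The approval gate above can be sketched as a hard precondition in code. This is a minimal illustration, not DialPhone's actual implementation; the `Draft` type and `send_sms` function are hypothetical names.

```python
from dataclasses import dataclass


@dataclass
class Draft:
    """An AI-generated draft awaiting human review (hypothetical type)."""
    content: str
    approved: bool = False  # flipped only by an explicit human action


def send_sms(draft: Draft) -> str:
    # Critical workflows refuse to act on unreviewed AI output:
    # no approval, no send, no silent fallback.
    if not draft.approved:
        raise PermissionError("AI draft requires human approval before send")
    return f"sent: {draft.content}"
```

The key design choice is that the check is enforced at the action boundary, so an unapproved draft cannot reach the customer through any code path.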
Sensitive-data redaction
PHI, PII, and PCI-scoped data are detected and redacted or tokenized before model processing. Raw sensitive data never crosses the model boundary for shared features.
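A minimal sketch of pre-model tokenization, assuming regex-based detection for illustration only; a production detector would combine trained models with pattern matching, and these patterns and token formats are hypothetical.

```python
import re

# Illustrative patterns only; real PHI/PII/PCI detection is broader.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}


def tokenize(text: str) -> tuple[str, dict[str, str]]:
    """Replace sensitive spans with opaque tokens before model processing.

    The vault mapping tokens back to raw values stays outside the
    model boundary; only the tokenized text is sent to the model.
    """
    vault: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        def _sub(match, label=label):
            token = f"<{label}_{len(vault)}>"
            vault[token] = match.group(0)  # raw value never reaches the model
            return token
        text = pattern.sub(_sub, text)
    return text, vault
```

For example, `tokenize("Reach me at jo@example.com")` returns text containing an `<EMAIL_…>` token instead of the address, plus the vault needed to detokenize the model's output afterward.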
Transparency
Model cards disclose capabilities, limitations, training-data scope, and known failure modes for every AI feature. Bias-testing results are published.
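A model card can be represented as a simple structured record. The fields below mirror the disclosures listed above; the feature name, version string, and schema are hypothetical, not an actual DialPhone artifact.

```python
# Hypothetical model-card record; field names follow the disclosures
# above, not a real DialPhone schema.
model_card = {
    "feature": "call_summarization",
    "model_version": "summarizer-v2",  # hypothetical version string
    "capabilities": ["summarize transcripts", "extract action items"],
    "limitations": ["may mis-attribute speakers during crosstalk"],
    "training_data_scope": "licensed data and opt-in workspace data only",
    "known_failure_modes": ["degraded summaries on low-audio-quality calls"],
    "bias_testing": {"published": True},
}
```

Keeping the card machine-readable makes it easy to verify that every shipped AI feature has all required disclosures before release.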
Auditability
Every AI action (drafted SMS, transfer decision, summarization) is logged with the model version, the tokenized inputs, and the human who approved or overrode it.
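The audit record above can be sketched as an append-only JSON log entry. The function name and field names are illustrative, not an actual logging API; the inputs are assumed to be already tokenized so the log itself never stores raw sensitive data.

```python
import json
from datetime import datetime, timezone


def log_ai_action(action: str, model_version: str,
                  tokenized_inputs: dict, reviewer: str,
                  decision: str) -> str:
    """Build one append-only audit record for an AI action (sketch)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,                 # e.g. "drafted_sms"
        "model_version": model_version,
        "inputs": tokenized_inputs,       # tokenized upstream, never raw
        "reviewer": reviewer,             # human who approved or overrode
        "decision": decision,             # "approved" or "overridden"
    }
    return json.dumps(record)
```

Logging the model version alongside the reviewer makes every AI action attributable both to a specific model build and to a specific human decision.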
Customer control
Any customer can disable any AI feature. No AI feature is required for billing or service eligibility, and opting out does not reduce service SLAs.
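Per-feature opt-out can be sketched as a workspace-level feature flag. The settings shape and function name here are assumptions for illustration; the important property is that the check gates only the AI behavior, never billing, eligibility, or SLAs.

```python
def is_ai_enabled(workspace_settings: dict, feature: str) -> bool:
    """Check a per-workspace AI feature flag (hypothetical settings shape).

    Disabling a flag switches off that AI feature and nothing else:
    service eligibility and SLAs are unaffected.
    """
    return workspace_settings.get("ai_features", {}).get(feature, True)
```

A caller would branch on this check, falling back to the non-AI workflow when the feature is disabled rather than degrading service.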