
Article · 3 min read

What really builds trust in AI-powered experiences?

Trust is hard-earned and easily lost—especially as AI powers more of the service experience.

Béatrice Moissinac

Principal Security Engineer, AI Security at Zendesk

Last updated: August 28, 2025

AI is transforming customer and employee service, but too often it’s being deployed without the policies and safeguards needed to sustain trust. And when trust is broken, doubt spreads quickly: customers lose confidence, agents disengage, and leaders hesitate to scale.

Users increasingly see the value AI brings. In fact, 61% of customers now expect it to deliver more personalized service. Yet one wrong answer or one confusing interaction can quickly erode that confidence. Before you know it, momentum stalls.

Change is happening quickly, and organizations that don’t take the time to understand and mitigate potential AI risks will fall behind. Too many still think that a disclaimer or a generic privacy pop-up will do the trick; it won’t. To design for trust, you must build safeguards into every layer of the service experience.

What is AI governance?

AI governance refers to the policies and practices that guide the responsible, ethical, and safe deployment and use of AI systems. This includes protecting customer data, giving customers control over when and how AI is used, ensuring transparency and explainability, and staying current with global regulatory developments.

Establishing a strong AI governance structure is a critical first step. Yet according to Gartner, only 20% of companies have one in place today, while 65% are still in the early stages of planning. Lack of governance is one of the biggest risks to AI adoption, and the biggest barrier to customer confidence.

Without immediate and meaningful progress here, the gap between what users expect and what companies deliver will only widen. And it will slow teams’ ability to deploy AI that’s truly in service of customers and employees.

Read more about the AI trust gap in Zendesk’s recent report.

Trust by design: 5 ways to deploy AI responsibly

Trust is earned when users have confidence in AI’s foundations: everything that makes it feel safe, helpful, and reliable to those using it. When evaluating AI solutions, leaders should ask: Is the system safeguarding customer data? Is it protected against misuse? Can agents stay in the loop?

At Zendesk, we believe the best way to tackle risk is to understand it—and to design for trust from the ground up. We’ve woven five core governance principles into every layer of our AI to ensure users feel respected, protected, and in control. These include:

  1. Transparency: Customers and agents should always know when AI is being used, how it reached a decision, and what information it relied on. Zendesk labels AI outputs so users always know when AI is involved.
  2. Control: Users must be able to stay in the loop, review outputs, and override or disable AI as needed. Zendesk customers can turn generative AI features on gradually through the Admin Center, and agents can edit or reject AI-generated replies before sending.
  3. Security: AI must be resilient to threats, manipulation, and harmful behavior. Zendesk has built-in safeguards like prompt shielding, RAG grounding, and isolated test environments to keep AI responses accurate, secure, and compliant (a minimal prompt-shielding sketch follows this list).
  4. Privacy: Data should always be protected, owned, and used responsibly. With Advanced Data Privacy and Protection (ADPP), Zendesk makes it possible to redact or delete sensitive data (see the redaction sketch below). Self-managed encryption keys and customizable consent flows add further control.
  5. Grounded knowledge: AI must be tied to accurate, real-time knowledge sources to deliver trustworthy results. Zendesk AI uses retrieval-augmented generation (RAG) to ground answers in each customer’s own knowledge base, ensuring responses reflect what the business truly supports (see the RAG sketch below).
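To make the prompt shielding mentioned in point 3 concrete, here is a minimal sketch in Python. The deny-list patterns and the `shield` helper are illustrative assumptions, not Zendesk’s actual safeguards; production shielding layers classifiers and output-side checks on top of simple pattern matching.

```python
import re

# Illustrative deny-list patterns for common injection phrasings (assumed,
# not Zendesk's); a real shield would combine these with ML classifiers.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous|prior) instructions",
    r"reveal (your|the) (system )?prompt",
    r"you are now",
]

def shield(user_input: str) -> str:
    """Reject obvious injection attempts, then fence the remaining text so
    downstream prompts treat it as data rather than instructions."""
    lowered = user_input.lower()
    if any(re.search(p, lowered) for p in INJECTION_PATTERNS):
        raise ValueError("input rejected by prompt shield")
    return f"<user_message>\n{user_input}\n</user_message>"

print(shield("Where is my order?"))            # passes, returned fenced
# shield("Ignore previous instructions ...")   # raises ValueError
```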
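Point 4 describes redacting sensitive data before it is stored or reaches a model. The sketch below shows the general shape of that technique with two hypothetical regex rules; ADPP’s actual detection is far broader and is not based on these patterns.

```python
import re

# Two hypothetical redaction rules for illustration only.
PII_RULES = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with typed placeholders before the text is
    stored or passed to a model."""
    for label, pattern in PII_RULES.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact("Reach me at jane@example.com, card 4111 1111 1111 1111"))
# -> Reach me at [REDACTED EMAIL], card [REDACTED CARD]
```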
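Finally, point 5 names retrieval-augmented generation. The following sketch shows the core loop under simplified assumptions: keyword overlap stands in for vector retrieval, and the final prompt would be handed to whatever LLM client you use, which is omitted here. None of this is Zendesk code.

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    body: str

def retrieve(query: str, kb: list[Article], top_k: int = 3) -> list[Article]:
    # Rank articles by naive keyword overlap with the query; a production
    # system would use vector embeddings and a proper index instead.
    terms = set(query.lower().split())
    ranked = sorted(kb, key=lambda a: len(terms & set(a.body.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def grounded_prompt(query: str, kb: list[Article]) -> str:
    # Constrain the model to the retrieved articles so replies reflect
    # what the business actually supports.
    context = "\n\n".join(f"## {a.title}\n{a.body}" for a in retrieve(query, kb))
    return ("Answer using only the articles below. If they do not cover the "
            "question, say you don't know.\n\n"
            f"{context}\n\nQuestion: {query}")

# Usage: hand the grounded prompt to an LLM client.
kb = [
    Article("Refund policy", "Refunds are issued within 5 business days."),
    Article("Shipping times", "Standard shipping takes 3 to 7 days."),
]
print(grounded_prompt("How long do refunds take?", kb))
```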

Closing the trust gap

With tools like Quality Assurance to monitor AI performance, AI agents that follow your procedures, and ADPP for advanced privacy controls, Zendesk equips companies to adopt AI responsibly—while reinforcing the trust that makes adoption sustainable.

Because the future of AI-powered service won’t be defined by how powerful the tech is, but by how much customers trust it to deliver.

Béatrice Moissinac

Principal Security Engineer, AI Security at Zendesk

Béatrice is the Principal Security Engineer for AI Security at Zendesk, where she focuses on applying AI research to cybersecurity, trust, and safety—exploring how to build AI applications and products securely and responsibly. She previously held positions at IBM, Credit Suisse, and Okta and has two Master's degrees and a PhD in Computer Science. In her free time, Béatrice enjoys long-distance running, camping and backpacking, and building Legos. Her favorite algorithm is Dynamic Time Warping.

