Whether you're creating an AI policy or re-evaluating your company's approach to trust, keeping customers' confidence can become increasingly difficult with the unpredictability of generative AI in the picture. We spoke with Michael Bondar, Deloitte principal and enterprise trust leader, and Shardul Vikram, chief technology officer and head of data and AI at SAP Industries and CX, about how businesses can maintain trust in the age of AI.
Organizations benefit from trust
First, Bondar said, each organization needs to define trust as it applies to its specific needs and customers. Deloitte offers tools for this, such as the "trust domains" system found in some of Deloitte's downloadable frameworks.
Organizations want their customers to trust them, but people involved in trust initiatives often hesitate when asked what exactly trust means, he said. Deloitte has found that trusted companies show stronger financial performance, better stock returns and greater customer loyalty.
“And we saw that almost 80% of employees feel motivated to work for a reliable employer,” Bondar said.
Vikram defined trust as the confidence that an organization will act in the best interests of its customers.
When thinking about trust, customers will be asking themselves, “What is the uptime of these services?” Vikram said. “Are these services secure? Can I trust this particular partner to keep my data safe, ensuring that it complies with local and global regulations?”
Deloitte found that trust "starts with a combination of competence and intent, meaning the organization is capable and reliable in delivering on its promises," Bondar said. "But also the rationale, the motivation, the reason behind those actions are aligned with the values (and) expectations of the various stakeholders, and there's humanity and transparency embedded in those actions."
Why might it be difficult for organizations to increase trust? Bondar attributed this to “geopolitical instability,” “socioeconomic pressures,” and “fears” around new technologies.
Generative AI could undermine trust if customers aren't informed about its use
Generative AI is top of mind when it comes to new technologies. If you're going to use generative AI, Bondar noted, it needs to be robust and reliable so it doesn't degrade trust.
“Privacy is key,” he said. “Consumer privacy must be respected and customer data must be used for and only for its intended purpose.”
This includes every stage of AI use, from the initial data collection when training large language models to providing consumers with the ability to opt out of having their data used by AI in any way.
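In practice, honoring an opt-out means gating every record before it enters a training corpus. The following is a minimal sketch of that gate; `CustomerRecord` and its `ai_training_opt_out` flag are hypothetical names for illustration, not any vendor's actual pipeline.

```python
# Minimal sketch of honoring opt-outs before collecting training data.
# CustomerRecord and the ai_training_opt_out flag are hypothetical,
# not any specific vendor's API.
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    customer_id: str
    text: str
    ai_training_opt_out: bool  # set from the customer's privacy settings

def collect_training_data(records: list[CustomerRecord]) -> list[str]:
    """Keep only data from customers who have not opted out of AI use."""
    return [r.text for r in records if not r.ai_training_opt_out]

records = [
    CustomerRecord("c1", "support ticket text...", ai_training_opt_out=False),
    CustomerRecord("c2", "chat transcript...", ai_training_opt_out=True),
]
print(collect_training_data(records))  # only c1's data is eligible
```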
In fact, the process of training generative AI and identifying its shortcomings may be the right time to purge outdated or irrelevant data, Vikram said.
WATCH: Microsoft delayed the launch of its AI-powered Recall feature, looking for more community feedback.
He suggested the following methods to maintain customer trust while implementing AI:
- Provide training to employees on how to use AI safely, focusing on wargaming exercises and media literacy. Keep your organization's stance on data trustworthiness in mind.
- When developing or working with a generative AI model, require consent for data processing and/or compliance with intellectual property rules.
- Watermark AI-generated content and train employees to recognize AI metadata where possible (see the sketch after this list).
- Give customers a full picture of your AI models and their capabilities by openly describing how you use them.
- Create a trust center. A trust center is a “digital and visual liaison layer between an organization and its customers where you educate, (and) share the latest threats, the latest practices, (and) the latest use cases, which we’ve seen do wonders when done right,” Bondar said.
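As a rough illustration of the watermarking point above, the sketch below tags generated content with provenance metadata and checks for it later. The JSON envelope format is invented for this example; production systems would lean on a standard such as C2PA content credentials.

```python
# Rough sketch of tagging AI output with provenance metadata and checking
# for it later. The JSON envelope format is invented for illustration;
# production systems would use a standard such as C2PA content credentials.
import json
from datetime import datetime, timezone

def tag_ai_content(text: str, model: str) -> str:
    """Wrap generated text in an envelope that records its AI origin."""
    return json.dumps({
        "content": text,
        "ai_generated": True,
        "model": model,
        "created": datetime.now(timezone.utc).isoformat(),
    })

def is_ai_generated(envelope: str) -> bool:
    """The kind of check employees or tools would run when vetting content."""
    try:
        return bool(json.loads(envelope).get("ai_generated", False))
    except (json.JSONDecodeError, AttributeError):
        return False  # no metadata envelope, so origin is unknown

doc = tag_ai_content("Q3 outlook summary...", model="example-llm-v1")
print(is_ai_generated(doc))  # True
```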
CRM companies likely already follow rules such as the California Privacy Rights Act, the European Union's General Data Protection Regulation and the SEC's cyber disclosure rules, all of which can also affect how they use customer data and AI.
How SAP builds trust in generative AI products
“At SAP, we have a DevOps team, an infrastructure team, a security team, a compliance team that are deeply embedded in every product team,” Vikram said. “That ensures that every time we make a product decision, every time we make an architectural decision, we think about trust as something from the very beginning, not as an afterthought.”
SAP implements trust by creating relationships between teams and by developing and enforcing a company ethics policy.
“We have a policy that we can’t ship anything until it’s cleared by the ethics committee,” Vikram said. “It’s cleared by the quality controllers… It’s cleared by the security peers. So it’s really adding a layer of process on top of the operational stuff, and the two of them, when combined, actually help us operationalize trust or ensure it.”
When SAP implements its own generative AI products, the same policies apply.
SAP has released several generative AI products, including the CX AI Toolkit for CRM, which can write and rewrite content, automate some tasks, and analyze enterprise data. The CX AI Toolkit will always show its sources when you ask it for information, Vikram said; it’s one way SAP is trying to build trust with its customers who use its AI products.
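As a hedged illustration of what "always shows its sources" can look like in code, the sketch below returns an answer object that carries the IDs of the documents it drew from. The keyword retrieval and the document set are mocked; this is not the CX AI Toolkit's actual API.

```python
# Illustrative sketch of source-attributed answers: every response carries
# the IDs of the documents it drew from. The keyword retrieval and the
# document set are mocked; this is not the CX AI Toolkit's actual API.
from dataclasses import dataclass, field

@dataclass
class Answer:
    text: str
    sources: list[str] = field(default_factory=list)

KNOWLEDGE = {
    "doc-112": "Enterprise accounts renew annually on the contract date.",
    "doc-245": "Refunds are processed within 14 business days.",
}

def answer_with_sources(question: str) -> Answer:
    # Naive keyword match stands in for real retrieval (vector search etc.).
    words = set(question.lower().split())
    hits = [doc_id for doc_id, text in KNOWLEDGE.items()
            if words & set(text.lower().rstrip(".").split())]
    summary = " ".join(KNOWLEDGE[d] for d in hits) if hits else "No supporting documents found."
    return Answer(text=summary, sources=hits)

ans = answer_with_sources("when are refunds processed")
print(ans.text)                 # answer text assembled from matching docs
print("Sources:", ans.sources)  # citations shown alongside the answer
```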
How to reliably embed generative AI in your organization
Overall, companies need to include generative AI and reliability in their key performance indicators.
“With the advent of AI, especially generative AI, there are additional key performance indicators or metrics that customers are looking for, like: How do we increase the trust, transparency, and accountability of the results that we get from a generative AI system?” Vikram said. “These systems are, by default or by definition, highly non-deterministic.
“And now, in order to use these specific capabilities in my enterprise applications, in my revenue centers, I need to have a basic level of trust. At the very least, what do we do to minimize hallucinations or bring in the right ideas?”
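One way to turn that into a key performance indicator is to measure how often the system's answers are actually supported by their cited sources. The sketch below computes such a groundedness rate with a deliberately naive token-overlap check; a real pipeline would use human review or a trained verifier, and all names here are hypothetical.

```python
# Sketch of a trust KPI: the share of AI answers whose claims appear in the
# cited sources. The token-overlap check is a deliberately naive stand-in
# for a real evaluation pipeline (human review or an NLI-based verifier).
import re

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def is_grounded(claim: str, sources: list[str]) -> bool:
    """True if every token of the claim appears in at least one source."""
    return any(tokens(claim) <= tokens(src) for src in sources)

def groundedness_rate(samples: list[tuple[str, list[str]]]) -> float:
    grounded = sum(is_grounded(claim, srcs) for claim, srcs in samples)
    return grounded / len(samples)

policy = ["Refunds are processed within 14 business days."]
samples = [
    ("Refunds are processed within 14 business days", policy),  # grounded
    ("Refunds are instant", policy),                            # unsupported
]
print(f"Groundedness: {groundedness_rate(samples):.0%}")  # 50%
```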
According to Vikram, senior executives are eager to try AI, but they want to start with a few specific use cases at a time. The speed at which new AI products are emerging can conflict with this desire for a measured approach. Concerns about hallucinations and poor-quality content are common. Generative AI used for legal tasks, for example, has shown "pervasive" instances of errors.
But organizations want to try AI, Vikram said. "I've been building AI applications for the last 15 years, and there's never been anything like this. There's never been this kind of appetite, and not just an appetite to learn more, but to do more with it."