Generative AI (GenAI) adoption is skyrocketing, but are leaders putting too much trust in it?

The new Data and AI Impact Report found decision-makers trust GenAI three times as much as traditional machine learning, even though machine learning is more mathematically explainable.

Just because GenAI seems trustworthy doesn’t mean it is. The study calls the gap between how trustworthy AI is perceived to be and how responsibly it’s implemented the “trust dilemma.” Organizations that solve this dilemma unlock measurable ROI and transformational impact; those that don’t risk errors, misinformation and missed opportunities.

How can leaders turn GenAI into an opportunity rather than a risk? Here are three questions to guide the way.

Why do we trust GenAI?

Our propensity to trust GenAI isn’t only due to the technology itself; it’s also shaped by human psychology. Leaders who understand why trust runs high can guide adoption responsibly and avoid costly mistakes.

What factors contribute to an implicit sense of trust in GenAI or, more specifically, GenAI systems based on large language models (LLMs)? Here are four aspects to consider.

  1. Human-like interactivity: We’re biased toward systems that feel intuitive and conversational. The more human GenAI seems, the more we assume its outputs are correct, regardless of its actual reliability or accuracy. This could drive widespread adoption of systems that are fundamentally opaque or flawed.
  2. Ease of use: GenAI feels, and is, useful because of the low barrier to entry. Its fast, tailored responses make complex tasks feel effortless. But that ease of use can mask real gaps in reliability, especially when we accept its outputs without digging deeper.
  3. Confidence effect: GenAI outputs are delivered with an implicit confidence that makes them sound like the truth. In other words, GenAI can confidently produce incorrect responses. In areas where users lack subject matter expertise, it’s easy to be fooled or misled. GenAI isn’t designed to deliver truth or knowledge; it’s designed to generate statistically probable information (see the short sketch after this list). That distinction matters.
  4. Illusion of control: Interactivity gives the impression that we’re steering the system, but control is often an illusion. Good prompt hygiene will influence the system’s output. But if users don’t understand how the underlying model works, that very responsiveness leads to misplaced confidence in the output.
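
To make the “statistically probable, not necessarily true” point concrete, here is a minimal Python sketch. It uses a hypothetical toy distribution rather than any real model, and the token names and probabilities are purely illustrative assumptions; the point is that the core generation step samples what is likely, with no built-in check for what is factual.

```python
import random

# Toy next-token probabilities after the prompt "The capital of Australia is"
# (illustrative values only, not taken from any actual model).
next_token_probs = {
    "Sydney": 0.55,    # statistically common in training text, but wrong
    "Canberra": 0.40,  # the correct answer
    "Melbourne": 0.05,
}

# The model's core operation: sample the next token from the distribution.
tokens, weights = zip(*next_token_probs.items())
completion = random.choices(tokens, weights=weights, k=1)[0]

# The sentence reads fluently and confidently either way;
# "probable" is not the same as "true".
print(f"The capital of Australia is {completion}.")
```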

Often, trust in GenAI is more about perception than reality. Business leaders must go further; they need awareness, skepticism and a clear understanding of GenAI’s capabilities to deliver sustainable value.

Should we trust GenAI?

The short answer: No. GenAI itself can’t be fully trusted.

On my Pondering AI podcast, Andriy Burkov, PhD in Artificial Intelligence and author of The Hundred-Page Language Models Book, called LLMs “useful liars” capable of producing remarkable outputs, but not inherently trustworthy. As noted, LLMs can (and will) produce “hallucinations”: outputs that sound reasonable but are wrong or fabricated. A hallucination isn’t a bug; it’s a feature of how the underlying models work.

The Data and AI Impact Report underscores this caution. While respondents report high trust in GenAI, 62% are concerned about data privacy, 57% about transparency and 56% about ethical use.

Trust in GenAI comes from the people and guardrails built around it. That’s why AI literacy is essential. It equips teams to spot errors, interpret outputs critically and design applications that integrate GenAI effectively. Without it, even the most sophisticated models can become a source of risk rather than insight.

How can we build real trust with GenAI?

Building trust in GenAI isn’t about believing the system blindly; it’s about deploying practices and systems that make its use reliable. Without these foundations, organizations risk errors, data leaks, misaligned decisions, regulatory exposure and hidden bias.

Organizations can build trust with GenAI through three main approaches:

  1. Foster AI literacy. Employees need to understand what GenAI can and cannot do, where it is likely to fail and how to interact with it responsibly. Ongoing training and guidance will help teams apply the right level of skepticism, validate outcomes and make informed decisions.
  2. Define the right use cases for GenAI. Despite the hype around GenAI, it is not the right tool for every business goal and is rarely the complete answer on its own. Early problem framing is critical to ensuring organizations apply GenAI effectively. Thoughtful planning keeps solutions aligned with business goals, reduces errors and prevents wasted effort.
  3. Embed responsible AI practices. Responsible AI provides the governance, structural safeguards and technical protections (including explainability, oversight and ethical review) needed to ensure AI is used appropriately. Only about 25% of organizations currently have central teams managing AI ethics, fairness, data quality, monitoring and bias detection. Those that do are better equipped to anticipate failures, align with their risk appetite and gain a stronger ROI on AI investments.

By combining literacy, thoughtful planning and responsible AI practices, organizations can build trust in AI while ensuring the technology serves the business, its people and its customers responsibly.

Turning risks into opportunities

Real trust in GenAI doesn’t come from how human-like it seems or how effortlessly it generates content; it comes from the integrity of the systems, practices and people guiding its use.

Investing in AI literacy and embedding responsible AI practices gives organizations the knowledge, structures and oversight needed to turn potential risks into real opportunities and deliver lasting impact.

To find out more about how to build trust with AI, read the full Data and AI Impact Report now.



