AI Uncertainty and Trust: Why Saying ‘I Don’t Know’ Matters

Audrey Kerchner

Chief Strategist, Inkyma

“A little solace for the Anti-AI crowd. The greatest weakness of AI is its inability to say ‘I don’t know.’ Our ability to admit what we don’t know will always give humans an advantage.”
—Mark Cuban, Business Insider

We’ve been trained to see AI’s inability to admit uncertainty as a fatal flaw—but what if that same flaw is something we struggle with as humans too? Most people won’t say “I don’t know” in a meeting, during training, or even in casual conversation—not because they lack intelligence, but because they fear judgment. That’s why AI often feels like a safer place to ask “stupid” questions. And it’s also why businesses need to rethink how AI uncertainty and trust in models can create cultures of confidence, not just accuracy.

Managing AI uncertainty and trust in models means three things: (1) allowing AI systems to admit when they don’t know, (2) designing interfaces that show confidence levels or cite uncertainty clearly, and (3) using AI to support, not replace, human judgment. These strategies increase trust, accelerate learning, and encourage safer curiosity inside teams.

KEY TAKEAWAYS:

  • AI uncertainty and trust in models should be treated as a core design feature, not an afterthought.
  • People avoid asking questions when they fear judgment—AI can fill that gap with private, on-demand support.
  • AI tools that express uncertainty (via scores, citations, or fallback responses) are more likely to be trusted and adopted.
  • Safe learning environments—powered by AI—reduce onboarding time, boost employee confidence, and improve accuracy.
  • Business leaders who embrace AI not just for answers, but for safety, create cultures of continuous learning.

This is something we have implemented on behalf of clients. We’ll show you how one mid-sized company empowered a new manager to ramp up faster—without fear of embarrassment—by using an internal AI assistant trained to say “I don’t know” when appropriate.

Mark Cuban Was Half Right: AI Can’t Say “I Don’t Know”… Yet

Cuban’s critique resonates because it captures a fundamental discomfort with artificial intelligence: we expect machines to either know everything or admit when they don’t. In practice, they often do neither. AI tools today are trained to provide a response—any response—regardless of certainty. That’s where problems begin.

But Cuban’s point skips a subtle truth. The issue isn’t just that AI can’t say “I don’t know.” It’s that humans often won’t.

The Real Problem: Humans Rarely Say “I Don’t Know” Either

In a conference room, on a sales call, or during onboarding, few people volunteer their ignorance. Saying “I don’t know” can feel like professional self-sabotage. People stay silent to avoid looking unqualified, disengaged, or out of their depth.

This reluctance creates hidden inefficiencies. Questions go unasked. Learning slows. Misunderstandings multiply. Ironically, it’s this very dynamic that makes AI so appealing to users—it feels like a judgment-free zone. You can ask the “dumb” question and get an answer without losing face.

This is a missed opportunity in most workplaces: creating spaces, digital or human, where curiosity isn’t penalized.

A 2022 global survey indicated that 15% of employees felt reluctant to share views at work due to concerns about negative repercussions—further supporting the link between psychological risk and knowledge-sharing.

AI Is Learning to Say It—and That’s a Good Thing

While early AI models defaulted to confidence, newer systems are beginning to account for uncertainty. Many models now offer probability-based scoring or include fallback responses when confidence is low. Retrieval-augmented generation (RAG) methods, for example, allow AI to pull from defined data sources and gracefully admit when something falls outside that scope.
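
To make that concrete, here is a minimal Python sketch of the pattern, not any particular vendor's implementation: the tiny knowledge base, the keyword-overlap scoring, and the 0.5 threshold are all illustrative assumptions. The point is simply that when nothing in the defined data source matches well enough, the assistant declines rather than guesses.

```python
# Minimal sketch of a RAG-style "answer or admit it's out of scope" check.
# The knowledge base, scoring function, and threshold are illustrative
# assumptions for this example, not a specific vendor's implementation.

KNOWLEDGE_BASE = {
    "mql definition": "An MQL (marketing-qualified lead) is a lead that meets agreed marketing criteria.",
    "crm naming conventions": "Opportunities are named '<Account> - <Product> - <Quarter>'.",
}

def keyword_overlap(question: str, key: str) -> float:
    """Crude relevance score: fraction of the key's words that appear in the question."""
    question_words = set(question.lower().split())
    key_words = set(key.lower().split())
    return len(question_words & key_words) / len(key_words)

def answer(question: str, threshold: float = 0.5) -> str:
    # Find the best-matching snippet in the defined data source.
    best_key, best_score = max(
        ((key, keyword_overlap(question, key)) for key in KNOWLEDGE_BASE),
        key=lambda pair: pair[1],
    )
    # Below the threshold, say so instead of guessing.
    if best_score < threshold:
        return "I don't know -- that's outside the documentation I was given."
    return KNOWLEDGE_BASE[best_key]

print(answer("What is the MQL definition we use?"))  # answered from the snippet
print(answer("What's our PTO policy?"))              # falls back to "I don't know"
```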

Teaching AI to express doubt isn’t a sign of weakness. It’s a step toward trust.

When an AI tool says, “I’m not sure,” or cites the limits of its training data, users gain clarity. And with clarity comes better decision-making.

This evolution in AI uncertainty and trust in models is moving fast—and smart companies are paying attention.

Business Case: Creating a Safe, Private Sandbox for Learning

At Inkyma, we worked with a mid-sized services company navigating rapid expansion. A new sales manager joined the team, bringing strong leadership experience but little context for the company’s internal tools or marketing jargon.

She didn’t know what an “MQL” was. She wasn’t fluent in the CRM’s naming conventions. But rather than slow her team down—or risk judgment by asking senior leadership basic questions—she used a private AI assistant trained on internal documentation.

Inkyma created an internal AI agent, built with security and scope control, that gave her definitions, examples, and links to training modules. She onboarded faster, avoided guesswork, and didn’t feel embarrassed in the process.

This didn’t replace the value of her human manager. Instead, it made their interactions more productive. The AI became a low-friction layer that made learning private, fast, and safe.

Companies using secure, AI-powered onboarding platforms report a 53%–54% reduction in onboarding time, helping new hires ramp up faster while accessing contextualized, just-in-time help.

Why This Matters for Leadership and Culture

In many organizations, knowledge gaps stay hidden. Team members avoid asking questions. They nod along in meetings. They make assumptions that lead to costly mistakes.

AI—when implemented thoughtfully—becomes a judgment-free layer for curiosity. It democratizes access to information. It supports new hires, cross-functional collaboration, and executive decision-making.

But none of this works if the AI isn’t designed with transparency. Without AI uncertainty and trust in models, that layer can do more harm than good.

By teaching AI when to answer and when to defer, leaders build tools that genuinely support their teams.

AI Uncertainty and Trust in Models

Let’s zero in on the core concept. AI uncertainty and trust in models is a leadership issue. Trust comes when users understand what the AI knows, and more importantly, what it doesn’t.

Here’s how trust is built (see the sketch after this list):

  • Confidence Scores: Show how sure the AI is, so users can weigh their next steps.
  • Source Citations: Let users trace answers back to data, improving credibility.
  • Fallback Responses: Teach AI when to say, “That’s beyond my scope.”
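
As a rough illustration of how those three mechanisms could show up together in a tool, here is a short Python sketch. The AnswerPacket structure, its field names, and the 0.6 threshold are assumptions made for this example, not a prescribed design; the idea is that a low-confidence answer is replaced by a clear fallback, while a confident one carries its score and sources.

```python
from dataclasses import dataclass, field

# Illustrative structure only: the field names and the 0.6 threshold are
# assumptions made for this example, not a prescribed design.

@dataclass
class AnswerPacket:
    text: str
    confidence: float                                    # 0.0-1.0 score from the model or retriever
    citations: list[str] = field(default_factory=list)   # links back to source documents

FALLBACK = "That's beyond my scope. Please check with your manager or the source docs."

def render(packet: AnswerPacket, min_confidence: float = 0.6) -> str:
    # Fallback response: defer instead of answering when confidence is low.
    if packet.confidence < min_confidence:
        return FALLBACK
    # Confidence score and source citations shown alongside the answer.
    sources = ", ".join(packet.citations) or "no sources cited"
    return f"{packet.text}\n(confidence: {packet.confidence:.0%}; sources: {sources})"

print(render(AnswerPacket(
    text="An MQL is a marketing-qualified lead.",
    confidence=0.92,
    citations=["handbook/glossary.md"],
)))
print(render(AnswerPacket(text="Guessing about the PTO policy...", confidence=0.35)))
```

In practice the confidence value would come from the model or the retrieval layer; what matters is that the interface surfaces it instead of hiding it.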

New research on AI systems shows that models citing high-quality and diverse sources have a more favorable trust profile among users. For example, a 2025 citation pattern study found that OpenAI’s language models cite high-quality sources 96.2% of the time, reinforcing the value of comprehensive, visible citations in gaining user confidence.

Close the Loop: The Real Advantage Is Psychological Safety

Mark Cuban is right: humans can admit “I don’t know.” But often, they don’t. The real competitive advantage isn’t just being capable of admitting it—it’s creating systems where it’s safe to do so.

That’s the opportunity AI presents. When designed with human psychology in mind, AI becomes more than a tool—it becomes a safe space for growth, clarity, and confidence.

And in high-stakes business environments, that kind of trust isn’t a feature. It’s a strategy.

Take Action Today

If you’re integrating AI into your organization—or thinking about it—consider this: trust doesn’t come from always having the answer. It comes from knowing when to pause, reflect, and admit what isn’t known.

Inkyma helps companies build AI tools that empower curiosity, protect decision-making, and create a safer path to learning. Let’s design AI that supports your team—not replaces it.

Schedule a Strategy Session to explore how we can build intelligent, trustworthy AI systems tailored to your business needs.

Why is it important for AI systems to express uncertainty?

Expressing uncertainty helps users understand the confidence level of AI-generated outputs. When AI models acknowledge limitations or signal low confidence, it prevents blind trust, reduces the risk of errors, and promotes better human oversight.

Can acknowledging uncertainty in AI reduce user reliance on incorrect outputs?

Yes. When AI transparently communicates its uncertainty, users are more likely to double-check responses or seek additional input. This reduces over-reliance on AI and encourages users to participate more actively in the decision-making process.

How can businesses train their internal AI systems to say “I don’t know”?

Businesses can implement fallback mechanisms, set thresholds for confidence scores, and use retrieval-augmented generation techniques. These help AI recognize when it lacks sufficient context or data, allowing it to respond with uncertainty rather than generating inaccurate answers.
