April 9, 2026 • 5 min read

The AI trust gap. Why more people use AI but fewer people trust it.

78% of Americans now use AI-powered tools. Yet 55% believe AI will do more harm than good in their daily lives. Only 21% trust AI-generated information most of the time.

In short: Usage is rising. Trust is falling.

For any brand integrating AI, this is not a technology gap. It is a trust gap. How AI shows up in your product, how it is explained, and how much control users feel they have will shape whether it strengthens or weakens your relationship with them.

The paradox nobody is talking about.

AI adoption is accelerating across work and everyday life.

21% of American workers now use AI for at least part of their role, up from 16% in 2024. Two-thirds say they are more proficient with AI tools than they were a year ago.

At the same time, trust is moving in the opposite direction.

Pew Research has tracked this shift over five years. In 2021, 37% of Americans said they felt more concerned than excited about AI. By 2025, that number had reached 50%. A steady rise in scepticism as AI becomes more visible and more embedded.

Quinnipiac’s April 2025 poll sharpens the picture. 55% of Americans believe AI will do more harm than good in their daily lives. Only 34% believe it will do more good. When it comes to trust, 76% say they trust AI-generated information hardly ever or only some of the time. Just 21% trust it most or almost all of the time.

As one Quinnipiac researcher put it, the contradiction is clear: People are adopting AI, but doing so with hesitation rather than confidence.

Usage is being driven by necessity and convenience. Not belief.

This pattern is not limited to the US. KPMG’s global study shows 72% of people worldwide accept or approve of AI, while the US sits lower at 54%. The UK and Europe follow a similar direction. Adoption increases. Concern rises alongside it.

For brands operating across markets, this is not a regional issue. It is a shared shift in sentiment.

This is deeper than tech scepticism.

The concern is not just about accuracy or reliability. It runs deeper:

  • 53% of Americans believe AI will worsen people’s ability to think creatively.
  • 50% believe it will harm the ability to form meaningful relationships.
  • 64% think AI will do more harm than good in education.

These are not technical concerns. They are human ones.

Even in healthcare, where sentiment is more positive, trust still has limits. When presented with AI that outperforms human radiologists, most people still want a human second opinion. Accuracy alone does not build trust. Accountability does.

KPMG’s 2025 study reflects the same tension in the workplace. 70% of US workers are eager for AI’s benefits and 61% report positive impact. Yet only 41% are willing to trust it.

Half the workforce uses AI tools without knowing whether they are allowed to. Over 40% knowingly use them improperly.

People are already relying on AI. They are just doing so without clarity or confidence.

What this means for brands.

Every AI feature is a trust interaction.

When users already approach AI with scepticism, each interaction either reinforces that doubt or begins to shift it. The chatbot on your site. The recommendation engine. The automated decision. Each moment carries weight.

The brands that struggle tend to repeat the same patterns.

  • They lead with capability, not control. Faster. Smarter. More personalised. But without explaining limits, data use, or how users can intervene.
  • They hide AI in the experience. Users discover it after the fact. That sense of being misled deepens distrust. Research shows that customers who perceive AI as transparent are 8.5 times more likely to express high trust in the brand.
  • They remove human touchpoints without context. A person becomes a bot. Advice becomes an algorithm. The shift feels like a downgrade, not an improvement.
  • They treat AI as a feature launch, not a relationship shift. Announcements focus on innovation, not impact. Customers are left to work out what has changed and what it means for them.

What the brands getting it right do differently.

The brands building trust approach AI as part of the relationship, not just the product.

  • They name it with intent. When we worked with SS&C Intralinks, we renamed their AI platform from ‘Deal.io’ to ‘Link’. Not to obscure the technology, but to frame it as something that connects and supports. The name signals purpose, not just capability.
  • They keep humans visible. Trusted implementations make escalation clear and accessible. Not hidden. Not secondary. The data shows people want a human in the loop, even when AI performs better.
  • They explain clearly. What the AI is doing. What data it uses. Where its limits sit. This information lives in the experience itself, not buried in documentation.
  • They design for control. Users can adjust, override and understand decisions. Progressive disclosure plays a role here. Show the outcome. Then show the reasoning when needed.
  • They communicate early. The trust conversation happens before rollout. Customers understand what is changing and why. There are no surprises.

The gap between technologists and the public.

There is a clear divide between those building AI and those using it.

Inside product teams, AI is seen as progress. A capability to explore and expand.

Outside, the experience is different. AI shows up as a support bot that cannot resolve an issue. As headlines about deepfakes or job loss. As a sense that change is happening without consent.

This divide is measurable. According to KPMG data, 72% of people accept or approve of AI. In the US, that drops to 54%. Americans are more sceptical than most other developed nations. The optimism of Silicon Valley is not shared by the people buying its products.

For brands, that gap matters. Internal excitement can lead to decisions that feel misaligned externally. What looks like progress inside the business may feel like risk to the customer.

What to do about it.

The wider trust gap is not something one brand can solve alone. But how you respond to it is within your control.

  • Start by auditing your AI touchpoints. Where does AI appear across your customer journey? Do users know it is there? Do they understand what it does? Can they take control or opt out?
  • Design for scepticism. Most users are cautious. Clear labelling, visible human alternatives and plain language explanations help reduce friction.
  • Separate communication from promotion. AI integration is not just a feature. It changes how customers interact with your brand. Explain what changes, what stays the same and what safeguards are in place.
  • Track sentiment as closely as adoption. The direction of trust matters more than the speed of uptake. If usage grows while confidence falls, the gap becomes a risk.

Trust is slower than technology.

AI will continue to advance. Capabilities will expand. Integration will deepen.

Trust moves differently. It builds through experience. Slowly. And it can fall quickly.

A single moment of confusion, loss of control or perceived deception carries weight.

The brands that navigate this well will not be the fastest to deploy. They will be the clearest in how they communicate, the most thoughtful in how they design, and the most deliberate in how they build control into the experience.

Let’s get things Done & Dusted.

AI is reshaping products and services. But the relationship between brand and user still sits at the centre.

We are a brand design agency working with technology and financial services firms to design brands and digital experiences that earn trust. From positioning and naming through to UX and AI-powered interactions, we help you introduce new capabilities without losing confidence.

From ambition to action. With clarity at every step.

If your brand is integrating AI and you need to ensure it lands with confidence, not concern, we can help.

Contact us today and let’s assess what your brand needs to take its next step.

Ready when you are.
