# The trust layer: what brands must build before AI agents shop for customers

**Published:** May 13, 2026 | **Updated:** May 13, 2026 | **Authors:** Nidhi Nair | **Categories:** Trends and expert opinions

> As agentic commerce emerges, brands must architect AI around trust — continuous context, human guardrails, and visibility — or risk customer loyalty.

---

[Consumer trust has been eroding for years](https://www.forrester.com/report/forresters-global-customer-experience-index-cx-index-tm-rankings-2025/RES184177) — and AI has accelerated the decline.

People can’t tell whether product photos are real. They don’t know if reviews were written by a consumer or generated by a model. The person responding on Instagram might not be a person at all. The entire information environment around shopping has gotten less reliable, and consumers know it.

The first wave of AI in CX made things worse. Most brands deployed AI to deflect — to keep consumers away from expensive human conversations. Consumers figured that out fast:

- The chatbot that can’t help you.

- The phone tree that loops in circles.

- The “AI assistant” that sends you to an article you’ve already read.

Every one of those moments taught consumers that AI exists to keep them away from someone who can actually help.

[According to PwC research](https://www.pwc.com/us/en/industries/consumer-markets/library/agentic-commerce.html), 64% of consumers say they’d need at least one safeguard before they’d be comfortable with an AI agent making a purchase on their behalf. That’s where we’re starting from: most people don’t trust AI with their money yet.

#### Agentic commerce raises the stakes

When AI agents start shopping on behalf of consumers — comparing products, negotiating, completing purchases — the number of places where trust can break multiplies.

[Harvard Business Review’s February 2026 analysis](https://hbr.org/2026/02/how-brands-can-adapt-when-ai-agents-do-the-shopping) on agentic commerce identified five specific risks:

1. **The agent misunderstands the product.**

2. **The agent acts beyond what the consumer authorized.**

3. **The agent mishandles sensitive information.**

4. **The agent misrepresents the brand.**

5. **The agent fails without a recovery path.**

Each of these risks exists today in some form. As AI agents operate with more autonomy across a wider journey, each one becomes more likely and more consequential. A bad recommendation in a chat widget is annoying. An unauthorized purchase made by an AI agent on your behalf is a different category of failure.

#### Efficiency is table stakes

For most CX leaders, the focus hasn’t changed: serve more consumers without adding headcount. AI has become the answer to that mandate. Most of the investment, most of the vendor pitches, most of the metrics are oriented around one thing — how much volume can AI absorb so we don’t have to hire for it.

The pressure to capture those savings is real and legitimate. But how you capture them matters.

If you drive efficiency by putting AI between the consumer and the resolution — if every conversation starts from scratch because the system doesn’t know who the consumer is — you’re optimizing cost today while breaking the trust that earns you the next conversation.

A deflection that saves $3 on this conversation can break trust with a consumer worth $3,000 over their lifetime. And the worst part: you’ll never see it happen. The consumer doesn’t complain. They just don’t come back. In most industries, the switching cost is low. There are plenty of competitors waiting to earn the trust you just lost.

Efficiency matters, but it’s table stakes. The problem is treating it as the end goal — measuring success by how many conversations you eliminated instead of whether customers trust you enough to come back.

The brands that find success with [agentic commerce](/blog/what-is-agentic-commerce/) will be the ones whose consumers trust the AI enough to let it help with more, buy more, come back more. Efficiency reduces cost. Trust earns the next conversation, and the one after that. You need both for a growing, profitable business.

#### Trust is a layer, not a feature

Every AI vendor in CX will tell you their product is trustworthy. That word has become as meaningless as “omnichannel” was five years ago. The question isn’t whether a vendor claims trust. It’s whether the system is architected so that trust is the default outcome.

Three things have to be true.

##### 1. Context has to be continuous

If the AI doesn’t know the consumer — their conversation history, purchase patterns, preferences, past issues — it’s guessing. And consumers can tell when AI is guessing.

Continuous context means the AI knows that this consumer returned a jacket last month because the sizing ran small, so when they’re browsing jackets again today, the AI can recommend sizing up without being asked. That’s what trust feels like in practice.
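As a rough sketch, continuous context amounts to a single per-consumer record that every channel reads and writes, so a recommendation can react to what already happened. All names here (`ContextStore`, `Interaction`, `recommend_size`) are illustrative, not a real product API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of "continuous context": one shared record per
# consumer that any AI touchpoint can consult before responding.

@dataclass
class Interaction:
    kind: str   # e.g. "purchase", "return", "chat"
    item: str
    note: str

@dataclass
class ContextStore:
    """One continuous history per consumer, shared across channels."""
    history: list[Interaction] = field(default_factory=list)

    def record(self, interaction: Interaction) -> None:
        self.history.append(interaction)

    def recall(self, kind: str, item: str) -> list[Interaction]:
        # Find past interactions of this kind involving this item.
        return [i for i in self.history if i.kind == kind and item in i.item]

def recommend_size(store: ContextStore, item: str, default: str) -> str:
    # If this consumer previously returned a similar item because it ran
    # small, proactively suggest sizing up instead of the default.
    for past in store.recall("return", item):
        if "ran small" in past.note:
            return "size up from " + default
    return default

store = ContextStore()
store.record(Interaction("return", "jacket", "sizing ran small"))
print(recommend_size(store, "jacket", "M"))  # suggests sizing up
```

The point of the sketch is the shape, not the logic: the recommendation function takes the consumer's history as an input, so "the AI knows you" is a property of the data flow, not a feature bolted on later.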

##### 2. Humans have to be part of the same conversation

The five HBR risks share a common thread: an AI operating without guardrails.

The fix isn’t to limit what the AI can do, but to ensure that when the AI reaches the edge of what it should do, a human can step in without the consumer starting over. Not an escalation. Not a transfer. A continuation.

The consumer should feel like one conversation with one brand, whether AI or a human is on the other side.
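That "continuation, not transfer" idea can be sketched as a single conversation object that both the AI and a human agent write into, so a handoff changes who is responding without resetting anything. Names and the scope check are hypothetical:

```python
# Hypothetical sketch: one conversation object shared by AI and human,
# so a guardrail handoff is a continuation rather than a restart.

class Conversation:
    def __init__(self, consumer_id: str):
        self.consumer_id = consumer_id
        self.messages: list[tuple[str, str]] = []  # (sender, text)
        self.handler = "ai"

    def say(self, sender: str, text: str) -> None:
        self.messages.append((sender, text))

    def needs_human(self, request: str, ai_scope: set[str]) -> bool:
        # Guardrail: any request outside the AI's authorized scope
        # should be handled by a person.
        return request not in ai_scope

    def hand_to_human(self) -> None:
        # The human inherits the full transcript; the consumer
        # does not repeat themselves.
        self.handler = "human"

convo = Conversation("c-123")
convo.say("consumer", "I want a refund above the standard limit.")
if convo.needs_human("exception_refund", {"order_status", "standard_return"}):
    convo.hand_to_human()
print(convo.handler, len(convo.messages))  # human now holds the same thread
```

The design choice the sketch makes is that escalation is a change of `handler` on the same object, not the creation of a new case: the transcript, the consumer, and the context all persist across the boundary.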

##### 3. CX leaders must see and shape what the AI does

If you can’t see how your AI is responding, what it’s promising, or where it’s going off-script, you’re trusting the model instead of your own guardrails.

Visibility and control aren’t about limiting AI. They’re about building the kind of confidence — internally and externally — that lets you give the AI more responsibility over time.

None of these are features you toggle on. They’re architectural choices that either exist at the foundation of how the system works, or they don’t. You can’t bolt trust onto a system built around cases and queue numbers any more than you can bolt a second story onto a foundation built for one.

#### The business case for trust

Trust compounds.

- A conversation where the AI remembers this consumer’s preferences and gives a recommendation that actually lands.

- A return where the AI knows what went wrong and offers an exchange that fits.

- A first visit where the AI asks the right questions instead of pushing the wrong products.

Every one of these is a moment where trust is either maintained or broken.

Consumers who trust you come back. They spend more. They cost less to serve because the system already knows them. Every conversation that maintains trust earns the next one.

#### The layer that makes everything else work

[Agentic commerce will reshape how consumers discover, shop, and get help](https://fortune.com/brandstudio/gladly/your-ai-doesnt-know-your-customer-gladly-does/). The protocols will mature. The AI will get smarter. But none of it works if consumers don’t trust the experience.

Trust isn’t a nice-to-have you layer on after the technology is in place. It’s the layer that determines whether the technology creates value or destroys it.

Brands that treat trust as architecture — continuous context, shared conversations between humans and AI, and visible, controllable guardrails — will be the ones consumers invite into their wallets when agents start to shop on their behalf.

---

*This content is provided by Gladly. Visit [gladly.com](https://gladly.com) for more information.*