November 20, 2025

What AI built for resolution does differently and why it matters in healthcare

6 min read

A parent opens their patient portal at 11:30 PM on a Sunday and notices their child's critical medication is missing from the prescription list. Panic sets in.

They open the chatbot. It asks: "What can I help you with today?"

Option A (Most healthcare chatbots)

The parent types, "My son's prescription is missing." The bot offers a menu, where you can check prescription status, request a refill, or contact your pharmacy.

None of these options addresses the actual problem. The parent tries "Contact your pharmacy," which triggers: "Our pharmacy is open Monday-Friday, 9 AM - 5 PM. Please call back during business hours."

The chat closes. The parent is alone with their anxiety. The bot counts this as a successful "deflection."

Option B (AI built for crisis)

The parent types, "My son's prescription is missing."

The AI recognizes:

  • Urgency signals: "missing," nighttime query, medication for a minor

  • Clinical context: This isn't a refill request—it's a potential continuity-of-care issue

  • Escalation threshold: This requires human clinical judgment

And the AI responds immediately: "I can see this is urgent. Let me connect you with our on-call nursing team right now. I'm sending them your account details so you don't have to repeat yourself."

Within 90 seconds, a nurse is on the line with full context. The prescription issue is resolved before midnight.

The difference? One system was designed to deflect. The other was designed to resolve.

The crisis support gap in healthcare AI

Most healthcare chatbots are built on the same logic as e-commerce bots. They answer common questions, guide users through FAQs, and keep them out of the support queue. This works beautifully for "Where's my order?" It fails catastrophically for "Is this chest pain normal?"

Here's why.

Crisis psychology is fundamentally different

When a patient is in crisis mode, whether it's a medication concern, post-op complication, or sudden symptom, their cognitive load is maxed out. They can't navigate menus. They can't parse conditional logic. They need one of two things:

  1. Immediate, specific resolution ("Your prescription was moved to your secondary insurance and is ready for pickup at CVS on Main Street")

  2. Immediate escalation to a human ("Let me connect you with a nurse who can help")

There is no middle ground. A patient at 11:30 PM doesn't want a chatbot that tries. They want a system that knows when to step in and when to step back.

Traditional metrics break down in high-stakes moments

The chatbot industry loves "deflection rate," the percentage of queries handled without a human. In retail, this makes sense. In healthcare, it's dangerous.

A chatbot that deflects a parent asking about their child's fever by offering generic self-care tips hasn't resolved anything. It's just a delayed escalation. When the parent calls back two hours later (now more anxious, now less trusting), that's not efficiency; it's a failure masquerading as automation.

What to measure instead (a sketch of computing the first three follows the list):

  • Resolution rate: Was the patient's actual problem solved?

  • Escalation accuracy: Did the AI correctly identify when human judgment was needed?

  • Time to resolution: How long from initial query to actual answer (including escalations)?

  • Patient confidence: Post-interaction CSAT for urgent queries specifically
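
For concreteness, here is a minimal sketch of how the first three metrics might be computed from an interaction log. The Interaction record and its field names are hypothetical stand-ins for illustration, not Gladly's data model.

```python
from dataclasses import dataclass
from statistics import mean


@dataclass
class Interaction:
    """One chatbot conversation, as it might appear in an interaction log."""
    resolved: bool                 # was the patient's actual problem solved?
    needed_human: bool             # ground truth: did this need clinical judgment?
    escalated: bool                # did the AI hand off to a human?
    minutes_to_resolution: float   # initial query to actual answer, incl. escalation


def resolution_report(interactions: list[Interaction]) -> dict[str, float]:
    """Resolution-first metrics, instead of a single deflection rate."""
    resolution_rate = mean(i.resolved for i in interactions)
    # Escalation accuracy: the AI escalated exactly when a human was needed.
    escalation_accuracy = mean(i.escalated == i.needed_human for i in interactions)
    resolved_times = [i.minutes_to_resolution for i in interactions if i.resolved]
    return {
        "resolution_rate": resolution_rate,
        "escalation_accuracy": escalation_accuracy,
        "avg_minutes_to_resolution": mean(resolved_times) if resolved_times else float("nan"),
    }
```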

Most healthcare AI lacks clinical safeguards

A chatbot trained on general customer service patterns doesn't understand the difference between:

  • "I'm having trouble logging into my portal" (transactional)

  • "I'm having trouble breathing after my surgery" (clinical emergency)

Without explicit training on clinical vs. transactional differentiation, AI treats both as support tickets.

This creates two risks.

Under-escalation
The AI provides self-care guidance for something that requires immediate clinical intervention.

Over-escalation
The AI routes routine questions to on-call clinicians, wasting critical resources and burning out staff.

The right AI does neither. It's trained on clinical keywords, urgency indicators, and patient sentiment, and knows exactly when its job is to get out of the way.
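
To make that distinction concrete, here is a rough triage sketch. The keyword lists, thresholds, and the triage function itself are invented for this post; a production system would rely on a trained classifier and clinically reviewed signals rather than hard-coded terms.

```python
# Illustrative only: a real system would use a trained classifier and
# clinically reviewed vocabularies, not a hard-coded keyword list.
CLINICAL_TERMS = ("breathing", "chest pain", "bleeding", "fever", "dizzy", "post-op")
URGENCY_TERMS = ("missing", "can't", "cannot", "right now", "emergency", "getting worse")


def triage(message: str, after_hours: bool, involves_minor: bool) -> str:
    """Return 'escalate' for likely clinical or urgent queries, else 'self_serve'."""
    text = message.lower()
    clinical_hits = sum(term in text for term in CLINICAL_TERMS)
    urgency_score = sum(term in text for term in URGENCY_TERMS)
    urgency_score += int(after_hours) + int(involves_minor)

    # Guard against under-escalation: any clinical signal goes to a human.
    if clinical_hits > 0:
        return "escalate"
    # Guard against over-escalation: routine queries stay self-serve unless
    # several urgency signals stack up (e.g., "missing" + after hours + minor).
    return "escalate" if urgency_score >= 3 else "self_serve"
```

Run against the opening scenario, triage("My son's prescription is missing", after_hours=True, involves_minor=True) returns "escalate" ("missing" plus the late hour plus a minor stacks three urgency signals), while a daytime login question stays self-serve.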

What resolution-first AI looks like in practice

When you design AI for resolution (not deflection), the architecture changes.

Before: the deflection-optimized flow
  1. Patient asks a question

  2. Bot searches FAQ

  3. Bot presents 3-5 potential answers

  4. Patient selects one (or gives up)

  5. Metric tracked: Deflection rate

After: the resolution-optimized flow (sketched in code below)
  1. Patient asks a question

  2. AI classifies: Transactional vs. Clinical

  3. If transactional, AI retrieves a specific answer from a verified source (prescription status, appointment time, billing amount)

  4. If clinical, AI escalates immediately with full context

  5. Metric tracked: Resolution rate + escalation accuracy
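
A skeleton of that routing logic might look like the sketch below. The function and parameter names (handle_query, classify, lookup, escalate, load_account) are hypothetical placeholders for whatever classification, verified-data, and paging services a real deployment wires in; this is a sketch of the pattern, not Gladly's actual architecture.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Handoff:
    """Context sent with every escalation so the patient never repeats themselves."""
    patient_id: str
    original_message: str
    classification: str
    account_summary: dict


def handle_query(
    patient_id: str,
    message: str,
    classify: Callable[[str], str],               # -> "clinical" or "transactional"
    lookup: Callable[[str, str], Optional[str]],  # verified-source answer, or None
    escalate: Callable[[Handoff], None],          # pages the on-call team
    load_account: Callable[[str], dict],
) -> str:
    label = classify(message)

    if label == "clinical":
        # Clinical queries escalate immediately, with full context attached.
        escalate(Handoff(patient_id, message, label, load_account(patient_id)))
        return "I can see this is urgent. I'm connecting you with our on-call team now."

    # Transactional queries are answered only from verified sources
    # (prescription status, appointment time, billing amount), never from guesswork.
    answer = lookup(patient_id, message)
    if answer is not None:
        return answer

    # No verified answer: fall back to escalation rather than deflection.
    escalate(Handoff(patient_id, message, "unresolved", load_account(patient_id)))
    return "I couldn't resolve that directly, so I'm handing you to a team member with your details."
```

The key design choice is the fallback at the end: when no verified answer exists, the system escalates with context instead of deflecting, which is what keeps the resolution-rate and escalation-accuracy metrics honest.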

Meet Gladly

At Gladly, we built our AI on a simple principle: AI should make care more human, not less.

That means:

  • Resolution over deflection. We measure success by problems solved, not calls avoided

  • Intelligent escalation. Our AI knows when to step back and hand off to humans

  • Context preservation. When we escalate, human team members see everything. No patient has to repeat themselves

  • 24/7 without compromise. Always-on doesn't mean always-automated. It means always ready to resolve or escalate

Because when a parent is searching for answers at 11:30 PM, they don't need a chatbot that tries to help. They need a system that either solves the problem or immediately connects them with someone who can.

See how Gladly handles crisis moments in healthcare.