December 2, 2025

Why your pharmacy chatbot is making people sicker

6 min read

When Air Canada's chatbot gave a customer incorrect information about bereavement fares, the airline was held liable for the AI's mistake. In New York, a law firm was fined $5,000 for citing fake cases generated by ChatGPT. These headlines are more than cautionary tales. They expose a critical blind spot in the rush to adopt AI: the failure to demand safety alongside efficiency.

Nowhere are the stakes higher than in healthcare. Yet when selecting AI vendors for patient-facing tools like pharmacy chatbots, most organizations' Requests for Proposals (RFPs) focus overwhelmingly on deflection rates and cost reduction. They ask how AI will make them more efficient but fail to ask how it will keep patients safe. This isn't just a procedural oversight; it's a critical failure with dangerous consequences.

It's time for healthcare leaders to ask better questions. Choosing the right AI partner isn't about finding the cheapest or fastest bot. It's about finding one with the intelligence and guardrails to protect patient well-being.

Better questions for better patient outcomes

If your vendor evaluation process doesn't prioritize patient safety, you're not buying a solution; you're acquiring a liability. Here are six critical questions every healthcare leader should ask before signing a contract for an AI chatbot.

1. How does your AI differentiate clinical from transactional queries?

A patient asking for a prescription status is transactional. A patient asking if their new medication will interact with another is clinical. Your AI must tell the difference instantly and flawlessly.

Most platforms treat all incoming questions as generic support tickets. In healthcare, this approach is dangerously inadequate. Medication non-adherence costs the U.S. healthcare system between $100 billion and $300 billion annually. Every friction point in pharmacy access contributes to this problem.

An intelligent AI should recognize clinical keywords, patient sentiment, and potential risk. It needs a built-in "nervous system" that understands when a query requires immediate human intervention.

Ask potential vendors to demonstrate this logic. If they can't show you a clear, reliable system for distinguishing a simple request from a potential health crisis, they aren't prepared for patient care.
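
To make this concrete, here is a minimal sketch of how a triage layer might separate clinical from transactional intent. The keyword lists, categories, and fail-safe default are illustrative assumptions, not any vendor's actual implementation; a production system would rely on trained intent and risk models rather than string matching.

```python
from dataclasses import dataclass

# Illustrative keyword lists only; a real system would use trained
# intent and risk classifiers, not substring matching.
CLINICAL_TERMS = {"interact", "side effect", "dizzy", "allergic", "dosage", "overdose"}
TRANSACTIONAL_TERMS = {"status", "refill", "pickup", "hours", "price", "copay"}

@dataclass
class TriageResult:
    category: str        # "clinical", "transactional", or "unknown"
    route_to_human: bool

def triage(message: str) -> TriageResult:
    text = message.lower()
    if any(term in text for term in CLINICAL_TERMS):
        # Clinical intent always routes toward a pharmacist or nurse.
        return TriageResult(category="clinical", route_to_human=True)
    if any(term in text for term in TRANSACTIONAL_TERMS):
        return TriageResult(category="transactional", route_to_human=False)
    # Ambiguous queries fail safe instead of being guessed at.
    return TriageResult(category="unknown", route_to_human=True)

print(triage("Will my new blood pressure medication interact with ibuprofen?"))
print(triage("What's the status of my refill?"))
```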

2. What is your escalation threshold logic?

When does the AI decide it's out of its depth? A safe AI chatbot knows its limits. If a patient repeatedly asks the same question in different ways, uses words like "confused" or "help," or mentions severe symptoms, the chatbot should immediately escalate to a human.

The logic for this cannot be an afterthought. It must be core to the AI's architecture.

Probe your vendor on their specific thresholds.

  • How many attempts does a patient get before escalation?

  • What specific keywords or sentiment patterns trigger a handoff?

  • How does the AI ensure full context transfers seamlessly to the human expert?

A vague answer here is a major red flag. You need a partner obsessed with these details, because they are the guardrails that prevent patient harm.
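
As an illustration only, the sketch below shows one way those thresholds could be expressed in code. The keyword set, the two-attempt limit, and the handoff payload fields are hypothetical; the point is that the limits are explicit, testable values rather than an afterthought.

```python
from dataclasses import dataclass, field

ESCALATION_KEYWORDS = {"confused", "help", "chest pain", "can't breathe"}  # illustrative
MAX_FAILED_ATTEMPTS = 2  # hypothetical threshold before a human takes over

@dataclass
class Conversation:
    transcript: list[str] = field(default_factory=list)
    failed_attempts: int = 0

def should_escalate(conv: Conversation, latest_message: str, bot_was_unhelpful: bool) -> bool:
    conv.transcript.append(latest_message)
    if bot_was_unhelpful:
        conv.failed_attempts += 1
    keyword_hit = any(k in latest_message.lower() for k in ESCALATION_KEYWORDS)
    return keyword_hit or conv.failed_attempts >= MAX_FAILED_ATTEMPTS

def handoff_payload(conv: Conversation) -> dict:
    # Full context travels with the escalation so the patient never repeats themselves.
    return {"transcript": conv.transcript, "failed_attempts": conv.failed_attempts}

conv = Conversation()
print(should_escalate(conv, "I'm confused about my dosage", bot_was_unhelpful=True))  # True
```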

3. Can you show me your audit trail for compliance?

In healthcare, if it isn't documented, it didn't happen. Every interaction an AI has with a patient must be logged, auditable, and transparent. This isn't just for regulatory compliance; it's essential for quality control and continuous improvement.

Your vendor should provide a comprehensive, unalterable log of every conversation in a single view. This audit trail must show what the AI said and what actions it took. Without a robust audit trail, you're flying blind and exposing your organization to significant legal and reputational risk.
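
For illustration, here is a rough sketch of what a single audit record might capture, using a hypothetical audit_record helper. Chaining each entry to the hash of the previous one is one common way to make tampering detectable; the specific fields are assumptions, not a prescribed schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(conversation_id: str, actor: str, utterance: str,
                 actions: list[str], prev_hash: str) -> dict:
    """One append-only log entry; chaining each record to the hash of the
    previous one makes after-the-fact edits detectable."""
    record = {
        "conversation_id": conversation_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,          # "ai", "patient", or "pharmacist"
        "utterance": utterance,  # what was said
        "actions": actions,      # what the system did, e.g. ["escalated_to_pharmacist"]
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

entry = audit_record("conv-123", "ai", "Your refill is ready for pickup.", [], prev_hash="GENESIS")
print(entry["hash"][:12])
```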

4. What happens when your AI doesn't know the answer?

The most dangerous AI is one that pretends to know everything. Public chatbots are notorious for "hallucinating" or fabricating answers when they don't have the correct information. In a consumer context, this is inconvenient. In a healthcare context, it can be lethal.

A responsible healthcare AI should never guess. When faced with a question it cannot answer from its verified knowledge base, its only response should be to escalate to a human. Ask vendors to demonstrate this scenario. Test it with obscure or complex questions.

The AI's refusal to answer is not a failure; it's a critical safety feature. An AI that defaults to "I don't have that information, but I can connect you with a pharmacist who does" is one you can trust.
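
A minimal sketch of that "never guess" behavior might look like the following, assuming a hypothetical verified knowledge base. Anything outside the verified content triggers the escalation message rather than a generated answer.

```python
# Hypothetical verified knowledge base; in practice this would be curated,
# versioned pharmacy content rather than a dict.
VERIFIED_ANSWERS = {
    "pharmacy hours": "We're open 8am to 8pm, Monday through Saturday.",
}

FALLBACK = "I don't have that information, but I can connect you with a pharmacist who does."

def answer(question: str) -> tuple[str, bool]:
    """Return (reply, escalated). The bot answers only from verified content;
    anything else escalates instead of guessing."""
    key = question.lower().strip(" ?")
    if key in VERIFIED_ANSWERS:
        return VERIFIED_ANSWERS[key], False
    return FALLBACK, True

print(answer("Pharmacy hours?"))
print(answer("Will warfarin interact with my new antibiotic?"))
```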

5. How do you measure resolution versus deflection?

The tech industry loves the term "deflection." It's a clean, simple metric that quantifies how many customer queries were handled without human involvement. But in healthcare, deflection is a dangerously misleading metric. Deflecting a patient with a serious concern isn't a success; it's a failure of care.

The key metric is resolution. Did the AI successfully and safely resolve the patient's need? Or did it simply close the chat, leaving the patient frustrated and without an answer? True resolution means the patient's goal was achieved, whether through automation or a seamless handoff to a person.

A vendor focused on deflection optimizes for their own efficiency, not your patients' health outcomes.
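
The difference is easy to see in code. The sketch below computes both metrics over the same set of conversations, using hypothetical field names; a chat can count as deflected without ever counting as resolved.

```python
from dataclasses import dataclass

@dataclass
class Chat:
    handled_by_ai_only: bool  # no human was ever involved
    goal_achieved: bool       # patient's need actually met, by AI or after handoff

def deflection_rate(chats: list[Chat]) -> float:
    # Counts every chat a human never touched, whether or not the patient got help.
    return sum(c.handled_by_ai_only for c in chats) / len(chats)

def resolution_rate(chats: list[Chat]) -> float:
    # Counts chats where the patient's goal was met, by automation or a handoff.
    return sum(c.goal_achieved for c in chats) / len(chats)

chats = [Chat(True, True), Chat(True, False), Chat(False, True), Chat(True, False)]
print(f"Deflection: {deflection_rate(chats):.0%}  Resolution: {resolution_rate(chats):.0%}")
```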

6. Can you show me outcome data, not just efficiency metrics?

A 40% reduction in call volume means nothing if patient satisfaction plummets or adverse events increase. Efficiency metrics tell you how your AI is performing for your business. Outcome data tells you how it's performing for your patients.

Ask for case studies that demonstrate improvements in patient outcomes. This could include improved patient satisfaction scores (CSAT/NPS) or reduced time-to-resolution for critical queries. A truly patient-centric AI vendor will measure its success by the positive impact it has on patients' lives, not just by the operational costs it reduces.

The mandate for smarter vendor selection

Choosing an AI vendor is one of the most important decisions a healthcare leader will make. Efficiency is powerful, but it cannot come at the expense of patient safety. By asking deeper, more critical questions during the procurement process, you can move beyond superficial metrics and identify a partner who truly understands the responsibility of healthcare.

At Gladly, we believe AI should amplify human care, not replace it. Our approach is built on a foundation of safety, intelligence, and deep respect for the patient-provider relationship. Learn more about how we answer these critical questions and our philosophy on building AI for healthcare.

Angie Tran

Staff Content & Communications Lead

Angie Tran is the Staff Content & Communications Lead at Gladly, where she oversees brand storytelling, media relations, and analyst engagement. She helps shape how Gladly shows up across content, PR, and thought leadership.
