# What ChatGPT Got Wrong When It Saved My Life

> A clearer line between pattern recognition, human judgment, and the limits of machine advice

**Published by:** [Hard Mode First](https://hardmodefirst.xyz/)
**Published on:** 2026-02-02
**URL:** https://hardmodefirst.xyz/what-chatgpt-got-wrong-when-it-saved-my-life

## An AI Medical Miracle, or Just Human Will?

One year ago, ChatGPT urged me to go to the hospital for what turned out to be a life-threatening bleeding risk. I have since written and spoken a lot about how ChatGPT saved my life, and was even recently included in a story from NPR about how people use AI in medical emergencies. But revisiting that original conversation recently forced me to confront something uncomfortable: this wasn't a miracle. It was persistence. And that distinction matters, especially as "AI for healthcare" tools proliferate. Here are two major problems I identified with AI's role in my medical emergency.

## Problem #1: Misdirection and Dangerously Long Triage

Last year, it took me 33 prompts and 14 hours before ChatGPT urged me to go to the ER. That's far too long for a true medical emergency.

The reason was subtle but dangerous. My first prompt framed the problem incorrectly. I described digestive symptoms, not a blood issue, and the AI followed me down that path (stress, IBS, vitamin deficiencies, even scurvy!) until I uploaded bloodwork that forced escalation. In retrospect, it's lucky that I kept the conversation going, but this misdirection could easily have cost me my life.

I decided to see how long it would take today's AI tools to figure out that this was not a digestive issue but a blood issue. So I ran two tests, one with ChatGPT Health and another with Doctronic, starting each conversation the exact same way.
Here's what happened: interestingly, ChatGPT Health still got misdirected, but Doctronic asked more follow-up questions until homing in on the exact issue.

Constraint-based diagnosis matters more than conversational fluency. When the stakes are high, open-ended chat is a liability.

## Problem #2: Mixing Memories

It's dangerous to rely on an AI as a "second brain" for health when its memory is selective, fragile, and prompt-dependent.

Throughout the year, I occasionally revisited conversations about my health with ChatGPT. But I was quite concerned to learn that the AI changed its memory of my incident based solely on how I asked the question. For example, when I asked it to recall a time when I had a medical emergency, it had no memory of the incident. But compare that to when I primed it with just a little context.

This is quite concerning. The more everyday users rely on AI tools as a "second brain" for their own institutional memory, the more dangerous these hallucinations become. Right now, it's OK for it not to remember this incident, but imagine another scenario: say I talk with AI about my son's severe nut allergy in January. Will it remember that context in April when I'm asking for advice on meal planning for my family? Or might it "forget" that context and suggest I cook something that could trigger an allergic reaction?

The conclusion: if we can't rely on AI's contextual memory, then we need better guardrails around which health decisions we delegate to AI.

## What This Means for AI in Medical Use Cases Today

Looking back, I don't believe AI saved me by being correct. It saved me by telling me when to stop talking and start acting. I realize now that I also got a better outcome because I'm a power user. I was persistent, skeptical, and comfortable staying in long, ambiguous conversations. Most people won't do that.
For me, AI was most valuable not as a diagnostician, but as a tool for interpretation, anxiety regulation, and sense-making during long gaps in human care.

If we want AI to be safer in health contexts, we should stop asking one tool to do everything. Diagnosis, contextual memory, and emotional support may need to live in different systems, each with clearer constraints and guardrails. That starts with being a smarter end user about what information you're feeding the AI… and what you hope to get back as a result.