One year ago, ChatGPT urged me to go to the hospital for what turned out to be a life-threatening bleeding risk.
I have since written and spoken a lot about how ChatGPT saved my life, and I was even included in this story from NPR about how people use AI in medical emergencies.
But revisiting that original conversation recently forced me to confront something uncomfortable: This wasn’t a miracle. It was persistence. And that distinction matters, especially as “AI for healthcare” tools proliferate.
Here are the two major problems I identified with AI's role in my medical emergency.
The first problem was misdirection and a dangerously long triage. The reason it happened was subtle but dangerous: my first prompt framed the problem incorrectly. I described digestive symptoms, not a blood issue, and the AI followed me down that path (stress, IBS, vitamin deficiencies, even scurvy!) until I uploaded bloodwork that forced escalation.

ChatGPT Saved My Life (No, Seriously, I’m Writing this from the ER)
How using AI as a bridge when doctors aren't available can improve patient-to-doctor communications in real time emergencies

In retrospect, it’s lucky that I kept the conversation going, but this misdirection could have easily cost me my life.
I decided to see how long it would take today’s AI tools to figure out this was not a digestive issue, but actually a blood issue.
So I ran two tests, one with ChatGPT Health and one with Doctronic, starting each conversation the exact same way. Here’s what happened:

Interestingly, ChatGPT Health still got misdirected, but Doctronic kept asking follow-up questions until it homed in on the exact issue.
The second problem was what I call mixing memories. Throughout the year, I occasionally revisited conversations about my health with ChatGPT, and I was quite concerned to learn that the AI changed its memory of my incident based solely on how I asked the question.
For example, here’s the response when I asked it to recall a time when I had a medical emergency:

No memory.
But compare that to when I primed it with just a little context:

This is quite concerning.
The more everyday users rely on AI tools as a “second brain” for their own institutional memory, the more dangerous these hallucinations become. Right now, it’s OK for it not to remember this incident, but imagine another scenario: say I talk with AI about my son’s severe nut allergy in January.
Will it remember this context in April when I’m asking for advice on meal planning for my family? Or might it “forget” that context and suggest that I cook something that might result in an allergic reaction?
Looking back, I don’t believe AI saved me by being correct. It saved me by telling me when to stop talking and start acting.
I realize now that I also got a better outcome because I’m a power user. I was persistent, skeptical, and comfortable staying in long, ambiguous conversations. Most people won’t do that.
For me, AI was most valuable not as a diagnostician, but as a tool for interpretation, anxiety regulation, and sense-making during long gaps in human care.
If we want AI to be safer in health contexts, we should stop asking one tool to do everything. Diagnosis, contextual memory, and emotional support may need to live in different systems, each with clearer constraints and guardrails.
That starts with being a smarter end user about what information you’re feeding the AI… and what you hope to get back as a result.
How I Used AI to Save My Life in 77 Prompts: A Debrief
Reflecting on best practices, lessons learned, and opportunities to improve AI-assisted medical triage
3 comments
I was recently included in an NPR story about the time when AI saved my life last year. But revisiting that original conversation recently forced me to confront something uncomfortable: This wasn’t a miracle. It was persistence. And that distinction matters, especially as “AI for healthcare” tools proliferate. I identified two major problems in my use of AI during a medical emergency: 1. Misdirection and Dangerously Long Triage 2. Mixing Memories. I wrote about this in today's post: What ChatGPT Got Wrong When It Saved My Life https://hardmodefirst.xyz/what-chatgpt-got-wrong-when-it-saved-my-life
Bethany, thanks so much for this follow-up! I remember you sharing about your ER episode last year — I think we may even have messaged back and forth a couple times about it, as your story piqued my curiosity and concern. Your update also really resonates as I've been trying to help a nearly 80-year-old family member navigate a daunting health situation, make well-informed decisions, and get optimal care. A retired optometrist, they are intelligent, have domain-adjacent knowledge and experience, and have been quite tech-savvy for as long as I've known them. But as they related their recent medical issue to me and revealed how they'd been using Copilot as a health consultant, I had some serious concerns that I shared with them about the guidance they were receiving from their AI assistant. I also discussed with them how I was using Claude quite differently — based on my own knowledge and experience from journalism; medicine; computer and information technology; and use of LLMs — to reach different conclusions about their health situation and best advice for next steps. The limitations you highlight are real and important, and I think a lot of people who are using mass-market general-purpose LLMs don't yet have the "AI literacy" to fully appreciate the risks and benefits of the technology and to understand how they can optimize the quality of the outputs they receive. Also, I'm not sure if you've been watching season 2 of "The Pitt" on HBO, but immune thrombocytopenia was one of the challenging ER cases featured prominently in a recent episode [SPOILER WARNING]: https://pdsa.org/itp-news/2827-the-pitt I hope you're in better health nowadays and wish you and yours the best. Please keep up the writing and posting :)
Thank you so much for sharing this note. It's good of you to have these conversations with your family member too. I do find that AI as emotional support can be really comforting during moments like these, but even that becomes circuitous or counter-productive after a point. That we can get the AI to reflect back onto us essentially whatever we secretly fear is one of the most powerful (and scariest) parts of this technology. And I really appreciate you sharing that ITP storyline. I ended up meeting someone else late last year who had ITP in February 2025 as well. Unlike me, she was re-hospitalized 8 more times throughout the year, as they had a hard time keeping her platelet count up. I feel lucky (so far) that it hasn't happened again...