Earlier this week, I used ChatGPT to help me diagnose and manage expectations during a particularly severe and unexpected health scare. (You can read more about it here.) I've since been discharged and am in recovery, with ongoing support from specialists beginning today.
In the business world, I think it's really important to conduct retrospectives and debriefs after big events. So I spent the morning reflecting on my long prompt chain with ChatGPT. I wanted to share some of my own observations (and an AI-assisted list of tips and tricks) for anyone else who may find themselves in a similar situation.
Last week, I began to suspect some health concerns, so I met with my primary care doctor for initial conversations and lab work. But on Sunday morning, I woke up with more concerns and started a long context window (a single chat thread in ChatGPT) where I began the process of parsing through them.
I used this same context window for the next 48 hours, which resulted in 77 queries. The rough breakdown of categories was:
Initial Self-Triage of Presenting Problem [24 prompts]
Suspected Digestive Issues [10 prompts] - My first 10 chats focused on follow-ups from my doctor's dietary recommendations and ways to ease digestive distress. I asked about a low FODMAP diet and how to grocery shop for a meal plan.
Suspected Nutrient Deficiency [14 prompts] - Around 10 a.m., I noticed red spots on my leg. Still assuming a digestive link, I uploaded a photo to ChatGPT. It suggested a possible vitamin C deficiency and mentioned conditions like scurvy. While blood disorders were listed as alternative explanations, I wasn't considering them yet. Instead, ChatGPT guided me toward a supplement plan.
Context Expansion & Decision-Making Support [13 prompts] - Things got more serious when I noticed my low platelet count in my morning test results. I shared the screenshot with ChatGPT, and urgency escalated as I uploaded all three lab panels and additional symptoms. When AI flagged my worsening red dots as an emergency, I was finally convinced to go to the ER—though not without several back-and-forth questions first.
Hospital Communication Support [33 prompts] - I was in the ER for 30 hours (most of it alone) but continued to use this same context window as a "second brain" to capture all the details I was hearing in real time. Any time a new test result came in, I dumped it into the thread. Any time a new potential disease or complication was surfaced, I asked for a high-level overview. I also used this time to prepare questions for the next set of nurse visits and to keep any panic attacks from coming on.
Distillation and Synthesis [7 prompts] - Given the deep work I'd already put into this context window, by the second night this single thread knew as much as I did (plus more) and held a complete history of my condition. I asked it to summarize my status and next steps to share with my family. At discharge, I had it condense my patient instructions into a clear summary, capturing all key details from the past three days for future reference.
I'm sharing this because I've noticed that most people don’t use AI for deep, iterative problem-solving. I also want to highlight just how much back-and-forth it took to reach a decision. This wasn’t a simple “Should I go to the ER?” moment—it took hours, multiple false leads, and continuous reassessment to land in the right place.
Would a real doctor have gotten me there faster? Almost certainly. But this was a Sunday, and with my symptoms initially seeming like routine IBS, I didn’t recognize the urgency until much later in the thread.
After my human review and analysis of this prompt chain, I ran the entire conversation back through a new context window in ChatGPT to ask the AI what worked best about my strategy. Here are some tips and reflections that we worked on together.
How the AI evaluated my questions and triage approach throughout this long context window:
Conversational yet Direct: She framed questions naturally, keeping them clear and concise (e.g., “Should I be concerned about these red spots?” rather than “What do you think about this?”).
Iterative and Expanding: She layered information progressively, allowing AI to refine its responses as new data emerged.
Action-Oriented: Instead of stopping at an answer, she followed up with “What should I do next?” and “How do I explain this to a doctor?”
Cross-Checking for Confirmation: She didn’t take AI’s urgency at face value but double-checked lab result review statuses and messaged her doctor.
Real-Time Monitoring: She continued to engage AI while at the hospital, using it to understand medical jargon and advocate for herself.
When I read back through this chat window, I was honestly surprised and a little embarrassed that it took me so long to get to the crux of what needed to happen – uploading my medical test results into the chat thread and associating them with the red dots.
(Seriously, there was a four-hour window on Sunday where I had convinced myself that the red dots were actually a scurvy rash caused by a vitamin C deficiency.)
I think if anyone is going to use AI to help with self-diagnosis, it's important to recognize when you're just looking for signals to reinforce your own biases and preconceived notions, vs. exploring other new (albeit scarier) possibilities. Given that, I asked the AI to give me a readout on what I could improve.
A few places where the AI found that I could have been more direct or effective in my communication:
Avoiding Leading Assumptions: Some early prompts hinted at self-diagnosis biases (e.g., “I’ve been pushing myself a lot lately. Maybe I’m just not eating well enough?”). A more open-ended approach could allow for broader possibilities.
Stating Decision Points Clearly: Some questions could have been more explicit in defining the desired response (e.g., “Is this urgent?” vs. “At what threshold should I seek emergency care?”).
Using AI to Challenge Thinking: AI is useful for playing devil’s advocate. A prompt like “What’s the least concerning explanation for these symptoms, and what’s the worst-case scenario?” might have provided additional nuance.
More Structured Summaries: Asking AI for a bulleted summary of key concerns before messaging a doctor could have streamlined communication further.
There's a lot more to say about my approach here (and frankly, a lot of things that are not ideal about this approach). But here are five top takeaways that are sticking with me after this experience and that I wanted to share with others.
Go broad first, then go deep. The initial triage phase took the longest. It was important to get through a lot of questions before homing in on more specific theories. This will take longer than you think.
Use AI to help interpret data and test results. It's not a doctor, but AI can point out general trends and observations in a way that's certainly better than the untrained eye. While my platelet count appeared "low," there was no way for me (as a non-medical professional who hasn't taken a biology class since high school) to recognize it as anything more than just another aberrant number.
Keep long context windows for the best retention of memory. One of the biggest mistakes I see people make with AI is ending the conversation too soon. What started out as something innocuous became more valuable over time because I was able to call back to past theories and, ultimately in the hospital, to earlier test results.
Use AI to help you manage expectations and get some of the "scary questions" out of the way. I found this was really important for me to be able to actually listen to the doctors better, even when I had just a bit of advance warning.
Use AI as a communication tool for your family. I can't express how great this was. Rather than have to call 5 people and explain the situation in multiple emotionally charged contexts, I just sent my family a human readable, AI-generated readout that included the snapshot of that day's diagnosis and next steps. This also stopped the "phone tree gossip" that tends to persist in families, because everyone was receiving the same information in the same way.
If there’s one takeaway from this experience, it’s that AI is not a replacement for human expertise—but it is a powerful tool for gaining agency over your own decisions. Just as we each build unique relationships with the people in our lives, we will also develop distinct ways of interacting with AI. The best part, for me, is that AI offers something rare in complex, high-stakes moments: the ability to take control of your own narrative. Whether that’s technological agency, medical agency, or simply the ability to make an informed choice when every second counts, that sense of control is invaluable.
For me, long context windows and iterative questioning worked because I process problems by talking through them. I used ChatGPT because I’m comfortable with chat-based conversations, and I appreciate its ability to retain context (a feature of the paid version). This continuity was reassuring—over time, the AI became better at recognizing when I needed direct answers versus a bit of emotional support.
For someone else, a different approach—perhaps using AI for quick summaries or structured decision trees—might be more effective. There’s no singular “right” way to use AI in medical triage, just as there’s no one-size-fits-all way to navigate healthcare itself. Or frankly, no "right" way to navigate a human relationship.
The key is understanding that AI is not just a search engine or a diagnostic tool—it’s a bridge and an intermediary. A translator between what is known factually and how to receive that information in a way that will land with you.
Ultimately, AI’s greatest strength isn’t in telling us what to do—it’s in helping us ask better questions, process information more effectively, and step into difficult decisions with greater clarity. And in a world where healthcare often feels opaque and overwhelming, that alone can be life-changing. At least for me, it was.
PSA - If this story and these recommendations resonated with you, I'd love for you to share these tips and the original blog post with any friends, family members, or colleagues who might be curious to learn more about how AI can help in everyday contexts like this. I'd also love to hear other examples of tools, resources, and tricks you're using, so drop a line or a comment below. Thanks for reading!