
How I Used Claude Code to Get Advice from My Simulated Future Self
That’s to say, myself, circa 2045.
You see, present-day Bethany is stuck in the middle of the chaos zone. She’s walking the very exciting tightrope of Founder Mode: Season 2. She’s building things, she’s breaking things, and she’s (often) awake at 3 a.m., wondering what the hell to do about it.
But future Bethany? She’s got life on lock. And not only that… but she knows how she got it done.
So I decided to role-play a little AI-powered simulation. I shared my very real, very present-day problems with a fictionalized version of future me.
And I gotta tell you? “Her” advice actually motivated me to figure a few things out this week. Things that actually worked.
Here’s how I did it.

We’ve all seen stories about people who use AI to role-play other personas and get advice from them. I’ve done that, too. (Former Mayor Mike Bloomberg will never know what a helpful role his storytelling played in helping me get myself organized around my own life mission last year.) Creating roles of “known” personas is easy for the AI. Just ask AI to find information on the Internet, or drop in a few transcripts of conversations, then build yourself a bot version of that persona.
But creating a future persona of yourself only works if you have content to train it on. Lucky for me, I spend a lot of time talking with AI about my dreams, life plans, and business goals. Last December, I worked with ChatGPT for over half a day to come up with a 20-year life plan.
So for this simulation, I decided that our resulting document, combined with my current business goals for this year (which I’d already documented when I built my personal operating system), would be more than enough to get us started.
I saved it as a markdown file and asked Claude Code to get into role-play mode. Here’s what happened.
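If you’d rather script this than run it interactively the way I did, the same setup is a few lines against the Anthropic API. A minimal sketch, not my actual setup: the file name, the model ID, and the prompt wording are all placeholder assumptions.

```python
import anthropic

# Reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

# Hypothetical file: the 20-year life plan (plus this year's
# business goals) saved as markdown.
with open("20-year-plan.md") as f:
    plan = f.read()

# Pin the persona in the system prompt so every turn stays in character.
system_prompt = (
    "Role-play as the author of the life plan below, twenty years from "
    "now. Assume the plan mostly worked out, but not on the first try: "
    "include plausible false starts, failures, and course corrections. "
    "Speak in the first person, as future-you talking to present-you.\n\n"
    + plan
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumption: swap in a current model
    max_tokens=1024,
    system=system_prompt,
    messages=[
        {
            "role": "user",
            "content": "What advice would you give me about making "
                       "better decisions in the present day?",
        }
    ],
)
print(response.content[0].text)
```

The one design choice that matters here is putting the plan and the “stay in character, include failures” instruction in the system prompt rather than the first message, so the persona holds across a long back-and-forth.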

While I’ve been doing my best to tap a combination of my network and my own intuition about what comes next, I decided to lean into intentional hallucinations to see if some of the stories “future me” shared would help me validate or invalidate my current direction of travel.
At first, I asked “future me” for advice on how to make better decisions in the present day.
What she said surprised me. I heard stories of success and stories of failure, all in the same breath.
“I made mistakes and fixed them. The brownstone? This is my third apartment since 2025. The incubator? I shut down the first version after two years and started over.”

This teased out an interesting behavior. I shared my ambitious future goals with an AI, and I received one permutation of a prediction path as to how things might play out.
Notably, this projection operated under the general assumption that I’d achieved most of the goals I laid out in my 20-year plan. But it also intentionally hallucinated some predictable sidequests and stories. Some are more realistic than others. (Tribeca? Eh. Probably not my scene. Getting a brownstone one day? Totally a thing I dream about.)
I found some of those daydreams to be pretty inspiring. For instance, I asked, “What would you consider to be your biggest accomplishment?”
Future me responded with a story about helping a woman in the South Bronx start a real business in her real neighborhood. I gotta tell you, the story was pretty on the nose, and it painted a very helpful relative destination that validated my present-day direction of travel.

First, I openly shared what I did with everyone around me. That naturally led to very interesting conversations about what, exactly, I wrote in that 20-year plan (not to mention how I use AI today).
Notably, I found that describing snippets of a fake conversation I’d had with an AI felt “safer” than handing over my entire life playbook to a single human. This is similar to why people feel safer telling their AI things they would never tell their therapist.
But then something real happened. I noticed that once I articulated the ideas out loud (to an AI), the tone of my week shifted. Not because the universe intervened, but because I had more clarity. At the start of the week, I was stuck on three decisions:
What to sell right now
What physical space I need
How to weave mission-aligned education work into it
It’s only Wednesday, and I already have real options on all three. This wasn’t manifesting; it was clarity. The AI didn’t make the decisions for me, but it did help me articulate them well enough that other people could react to them.
Second, it gave me the confidence to have a few conversations I’d been circling for weeks. Hearing a made-up story from my made-up self about helping a made-up person was all it took for me to tell a real version of that real story to a group of people at an event last night.
In the end, what I liked about this exercise was not that I expected AI to deliver me the absolute truth about what might happen in my future, but that it offered one possible path forward, complete with realistic narrative complexities. False starts. Failure moments. Real fear.
No matter what direction my future takes, those sticky moments and false starts will certainly happen for me. And probably, for you, too. AI didn’t predict my future. But it did help me rehearse one version of it.
3 comments
Here's a new one... stuck on how to make a key decision? Try simulating a conversation with future you! 😵💫 This week I used Claude Code to get real advice from my simulated future self. Honestly? It really helped. https://hardmodefirst.xyz/how-i-used-claude-code-to-get-advice-from-my-simulated-future-self
this is such a smart framework! future-self thinking cuts through analysis paralysis. forces you to think about actual outcomes not just possibilities 🧠
It is not a habit I want to get into (feels like a slippery slope)... but it was a very cool exercise