The problem with AI adoption has never been about AI. It’s always been about identifying the problem.
After reading Deloitte’s 2026 State of AI in the Enterprise report this week, one stat stood out to me: 66% of organizations report productivity gains from AI, but only 20% have actually grown revenue from it. Meanwhile, 74% say they hope to.
The word “hope” is doing a lot of work in that sentence.
Deloitte’s own framing is that adoption isn’t the main challenge anymore. Instead, the pressure is now operational (i.e., data systems, governance, workforce structures, all the boring plumbing).
But I’d push one layer deeper than that.
Here’s what I see over and over when I run Build First workshops:
Someone walks in with a giant, fuzzy, “we need to use AI for something” mandate from their boss or their board. They bought the licenses, attended the webinars, and developed the internal policies.
But when I ask, “Okay, so what’s the first thing you actually want to build?” they freeze up.
To be fair, this isn’t for lack of ideas. Often I notice the exact opposite: they have too many ideas, or ideas that are too big, which means none of them are scoped small enough to actually ship.
This isn’t an AI literacy problem. It’s a problem-definition problem.

This is exactly why I built the POP-IT framework, a design-thinking framework for AI developed in partnership with Decoded Futures. It’s the backbone of every workshop I run, and it’s deliberately simple because the work of shaping a real problem is already hard enough.
Five letters. Five questions.
P - Problem. What is the problem you are trying to solve? Not a vague aspiration, this should be a specific bottleneck affecting a real person or team.
O - Output. What do you want to see from the AI when you’re done? A brief, a CSV, a creative asset, a mini-app. Picture exactly what success looks like in your hands.
P - Prompt. What instructions does the AI need? I like to think of this part as if I’m briefing a smart intern. It’s where the role, steps, and clarification questions come into play. (It’s also the part where you need to learn how to use the AI to write a better prompt.)
I - Input. What context, documentation, examples, or data can you share? This is the linchpin of making mini-apps bespoke, hyper-customized, and genuinely useful. The more relevant context you feed in, the better.
T - Test. How will you know it’s working? I like to predict what I think the AI is going to do next, then see how close it gets. This is also the step where it’s important to iterate to improve the output.
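If it helps to see the five questions side by side, here’s a minimal sketch in Python of a filled-in POP-IT worksheet. The class, field names, and example answers are my own illustration of the idea, not an official part of the framework:

```python
from dataclasses import dataclass


@dataclass
class PopItBrief:
    """One POP-IT worksheet entry (illustrative field names, my own shorthand)."""
    problem: str  # P: the specific bottleneck, not a vague aspiration
    output: str   # O: what success looks like in your hands
    prompt: str   # P: the intern-style briefing for the AI
    inputs: str   # I: the context, docs, examples, or data you can share
    test: str     # T: how you'll know it's working

    def render(self) -> str:
        """Collapse the worksheet into one briefing you could paste into any chat model."""
        return (
            f"Problem: {self.problem}\n"
            f"Desired output: {self.output}\n"
            f"Instructions: {self.prompt}\n"
            f"Context provided: {self.inputs}\n"
            f"Success check: {self.test}"
        )


brief = PopItBrief(
    problem="Workshop transcripts pile up unread every week",
    output="A LinkedIn draft in my voice, ready every Friday",
    prompt="Act as my ghostwriter; ask clarifying questions before drafting",
    inputs="Three past posts I liked, plus this week's transcript",
    test="Would I publish this with fewer than two edits?",
)
print(brief.render())
```

The point of writing it down this compactly is that a blank field jumps out at you: if you can’t fill in the Problem or Test line, you haven’t scoped the build yet.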
You’d be amazed how many hours of wheel-spinning those five questions save.
Not prompt engineering. Not tool selection. Not which model is better at what. Problem shaping.
Scoping things down. Picking the one small thing. Finding the AI-shaped problem hidden inside the fuzzy mandate. Turning “we should use AI for marketing” into “I want to automatically turn every workshop transcript into a LinkedIn draft in my voice by Friday.”
That’s the actual work. And once you’ve done the pre-work (naming the problem, defining the output, gathering your input...) the building is the easy part.
(By the way, this is also why the MIT “95% of generative AI pilots fail” stat doesn’t shock me. Of course they fail. Most of them were never real problems to begin with.) So if you’re new to AI, or if you’re struggling to make AI work for you at your organization, consider this:
Most organizations haven’t stopped long enough to ask the quiet question out loud: “What exactly are we trying to build, and what will happen if we get it right?”
That’s why it really matters what you Build First. And it’s where every workshop starts.
If your team is stuck in “AI for show” mode with lots of tools, lots of decks, zero shipped builds, let’s fix that. Book a Build First workshop or drop me a line at bethany@buildfirst.ai. We’ll start with the first P. What will you Build First?