
Subscribe to Hard Mode First
Lessons learned from a lifetime of doing things the hard way, the first time

After running dozens of AI workshops with PE firms, startup CEOs, nonprofit operators, and corporate teams, I’ve started to see a clear pattern. There are four distinct levels of AI adoption at work.
As it turns out, most teams (yes, even the ones who consider themselves “AI-forward”) are stuck at Level 1.
But here’s the real problem: Even though the rest of the world is obsessed with Level 4, you can’t skip the climb. After teaching 1,500 people to build with AI this past year, I’ve realized the true paradigm shift happens between Level 1 and Level 2.
That’s the moment you stop using software and realize you can make it.
Skipping this step because it feels “simplistic” is a mistake. It’s where you develop computational thinking, which is the bridge between being a passenger and being the pilot. Without it? You’re toast.
Here’s how to rethink AI adoption and move your team forward.


Level one is getting AI to give you useful, consistent outputs 8 or 9 times out of 10. That sounds simple, but it’s harder than it looks. Anyone using ChatGPT or Gemini as a substitute for Google Search needs to learn how technology behaves in non-deterministic environments. Getting people to move from “Is this right?” to “Did this generative output elevate my own thinking?” is step one.
In this level, it helps to take even basic tasks (e.g., “rewrite this email for me”) and practice iterating and improving within one context window until the output is much stronger. The goal is to reach the point where you recognize that you have the agency to shape how the AI responds.
The three main things you learn in this level are:
How to give clear directions that technology can interpret
How to add new context (e.g., past emails, more details about your company, or even structure and form) to dramatically change the output
How to not give up when the AI first gives you a bad answer (which is the software equivalent of not giving up at the first bug or error message)
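The “iterate in one context window” habit can be sketched as nothing more than an append-only message list. The structure below follows the common chat-API convention of role/content messages; it is an illustration, not any specific vendor’s API.

```python
# A minimal sketch of iterating in one context window: each refinement is
# appended to the same message list, so the model sees the full history
# instead of starting from zero in a fresh chat.

def add_turn(history, role, text):
    """Append one turn to the running conversation (the context window)."""
    return history + [{"role": role, "content": text}]

# Start with the basic task...
history = add_turn([], "user", "Rewrite this email for me: <draft>")
# ...then iterate in the SAME window instead of opening a new chat.
history = add_turn(history, "assistant", "<first attempt>")
history = add_turn(history, "user", "Shorter, warmer tone, and keep the ask in the first line.")

# Every later request carries the earlier directions with it.
print(len(history))  # prints 3
```

The point of the sketch is the shape of the data: your follow-up corrections are not throwaway complaints, they are context that travels with every subsequent request.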
Level two is when you stop repeating yourself and start saving those instructions into a reusable tool: a Custom GPT, Gemini Gem, or Claude Project. Each one functions as a mini-app with your prompting baked in, which means every conversation starts from a higher baseline.
It might be tempting to skip this step and jump straight to vibe coding and building app interfaces. Don’t. Adding interfaces and application frontends too early distracts people from the real goal of this level: getting people to think in systems.
If you’re a manager or leader who expects your team to start building with AI agents in the future, you cannot skip this step.
The teams I meet who are moving fastest with AI today were often the slowest at the beginning, because they recognized that the “aha!” moment happens here.
The three things you learn in this level are:
How to break a problem into small bits and build a tightly scoped MVP that functions reliably and saves you a little bit of real work every time you use it
How to think in systems by recognizing that an isolated chatbot you can return to is a higher level of thinking than a single chat thread
How to use AI as a thought partner and sounding board to help you dramatically improve the way you structure your prompts, add context, and iterate on outcomes
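A Custom GPT, Gem, or Project is, at its core, a saved system prompt that rides along with every new conversation. A minimal sketch of that shift, where the prompt text and helper names are made up for illustration:

```python
# Level 2 in miniature: the instructions you kept retyping become a saved
# system prompt, so every new session starts from that baseline. This is
# what a Custom GPT, Gemini Gem, or Claude Project does for you behind
# the scenes (names and prompt below are illustrative).

EMAIL_REWRITER_PROMPT = (
    "You rewrite emails for a small consulting firm. "
    "Keep them under 120 words, friendly, with the ask in the first sentence."
)

def new_session(system_prompt):
    """Start a conversation pre-loaded with reusable instructions."""
    return [{"role": "system", "content": system_prompt}]

# Every session inherits the baked-in instructions automatically.
session = new_session(EMAIL_REWRITER_PROMPT)
session.append({"role": "user", "content": "Rewrite: <draft email>"})
```

The design choice worth noticing: the instructions live in one place. Improve the prompt once and every future conversation improves with it, which is the first real taste of thinking in systems.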
Level three is about taking that systems thinking lens and building actual interfaces: Websites, dashboards, internal tools, apps. Not by writing code line by line, but by describing what you want and letting AI generate it. And then, ideally, automate it.
This is what people are calling “vibe coding.” You describe what you want, and AI writes the code using tools like Replit, Lovable, Claude Code, Cursor, or Codex. I’m not a classically trained software engineer. I’ve never been one. But I spent the last 18 months teaching myself how to think like one. And now I build new mini-apps multiple times a week.
Chatbots (a level two skill) are a very entry-level way to interact with AI. But once you realize that you can still control an AI’s output via API keys, MCP servers, and bespoke interface design, you can build anything — and make it fit just for you.
That’s why I am now able to run my entire business with software that I built myself. Accessible workshop websites. Client dashboards. Sales pipelines. Financial planning worksheets. All built by describing what I wanted to an AI tool and iterating from there. For me, the largest mental shift was realizing I didn’t have to just be a user of software anymore; I could make it myself. (But that happened in level one.)
The three things you learn in level three are:
How building an app interface is the same process as building a customized chatbot
How to work with AI as both a debugging buddy and a discovery mechanism for topics, tools, or architecture approaches that are entirely new to you
How to use more integrated engineering tools, such as developer documentation, API keys, database managers, hosting and deployment tools, text editors, and product requirement documents with your AI as a thought partner
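Under the hood, “controlling an AI’s output via API keys” means your own code assembles the request instead of a chat window doing it for you. A hedged sketch using a generic chat-completion-style payload — the model name, environment variable, and prompt are placeholders, not any real provider’s values:

```python
import os

# A sketch of what sits beneath a vibe-coded interface: your app builds
# the API request itself. Endpoint details, the model name, and the env
# variable here are illustrative; check your provider's docs for the
# real values and request format.

def build_request(user_text, model="example-model"):
    """Assemble a chat-completion-style payload a frontend would send."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are the dashboard's summarizer."},
            {"role": "user", "content": user_text},
        ],
    }

api_key = os.environ.get("AI_API_KEY", "<set me>")  # never hard-code keys
payload = build_request("Summarize this week's client pipeline: ...")
```

Once requests are built in code, the rest of level three follows naturally: you can wrap them in a web form, store the responses in a database, or schedule them to run on their own.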

Level four is agentic systems: AI stops being a tool you talk to and starts being a system that takes actions on your behalf. Maybe it reads your email and drafts responses. Maybe it monitors your portfolio data and flags anomalies. The important thing is that it takes action, pulling information from one tool, processing it, and pushing results somewhere else.
Here’s the thing that makes this less intimidating than it sounds: An agent is really just a series of small problems linked together. Each step is something AI can already do at level one or two. The agent just chains them together without you having to sit in the middle.
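That pull-process-push chain can be sketched in a few lines. Here each step is a plain function standing in for a real tool call (an email API, an LLM classifier, a task tracker) — all names and data are made up for illustration:

```python
# An agent as "small problems chained together": each step is something
# AI can already do at Level 1 or 2; the chain just removes you from the
# middle. The functions below are stand-ins for real tool calls.

def pull_inbox():
    # Stand-in for an email API call.
    return ["Invoice overdue from Acme", "Lunch Thursday?"]

def classify(message):
    # Stand-in for an LLM call that labels each message.
    return "action" if "Invoice" in message else "ignore"

def push_to_tracker(message, tasks):
    # Stand-in for writing to a task tracker.
    tasks.append(f"Follow up: {message}")

def run_agent():
    tasks = []
    for msg in pull_inbox():             # pull information from one tool
        if classify(msg) == "action":    # process it
            push_to_tracker(msg, tasks)  # push results somewhere else
    return tasks

print(run_agent())  # prints ['Follow up: Invoice overdue from Acme']
```

Swap any single stand-in for a real API call and the structure doesn’t change, which is exactly why decomposition, not programming, is the core skill here.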

As I’ve learned, the more you can think about your workflow as a set of small problems, the better you’ll be at defining agentic systems. The real skill isn’t programming, it’s decomposition.
You need to know how to break your work into pieces small enough that each one can be handled by a prompt. And you need to pick the right problems.
Sure, there will be people who will sell you fully agentic solutions that let you skip over some of these steps. But the reality is this: AI can do a lot of things amazingly well, but it’s not a mind reader. It still operates a lot more like an overeager intern.
In practice, there’s an enormous spectrum, and the gap between Level 1 and Level 4 is where most of the value lives.
Here’s a quick self-assessment for you or your team:
Level 1: Your team uses ChatGPT or Gemini for one-off questions and drafts. Results are hit or miss. No shared practices.
Level 2: At least a few people have built custom GPTs or saved prompts that others on the team reuse. There are internal “this is how we use AI for X” patterns emerging.
Level 3: Someone on your team has built a tool (a dashboard, a prototype, a workflow automation, an internal app) using AI, even if they’re not an engineer.
Level 4: You have automated workflows where AI takes actions across multiple tools without a human in the loop for every step.
Most teams I work with are somewhere between level one and level two. That’s not a failure; that’s normal. The important thing is knowing which level you’re at so you can be intentional about climbing to the next one, with realistic expectations about how quickly people can learn new things.
You don’t have to leap from Google-search-style prompting to building agentic systems overnight. But you do have to start somewhere. Pick one workflow. Get specific about what you want. Save those instructions somewhere reusable. And then ask yourself: what’s the next small problem I can hand off?
The ladder is there. The only question is whether you’re going to climb it.
Want to learn more? If you’re serious about leveling up the AI adoption pyramid for yourself or a team, let’s talk about how Build First workshops, AI hackathons, and structured coaching can get you there. You can book time with me here.