# Your AI Problem Isn't an AI Problem

*Why 60% of every Build First workshop is spent on the thing nobody wants to do: actually naming the problem.*

By [Hard Mode First](https://hardmodefirst.xyz) · 2026-04-17

---

The problem with AI adoption has never been about AI. It’s always been about identifying the problem.

After reading [**Deloitte’s 2026 State of AI in the Enterprise report**](https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/content/state-of-ai-in-the-enterprise.html) this week, one stat stood out to me: **66% of organizations report productivity gains from AI, but only 20% have actually grown revenue from it.** Meanwhile, 74% say they _hope_ to.

The word "hope" is doing a lot of work in that sentence.

Deloitte’s own framing is that adoption isn’t the main challenge anymore. Instead, the pressure is now operational (i.e., data systems, governance, workforce structures, all the boring plumbing).

But I’d push one layer deeper than that.

### **Most teams aren’t stuck on AI. They’re stuck on the problem.**

Here’s what I see over and over when I run [**Build First**](https://buildfirst.ai/) workshops:

Someone walks in with a giant, fuzzy, _“we need to use AI for something”_ mandate from their boss or their board. They bought the licenses, attended the webinars, and developed the internal policies.

But when I ask, _“Okay, so what’s the first thing you actually want to build?”_ they freeze up.

To be fair, this isn’t for lack of ideas. Often, I notice the exact opposite: they start with **too many** ideas, or ideas that are **too big**, which means none of them are scoped small enough to actually ship.

This isn’t an AI literacy problem. It’s a problem-definition problem.

![](https://storage.googleapis.com/papyrus_images/8835677653467164197ea06fd6c269545601bf1f751a05c7961bf8f4cf68197e.jpg)

Building with AI is a little like pushing past all the clouds until you find the sun (a.k.a. the actual presenting problem). Image source: Gemini

* * *

**Enter: POP-IT**
-----------------

This is exactly why I built the [**POP-IT framework**](https://buildfirst.app/popit), which is a design thinking framework for AI, developed in partnership with [**Decoded Futures**](https://www.decodedfutures.nyc/). It’s the backbone of every workshop I run, and it’s deliberately simple because the work of shaping a real problem is already hard enough.

Five letters. Five questions.

*   **P - Problem.** What is the problem you are trying to solve? Not a vague aspiration, this should be a _specific_ bottleneck affecting a real person or team.
    
*   **O - Output.** What do you want to see from the AI when you’re done? A brief, a CSV, a creative asset, a mini-app. Picture exactly what success looks like in your hands.
    
*   **P - Prompt.** What instructions does the AI need? I like to think of this part as if I’m briefing a smart intern. It’s where the role, steps, and clarification questions come into play. (It’s also the part where you can use the AI itself to help write a better prompt.)
    
*   **I - Input.** What context, documentation, examples, or data can you share? This is the linchpin of making mini-apps bespoke, hyper-customized and _actually_ work for you. The more relevant context you feed in, the better.
    
*   **T - Test.** How will you know it’s working? I like to predict what I think the AI is going to do next, then see how close it gets. This is also the step where it’s important to iterate to improve the output.
    

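If it helps to see the five questions as a working artifact, here’s a minimal sketch of a POP-IT worksheet as a plain Python data structure. The field contents are illustrative, drawn from the transcript-to-LinkedIn example later in this post; the class and its `unanswered` helper are hypothetical, not part of any official POP-IT tooling.

```python
from dataclasses import dataclass, fields

@dataclass
class PopItWorksheet:
    """One POP-IT pass: five questions, five answers (a hypothetical sketch)."""
    problem: str  # P - the specific bottleneck, not a vague aspiration
    output: str   # O - what success looks like in your hands
    prompt: str   # P - instructions, as if briefing a smart intern
    input: str    # I - context, docs, examples, data you can share
    test: str     # T - how you'll know it's working

    def unanswered(self) -> list[str]:
        """Names of the questions still left blank."""
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]

worksheet = PopItWorksheet(
    problem="Workshop insights never make it into LinkedIn posts",
    output="A LinkedIn draft in my voice, ready every Friday",
    prompt="You are my ghostwriter. Given a transcript, draft a post that...",
    input="",  # still blank: past posts + this week's transcript
    test="Compare the draft to what I would have written myself",
)
print(worksheet.unanswered())  # the gaps to close before building
```

The point of the sketch is the `unanswered()` check: if any of the five fields is still blank, you’re not ready to build yet.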
You’d be amazed how many hours of wheel-spinning those five questions save.

* * *

Start with the Problem
----------------------

#### Roughly 60% of every workshop I run is spent on the first step.

Not prompt engineering. Not tool selection. Not which model is better at what: _Problem shaping._

Scoping things down. Picking the one small thing. Finding the AI-shaped problem hidden inside the fuzzy mandate. Turning “we should use AI for marketing” into “I want to automatically turn every workshop transcript into a LinkedIn draft in my voice by Friday.”

That’s the actual work. And once you’ve done the pre-work (naming the problem, defining the output, gathering your input...) the building is the easy part.

(By the way, this is also why the MIT “[95% of generative AI pilots fail](https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/)” stat doesn’t shock me. Of course they fail. Most of them were never real problems to begin with.)

So if you’re new to AI, or if you’re struggling to make AI work at your organization, consider this:

### **Maybe you don’t have an AI problem. Maybe you have a _problem_ problem.**

Most organizations haven’t stopped long enough to ask the quiet question out loud: “What _exactly_ are we trying to build, and what will happen if we get it right?”

That’s why it really matters what you Build First. And it’s where every workshop starts.

* * *

_If your team is stuck in “AI for show” mode with lots of tools, lots of decks, zero shipped builds, let’s fix that._ [**_Book a Build First workshop_**](https://buildfirst.ai/workshops) _or drop me a line at_ [**_bethany@buildfirst.ai_**](mailto:bethany@buildfirst.ai)_. We’ll start with the first P. What will you Build First?_

---

*Originally published on [Hard Mode First](https://hardmodefirst.xyz/your-ai-problem-isnt-an-ai-problem)*
