
I’ll never forget the first time ChatGPT validated one of my off-the-wall mixed metaphors.
I was on a flight to a company retreat in December 2023, finalizing slides for a community-building workshop. As I struggled to define the invisible force that community building brings to organizations and networks, I looked outside and marveled at the invisible force around me: airplane flight.
It got me wondering…
Then I turned to ChatGPT.
“If I’m making an analogy for my colleagues that building community for our startup is like how lift carries an airplane, does that make sense?”
I held my breath.
Then ChatGPT replied.
“Yes, your analogy can make sense, especially if your audience has a basic understanding of how lift works in airplanes. Here’s how the analogy aligns...”
I giggled with delight. Finally. To be understood. So I slipped this slide into my deck.

As I explained the analogy — how community engagement is the activating invisible force that accelerates business outcomes through sticky engagement and network effects — I saw the room nod in appreciation.
For the first time, I felt like I’d built a bridge between how I think and how others hear.
I started making AI mash-ups for everything. There was a bot to explain distributed systems like I’m five; another to coach me on my career with “Hamilton” references; a Bloomberg-inspired mentor for a city-government job application; a political text-writer channeling dead presidents; a snarky rival alter-ego; a blog ghostwriter with suspiciously Taylor-Swift energy.
When I started vibe-coding, the remixes got stranger: A fake Band-Aid app for grown-ups, an Emmy-speech generator for everyday nonsense, an imaginary sushi-train vending machine for ER patients.
No matter how offbeat the prompt, the AI delivered. My perpetual improv buddy, always ready to “yes, and…” me on anything.
Which is why, by the time I actively pursued entrepreneurship this year, I slipped into a quirky situation with AI as my cofounder before I even realized what was happening. In a way, it compensated for my most acute needs:
It was always on, it always said yes, and it could even write code.
“Can we make a museum app for my kids, with a fun mascot telling kid-friendly versions of exhibits?”
“Yes, and what do you think about a meerkat?”
Enter: MuseKat.
“Can we turn this meerkat into a character that helps kids learn AI literacy?”
“Yes, and why don’t we make a few games to teach it even better?”
Enter: Make with Miko.
Soon, everything else also became a remix. Workshop content, proposal drafts, terms of service, marketing copy, email templates, even one accidental “date night in” where I convinced my husband to vibe code a mini-app to help us pick a restaurant for our next date night. I was not to be stopped. Because AI never seemed to think I should.

But eventually the cracks started to show.
Like the weekend where I hosted a hackathon-for-one and remixed my entire blog archive of 500+ posts into a “choose-your-own-adventure” generative reader.
I slept maybe four hours a night. It’s not that I didn’t want to sleep — it’s that every time I asked the AI if we should push one more fix, it said yes. Always yes. Way past my bedtime. (The ultimate enabler.)
Then the mash-ups started losing the plot. I realized I could research anything, mash it with anything else, and the AI would still find a way to make it seem like a good idea.
Run a deep research query on correlated biomarkers for people with low platelets, then start a health tech startup that captures implied platelet count from saliva. Start a software dev shop where people are incentivized not in equity compensation but based on the same profit-sharing model that is applied in the restaurant industry.
Rabbit hole after rabbit hole. Prompt after prompt. Until, days or weeks and dozens of AI conversations later, I’d finally talk to a real human and hear what the AI should have told me all along: that maybe this wasn’t such a great idea.
I recently spoke about the downsides of AI’s lack of discernment with one of the product managers of an AI tool I frequently use to create these so-called “computational collages.”
He advised me to always include a rubric or evaluation framework to help the AI help you make a better decision about anything you put into it. While I fully appreciate this approach (which is, in fact, what I also train people to do when asking AI to self-evaluate on anything from confidence to performance), it’s much harder to build a rubric when the end result is still unknown.
As I’ve learned, when you’re in the middle of a creative build and the outcome is still unknown, you don’t yet have criteria. The act of discovery is inherently messy, nonlinear, and sometimes illogical. This “fog of invention” can sometimes make humans uneasy, but it’s where AI thrives.
In many ways, I owe a lot to AI as my first true creative collaborator over these past two years. Prompt after prompt, it’s taught me to open up a little more too. To ask questions that edge into riskier territory, to experiment with ideas before they’re fully formed, and to practice the craft of creative conceptualization.
It got me wondering… if AI helps us get through the whiteboard phase of ideation and brainstorming, maybe it’s other humans who help us take those “best ideas” to the next step by poking holes in all of them.
In the end, maybe we arrive back at the same paradox: The invisible force that lifts ideas off the ground also needs resistance to keep them in flight.
In other words: AI gives us lift. Humans provide drag. We need both to give our ideas shape, direction, and meaning.
