# Glitches in the Human Machine

*Patching the emotional, social, and institutional glitches of the high-velocity era*

By [Hard Mode First](https://hardmodefirst.xyz) · 2026-04-27

---

#### **_“I would have given up if you hadn’t been sitting next to me.”_**

I hear some flavor of this sentiment several times a week now. It’s most common in 1-1 or small-group settings, where I guide people through the experience of building something new with AI no-code tools for the very first time. It turns out that some of us (maybe all of us?) really benefit from a little human hand-holding, particularly when we are doing things that are entirely unfamiliar or scary. Without it, we stop at the first friction point and never push past it.

I recognize this feeling of frustration-turned-gratitude because I felt it, too. What I’ve found is that, even with all of the infinite potential of what AI can do for us, there are some glitches in the (human) machine that keep getting in the way of progress. This is one of them.

These “glitches” aren’t things that can be commented away in code, or worked out with algorithms. They run much deeper and much more subtly through the fabric of the way we work, the way we learn, and the way we connect.

Here are the three most prominent ones I’m seeing.

**#1: The Emotional Glitch**
----------------------------

#### One year ago, I was the one frozen at the screen.

I felt paralyzed in my own progress in building my first apps, and there was not a YouTube tutorial in the world that could have pulled me out of it. In the end, it was 1-1 AI coaching and [pair prompting](https://bethanythebuilder.substack.com/p/pair-prompting) that taught me how to build my way out.

The coaching was tactical and surgical; I showed my work to my coach [Leon Coe](https://www.linkedin.com/in/leoncoe/) and described my process and he gave me tips on how to approach it from an engineering-first mindset. The structured prompts he provided helped, but not as much as the human conversation he facilitated. Some days I just needed someone to encourage me to keep going, to hear me out on my MVPs and talk me through it before I spent the time on screen.

![](https://storage.googleapis.com/papyrus_images/8a55b966829dda212aece01d3d2723600a88e53e42f778a226e95c712e1f0162.jpg)

Even with all the tools available, it’s hard to know where to start. (image source: NanoBanana)

If coaching taught me how to think like an engineer, building [an entire mobile app from scratch](https://bethanythebuilder.substack.com/p/introducing-scribblins-a-creative) with a seasoned developer who had been there, done that, taught me how to have the creative confidence to invent something totally new. I know this may seem like a weird thing to need to learn, but it’s actually an unfamiliar practice for most of us. We’ve spent our digital lives trained as _users_ of software, treating computers as things that are made _for_ us. It is a radical, disorienting shift to suddenly be in the driver’s seat.

When the tables are turned and anyone has the potential to create, most of us have no clue how to even start. The truth is: I needed someone to sit with me and make sure I didn’t crash. And I know I’m not alone, which is what’s really at the heart of this **emotional glitch**.

There’s something decidedly human and important about **being witnessed** in this moment. It’s why, at our [localhost:3000](https://bethanythebuilder.substack.com/p/localhost3000-and-the-power-of-the) gatherings (an AI no-code builder club I co-created with [Louise Macfadyen](https://lmacfadyen.com/)), we see a recurring trend: Builders who have spent months coding in total isolation suddenly choose that room to debut their work. Whether or not the code works is beside the point. They are looking for the one thing an LLM can’t give: A human nod of recognition.

* * *

**#2: The Social Glitch**
-------------------------

#### The first time I met someone who was “AI pilled” was when I met [Pasquale D’Silva](https://x.com/okpasquale) in early 2023.

At the time, hot off the presses of ChatGPT’s November launch, he shared how he’d been able to replace his entire engineering org with AI no-code building, and how he predicted that more humans would seek out AI companions to help us make sense of the world around us. I’m not going to lie: At the time, I felt like I was meeting someone from 10 years in the future and told him as much.

As I’ve doubled down on my own AI usage in the years since, the gap between me and Pasquale has narrowed in some ways, while the gap between me and my “non AI-pilled friends” keeps widening. To some, I’m the one “living in the future.” To others, I’m “so six months ago” in my approach.

While it’s becoming increasingly common to discuss the individual mental health issues that can result from “[AI psychosis](https://pmc.ncbi.nlm.nih.gov/articles/PMC12712562/)” (which include, among other things, a deluded state of reality), there’s also a subtler, interpersonal byproduct. This **social glitch** isn’t just about losing touch with reality; it’s about losing the ability to _synchronize our realities_.

![](https://storage.googleapis.com/papyrus_images/e2a773b1af03f1a4f3e2edcb74abc974323e9555f5c6f5495c4a3261e36ea28a.jpg)

What it feels like to blow through the context window of a single human (image source: NanoBanana)

Today, the people most absorbed in the technology (like me) are so preoccupied with the dopamine kick of constant creation that it’s becoming harder to prioritize meeting up IRL (“in real life”) over spending more time building. When we do meet, it’s a high-energy, exhilarating, multi-threaded conversation that starts and ends with a million different possibilities. Almost as if we have a hard time saying no to each other (because we’ve all gotten so used to playing this infinite game of improv with our AI companions).

On the other side (but equally troubling), I’m finding it tougher to have conversations with humans on the complete other end of the spectrum and _not_ mention AI. And anytime I’m asked even a basic human question (for instance, _“How are you?”_), my unfiltered laundry list of personal updates, thoughts, and plans overwhelms to the point of mental exhaustion. I’ve gotten so accustomed to dropping my full monologue onto every AI agent without skipping a beat that I now completely blow through a single human’s context window every damn time. I’ve become a high-bandwidth creature in a low-bandwidth world. And the truth is: It’s making me a worse friend.

Add to this complexity the wide range of experiences people have with AI, and you can see why this failure to sync realities is becoming a broader social concern.

It’s one thing to have an awkward dinner with a friend. But it’s another thing entirely when our schools, our businesses, and our neighborhoods are all running on different versions of the truth. When we lose the ability to reality-sync, we don’t just lose friends, we lose the ability to collectively build _anything_ together.

![](https://storage.googleapis.com/papyrus_images/535cca6744ded16a3f822c2b1bf7de342f601bff25d39bf0ca0c9a98dd813c8a.jpg)

Merge conflicts in action (image source: NanoBanana)

* * *

**#3: The Institutional Glitch**
--------------------------------

#### In the science fiction book, [Klara and the Sun](https://www.amazon.com/Klara-Sun-novel-Kazuo-Ishiguro/dp/059331817X), there’s a concept of _“_[_lifted_](https://www.litcharts.com/lit/klara-and-the-sun/terms/lifting)_”_ humans.

These are people whose bodies have been biologically modified in some way to allow them to take on excess capacity (or, in theory, make them smarter). In the book, this inevitably leads to all sorts of societal divisions between those who are lifted and those who are not. Certain colleges only admit “lifted” students, certain jobs are only open to “lifted” kids…you get the picture.

Our present-day AI age is not far off from this fictional depiction. A college degree has long been the single greatest way to “lift” yourself in our society, but adept usage of AI is quickly becoming the new currency of power and influence. The trouble is that institutions can’t keep up.

Universities with the deepest pockets are integrating AI into their curricula first, essentially resourcing their way into an AI-augmented upper class. Meanwhile, underfunded schools will likely pivot toward the pragmatic realism of trade schools. This leaves a hollowing out in the center. For everyone in the middle relying on a standard degree to signal their worth, the floor is falling out. When a four-year credential no longer guarantees a job, the so-called “standard path” becomes a dead end.

![](https://storage.googleapis.com/papyrus_images/76f4baf2d22985975f2a6cd5fbefc0da9ec6df55dbc91392793f24339e5fe6ea.jpg)

When standard paths and institutional decision-making frameworks come into question, what happens next? (image source: NanoBanana)

Companies aren’t doing much better. Through corporate workshops with Build First, I have a unique lens into how organizations around the globe are tackling AI head-on. I notice they often treat AI as a procurement problem rather than a cultural shift. But by focusing on allow-lists and enterprise licenses, organizations accidentally create an internal bifurcation between those who self-start and those who wait for permission. In this high-velocity era (where a company we trained on Gemini in January is already migrating to Claude by April), a “tools-first” mindset [leads only to transience](https://www.theneurondaily.com/p/claude-beat-chatgpt-2-to-1). When decades-old decision-making frameworks are applied to a future where the finish line is constantly moving, actual adoption plummets, and lasting skills never take root.

Teaching AI is a lot less like teaching a tool and a lot more like teaching a language. You need more than a vocab book and a single experience at a fine dining restaurant to learn French. Unfortunately, the scope of learning a language is decidedly beyond what any single organization, individual, or piece of software can support.

These institutional breaking points around resource allocation and procurement do more than just confuse people; they drown out the most important factor of all: The intrinsic motivation to learn it yourself.

* * *

**The Patch**
-------------

Last week, 20 students at the [Zahn Innovation Center](https://zahncenternyc.com/) at City College [presented real AI-powered solutions to business challenges](https://www.linkedin.com/feed/update/urn:li:activity:7453475334525628416/) introduced by [André Ware](https://www.linkedin.com/in/andr%C3%A9-ware-mpa-9542b5142/), the founder of the nonprofit, [BeeU NYC](https://beeunyc.org/).

Just six weeks earlier, most of these students had never used AI beyond basic chatbot conversations. The studio class featured liberal arts students, not software engineers. But over just three rapid-fire AI build sessions, students were prompted to quickly level up their own understanding of AI, using it first as a thought partner, then as a data visualizer, and ultimately as a problem-solving tool.

We leaned into the friction points that usually stop people (i.e., confusion over tool usage, shifting project scopes, even group dynamics) and used them as the primary learning material. And in the end, every single group used AI in at least four different ways in their presentation: Research, design, website creation, and even deployable software.

The winning team created an integrated online booking system intended to dramatically streamline the registration flow for BeeU’s hive experience. Notably, nobody on this team had ever built an app before. Today, we are already talking about how to get it deployed live.

This is what it looks like to build in community. But Zahn was a sprint, and BeeU is one nonprofit. While this work is popping up across the city in builder clubs, hackathons, and one-off trainings hosted by nodes like [Decoded Futures](https://www.decodedfutures.nyc/) or AI small business hackathons through neighborhood labs like [Welcome to Chinatown](https://welcometochinatown.com/), it’s not enough to meet the tidal wave of demand. Many of these spaces are temporary, and it’s impossible to patch a systemic glitch when we’re still working in silos.

![](https://storage.googleapis.com/papyrus_images/25eda8451ad7f347f43506e4ada0cd380359781de0bf8c43b56b186565d74fb5.jpg)

What it might look like to build in community (image source: Gemini)

That’s what I want to build next. [Imagine a storefront in New York City](https://www.buildfirst.nyc/) where a florist, a laundromat owner, a technologist, and a college student sit at the same table on a Wednesday afternoon and ship something real by Friday. Not a course, not a coworking space. Just a room where the emotional, social, and institutional glitches all get answered by the same simple thing: People working on problems they actually have, together.

AI literacy isn’t taught, it’s practiced. Ideally, in public, in place, with people who have real problems and real neighbors. The storefront isn’t a venue for AI training; it’s the answer to all three glitches at once. The emotional permission to start. The social fabric to keep going. The institutional alternative to waiting your turn.

When we stop asking for a curriculum and start creating our own outcomes, the institutional divide starts to dissolve. We aren’t waiting to be lifted; we are architecting our own ladders.

If you want a seat at that table, [let’s build](https://www.buildfirst.nyc/).

---

*Originally published on [Hard Mode First](https://hardmodefirst.xyz/glitches-in-the-human-machine)*
