Earlier this week, a no-code builder named Leo went viral on X after sharing a big problem: His SaaS app, which he built using Cursor, was suddenly under attack. His API keys had been maxed out, users were bypassing subscriptions, and his database was being manipulated—all after he had publicly shared how he "vibe-coded" the app himself.
As it turned out, he had inadvertently exposed his API keys, creating a major security vulnerability in his application. The response from the internet ranged from mockery to outright hacking, but also included support from “white hat” hackers offering debugging advice.
Leo was so overwhelmed—both by the onslaught and by his lack of the technical expertise needed to fix the issue—that he concluded it would be safer to stop “building in public” and posting about his experiences.
As a non-developer myself who has also only recently begun no-code building, I deeply empathize with this problem. I’ve also learned that, while many no-code platforms make it easy to integrate API keys (the credentials that connect one application to another), they don’t always warn users about security best practices—like keeping keys private or implementing rate limits.
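To make “keeping keys private” concrete: the single most common mistake is pasting a key directly into source code, where it ends up in a public repo or screenshot. A minimal sketch of the safer pattern, in Python—note that `EXAMPLE_API_KEY` is a hypothetical variable name for illustration, not tied to any particular service:

```python
import os

def get_api_key(var_name: str = "EXAMPLE_API_KEY") -> str:
    """Read an API key from the environment rather than from source code.

    The key lives only in the server's environment (or a local .env file
    that is excluded from version control), so it never appears in your
    repository, screenshots, or client-side code.
    """
    key = os.environ.get(var_name)
    if key is None:
        raise RuntimeError(
            f"Missing {var_name}. Set it in your deployment environment, "
            "never in committed code."
        )
    return key
```

Most no-code and hosting platforms expose the same idea as “environment variables” or “secrets” in their settings panel; the point is that the key is supplied at runtime instead of being written into the app itself.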
Despite spending 15 years in tech, I had never fully grasped just how critical security protocols are for handling APIs until I started building my own no-code project. In fact, I only learned about key protection because I had direct access to a developer friend while building out my own MVP.
This strikes me as both a major challenge and a unique opportunity. As more people build software without coming from traditional coding backgrounds, how do we ensure their applications remain safe, secure, and resilient? More importantly, how can we equip newcomers (myself included!) with the knowledge to avoid critical security oversights before they become costly mistakes?
As with a lot of security best practices on the Internet, the next best thing to formal, professional training is often real-world experience. Reading about this developer’s experience reminded me of a security incident I faced at a former job—back before I knew better.
Several years ago, I was on vacation when I got an email from a U.S. News reporter who wanted to talk to me about one of my favorite topics: helping college students land jobs in the tech sector.
Normally, I do my best to unplug while traveling, but this felt like such an obvious win that I made an exception. Not only did I log into their webcam system from a rental car to conduct an interview while my husband drove to our hotel, but afterward, I even wrote a blog post expanding on my advice for the story.
The only problem? There was no reporter. There was no story. And that software they had me install? It wasn’t built for interviews—it was actually a malicious browser extension that gained access to my entire Google account, including the admin panel.
Suffice it to say, by the time I got back to work on Monday, my organization had revoked my permissions as an admin on the company account. I was mortified—especially by how easily I fell for a scam that played directly into my seemingly innocent desire to support young people in their careers.
But then they told me the worst part: The whole thing was a setup.
As it turned out, as part of an internal cybersecurity test, the company hired a firm to see if they could breach our system through social engineering. I was the chosen mark.
Putting aside the obvious trust issues that you imagine might result when you get spear-phished by your own employer (great confidence boost, thanks), I learned a valuable lesson that day: Be more careful on the Internet.
I think about this story a lot now as a no-code hacker and builder. More often than not, you don’t know what you don’t know—until it’s too late. This is why fundamentals matter more than ever.
I'm worried that these scares (and scams) will keep happening. Just last night, another developer shared a similar issue:
As no-code tools become more popular, these risks won’t be limited to just a handful of builders. The real question isn’t whether to build in public, but how to do so safely.
Today, I asked another developer that I’m working with, essentially, “What else do I need to know that I don’t know right now?”
He laughed a little at that. “I mean, there’s a reason that there are entire specialist professions centered around cybersecurity in engineering.”
Of course, I know that in theory. But how am I supposed to apply it in practice? After all, if one exposed API key can create such chaos (a lesson even I only learned a couple of months ago), it raises the question: What else am I missing? What am I not seeing?
This is why I continue to believe that foundational computer science skills (at least for now) remain critically important for builders. Not because everyone needs to hand-code an app from scratch, but because anyone creating software for others will be a lot better off by understanding what makes for a solid foundational layer—one that includes best practices in security, authentication, and handling sensitive user data.
Right now, there’s a huge opportunity to bridge the gap by making coding fundamentals more accessible. I hope more people choose to lean into this moment to educate and empower new builders—rather than exploit their blind spots.
Bethany Crystal