tl;dr - I guess I'm writing vibe code developer doc requests now. You can skip this one unless you're also as in the weeds as I am about building with no-code tools. If you happen to be building developer tools for no-code builders, I hope this has some good ideas to help you support more users like me. And if you've figured out some better hacks as a no-code builder, please send them my way!
Since starting off on my no-code journey building an app this year, I've been using a lot of developer tools: text editors like Cursor, version control tools like GitHub, no-code builder tools like Replit and Vercel, integration tools like Resend, hosting tools like the AWS console, monitoring tools like Sentry, and business intelligence tools like Metabase.
As a non-coder, the barrier to entry for all of these tools is still quite painful. In nearly every case, I couldn't find my footing in these apps without at least a short 1:1 with a developer or founder friend to get me going.
What I've noticed from using so many developer applications is that a lot of them (still) haven't translated the way they talk about how to use their tools for the non-technical builder persona.
Here are a few things I'd love to see people consider when building developer tools for non-developers.
There's a lot of time spent on formal documentation of new APIs and SDKs for people who know how to read code. But as a non-coder, stuff like this is still pretty overwhelming to me.
I'm still learning what, exactly, it means to pull a new API into my codebase, what parameters I need to set, and how to test and configure it. So reading through docs like this (with the example requests in code) isn't helpful for me.
Instead, I wish OpenAI included a "quick start guide" for vibe coders as a second part of their documentation. One that shows how to prompt your AI no-code tool to start implementing the new API, along with how to think through the technical architecture changes that might result from any new integration.
In this case, a few things I wanted to know included:
Questions I Asked Before Setting Out to Implement ChatGPT's New Image API
What, exactly, do I say to my Cursor instance in order to call this new API?
What else do I have to change in my codebase in order to call this new API?
How can I test the prompt parameters to see how this image API is implemented?
What do I have to tell my AI about where to store the images? What changes do I need to make to the way I'm hosting my app to account for this?
What database changes do I need to make in order to make room for this new image asset type in my app?
There are probably a lot more questions I should be asking, so maybe the docs could also give me a fuller sense of the scope of my remaining questions.
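To make this concrete, here's roughly the kind of snippet I wish a vibe-coder quick start handed me directly. This is my own rough sketch using the official openai Node package; the model name, size option, and base64 response field are my assumptions from the current docs, and where the file ends up is exactly the storage question I'd still need answered, so double-check all of it before pasting it anywhere.

```typescript
import OpenAI from "openai";
import { writeFileSync } from "node:fs";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function generateHeroImage(prompt: string) {
  const result = await openai.images.generate({
    model: "gpt-image-1", // assumed model name for the new image API
    prompt,
    size: "1024x1024",
  });

  // The response carries base64-encoded image data; deciding where this
  // file actually lives (local disk here, but S3 / Vercel Blob / a database
  // row in production) is one of the architecture questions above.
  const b64 = result.data?.[0]?.b64_json;
  if (!b64) throw new Error("No image data returned");
  writeFileSync("hero.png", Buffer.from(b64, "base64"));
}

generateHeroImage("A watercolor skyline at dawn").catch(console.error);
```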
One of the hardest parts for me in building prompt-based AI applications is that it's really hard to quickly test how changes to my prompt affect the output end users actually see.
If I were a developer, I'd probably spin up my own internal dashboards to help me measure and test this, both in the context of my own app and across different LLMs to see how each changes the output. But that doesn't seem like the best use of my extremely limited time.
I'd love to see more developer tooling companies offer sandbox environments to test prompts and integrations before committing them to code. The OpenAI Developer Playground is a great start for this, since I can test the wording of my prompt there.
To take it one step further, I could then use help extracting my final prompt into code and feeding that to my no-code tool (i.e., Cursor) to implement it in practice.
One thing I would have loved in this particular case is a bit of a WYSIWYG module that lets me write in plain text what I want the new image API to do, then share a snippet of my own codebase and have the docs essentially write the injected bit of code that produces my desired output. While I recognize this would likely be a patch onto my codebase and Cursor would still need to figure out a lot more, it'd get me a lot closer to realizing my ideas in code.
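For example, if my Playground experiments ended in a prompt I liked, the "extracted into code" version might look something like this. It's purely illustrative: the function name, prompt text, and model are placeholders I made up, though the chat completions call itself follows the official Node SDK.

```typescript
import OpenAI from "openai";

// The prompt I settled on in the Playground, pulled out as a constant so
// my no-code tool has one obvious place to wire it into the app.
const LISTING_PROMPT = `You are a copywriter for a small real-estate app.
Rewrite the user's rough notes as a warm, three-sentence listing description.`;

const openai = new OpenAI();

export async function describeListing(roughNotes: string): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // whichever model I tested with in the Playground
    messages: [
      { role: "system", content: LISTING_PROMPT },
      { role: "user", content: roughNotes },
    ],
  });
  return completion.choices[0]?.message?.content ?? "";
}
```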
The first thing I do with docs like these is copy and paste them into text files, then train a custom GPT on both my project goals and that documentation set so I can interrogate the AI about how to use the new tool. This is clunky and not always effective.
So I was really excited last week when I set out to build a Farcaster MiniApp and found an "Ask in ChatGPT" button right in the top right corner of their docs page.
When you click this button, it opens a custom ChatGPT window with a pre-loaded prompt that asks the AI to review and analyze the docs page, priming it to respond to any questions you have.
To take this further, I'd love to then layer in my additional project goals and any pre-existing docs or code I have so the AI gets the full picture. For me, it's less about the initial starting setup and more about getting the bigger-picture architecture of the application to a point that makes sense.
The other nice thing Farcaster did with their docs was include a link to a live markdown file, with clear instructions on how to feed it to Cursor so its context stays up to date.
This was instantly helpful: from the moment I started working through the quick start guide and installing the libraries needed to build the app, my Cursor instance already had some context to get me going.
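If I were wiring up that refresh step by hand, I imagine it looking something like the sketch below. The URL is a placeholder for whatever live markdown link the docs expose, and the file name is just where I'd choose to park it so Cursor indexes it alongside my code.

```typescript
import { writeFile } from "node:fs/promises";

// Placeholder: swap in the actual live markdown link from the docs page.
const DOCS_URL = "https://example.com/path-to-the-live-docs.md";

async function refreshDocs() {
  const res = await fetch(DOCS_URL);
  if (!res.ok) throw new Error(`Failed to fetch docs: ${res.status}`);
  // Park the file in the repo so Cursor can index it alongside the code.
  await writeFile("vendor-docs.md", await res.text());
  console.log("Docs refreshed for Cursor to pick up");
}

refreshDocs().catch(console.error);
```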
I'd love access to a context-aware developer architecture coach that helps me ask questions about the bigger-picture goals of what I'm building and then helps me break those hopes and dreams down into micro-features. Before implementing a new API or developer tool, I'd like to be able to easily share my current codebase and my hopes and dreams for the feature, and receive a clearly specified, incremental set of steps (along with things to watch out for and trade-offs).
Yesterday I read Fred Benenson's post, "The Perverse Incentives of Vibe Coding," where he writes about how many of the no-code editors (which typically write a lot of bloated code) have skewed incentives to do so, because longer code burns more tokens per call, which in turn drives up costs for the end user.
In the post, he suggests a few ways to break the work apart: forced planning up front, then more incremental version control during the build. While I think these make a lot of sense for engineers, it's still hard for me to do this on my own without years of pre-training on how to correctly chunk and simplify big features into smaller pieces.
To compensate, I've built myself a bunch of custom GPTs that I train as "CS professors" or "engineering whisperers" to help me figure out how to better communicate my goals and then translate those desires into code. But they're not good enough at helping me produce elegant, sleek code while also laying the foundation for a more complete build.
These are just a few of the things I'd love to see to make developer tools more user-friendly and accessible for non-developers like me. Over time, I imagine we'll see more and more applications that "bridge" between these personas, but right now, by and large, the ecosystem is still not mature enough to fully support all vibes, all the time.