Ask HN: How far has "vibe coding" come?
Posted by pigon1002 11 hours ago
I’m trying to understand where “vibe coding” realistically stands today.
The project I’m currently working on is getting close to 60k lines of code, with fairly complex business logic. From what I’ve heard, at this scale only a few tools (like Claude’s desktop app) are genuinely helpful, so I haven’t experimented much with other AI coding services.
At the same time, I keep seeing posts about people building 20k lines of code and launching a SaaS in a single 40-hour weekend. That’s made me question whether I’m being overly cautious, or just operating under outdated assumptions.
I already rely on AI quite a bit, and one clear benefit is that I now understand parts of the codebase that I previously wrote without fully grasping. Still, at my current pace, it feels like I’ll need several more months of development, followed by several more months of testing, before this can become a real production service. And that testing doesn’t feel optional.
Meanwhile, products that are described as being “vibe coded” don’t seem to be getting particularly negative evaluations.
So I’m wondering how people here think about this now. Is “you don’t really understand the code, so it’ll hurt you later” still a meaningful criticism? Or are we reaching a point where the default approach to building software itself needs to change?
I’d especially appreciate perspectives from people working on larger or more complex systems.
Comments
Comment by codingdave 8 hours ago
That is one of the strongest valid criticisms. Even if we ignore the possibility that the code that is vibed will be buggy and insecure, the real long-term problem is not having someone who understands the system. Almost every well maintained app has one or more people who grok the whole thing, who can hear a problem described and know right where the fix will be. They'll have a mental model of the whole system and can advise on architecture changes and other refactors. They can help teach the codebase to new folks. And most importantly, when an outage happens, they are the ones who quickly get you back up and running.
The lack of those people is why legacy systems are brittle and hard to maintain, so vibe coding a complex app puts you directly into that painful legacy maintenance mode.
One thought people are starting to throw out there is, "But the AI can just re-write the whole app every time we have a bug, so we never need to know things to that level." But those people have never worked with a customer base that gets ticked off when the same bugs re-appear on every release, or when a dozen small UI changes show up on every release.
Vibe coding might give you some working code. But working code is an astoundingly low bar to set for actually building a product that pleases a customer base.
Comment by jf22 5 hours ago
With the right MCPs or additional context you can have the tools go read PRs or the last 10 tickets that impacted the system and even go out and read cloud configuration or log files to tell you about problems.
The concept of a "bus factor" is a relic of the past.
Comment by pigon1002 3 hours ago
So far, I’ve been reading through almost 100% of the code AI writes because of the traps and edge cases it can introduce. But now, it feels less like “AI code is full of pitfalls” and more like we need to focus on how to use AI properly.
Comment by ex-aws-dude 1 hour ago
"If it worked for my use case and didn't work for yours, you're obviously just doing something wrong. That's the only explanation."
Comment by anditherobot 2 hours ago
Now it's easier to traverse a live plan and to quickly make micro pivots as you go.
I also think that architecture needs to change, toward design patterns that provide as much context to the LLM as possible and increase its understanding of the system.
Comment by jackfranklyn 11 hours ago
What actually happens is tiered understanding. I might vibe-code a utility function (don't care about implementation, just that it works), but I need to deeply understand data flow and business logic boundaries.
The 20k LOC weekend SaaS stories are real but missing context. Either the person has deep domain knowledge (knows WHAT to build, AI helps with HOW), or it's mostly boilerplate with thin business logic.
For complex systems, the testing point is key. AI generates code faster than you can verify behaviour. The bottleneck shifts from "writing code" to "specifying behaviour precisely enough that you know when it's right". That part isn't going away.
The people I see struggling aren't the ones who don't understand their code - it's the ones who don't understand their requirements.
Comment by throwaw12 10 hours ago
What's cool about this is that every time a new engineer joins this wave, more interesting ideas come in and shape the "vibers" industry.
In my day-to-day job I'm now worried it will be very difficult to get a new job, because I vibe so much that I've almost forgotten how to write code from scratch.
Examples:
* Hey Claude, increase mem usage from 500 MB to 1500 MB in production - fire and forget
* Plan mode: What kind of custom metrics can we add to the Xyz query processor? Edit mode: add only 3, 4, and 9. Later we will discuss 8
* Any other small changes I have...
I primarily became a manager of a bunch of AI agents running in parallel. If you interview me and ask me to write some concurrent code, there is a high probability that I will fail it without my AI babies.
Comment by verdverm 9 hours ago
They care about the business. If you want to be like them, start by caring far less about the actual code or how it gets made. They will bring in people later who do, who will clean up the vibed mess
*I'm generally not a fan of this, but you asked
Comment by jf22 5 hours ago
Gold plating code with the best quality standards takes LLMs seconds whereas it would take you days doing it by hand.
Comment by verdverm 4 hours ago
and they may or may not "listen"; they are non-deterministic and have no formal means or requirement to adhere to anything you write. You know this because, in your own experience, they violate your rules all the time.
Comment by jf22 2 hours ago
And now we have LLMs that review LLM generated code so it's easy to verify quality standards.
Comment by pigon1002 10 hours ago
I should also consider isolating the custom logic in the existing codebase as much as possible, converting it into general logic, and then testing the “vibe” approach directly.
Comment by absynth 7 hours ago
Using LLMs for coding makes me feel like a maintenance programmer, even though the code is brand new and what I'm debugging is an LLM misunderstanding. It's weird to be working on a 10k-line codebase that didn't exist a few hours ago and then spend the next 4 hours debugging it. Having the whole thing done around 16-20 work hours later is really strange.
Comment by wmeredith 4 hours ago
This is the issue. A firehose of code production isn't useful for long stretches, but it's very useful in short bursts. Total volume of code was never the bottleneck or the goal.
Working with LLMs has all the same pitfalls as working with people. Programming effectively for the long term requires the same rules whether you're using LLMs or not: build incrementally, test and refactor as you go (not at the "end"), keep scopes small, ship frequently, etc.
Comment by AnimalMuppet 5 hours ago
For a larger, more complex system, the real barrier is understanding what needs to be done well enough that you can build something that does it. (In my opinion, that has always been the real barrier.)
Comment by sejje 4 hours ago
The moat is that most people aren't using LLMs, most people aren't building products, most people who might build it will never hear of your product, people will have other ideas competing for their attention, people are incompetent, and there's all the work that happens after the coding.
How many people do you think are out there vibe coding things?
Aren't companies built on execution? Or do we now build companies on tech stacks alone?
Before LLMs, how could a solo dev get a moat? Any company could hire a team to replicate your product. They might not care how much it costs them. "Hours spent coding" isn't the moat.