The Five Levels: From spicy autocomplete to the dark factory

Posted by benwerd 5 days ago

Comments

Comment by saulpw 2 hours ago

One of the other authors he links to[0] brags that he's released 10 projects in the past month, like "Super Xtreme Mapper, a high-end, professional MIDI mapping software for professional DJs", which has 4 stars on Github. Despite the "high-end, professional...for professional" description, literally no one is going to use it, because this guy can't [be trusted to] maintain this software. Even if Claude Code is doing all the work, adding all the features, and fixing all the bugs, someone has to issue the command to do that work, and to foot the bill. This guy is just spraying code around and snorting digital coke.

There is plausibly something here with AI-generated code, but as always, the value is not in the first release but in the years of maintenance and maturation that make it something you can use and invest in. The problem with AI is that it's giving these people hyper-ADHD: they can't commit to anything, and no one will use vibe-coded tools--I'm betting not even themselves after a month.

[0] https://nraford7.github.io/road-runner-economy/

Comment by Dr_Birdbrain 2 hours ago

My feeling is that AI-generated code is disposable code.

It’s great if you can quickly stand up a tool that scratches an itch for you, but there is minimal value in it for other people, and it probably doesn’t make sense to share it in a repo.

Other people could just quickly vibe-code something of equal quality.

Comment by thewebguyd 1 hour ago

That's how I've been using and treating it, though I'm not primarily a developer. I work in ops, and LLMs write all sorts of disposable code for me, primarily one-off scripts or little personal utilities. These don't get shared with anyone else or put on GitHub, but they have been incredibly helpful: SQL queries, some Python to clean up or dig through data sets, log files, etc., to spit out a quick result when something more robust or permanent isn't needed.

Plus, so far, LLMs seem better at writing code to do a thing than at directly doing the thing, where they're more likely to hallucinate, especially when it comes to working with large CSV or JSON files. "Re-order this CSV file to be in alphabetical order by the Name field" will make up fake data, but "Write a python script to order the Name field in this CSV alphabetically" will succeed.
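
The script it writes for a prompt like that is a few lines of boring, reliable Python. A minimal sketch of the kind of thing I mean (the file and column names here are just placeholders):

    import csv

    # Sort the rows of a CSV by its "Name" column; assumes a header row.
    with open("input.csv", newline="") as f:
        reader = csv.DictReader(f)
        fieldnames = reader.fieldnames
        rows = sorted(reader, key=lambda r: r["Name"].lower())

    with open("sorted.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)

No hallucinated rows, because the model never touches the data, only the code.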

Comment by WorldMaker 55 minutes ago

My growing (cynical) feeling is that AI-generated code is legacy-code-as-a-service. It is by nature trained on other people's and other companies' legacy code. (The training-set window is always in the past, and there's the economic question of which companies would ever volunteer their best proprietary production code into training sets. Sure, there are a few entirely open source companies, but those are the exception, not the rule.) "Vibe code" is essentially delivered as Day Zero "legacy code" in the sense that the person who wrote it is effectively no longer at the company. Even if context windows get extended to incredibly huge sizes and you have great prompt-preservation tools, eventually you no longer have the original context, and the models themselves retrain and get upgraded every few months, so they are essentially "different people" each time. Most importantly, the models can't tell you the motivating "how" or "why" of anything; at best, good spec documents and prompts can, and even that is a gamble.

The article starts with a lot of words about how the meaning and nature of "tech debt" are going to change a lot as AI adoption increases and more vibe coding happens, but I think I disagree on what that change means. I don't think AI reduces "tech debt". I don't think it is "deflationary" in any way. I think AI is going to gift us a world of tech debt "hyperinflation". When every application in a company is "legacy code", all you have is tech debt.

Having worked in companies with lots of legacy code, the thing you learn is that those apps are never as disposable as you want to believe. The sunk cost fallacy kicks in. (Generative AI Tokens are currently cheap, but cheap isn't free. Budgets still exist.) Various status quo fallacies kick in: "that's how the system has always worked", "we have to ensure every new version is backwards compatible with the old version", "we can't break anyone's existing process/workflow", "we can't require retraining", "we need 1:1 all the same features", and so forth.

You can't just "vibe code" something of equal quality if you can't even figure out what "equal quality" means. That's the death of many a legacy-code "rewrite project". By the time you've figured out how every user uses it (including how many bugs are load-bearing features in someone's process), you have too many requirements to consider, not enough time or budget left, and eventually a mandate to quit and "not fix what isn't broken". (Except it was broken enough to start up a discovery process at least once, and may do so again when the next team thinks they can dream up a budget for it.)

Tech debt isn't going away and tech debt isn't getting eliminated. Tech debt is getting baked into Day Zero of production operations. (Projects may be starting already "in hock to creditors". The article says "Dark Software Factory" but I read "Dark Software Pawn Shop".) Tech debt is potentially increasing faster than humans can keep up with understanding it. I feel like legacy-code skills are going to be in higher demand than ever. It is maybe going to be "deflationary" in cost for those jobs, but only because the supply of legacy-code projects will be so high that software developers will have a buffet to choose from.

Comment by wordpad 1 minute ago

I don't see why AI wouldn't be able to help you solve your legacy code problems.

It still struggles to make changes to large code bases, but it has no problem explaining those code bases to you, helping you research or troubleshoot functionality 10x faster, especially if you're knowledgeable enough not to take its responses as gospel and are willing to have the conversation. A simple layman prompt of "are you sure X does Y for Z reason? Then what about Q?" will quickly get to the bottom of any functionality. A 1-million-token context window is very capable if you manage that context window properly, with high-level information and not just your raw code base.
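
By high-level information I mean something like an outline of the code base rather than the code itself. A rough sketch of the idea (my own illustration, not a specific tool):

    import ast
    from pathlib import Path

    def repo_map(root: str) -> str:
        """Build a compact outline of a Python code base: one line per
        module, class, and function, instead of dumping raw source."""
        lines = []
        for path in sorted(Path(root).rglob("*.py")):
            try:
                tree = ast.parse(path.read_text())
            except (SyntaxError, UnicodeDecodeError):
                continue
            lines.append(str(path))
            for node in tree.body:
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    args = ", ".join(a.arg for a in node.args.args)
                    lines.append(f"  def {node.name}({args})")
                elif isinstance(node, ast.ClassDef):
                    lines.append(f"  class {node.name}")
        return "\n".join(lines)

    # The outline is tiny compared to the source, so most of the
    # context window stays free for the actual conversation.
    print(repo_map("."))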

And once you understand the problem and the required solution, AI won't have any problems producing high-quality working code for you, be it in Rust or COBOL.

Comment by bigfishrunning 2 hours ago

> snorting digital coke

What an apt description -- the website on the other side of that link is the most coked-out design I've ever seen.

Comment by galaxyLogic 1 hour ago

Software products are about unique competitive value that grows over time. Products either have it or they don't. AI-produced software is like open source in a sense: you get something for free. But who's going to get rich if everybody can just duplicate your product by asking AI to do it again?

Think of investing in the stock market by asking AI to do all the trading for you. Great, maybe you make some money. But when everybody catches on that it is better to let the AI do the trading, then other people's AI is going to buy the same stocks as yours, and their price goes up. Less value for you.

Comment by jacquesm 1 hour ago

Spot on. That's why so far all of the supposed solutions to 'the programmer problem' have failed.

Whether this time it will be different I don't know. But originally compilers were supposed to kill off the programmers. Then it was 3GLs and 4GLs (the '70s and '80s). Then it was 'no code', which eventually became 'low code' because those pesky edge cases kept cropping up. Now it is AI, the 'dark factory' and other fearmongering. I'll believe it when I see it.

Another HN'er has pointed me in an interesting direction that I think is more realistic: AI will become a tool in the toolbox that will allow experts to do what they did before but faster and hopefully better. It will also be the tool that generates a ton of really, really bad code that people will indeed not look at, because they cannot afford to look at it: you can generate more work for a person in a few seconds of compute time than they can cover in a lifetime. So you end up with half-baked, buggy, and insecure solutions that do sort of work on the happy path, but that also include a ton of stuff that wasn't supposed to be there in the first place because it wasn't explicitly ruled out in the test set (which is a pretty good reflection of my typical interaction with AI).

The whole thing hinges on whether or not that can be fixed. But I'm looking forward to reading someone's vibe coded solution that is in production at some presumably secure installation.

I'm going to bet that 'I blame the AI' is a pattern we will be seeing a lot of.

Comment by exmadscientist 52 minutes ago

In the long run, it's going to become about specifications.

Code is valuable because it tells computers what you want them to do. If that can be done at a higher level, by writing a great specification that lets some AI dark factory somewhere just write the app for you in an hour, then the code is now worthless but the spec is as valuable as the code ever was. You can just recode the entire app any time you want a change! And even if AI deletes itself from existence or whatever, a detailed specification is still worth a lot.

Whoever figures out how to describe useful software in a way that can get AI agents to reliably rebuild it from human-authored specifications is going to get a lot of attention over the next ~decade.

Comment by thewebguyd 8 minutes ago

> Whoever figures out how to describe useful software in a way that can get AI agents to reliably rebuild it from human-authored specifications

Which is why I think there's very little threat to the various tech career paths from AI.

Humans suck at writing specifications or defining requirements for software. It's always been the most difficult and frustrating part of the process, and always will be. And that's just articulating the requirements, to say nothing of agreeing on them in the first place so you can even start writing the spec.

If a business already cannot clearly define what it needs to an internal dev team, with experts who can somewhat translate the messy business logic, then it has zero hope of ever doing the same for an unthinking machine and expecting any kind of reliable output.

Comment by ElevenLathe 10 minutes ago

One of the unexpected benefits of everyone scrambling to show that they used AI to do their job is that the value of specs and design documents is dawning on people who previously scoffed at them as busywork. Previously, if I wanted to spend a day writing a detailed document containing a spec and a discussion of tradeoffs and motivations, I'd have to hide it from my management. Now, I'm writing it for the AI, so it's fine.

Comment by vunderba 58 minutes ago

> The problem with AI is that it's giving these people hyper-ADHD

Shouldn't be a problem - I've seen AT LEAST half a dozen almost-assuredly vibe coded projects related to dealing with ADHD in the last month...

Show HN: I gamified a productivity app to help my ADHD friends get things done https://news.ycombinator.com/item?id=46797212

Show HN: built a 24h-clock based radial planner to help with ADHD time blindness https://news.ycombinator.com/item?id=46668890

Show HN: DayZen: Visual day planner for ADHD brains https://news.ycombinator.com/item?id=46742799

Show HN: ADHD Focus Light https://news.ycombinator.com/item?id=46537708

Show HN: I built Focusmo – a focus app for ADHD time-blindness https://news.ycombinator.com/item?id=46695618

Show HN: Local-First ADHD Planner for Windows and Android https://news.ycombinator.com/item?id=46646188

Comment by ben_w 54 minutes ago

> One of the other authors he links to[0] brags that he's released 10 projects in the past month, like "Super Xtreme Mapper, a high-end, professional MIDI mapping software for professional DJs", which has 4 stars on Github. Despite the "high-end, professional...for professional" description, literally no one is going to use it, because this guy can't [be trusted to] maintain this software. Even if Claude Code is doing all the work, adding all the features, and fixing all the bugs, someone has to issue the command to do that work, and to foot the bill. This guy is just spraying code around and snorting digital coke.

While I'd expect almost nobody to use apps meeting this description, I disagree about why:

It's not that other people have to foot the bill, it's that the bill is so low that it's a question of this particular app being discovered amongst all the others.

$15/month is a rounding error on most budgets. If every musician buys a Claude subscription and prompts for their own variations on this idea, there are a few million other apps that also do all that this app does, which vary from completely identical (because the prompts themselves were also identical) to utterly personalised for the particular preferences of exactly one artist.

Comment by observationist 1 hour ago

There's this notion of software maintenance - that software which serves a purpose must be perennially updated and changed - which is a huge, rancid fallacy. If the software tool performs the task it's designed to perform, and the user gets utility out of it, it doesn't matter if the software is a decade old and hasn't been updated.

Sometimes it might, if there are security implications. You might need to fix bugs in networking code, or update crypto handling, or whatever, and those types of things are fine. The idea that you can't have legitimately useful one-off software, used by millions, despite not being updated, is a silly artifact of the MBA takeover of big tech.

Continuous development is not intrinsic to the "goodness" of software. Sometimes it's a big disappointment if software hasn't been updated consistently, but other times it just doesn't matter. I've got scripts, little apps, tools, things that I've used, sometimes daily, for over a decade, that never ever ever get updated, and I'd be annoyed if I had to. They have simple tasks to perform that they do well; you don't need all the rest of the "and now we have liquid glass icons! oh, and mandatory telemetry, and if you want ads to go away, you must pay for a premium subscription".

The value is in the utility - the work done by the software. How much effort and maintenance goes into creating it often has nothing to do with how useful it is.

Look at Windows 11 - hundreds of billions of dollars and years of development and maintenance, and it's a steaming pile of horseshit. They're driving people to Linux in record numbers.

Blender is a counterexample. They're constructive and deliberate.

What's likely to happen is everyone will have AI access to built-on-the-fly apps and tools that they retain for future use, and platforms will consolidate and optimize the available tools, and nobody will need to vibe-code or engage in extensive software development when their AI butler can do all the software work they might need done.

Comment by anyonecancode 1 hour ago

> There's this notion of software maintenance - that software which serves a purpose must be perennially updated and changed - which is a huge, rancid fallacy. If the software tool performs the task it's designed to perform, and the user gets utility out of it, it doesn't matter if the software is a decade old and hasn't been updated.

If what you are saying is that _maintenance_ is not the same as feature updates and changes, then I agree. If you are literally saying that you think software, once released, doesn't ever need any further changes for maintenance rather than feature reasons, I disagree.

For instance, you mention "security implications," but as a "might," not a "will." I think this vastly underestimates the security issues inherent in software. I'd go so far as to say that all software has two categories of security issues -- those that are known today, and those that will be uncovered in the future.

Then there's the issue of the runtime environment changing. If it's web-based, changing browser capabilities, for instance. Or APIs it called changing or breaking. Etc.

Software may not be physical, but it's subject to entropy as much as roads, rails, and other goods and infrastructure out in the non-digital world.

Comment by observationist 25 minutes ago

Some software, sure - what I take issue with is the notion that all software must be continuously updated, regardless. There are a whole lot of chunks of code that never get touched. There are apps and daemons and widgets that do simple things well, and going back to poke at them over and over for no better reason than "they need updates" is garbage.

There's the whole testing paradigm issue, driven by enshittification, incentivizing surveillance in the guise of telemetry, numbing people to the casual intrusion on their privacy. The midwit UX and UI "engineers" who constantly adjust and tweak and move shit around in pursuit of arbitrary metrics, inflicting A/B testing for no better reason than to make a number go up on a spreadsheet, be it engagement, or number of clicks, or time spent on page, or whatever. Or my absolute favorite: "but the users are too dumb to do things correctly, so we will infantilize by default and assume they're far too incompetent and lack the agency to know what they want."

Continuous development isn't necessary for everything. I use an app daily that was written over 10 years ago - it does a mathematical calculation and displays the result. It doesn't have any networking, no fancy UI, everything is sleek and minimal and inline, and there aren't dependencies that open up a potential vulnerability. This app, by nearly every way in which modern software gets assessed, is built entirely the wrong way, with no automatic update mechanism, no links back to a website, no issue-reporting menu items, no feature changelog, and yet it's one of the absolute best applications I use, and to change it would be a travesty.

Maybe you could convince me that some software needs to be built in the way modern apps are foisted off on us, but when you dig down to the reasons justifying these things, there are far better, more responsible, user-respecting ways to do things. Artificial Incompetence is a walled-garden dark pattern.

It's shocking how much development happens simply so that developers and their management can justify continued employment, as opposed to anything any user has ever actually wanted. The wasteful, meaningless flood of CI slop, the updates for the sake of updates, the updates because they need control, or subscriptions, or some other way of squeezing every last possible drop of profit out of our pockets, regardless of any actual value for the user - that stuff bugs the crap out of me.

Comment by anyonecancode 12 minutes ago

These posts are in a thread about someone pumping out a large amount of software in a short amount of time using AI. I'm guessing that you and I would agree that programs flung out of an AI shotgun are highly unlikely to be the kind of software that will work well and satisfy users with no changes over 10 years.

Comment by jacquesm 1 hour ago

Sure, but the reason why this is the case is simple: writing software is easy. Writing good software is stupendously hard. So all those man-years that went into maintaining software were effectively just hardening, polishing, bug fixes, and slow adjustment to changing requirements and new situations. If you throw it all out whenever the requirements change, you never end up with something that is as secure or as bug-free as you can make it.

Comment by lifetimerubyist 1 hour ago

This is why I just roll my eyes when people are like "i'm building things I just didn't have time for before"

Ever stop to wonder that maybe the reason you didn't build it and didn't MAKE the time to build it is...because the idea sucks?

Nobody wants your idea slop.

None of these vibe coded businesses are going to last long term because guess what - why would I pay you anything when I will be able to just vibe code the thing I want myself if I want it bad enough?

Project vomit is just for people that want to pad their github stats. It's programmer virtue signalling. Yawn.

Comment by simonw 1 hour ago

I've talked to a team that's doing the dark factory pattern hinted at here. It was fascinating. The key characteristics:

- Nobody reviews AI-produced code, ever. They don't even look at it.

- The goal of the system is to prove that the system works. A huge amount of the coding agent work goes into testing and tooling and simulating related systems and running demos.

- The role of the humans is to design that system - to find new patterns that can help the agents work more effectively and demonstrate that the software they are building is robust and effective.

It was a tiny team, and the stuff they had built in just a few months looked very convincing to me. Some of them had 20+ years of experience as software developers working on systems with high reliability requirements, so they were not approaching this from a naive perspective.

I'm hoping they come out of stealth soon because I can't really share more details than this.

Comment by observationist 1 hour ago

You'd think at some point it'll be enough to tell the AI "ok, now do a thorough security audit, highlight all the potential issues, come up with a best practices design document, and fix all the vulnerabilities and bugs. Repeat until the codebase is secure and meets all the requisite protocol standards and industry best practices."

We're not there yet, but at some point, AI is gonna be able to blitz through things like that the way it blitzes through making haikus or rewriting news articles. At some point AI will just be reliably competent.

Definitely not there yet. The dark factory pattern is terrifying, lol.

Comment by simonw 1 hour ago

That's definitely a pattern people are already starting to have good results from - using multiple "agents" (aka multiple system prompts) where one of them is a security reviewer that audits for problems and files issues for other coding agents to then fix.

I don't think this worked at all well six months ago. GPT-5.2 and Opus 4.5 might just be good enough for this pattern to start being effective.
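
The shape of the pattern is simple. A minimal sketch, assuming the Anthropic Python SDK; the model id, prompts, and file name here are placeholders, not details from any particular team:

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    REVIEWER = ("You are a security reviewer. Audit the code you are given and "
                "report each vulnerability as a numbered issue.")
    FIXER = ("You are a coding agent. Rewrite the code you are given so it "
             "resolves the listed issues, changing nothing else.")

    def run(system_prompt: str, user_prompt: str) -> str:
        msg = client.messages.create(
            model="claude-opus-4-5",  # placeholder model id
            max_tokens=4096,
            system=system_prompt,
            messages=[{"role": "user", "content": user_prompt}],
        )
        return msg.content[0].text

    code = open("handler.py").read()                     # placeholder target file
    issues = run(REVIEWER, code)                         # agent 1: audit
    fixed = run(FIXER, code + "\n\nIssues:\n" + issues)  # agent 2: fix
    print(fixed)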

Comment by jwpapi 1 hour ago

Honestly I'm not sure we're not there yet; run this prompt as a ralph loop for 2 days on your codebase and see where you're at...
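
For anyone unfamiliar, a "ralph loop" just means re-running the identical prompt against the repo until nothing is left to fix. A minimal sketch, where the `agent` command and its flags are stand-ins for whatever CLI coding agent you use, and the stop condition is deliberately naive:

    import subprocess
    import time

    PROMPT = ("Do a thorough security audit, highlight all potential issues, "
              "and fix the vulnerabilities and bugs you find.")

    # Re-run the identical prompt until the agent reports nothing left to do
    # (or until the token budget runs out, whichever comes first).
    while True:
        result = subprocess.run(
            ["agent", "-p", PROMPT],  # stand-in for your CLI coding agent
            capture_output=True,
            text=True,
        )
        if "no issues found" in result.stdout.lower():  # naive stop condition
            break
        time.sleep(5)  # small pause between iterations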

Comment by hbarka 24 minutes ago

What is the AI analog of Tesla's level of robotaxi, where there's a "safety monitor" in the passenger seat, or, sans safety monitor, a trailing guide car[1] and a remote driver in Mumbai[2]?

[1] https://electrek.co/2026/01/22/tesla-didnt-remove-the-robota...

[2] https://insideevs.com/news/760863/tesla-hiring-humans-to-con...

Comment by ekidd 2 hours ago

Having actually run some of the software produced by these near-"dark software factories," a lot of it is complete shit.

Yegge's Beads is a genuinely good design, for example, but it's flakier and more broken than the Unix vendors' Motif implementations were in 1993, and it eats itself more often than Windows 98 would blue-screen.

I can actually run a bunch of orchestrated agents, and get code which isn't complete shit. But it's an extremely skill-intensive process, because I'm acting as product manager, lead engineer, and the backstop for the holes in the cognition of a bunch of different Claudes.

So far, the people promising completely dark software factories are either high on their own supply, or grifting to sell books (or occasionally crypto). Or so I judge from using the programs they ship.

Comment by xg15 2 hours ago

I found it kind of fitting that the article didn't even describe what a human would still do at level 5, nor why it would be desirable. It's just the "natural" progression of a 5-step ladder, and that seems to be reason enough.

Comment by thenfcm 1 hour ago

Well, isn't the point that humans wouldn't need to do basically anything?

It would be 'desirable' because the value is in the product of the labour not the labour itself. (Of course the resulting dystopian hellscape might be considered undesirable)

Comment by ekidd 1 hour ago

As I keep pointing out, if the model ever stops needing you to complete ambitious goals, then what does the model actually need you for?

People somehow imagine an agent that can crush the competition with minimal human oversight. And then they somehow think that they'll be in charge, and not Sam Altman, a government, or possibly the model itself.

If the model's that good, nobody's going to sell it to you.

Comment by badgersnake 1 hour ago

These hype articles are getting very boring.

Comment by Animats 1 hour ago

This is a meta-hype article. It's an article about the hype.

Comment by pphysch 1 hour ago

The autopilot analogy is good because levels 4-5 are essentially vaporware outside of success in controlled environments backed by massive investment and engineering.