The Gorman Paradox: Where Are All the AI-Generated Apps?
Posted by ArmageddonIt 1 day ago
Comments
Comment by perrygeo 23 hours ago
Sure, code gen is faster now. And the industry might finally be waking up to the fact that writing code is a small part of producing software. Getting infinitely faster at one step doesn't speed up the overall process. In fact, there's good evidence that rapid code gen actually slows down other steps in the process, like code review and QA.
Comment by decasia 22 hours ago
I realized early on in my enterprise software job that if I produce code faster than average for my team, it will just get stuck in the rest of our review and QA processes; it doesn’t get released any faster.
It feels like LLM code gen can exacerbate and generalize this effect (especially when people send mediocre LLM code gen out for review, which makes the reviews painful).
Comment by LeChuck 18 hours ago
Comment by binary132 21 hours ago
Comment by ninkendo 20 hours ago
Comment by maddmann 22 hours ago
Perhaps looking at iOS, Steam, and Android releases is simply not a great measure of where software is headed. Disappointing that the article didn't think a little more outside the box.
Comment by epiccoleman 20 hours ago
https://github.com/epiccoleman/scrapio
https://github.com/epiccoleman/tuber
These are both projects which have repeatedly given me a lot of value but which have very little mass-market appeal (well, tuber is pretty fuckin' cool, imho, but you could just prompt one up yourself).
I've also built a handful of tools for my current job which are similarly vibe coded.
The problem with these kinds of things from a "can I sell this app" perspective is that they're raw and unpolished. I use tuber multiple times a week but I don't really care enough about it to get it to a point where I don't have to have a joke disclaimer about not understanding the code. If one of my little generated CLIs or whatever fails, I don't mind, I still saved time. But if I wanted to charge for any of them I'd feel wrong not polishing the rough edges off.
Comment by conartist6 22 hours ago
Comment by maddmann 22 hours ago
Comment by t0mas88 21 hours ago
For example, a flight school that I work with has their own simple rental administration program. It's a small webapp with 3 screens. They use it next to a SaaS booking/planning tool, but that tool never replaced their administrative approach, mainly because it wouldn't support local tax rules and a discount system that was in place there. So before the webapp they used paper receipts and a spreadsheet.
I think the challenge in the future with lots of these tools is going to be how they're maintained and how "ops" is done.
Comment by conartist6 20 hours ago
Somehow AI took over the narrative, but it's almost never the thing that actually created the value that it gets credit for creating.
Comment by maddmann 18 hours ago
Personally, I’ve been writing software for 10 years professionally. It is much easier, especially for someone with little coding experience, to create a quite complex and fully featured web app.
It makes sense that AI models are leveraging frameworks like Next.js/React/Supabase; they are trained/tuned on a very clear stack that is more compatible with how models function. Of course those tools have high value regardless of AI. But AI has rapidly lowered the barrier to entry, and allows me to go much, much farther, much faster.
Comment by conartist6 17 hours ago
I also just don't think "going fast" in that sense is such a big deal. You're talking about frantic speed. I think about speed in terms of growth. The goal is to build sturdy foundations so that you keep growing on an exponential track. Being in a frantic hurry to finish building your foundations is not a good omen for the quality of what will be built on them.
Comment by maddmann 14 hours ago
AI is likely to change fundamental paradigms around software design by significantly decreasing the cost of a line of code, a feature, a bugfix, or starting from scratch, and by enabling more stakeholders to help produce software.
Comment by ponector 17 hours ago
Comment by conartist6 21 hours ago
Comment by maddmann 21 hours ago
The aggregate impact isn’t known yet and the tech is still in its infancy.
Comment by conartist6 20 hours ago
Comment by nbates80 17 hours ago
Comment by NuclearPM 21 hours ago
Comment by prymitive 20 hours ago
Comment by nrdgrrrl 20 hours ago
Comment by pureliquidhw 22 hours ago
Comment by user_7832 21 hours ago
One day in our uni class, the prof played the movie instead of teaching. It is the only class I distinctly remember today in terms of the scope of what it taught me.
Comment by somenameforme 21 hours ago
Comment by user_7832 18 hours ago
Comment by FrustratedMonky 20 hours ago
"The Goal" is a movie based on Eliyahu Goldratt's best-selling business novel that teaches the principles of the Theory of Constraints
Comment by rsynnott 20 hours ago
I don’t understand why this was so surprising to people. People are _terrible_ at self-reporting basically anything, and “people think it makes them mildly more productive but it actually makes them mildly less productive” is a fairly common result for ineffective measures aimed at increasing productivity. If you are doing a thing that you believe is supposed to increase productivity, you’ll be inclined to think that it is doing so unless the result is dramatically negative.
Comment by rstuart4133 9 hours ago
[Parts] of the industry are very aware of the fact, and have been for decades. In fact there was a book on the subject. You probably already are well aware of it. It's "The Mythical Man-Month" by Fred Brooks.
He didn't have to contend with AIs of course, but the underlying driver was the same. He wanted to speed up writing software. Specifically OS/360, a new operating system for IBM, on which Brooks was a project manager. It was badly late, so they tried the obvious tactic of throwing hordes of programmers at it. I doubt money was a problem, so said programmers would have been good at their jobs. Those programmers weren't AIs of course, but the reasoning behind the move seems to be the same as here: OS/360 is just code, therefore the faster you can produce code the faster it will be delivered.
Brooks's Law [0] summarises what he believes happened to OS/360: "Adding manpower to a late software project makes it later." Which doesn't sound too different from the experiences mentioned here: AIs supplying tens of thousands of lines to a large software project that is well outside of their context window to understand are a hindrance, not a help.
Interestingly, that doesn't contradict the experiences reported by people vibe coding small projects, who say it is much faster. We had a term for the difference between the two types of development back in the day: programming in the small vs programming in the large. It seems to have largely disappeared from the vernacular now. Pity, as I think it sums up where AI coding works and where it doesn't.
And there was the same disconnect between the two groups then as we see between the vibe coders and the rest now. People who spend their lives coding in the small have no idea what people programming in the large do all day. To them, people working on large projects seem to spend an inordinate amount of time producing very little code.
Comment by mattacular 22 hours ago
Typing code and navigating syntax is the smallest part. We're a solid 30 years into industrialized software development. Better late than never?
Comment by djeastm 21 hours ago
I say this because I've had my own app for years and I am now using AI more and more to add features I wouldn't have attempted before (and a lot of UI enhancements)... but I haven't made a new domain name or anything.
Comment by ponector 17 hours ago
Luckily there is a solution, quite popular nowadays: lay off the QA team and say developers should test themselves. Couple that with rubber-stamping merge requests and now you have higher velocity. All development metrics are up!
Comment by jacobriers 6 hours ago
The reason I think so is because I wanted to write a follow-up post, and checked the numbers - for instance, the graph for the Play Store peaks at 140,000 apps released per month, but all the references I found on the internet were much lower.
I then hunted around for other sources of app store data and found appfigures, which had a free trial. I did a bit of querying and I am seeing a noticeable uptick in the number of apps released since around March 2025 (from around 20,000 to 35,000 for Google and 17,000 to 30,000 for iOS).
In terms of new GitHub public repositories, the numbers look correct - so I agree with them there - no uptick in new open source repos in the AI era so far.
Comment by brokensegue 21 hours ago
Comment by rsynnott 18 hours ago
Comment by outside1234 20 hours ago
Comment by rwmj 19 hours ago
Comment by scotty79 22 hours ago
There is only one thing that triggers growth and it is demand. Where there's a will, there's a way. But if there's no additional will, new ways won't bring any growth.
Basically AI will show up on the first future spike of demand for software but not before.
My software output increased manyfold. But none of the software AI wrote for me shows up on the internet. Those are all internal tools, often one-off, written for a specific task, then discarded.
Comment by rsynnott 18 hours ago
Comment by scotty79 6 hours ago
I don't think you are gonna see the uptick in published software. At least not until there's money to be made from the additional demand.
What you are gonna see instead is a drop in sales of published software (probably not games) as people build custom software with AI agents for their personal needs and use it instead of buying off-the-shelf products (and SaaS).
Comment by api 22 hours ago
Simplicity is harder than complexity. I want to tattoo this on my forehead.
Huge amounts of poor quality code is technical debt. Before LLMs you frequently saw it from offshore lowest bidder operations or teams of very junior devs with no grey beards involved.
Comment by AshamedCaptain 22 hours ago
Comment by jaxn 21 hours ago
Comment by lkjdsklf 19 hours ago
So much dead and useless code generated by these tools... and tens of thousands of lines of worthless tests..
honestly I don't mind it that much... my changed-lines count is through the roof relative to my peers and now that stack ranking is back........
Comment by 13415 22 hours ago
Comment by le-mark 22 hours ago
Comment by rjh29 22 hours ago
Comment by scotty79 21 hours ago
I feel like people who feel that about AI never really tried it in agentic mode.
I disable AI autocomplete. While bringing some value it messes with the flow of coding and normal autocomplete in ways I find annoying. (Although half of the problems would probably disappear if I just rebound it to CapsLock instead of Tab, which is the indent key).
But when switched to agentic mode I start with blank canvas and just describe applications I want. In natural language. I tell which libraries to use. Then I gradually evolve it using domain language or software development language whatever fits best to my desires about code or behavior. There are projects where I don't type any code at all and I inspect the code very rarely. I'm basically a project manager and part time QA while the AI does all the development including unit testing.
And it uncannily gets things right. At least Gemini 3 Pro (High) does. Sonnet 4.5 occasionally gets things wrong, and the difference in behavior tells me that it's not a fundamental problem. It's something that gets solved with stronger LLMs.
Comment by KurSix 2 hours ago
Comment by scotty79 1 hour ago
It works great when dealing with the microservices architecture that was all the rage recently. Of course it doesn't solve its main issue, which is that microservices talk to each other, but it still lets you sprint through a lot of work.
It's just that if you engineered (or engineer) things well, you get immediate huge benefits from AI coders. But if all you did last decade was throw more spaghetti into an already huge bowl of spaghetti, you are out of luck. Serves you right. The sad thing is that most humans will get pushed into doing this kind of "real development", so it's probably a good time to learn to love legacy, because you are legacy.
Comment by le-mark 13 minutes ago
Lol no one needs AI to tell them that!
Comment by rjh29 19 hours ago
Comment by scotty79 14 hours ago
1. I had two text documents containing plain text to compare. One with minor edits (done by AI).
2. I wanted to see what AI changed in my text.
3. I tried the usual diff tools. They diffed line by line and the result was terrible. I searched Google for "text comparison tool but not line-based"
4. As the second search result it found me https://www.diffchecker.com/
5. Initially it did an equally bad job, but I noticed it had a switch "Real-time diff" which did exactly what I wanted.
6. I got curious about what this algorithm is. So I asked Gemini with "Deep Research" mode: "The website https://www.diffchecker.com/ uses a diff algorithm they call real-time diff. It works really good for reformatted and corrected text documents. I'd like to know what is this algorithm and if there's any other software, preferably open-source that uses it."
7. As a first suggestion it listed diff-match-patch from Google. It had Python package.
8. I started Antigravity in a new folder, ran uv init. Then I prompted the following:
"Write a commandline tool that uses https://github.com/google/diff-match-patch/wiki/Language:-Py... to generate diff of two files and presents it as side by side comparison in generated html file."
[...]
"I installed the missing dependance for you. Please continue." - I noticed it doesn't use uv for installing dependencies so I interrupted and did it myself.
[...]
"This project uses uv. To run python code use
uv run python test_diff.py" - I noticed it still doesn't use uv for running the code so its testing fails.
[...]
"Semantic cleanup is important, please use it." - Things started to show up but it looked like linear diff. I noticed it had a call to semantic cleanup method commented out so I thought it might help if I push it in that direction.
[...]
"also display the complete, raw diff object below the table" - the display of the diff still didn't seem good so I got curious if it's the problem with the diffing code or the display code
[...]
"I don't see the contents of the object, just text {diffs}" - it made a silly mistake by outputting template variable instead of actual object.
[...]
"While comparing larger files 1.txt and 2.txt I notice that the diff is not very granular. Text changed just slightly but the diff looks like deleting nearly all the lines of the document, and inserting completely fresh ones. Can you force diff library to be more granular?
You seem to be doing the right thing https://github.com/google/diff-match-patch/wiki/Line-or-Word... but the outcome is not good.
Maybe there's some better matching algorithm in the library?" - it seemed that while it worked decently on the small tests Antigravity made itself, it was still terrible on the texts I actually wanted to compare, although I'd seen glimpses of hope because some spots were diffed more granularly. I inspected the code and it seemed to be doing character-level diffing as per the diff-match-patch example. While it processed this prompt I was searching for a solution myself by clicking around the diff-match-patch repo and demos. I found a potential solution by adjusting cleanup, but it actually solved the problem by itself by ditching the character-level diffing (which I'm not sure I would have come up with at this point). The diffed object looked great, but as I compared the result to https://www.diffchecker.com/ output it seemed that they did one minor thing about formatting better.
[...]
"Could you use rowspan so that rows on one side that are equivalent to multiple rows on the other side would have same height as the rows on the other side they are equivalent to?" - I felt very clumsily trying to phrase it and I wasn't sure if Antigravity will understand. But it did and executed perfectly.
I didn't have to revert a single prompt and interrupted just two times at the beginning.
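For anyone curious, the heart of what came out is roughly this (a simplified sketch, not the exact code; the real tool renders a side-by-side HTML table with rowspans rather than the library's built-in prettyHtml):

    # minimal sketch, assuming the diff-match-patch package from PyPI;
    # the actual tool replaces diff_prettyHtml with a side-by-side table
    import sys
    from diff_match_patch import diff_match_patch

    def diff_files(path_a, path_b):
        with open(path_a, encoding="utf-8") as f:
            text_a = f.read()
        with open(path_b, encoding="utf-8") as f:
            text_b = f.read()
        dmp = diff_match_patch()
        diffs = dmp.diff_main(text_a, text_b)  # plain diff, no line-mode preprocessing
        dmp.diff_cleanupSemantic(diffs)        # the cleanup step that made output readable
        return dmp.diff_prettyHtml(diffs)

    if __name__ == "__main__":
        sys.stdout.write(diff_files(sys.argv[1], sys.argv[2]))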
So I basically went from having two very similar text files and knowing very little about diffing to knowing a bit more and having my own local tool that lets me compare texts in a satisfying manner, with beautiful highlighting and formatting, that I can extend or modify however I like, and that mirrors the interesting part of the functionality of the best tool I found online. And all of that in a time span shorter than it took me to write this comment (at least the coding part was; I followed a few wrong paths during my search for a bit).
My experience tells me that even if I could replicate what I did today (staying motivated is an issue for me), it would most likely be a multi-day project full of frustration, hunting small errors, and venturing down wrong paths. Python isn't even my strongest language. Instead it was a pleasant and fun evening with occasional jaw drops and feeling so blessed that I live in the SciFi times I read about as a kid (and adult).
Oh, yeah, I didn't use auto-complete once, because it mostly sucks. ;-)
Comment by scotty79 18 hours ago
I lead a team once and wasn't particularly fond of it. For me AI is godsend. It's like being a tech lead and product owner but without having to deal with people and multitude of their idiosyncrasies.
I can understand how AI can't work well for a developer whose work is limited to reading tickets in Jira and implementing them in 3-5 business days, because that's exactly whom AI replaces. I also did that during my career and I liked it, but I can see that if all you do at work is swing a shovel you might find it hard to incorporate a power digger into your daily work process. But if you can step out a bit it feels great. You can still keep your shovel and chisel nice corners or whatever in places where the digger did a less than stellar job. But the digger just saves so much work.
Try Antigravity from Google. Not for your daily work. Just to make some stupid side projects that come to your mind, you don't care about, or process some data, make a gui for something, literally whatever, it costs nothing. I hope you'll see what I see.
Comment by 13415 19 hours ago
The problem is that it can give very bad general advice in more complex cases, and it seems to be especially clueless about software architecture. I need to learn to force myself to ignore AI advice when my initial reaction is "Hm, I don't know." It seems that my bullshit detector is better than any AI so far, even when I know the topic less well.
Comment by ghc 22 hours ago
Building software and publishing software are fundamentally two different activities. If AI tilts the build vs. buy equation too far into the build column, we should see a collapse in the published software market.
The canary will be a collapse in the outsourced development / consulting market, since they'd theoretically be undercut by internal teams with AI first -- they're expensive and there's no economy of scale when they're building custom software for you.
Comment by conartist6 22 hours ago
I feel silly explaining this as if it's a new thing, but there's a concept in social organization called "specialization" in which societies advance because some people decide to focus on growing food while some people focus on defending against threats and other people focus on building better tools, etc. A society which has a rich social contract which facilitates a high degree of specialization is usually more virile than a subsistence economy in which every individual has all the responsibilities: food gathering, defense, toolmaking, and more.
I wonder if people are forgetting this when they herald the arrival of a new era in which everyone is the maker of their own tools...
Comment by gfdvgfffv 21 hours ago
I don’t need to hire a programmer. I don’t need to be a programmer. I can use a tool to program for me.
(We sure as hell aren’t there yet, but that’s a possibility).
Comment by conartist6 20 hours ago
Comment by rtp4me 20 hours ago
In the case of AI, Claude costs $100 or $200/mo for really good coding tasks. This is much less expensive than hiring someone to do the same thing for me.
Comment by conartist6 19 hours ago
Comment by rtp4me 19 hours ago
Comment by conartist6 19 hours ago
Comment by rtp4me 18 hours ago
And to your note that real production code is not necessarily a high bar, what is "real production code"? Does it need to be 10,000 lines of complex C/rust code spread across a vast directory structure that requires human-level thinking to be production ready? What about smaller code bases that do one thing really well?
Honestly, I think many coders here on HN dismiss the smaller, more focused projects when in reality they are just as important as the large, "real" production projects. Are these considered non-production because the code was not written by hand?
Comment by conartist6 17 hours ago
Each of those things is a mountain of complexity compared to the molehill of writing a single script. If you're standing on top of a molehill on top of a mountain, it's not the molehill that's got your head in the clouds.
Comment by claytongulick 20 hours ago
What makes you think so?
Most of the stuff I've read, my personal experience with the models, and my understanding of how these things work all point to the same conclusion:
AI is great at summarization and classification, but totally unreliable with generation.
That basic unreliability seems fundamental to LLMs. I haven't seen much improvement in the big models, and a lot of the researchers I've read are theorizing that we're pretty close to maxing out what scaling training and inference will do.
Are you seeing something else?
Comment by gfdvgfffv 14 hours ago
So while I don’t think the world I described exists today — one where non-programmers, with neither programming nor programmer-management experience, use these tools to build software — I don’t a priori disbelieve its possibility.
Comment by senordevnyc 20 hours ago
If you mean that a completely non-technical user can't vibe code a complex app and have it be performant, secure, defect-free, etc, then I agree with you. For now. Maybe for a long time, we'll see.
But right now, today, I'm a professional software engineer with two decades of experience and I use Cursor and Opus to reliably generate code that's on par with the quality of what I can write, at least 10x faster than I can write it. I use it to build new features, explore the codebase, refactor existing features, write documentation, help with server management and devops, debug tricky bugs, etc. It's not perfect, but it's better than most engineers I've worked with in my career. It's like pair programming with a savant who knows everything, some of which is a little out of date, who has intermediate level taste. With a tiny bit of steering, we're an incredibly productive duo.
Comment by conartist6 14 hours ago
My work is to make sure that you don't need to reach for AI just because human typing speed is limited.
I love to think in terms of instruments versus assistants: an assistant is unpredictable but easy to use. It tries to guess what you want. An instrument is predictable but relatively harder to use. It has a skill curve and perhaps a skill cap. The purpose of an instrument is to directly amplify the expressive power of its user or player through predictable, delicately calibrated responses.
Comment by wizzwizz4 20 hours ago
Programming is far more the latter kind of task than the former. Data-processing or system control tasks in the "solve ordinary, well-specified problem" category are solved by executing software, not programming.
Comment by singpolyma3 20 hours ago
Comment by zkmon 21 hours ago
Comment by danaris 20 hours ago
I see so many people quote that damnable Heinlein quote about specialization being for insects as if it's some deep insight worth making the cornerstone of your philosophy, when in fact a) it's the opinion of a character in the book, and b) it is hopelessly wrong about how human beings actually became as advanced as we are.
We're much better off taking the Unix philosophy (many small groups of people each getting really really good at doing very niche things, all working together) to build a society. It's probably still flawed, but at least it's aimed in the right direction.
Comment by blazespin 13 hours ago
And we're seeing that in the labor numbers.
Sometimes things are harder to see because it's chipping away everywhere at the margins.
Comment by ghc 57 minutes ago
Comment by jackfranklyn 23 hours ago
I've been building accounting tools for years. AI can generate a function to parse a bank statement CSV pretty well. But can it handle the Barclays CSV that has a random blank row on line 47? Or the HSBC format that changed last month? Or the edge case where someone exports from their mobile app vs desktop?
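Even the "easy" 80% needs guards that happy-path generation tends to skip. A toy sketch of the kind of thing I mean (the sample rows and column names are made up, not the real Barclays format):

    # illustrative only: tolerate the stray "blank" rows some bank
    # exports contain (rows of empty fields, not truly empty lines)
    import csv
    from io import StringIO

    SAMPLE = ("Date,Description,Amount\n"
              "01/02/2024,COFFEE SHOP,-3.20\n"
              ",,\n"  # the kind of junk row that breaks naive parsers
              "02/02/2024,SALARY,2500.00\n")

    def parse_statement(raw):
        rows = []
        for record in csv.DictReader(StringIO(raw)):
            if not any((v or "").strip() for v in record.values()):
                continue  # skip rows that are nothing but empty fields
            rows.append({"date": record["Date"],
                         "description": record["Description"],
                         "amount": float(record["Amount"])})
        return rows

    print(parse_statement(SAMPLE))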
That's not even touching the hard stuff - OAuth token refresh failures at 3am, database migrations when you change your mind about a schema, figuring out why Xero's API returns different JSON on Tuesdays.
The real paradox: AI makes starting easier but finishing harder. You get to 80% fast, then spend longer on the last 20% than you would have building from scratch - because now you're debugging code you don't fully understand.
Comment by estimator7292 18 hours ago
It took me a few days to realize what was happening. Once I got some good files it was just a couple hours to understand the problem. Then three weeks untangling the parser and making it actually match the spec.
And then three months refactoring the whole backend into something usable. It would have taken less time to redo it from scratch. If I'd known then what I know now, I would have scrapped the whole project and started over.
Comment by KurSix 3 hours ago
Comment by nunez 19 hours ago
Comment by rtp4me 20 hours ago
Comment by dimitri-vs 19 hours ago
But with a big fat asterisk that you: 1. Need to make it aware of all relevant business logic 2. Give it all necessary tools to iterate and debug and 3. Have significant experience with strengths and weaknesses of coding agents.
To be clear I'm talking about cli agents like Claude Code which IMO is apples and oranges vs ChatGPT (and even Cursor).
Comment by KellyCriterion 23 hours ago
Comment by garden_hermit 23 hours ago
Comment by thunky 22 hours ago
People start announcing that they're using AI to do their job for them? Devs put "AI generated" banners all over their apps? No, because people are incentivised to hide their use of AI.
Businesses, on the other hand, announce headcount reductions due to AI and of course nobody believes them.
If you're talking about normal people using AI to build apps those apps are all over the place, but I'm not sure how you would expect to find them unless you're looking. It's not like we really need that many new apps right now, AI or not.
Comment by callc 22 hours ago
The link at the bottom of the post (https://mikelovesrobots.substack.com/p/wheres-the-shovelware...) goes over this exactly.
> Businesses, on the other hand, announce headcount reductions due to AI and of course nobody believes them.
It’s an excuse. It’s the dream peddled by AI companies: automate intelligence so you can fire your human workers.
Look at the graphs in the post, then revisit claims about AI productivity.
The data doesn’t lie. AI peddlers do.
Comment by ogogmad 21 hours ago
This reminds me of the people who said that we shouldn't raise the alarm when only a few hundred people in this country (the UK) got Covid. What's a few hundred people? A few weeks later, everyone knew somebody who did.
Comment by rsynnott 18 hours ago
Re the Covid metaphor; that only works because Covid was the pandemic that did break out. It is arguably the first one in a century to do so. Most putative pandemics actually come to very little (see SARS1, various candidate pandemic flus, the mpox outbreak, various Ebola outbreaks, and so on). Not to say we shouldn’t be alarmed by them, of course, but “one thing really blew up, therefore all things will blow up” isn’t a reasonable thought process.
Comment by wizzwizz4 20 hours ago
Comment by anorwell 19 hours ago
From my perspective, it's not the worst analogy. In both cases, some people were forecasting an exponential trend into the future and sounding an alarm, while most people seemed to be discounting the exponential effect. Covid's doubling time was ~3 days, whereas the AI capabilities doubling time seems to be about 7 months.
I think disagreement in threads like this can often be traced back to a miscommunication about the state today (or historically) versus the trajectory. Skeptics are usually saying: capabilities are not good _today_ (or worse: capabilities were not good six months ago when I last tested it. See: this OP, which is pre-Opus 4.5). Capabilities forecasters are saying: given the trend, what will things be like in 2026-2027?
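Taking both doubling times at face value, the compounding over a single year differs wildly in scale (rough arithmetic only, not a forecast):

    # rough arithmetic on the two doubling times quoted above
    covid_growth = 2 ** (365 / 3)  # ~3-day doubling, compounded over a year
    ai_growth = 2 ** (12 / 7)      # ~7-month doubling, compounded over a year
    print(f"Covid: ~{covid_growth:.1e}x/year; AI capability: ~{ai_growth:.1f}x/year")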
Comment by wizzwizz4 18 hours ago
Comment by bccdee 22 hours ago
Comment by thunky 20 hours ago
That's a silly argument. Someone could have made all of those clones before, but didn't. Why didn't they? Hint: it's not because it would have taken them longer without AI.
I feel like these anti-AI arguments are intentionally being unrealistic. Just because I can use Nano Banana to create art does not mean I'm going to be the next Monet.
Comment by bccdee 19 hours ago
Yes it is. "How much will this cost us to build" is a key component of the build-vs-buy decision. If you build it yourself, you get something tailored to your needs; however, it also costs money to make & maintain.
If the cost of making & maintaining software went down, we'd see people choosing more frequently to build rather than buy. Are we seeing this? If not, then the price of producing reliable, production-ready software likely has not significantly diminished.
I see a lot of posts saying, "I vibe-coded this toy prototype in one week! Software is a commodity now," but I don't see any engineers saying, "here's how we vibe-coded this piece of production-quality software in one month, when it would have taken us a year to build it before." It seems to me like the only software whose production has been significantly accelerated is toy prototypes.
I assume it's a consequence of Amdahl's law:
> the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used.
Toy prototypes proportionally contain a much higher amount of the rote greenfield scaffolding that agents are good at writing. The stickier problems of brownfield growth and robustification are absent.
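Plugging illustrative numbers into Amdahl's law (the 20% share for code-writing is an assumption, not a measurement):

    # Amdahl's law: coding is 20% of end-to-end delivery and gets 10x faster
    coding_fraction = 0.20   # assumed share of total delivery time
    coding_speedup = 10.0    # assumed speedup of the coding step alone
    overall = 1 / ((1 - coding_fraction) + coding_fraction / coding_speedup)
    print(f"overall speedup: {overall:.2f}x")  # ~1.22x, nowhere near 10x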
Comment by garden_hermit 22 hours ago
I am very willing to believe that there are many obscure and low-quality apps being generated by AI. But this speaks to the fact that mere generation of code is not productive, and that generating quality applications requires other forms of labor that are not presently satisfied by generative AI.
Comment by thunky 17 hours ago
IMO you're not seeing this because nobody is coming up with good ideas; we're already saturated with apps. And apps are already releasing features faster than anyone wants them. How many app reviews have you read that say: "Was great before the last update"? Development speed and ability aren't what's holding us back from great software releases.
Comment by rsynnott 18 hours ago
Comment by cheevly 15 hours ago
Comment by KellyCriterion 22 hours ago
When it comes to "AI-generated apps" that work out of the box, I do not believe in them - I think for creating a "complete" app, the tools are not good enough (yet?). Context & co is required, esp. for larger apps and to connect the building blocks - I do not think there will be any remarkable apps coming out of such a process.
I see the AI tools just as a junior developer who will create data structures, functions, etc. when I instruct it to do so: it assists in code creation & optimization, but not in "complete app architecture" (maybe as a sparring partner)
Comment by samsullivan 17 hours ago
Comment by oxag3n 12 hours ago
Parsers and data serialization in general are a mature and more standardized area of software engineering. Can AI write a good parser? Maybe. Will it though?
Comment by nostrademons 22 hours ago
...which makes it a great fit for executives that live by the 80/20 rule and just choose not to do the last 20%.
Comment by senordevnyc 20 hours ago
I run a SaaS solo, and that hasn't really been my experience, but I'm not vibe coding. I fully understand all the code that my AI writes when it writes it, and I focus on sound engineering practices, clean interfaces, good test coverage, etc.
Also, I'm probably a better debugger than AI given an infinite amount of time and an advantage in available tools, but if you give us each the same debugging tools and see who can find and fix the bug fastest, it'll run circles around me, even for code that I wrote myself by hand!
That said, as time has gone on, the codebase has grown beyond my ability to keep it all in my head. That made me nervous at first, but I've come to view it just like pretty much any job with a large codebase, where I won't be familiar with large parts of the codebase when I first jump into them, because someone else wrote it. In this respect, AI has also been really helpful to help me get back up to speed on a part of the codebase I wrote a year ago that I need to now refactor or whatever.
Comment by sunrunner 21 hours ago
Demand is the real bottleneck: New tools expand who can ship, but they don’t expand how many problems are worth solving or audiences worth serving. Adoption tends to concentrate among "lead users" and not "everyone".
App store markets are power-law distributed (no citations, sorry, it's just my belief; see the sketch after this list): A tiny slice of publishers captures most of the downloads/revenue. That's discoverability, not a "lack of builders".
Attention and distribution are winner-take-most: Even if creation is cheap, attention is scarce.
The hidden (recurring) cost as other commenters point out is maintenance: Tooling slashes first release cost but not lifecycle cost.
Problem-finding outweighs problem-solving: If the value of your app depends on users or data network effects, you still face the "cold start problem".
"Ease" can change the meaning of the signal: If anyone can press a button and "ship an app" the act of shipping stops signaling much. Paradoxically, effort can increase personal valuation (the IKEA effect), and a lower cost to the creator as seen from the outside kills the (Zahavi) signal.
And finally, maybe people just don't actually want to use and/or make apps that much? That's not to say that good apps aren't valuable. The ubiquity of the various platforms' app stores implies that there's some huge demand, but if most app usage is concentrated among a small number of genuine day-to-day problem-solving tools whose heavy hitters have been around for years, an influx of new things perhaps isn't that interesting.
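To illustrate the power-law point above, a toy Zipf model with made-up numbers (not real app store data):

    # toy model: downloads follow a Zipf distribution over a hypothetical store
    N = 10_000  # hypothetical number of apps
    weights = [1 / rank for rank in range(1, N + 1)]
    share_top_1pct = sum(weights[: N // 100]) / sum(weights)
    print(f"top 1% of apps take ~{share_top_1pct:.0%} of downloads")  # ~53%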
Comment by falcor84 22 hours ago
Comment by vouwfietsman 22 hours ago
Comment by falcor84 12 hours ago
> Why buy a CRM solution or a ERM system when “AI” can generate one for you in hours or even minutes?
Obviously that's a strawman argument that shouldn't be taken at face-value. AI-generated software is rapidly improving, but it will take some time until it can do that sort of work without human intervention. Extrapolating from METR's chart[0], we should expect a SotA AI to one-shot a modern commercial CRM in around the early 2030s. It's then up to anyone here to decide if that's something we should actively prepare for already (I personally think we should).
[0] https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...
Comment by KurSix 2 hours ago
Comment by callc 22 hours ago
Give some concrete examples of why current LLM/AI is disruptive technology like digital cameras.
That’s the whole point of the article. Show the obvious gains.
Comment by JW_00000 22 hours ago
Comment by falcor84 22 hours ago
It's important to note that we're now arguing about the level of quality of something that was a "ha, ha, interesting" in a sidenote by Andrej Karpathy 10 years ago [0], and then became a "ha, ha, useful for weekend projects" in his tweet from a year ago. I'm looking forward to reading what he'll be saying in the next few years.
[0] https://karpathy.github.io/2015/05/21/rnn-effectiveness/
Comment by callc 22 hours ago
If AI had such obvious gains, why not accelerate that timeline to 6 months?
Take the average time to make a simple app, divide by the supposed productivity speed up, and this should be the time we see a wave of AI coded apps.
As time goes on, the only conclusion we can reach (especially looking at the data) is that the productivity gains are not substantial.
Comment by amelius 21 hours ago
Because in the beginning of a new technology, the advantages of the technology benefit only the direct users of the technology (the programmers in this case).
However, after a while, the corporations see the benefit and will force their employees into an efficiency battle, until the benefit has shifted mostly away from the employees and towards their bosses.
After this efficiency battle, the benefits will become observable from a macro perspective.
Comment by spit2wind 21 hours ago
Comment by falcor84 19 hours ago
Comment by lukeschlather 20 hours ago
Comment by amelius 21 hours ago
Comment by rsynnott 18 hours ago
And they’re not even really talking about the future. People are making extremely expansive claims about how amazing llm coding tools are _right now_. If these claims were correct, one would expect to see it in the market.
Comment by exasperaited 22 hours ago
There were still valid practical and technical objections for many (indeed, there still is at least one technical objection against digital), and the philosophical objections are still as valid as they were (and if you ask me, digital has not come close to delivering on its promise to be less environmentally harmful).
But every working press photographer knew they would switch when there were full-frame sensors that were in range of budget planning that shot without quality compromise at the ISO speed they needed or when the organisations they worked for completed their own digital transition. Every working fashion photographer knew that viable cameras already existed.
ETA: Did it disrupt the wider industry? Obviously. Devastatingly. For photographers? It lowered the barrier to entry and the amount they could charge. But any working photographer had encountered that at least once (autofocus SLRs did the same thing, minilabs did the same thing, E6 did it, etc. etc.) and in many ways it was a simple enabling technology because their workflows were also shifting towards digital so it was just the arrival of a DDD workflow at some level.
—
Putting aside that aside, I am really not convinced your comparison isn't a category error, but it is definitely an interesting one for a couple of reasons I need to think about for a lot longer.
Not least that digital photography triggered a wave of early retirements and career switches, that I think the same thing is coming in the IT industry, and that I think those retirements will be much more damaging. AI has so radically toxified the industry that it is beginning to drive away people with experience and a decade or more of working life left. I consider my own tech retirement to have already happened (I am a freelancer and I am still working, but I have psychologically retired, and very early; I plan to live out my working life somewhere else, and help people resisting AI to continue to resist it).
Comment by newsoftheday 19 hours ago
I was planning to work until mid 60's FT but retired this year because of, as you put it, AI toxification.
Comment by ori_b 21 hours ago
Comment by falcor84 19 hours ago
TFA is only looking at releases on app stores (rather than e.g. the number of GitHub repos, which has been growing a lot). The analogue would be the number of photos being published around 2005, which I believe had been pretty steady. It's only with the release of smartphones and Facebook a few years afterwards that we started seeing a massive uptick in the overall number of photos out there.
Comment by binary132 21 hours ago
I for one hail the curmudgeons. Uphold curmudgeon thought.
Comment by dunsany 22 hours ago
Comment by jcims 22 hours ago
I don’t trust the process enough to commit to it for user facing services, but I regularly find toy use cases and itches to scratch where the capability to crank out something useful in 20 minutes has been a godsend.
>Beginning to think of the vibe-coded apps akin to spreadsheets with lots of macros.
This resonates.
Comment by SecretDreams 22 hours ago
These things normally die a sigmoidal death after the creator changes jobs.
Comment by NitpickLawyer 22 hours ago
Comment by nunez 19 hours ago
Non-technical business stakeholders who own requirements for line-of-business apps can generate *working* (to them) end-to-end prototypes just by typing good-enough English into a text box.
Not just apps, too! Anything! Spreadsheets, fancy reports, customer service, you name it --- type what you need into the box and wait for it to vend what took an entire team days/weeks to do. Add "ultra-think really hard" to the prompt to trigger the big boy models and get the "really good" stuff. It sounds a little 1984, but whatever, it works.
Design? Engineering? QA? All of those teams are just speed bumps now. Sales, Legal, Core business functions, and a few really experienced nerds is all you need. Which has always been the dream for many business owners.
It doesn't matter that LLMs provide 60% of the solution. 60% > 0%, and that's enough to justify offshoring everything and everyone in the delivery pipeline to cheaper labor (or use that as a threat to suppress wages and claw back workers' rights), including senior engineers who are being increasingly pressured to adopt these tools.
A quick jaunt through /r/cscareerquestions on Reddit is enough to see that this train blasted off from its station with a full tank of fuel for the long-haul.
There's always the possibility that several really bad things happen that make the entire industry remember that software engineering is an actual discipline and that treating employees well is generally a good thing. For now, all of this feels permanent, and all of it sucks.
Comment by zkmon 22 hours ago
The AI businesses are busy selling AI to each other. Non-tech businesses are busy spending their AI budgets on useless projects. Everybody is clueless, and like - let's jump in just like we did for blockchain, because we don't want to lose out or be questioned on our tech adoption.
Comment by api 22 hours ago
The best AI companies will be founded a year after the crash.
Comment by bityard 23 hours ago
Comment by KurSix 3 hours ago
Comment by pico303 19 hours ago
Also, why is this the “Gorman” Paradox? He literally links to the article that I remember as first proposing this paradox (though I don’t think the original author called it a paradox). This author just summarizes that article. It should be the Judge Paradox, after the original author.
Comment by zephen 11 hours ago
Yeah. Normally, of course, I link to an XKCD, but for this observation, I like this cartoon:
https://i.programmerhumor.io/2024/12/programmerhumor-io-prog...
> Also, why is this the “Gorman” Paradox?
I just made the same point in a comment, before I read yours.
Gorman is obviously a marketing guy, but he's tech-adjacent enough he should realize this is going to go over like a lead balloon.
Comment by NicuCalcea 23 hours ago
Comment by xwindowsorg 22 hours ago
Comment by patapong 23 hours ago
As others have said, I think a lot of the difficulty in creating an app lies in making the numerous choices required to make the app function, not necessarily in coding. You need "taste" and the ability to push through uncertainty and complexity, which are orthogonal to using AI in many cases.
Comment by android521 9 hours ago
Comment by hintymad 13 hours ago
Comment by alangibson 20 hours ago
3D-printed houses can only manage the underlying structure of the house. They don't cover the finishing work, which is much harder on the nerves and pocketbook.
Likewise, AI code generation is only useful for the actual implementation. That's maybe 20% of the work. Coordination with stakeholders, architecture, etc are much harder on the nerves and pocketbook.
Comment by MK2k 18 hours ago
Same with more complex systems: entire shop systems with payment integration, ERP and all – heavily supported by LLM code tools with a 3-10x productivity boost, done just by the CTO with no additional developers needed. They exist; the shop greets its customers and all you see is Vue and Tailwind as the tech stack where Shopify could've been. It's now completely owned by the company selling things (they just don't sell the software).
Comment by Jordan-117 18 hours ago
Comment by chrsw 22 hours ago
But it's happening.
Comment by alexsmirnov 17 hours ago
- you create a small utility that covers only the features you need. As much research shows, any individual uses less than 20% of a given piece of software's functionality; your tool covers only the 10-20% that matters to you
- it only runs locally, on the user's computer or phone, and never has more than one customer. Performance, security, and compliance don't matter
- the code lives next to the application, and is small enough to fix any bug instantly, in a single AI agent run
- as a single user, you don't care about design, UX, or marketing. Doing the job is all that matters
It means the majority of vibe-coded applications run under the radar, used only by a few individuals. I can see it myself: I have a bunch of vibe-coded utilities that were never intended for a broad audience. And many of my friends and customers mention the same: "I vibe coded a utility that does ... for me". This has big consequences for software development: the area for commercial development shrinks; nothing that can be replaced by a small local utility has market value.
Comment by pgt 20 hours ago
EACL replaced SpiceDB at work for us, and would not have been possible without AI.
Comment by anovick 21 hours ago
For them, it will be a slow death (e.g. similar to how Figma unseated Adobe in the digital design art space).
As for new app markets, you will surely see (compared to past generations) smaller organizations being able to achieve market dominance, often against much more endowed competitors. And this will be the new normal.
Comment by II2II 22 hours ago
If you're talking about internally developed software: AI generated apps suffer from the same pitfalls.
If you're talking about third-party alternatives: AI generated apps suffer from the same pitfalls.
Bonus reasons: advertising your product as AI generated will likely be seen as a liability. AI tends to be promoted as a means of developing software more rapidly or of eliminating costly developers. There is relatively little talk about software quality, and most of the talk we do see is from developers who have a lot to lose from the shift to AI generated software. (I'm not saying they're wrong, just that they are the loudest because they have the most to lose.)
Comment by zephen 11 hours ago
https://mikelovesrobots.substack.com/p/wheres-the-shovelware...
Shouldn't it be the "Judge Paradox" ???
Comment by jmkni 21 hours ago
I was playing around with v0, and was able to very quickly get 'kinda sorta close-ish' to an application I've been wanting to build for a while, it was quite impressive.
But then the progress slowed right down, I experienced that familiar thing many others have where, once you get past a certain level of complexity, it's breaking things, removing features, re-introducing bugs all while burning through your credits.
It was at this point I remembered I'm actually a software engineer, so pushed the code it had written to github and pulled it down.
Total mess. Massive files, duplicated code all over the place, just a shitshow. I spent a day refactoring it so I could actually work with it, and am continuing to make progress on the base it built for me.
I think you can vibe code the basis of something really quickly, but the AI starts to get confused and trip over its own shitty code. A human would take a step back and realise they need to do some refactoring, but AI just keeps adding to the pile.
It has saved me a good few months of work on this project, but to really get to a finished product it's going to be a few more months of my own work.
I think a lot of non-technical people are going to vibe-code themselves to ~60-70% of the way there and then hit a wall when the AI starts going around in circles, and they have no idea how to work with the generated code themselves.
Comment by brazukadev 20 hours ago
Or you can get back to vibecoding after fixing things and establishing a good base. then it helps you go faster until you feel like understanding and refactoring things because it got some things wrong. It is a continuous process.
Comment by jmkni 20 hours ago
Comment by nayroclade 22 hours ago
Comment by jrm4 21 hours ago
Strong chance that the push in innovation in this space doesn't get reflected in "apps sold or downloaded," and in fact hurts this metric (and perhaps people buying and selling code in general) -- but still results of "people and organizations solving their own problems with code."
Comment by slrainka 22 hours ago
Comment by tylerchilds 22 hours ago
We took risks today in the hopes that these decisions will make enough money to offset the labor cost of the decision.
AI promises to eliminate labor, so businesses correctly identify AI risks as free debt.
Comment by dns_snek 21 hours ago
These aren't promises, they're just hopes and dreams. Unless these businesses happen to be signing contracts with AI providers to replace their labor in a few years, they're incorrectly identifying AI risks as free debt.
Comment by tylerchilds 15 hours ago
Realistically, the execs see it as either them or their subordinates and the idea of a captain dying with a ship is not regarded as noble amongst themselves. So they’ll sacrifice the crew, if only for one more singular day at sea.
Comment by timonoko 1 day ago
https://github.com/timonoko/Plotters/blob/main/showgcode_inc...
Comment by jordemort 22 hours ago
Comment by ant512 20 hours ago
But when AI can be used to improve itself, that's when things get interesting.
Comment by ineedasername 5 hours ago
-Feb Claude code
-April OpenAI Codex
-June Gemini Code
And that's not even accounting for the "1 prompt to build your saas business" services popping up.
This isn't Fermi paradox territory, it's just lightspeed lag time.
Take a breath. Brace yourselves. The shovelware will be here soon enough.
And SWE's? Take heart. The more there is, the more the difference in quality will be easily seen between giving a power tool to a hobbyist and giving it to an expert that knows their craft.
Comment by fortran77 22 hours ago
He started talking about Objective-C and how it was 10x more productive than other programming languages and how easy it is to write good applications quickly with it. Someone shouted out the question: "If it's so easy and fast to write applications, where are all the NeXT killer apps?" There was no good answer....
Comment by II2II 22 hours ago
Objective-C itself didn't have much of a chance for many reasons. One is that most APIs were based upon C or C++ at the time. The availability of Objective-C on other platforms will do little to improve productivity if the resulting program is essentially C with some Objective-C code that you developed from scratch yourself. Microsoft was, for various reasons, crushing almost everyone at the time. Even titans like DEC and Sun ended up falling. Having a 10x more productive API was not going to help if it reached less than 1/100th of the market. (Objective-C, in my opinion, was an interesting but not terribly unique language, so it was the NeXT API that offered the productivity boost.) Also keep in mind that it took a huge marketing push for Java to survive, and being platform agnostic certainly helped it. Seeing as Java was based upon similar principles with a more conventional syntax, Objective-C was also less appealing.
Comment by kragen 21 hours ago
You're right that there are programs that are just a thin layer of glue over existing C APIs, and the existing C API is going to largely determine how much effort that is. But there are other programs where calls to external APIs are only a small fraction of the code and effort. If OO was the huge productivity boost Jobs was claiming, you'd expect those programs to be much easier to write in Objective-C than in C. Since they made the choice to implement Objective-C as part of GCC, people could easily write them on other Unixes, too. Mostly they didn't.
My limited experience with Objective-C is that they are easier to write, just not to the extent Jobs claimed. OO makes Objective-C code more flexible and easier to test than code in C. It doesn't make it easier to design or debug. And, as you say, other languages were OO to a similar extent as Objective-C while similarly not sacrificing efficiency, such as C++ and (many years later) Java and C#.
Comment by chihuahua 22 hours ago
But it's unlikely that Steve Jobs of all people would want to provide that explanation.
Around 2001 my company sent me to a training class for Objective-C and as far as I can remember, it's like a small tweak of C++ with primitive smart pointers, so I doubt that it's 10x more productive than any other language. Maybe 1.01x more productive.
Comment by kragen 21 hours ago
Objective-C++ is a different matter, but it was written many years after the time we are discussing.
Comment by chihuahua 20 hours ago
What I do remember is that it's an odd language, but nothing about it suggested that it would even be 2x more productive than C or C++ or Java.
I didn't get to use it much after the week-long class; the only reason the company sent 3 of us across the country for a week is because the CTO had a bizarre obsession with Objective-C and OS X.
Comment by kragen 20 hours ago
Comment by krackers 15 hours ago
Comment by pancsta 21 hours ago
Comment by singpolyma3 20 hours ago
Comment by smokel 22 hours ago
My bet is that we will see much more software, but more customized, and focused precisely on certain needs. That type of software will mostly be used privately.
Also, don't underestimate how long it will take for the masses to pick up new tools. There are still people, even here on Hacker News, proclaiming that AI coding assistants do not offer value.
Comment by xnx 22 hours ago
Comment by journey2s 1 day ago
Comment by krackers 15 hours ago
And yet in real world use you get stuff like https://github.com/scikit-learn/scikit-learn/pull/32101 (not to pick on that particular author since I pulled it completely at random. But it's notable that this also was not a fly-by-night PR by a complete newb. The author seems to have reasonable credentials and programming experience, and this is a fairly "self-contained" PR limited to just one function. Yet despite those advantages, the LLM output couldn't pass muster.)
Comment by preommr 22 hours ago
Comment by callc 22 hours ago
See contradicting data here: https://mikelovesrobots.substack.com/p/wheres-the-shovelware...
Comment by preommr 20 hours ago
Companies like lovable are reporting millions of projects that are basically slop apps. They're just not released as real products with an independent company.
The data is misleading - it's like saying high-quality phone cameras had no impact on the video industry because so little network TV is filmed with iPhone cameras. At best you might have some ads, and some minor projects using it, but nothing big. That completely ignores that YouTube and TikTok are built off of people's phone cameras, and their revenue rivals major networks.
I am sorry, I just don't want to have this conversation about AI and its impact for the millionth time, because it just devolves into semantics, word games, etc. It's just so tiring.
[0] https://www.gamesradar.com/platforms/pc-gaming/steams-slop-p...
Comment by bccdee 22 hours ago
Comment by preommr 20 hours ago
There's a world of difference between the technical capabilities of a technology, and people actually executing it.
Comment by oulipo2 21 hours ago
that said, the article says "Why buy a CRM solution or a ERM system when “AI” can generate one for you in hours or even minutes"
and I'd say that it's also wrong to see it that way, because the issue with "rolling your own" is not so much shipping the initial version as maintaining and developing and securing it over time. This is what takes a lot of manpower, and that's why you usually want to rely on an external vendor, who focuses exclusively on maintaining and updating that component, so you don't have to do it on your own
Comment by mellosouls 21 hours ago
How do they know there aren't apps and services with significant AI contributions? Any closed source app is by definition a black box in regard to the composition.
We decry AI slop but this article shows no AI is needed for that.
Comment by senordevnyc 20 hours ago
I also know multiple non-technical people who have built little apps for themselves or for their company to use internally that previously would have purchased some off the shelf software.
Comment by erichocean 20 hours ago
Works great, and it's easy to add features to (as evidenced by the dozen or so I've added already).
Is that a "new library"? It certainly doesn't show up in any statistic the OP could point to.
Comment by insane_dreamer 20 hours ago
Comment by samyar 21 hours ago
Comment by freen 21 hours ago
Comment by bossyTeacher 23 hours ago
Comment by gombosg 22 hours ago
OK I don't have numbers to back it up but I wouldn't be surprised if most of the investment and actual AI use was not tech (software engineering), but other use cases.
Comment by bpt3 22 hours ago
Comment by spwa4 16 hours ago
The big problem for demand is wealth concentration, ie. the problem is money. Lots of people not having money, specifically. So unless AI actually becomes a way to transfer money to a great deal of people, it won't move the needle much. And if it becomes an actual reason to do layoffs (as opposed to an excuse, like it is now) then it will actually have negative impact.
Comment by davydm 1 day ago
You'll always find someone claiming to have made a thing with AI alone, and some of these may even work to an extent, but the reality is that the models have zero understanding, so if you're looking to replicate something that already exists (or exists in well-defined, open source parts), you're going to get further than if you're thinking truly outside the box.
Then there's library and framework churn. AI models aren't good with this (as evidenced by the hours I wasted trying to get any model to help me through a webpack4 to webpack5 upgrade. There was no retained context and no understanding, so it kept telling me to do webpack4 things that don't work in 5).
So if you're going to make something that's easily replicated in a well-documented framework with lots of Stack Overflow answers, you might get somewhere. Of course, you could have gotten there yourself with those same inputs, and, as a bonus, you'd actually understand what was done and be able to fix the inevitable issues, as well as extend with your own functionality. If you're asking for something more niche, you have to bring a lot more to the table, and you need a fantastic bullshit detector as the model will confidently lead you down the wrong path.
Simply put, ai is not the silver bullet it's sold as, and the lack of app explosion is just more evidence on that pile.
Comment by callc 22 hours ago
I experienced this too, asking an LLM to help with a problem with a particular Arduino board. Even though it's a very popular microcontroller, the LLM is probably giving blended answers from the 15 other types of Arduino boards, not the one I have.
Comment by smokel 22 hours ago
I find that most software development falls squarely into this category.
Comment by nacozarina 1 day ago
The industrial age was plagued by smog. And so shall be the Information Age.
Comment by Lapsa 22 hours ago
Comment by m0llusk 19 hours ago
Comment by WesolyKubeczek 21 hours ago
Nobody is writing a tutorial on how to use ML to make a robot to fold laundry. Instead, it's you spend tokens to spend more tokens to spend even more tokens. At this point, the word "token" starts bearing unwelcome connotations with "tokens" from NFTs and Brave's "BAT" fads.
Comment by NedF 14 hours ago
Comment by hmans 21 hours ago
Comment by dboreham 22 hours ago
Comment by ParanoidShroom 22 hours ago
https://countrx.app/ is something I vibed in a month. Can people here tell? Sure, the typical gradient page is something to spot, but I think native apps are harder. I would love to see App Store and Google Play Store stats to see how many new apps are onboarded.
Looking at distribution channels like Google Play, they added significantly harder thresholds to be able to publish an app, to reduce low-quality new apps. Presumably due to gen AI?
Edit: Jesus guys, the point I'm trying to make is that there are probably a lot more out there that are not visible... I'm not claiming I developed the holy grail of vibe coding.
Comment by icepat 22 hours ago
Example: https://imgur.com/a/Sh3DtmF
Comment by ParanoidShroom 19 hours ago
Comment by StilesCrisis 22 hours ago
Comment by esseph 22 hours ago
Comment by pxx 22 hours ago
Comment by sockopen 22 hours ago
Comment by frisia 21 hours ago
Comment by bediger4000 22 hours ago
Comment by ParanoidShroom 21 hours ago
Comment by brazukadev 20 hours ago
This is visible now and is terrible AI slop. Proved the point.