A few random notes from coding with Claude quite a bit the last few weeks
Posted by bigwheels 2 days ago
Comments
Comment by daxfohl 1 day ago
Like there have been multiple times now where I wanted the code to look a certain way, but it kept pulling back to the way it wanted to do things. Like if I had stated certain design goals recently it would adhere to them, but after a few iterations it would forget again and go back to its original approach, or mix the two, or whatever. Eventually it was easier just to quit fighting it and let it do things the way it wanted.
What I've seen is that after the initial dopamine rush of being able to do things that would have taken much longer manually, a few iterations of this kind of interaction have slowly led to disillusionment with the whole project, as the AI keeps pushing it in a direction I didn't want.
I think this is especially true if you're trying to experiment with new approaches to things. LLMs are, by definition, biased by what was in their training data. You can shock them out of it momentarily, which is awesome for a few rounds, but over time the gravitational pull of what's already in their latent space becomes inescapable. (I picture it as working like a giant Sierpinski triangle).
I want to say the end result is very akin to doom scrolling. Doom tabbing? It's like, yeah I could be more creative with just a tad more effort, but the AI is already running and the bar to seeing what the AI will do next is so low, so....
Comment by aswegs8 8 hours ago
Comment by mikemarsh 7 hours ago
Thankfully more and more people are seriously considering the effects of technology on true wisdom and getting off the "all technological progress clearly is great, look at all these silly unenlightened naysayers from the past" train.
Comment by runarberg 3 hours ago
When Socrates' same warnings are applied to LLMs, however, he may be correct both about the effect and about the importance of the skill being lost. If we lose the ability to think and solve various problems, we may indeed be losing a very important skill of our humanity.
Comment by AIorNot 2 hours ago
e.g. the Matrix Reloaded: https://youtu.be/cD4nhYR-VRA?si=bXGBI4ca-LaetLVl&t=69 (machines no one understands or can manage)
Isaac Asimov's classic - The Feeling of Power: https://ia600806.us.archive.org/20/items/TheFeelingOfPower/T...
(future scientists discover how to add using paper and pencil instead of computer)
I mean, big paradigm shifts are like death; we can't really predict how humanity will evolve if we really get AGI - but these LLMs as they work today are tools, and humans are experts at figuring out how to use tools efficiently to counter the trade-offs.
Does it really matter today that most programmers don't know how to code in assembly for example?
Comment by runarberg 1 hour ago
Unlike Malthus, for whom it was easier to imagine the end of the world than the end of Mercantilism, I can easily imagine a world which simply replaces capitalism as its institutions start producing existential threats for humanity.
However, I don't think LLMs are even that. For me they are an annoyance which I personally want gone, but next to climate change and the stagnation of population growth, they won't make a dent in upending capitalism, despite how much they suck.
But just because they are not an existential threat, that doesn't make them harmless. Plenty of people will be harmed by this technology. Like Socrates predicted, people will lose skills, and this includes skilled programmers; where previously we were getting some quality software, we will instead get less of it, replaced with a bunch of AI slop. That is my prediction at least.
Comment by daxfohl 49 minutes ago
In all seriousness though, it's just crazy that anybody was thinking about these things at the dawn of civilization.
Comment by kelnos 3 hours ago
To me, this feels like support. I was never an adult who could not read or write, so I can't check my experience against Socrates' specific concern. But speaking to the idea of memory, I now "outsource" a lot of my memory to my smartphone.
In the past, I would just remember my shopping list, and go to the grocery store and get what I needed. Sure, sometimes I'd forget a thing or two, but it was almost always something unimportant, and rarely was a problem. Now I have my list on my phone, but on many occasions where I don't make a shopping list on my phone, when I get to the grocery store I have a lot of trouble remembering what to get, and sometimes finish shopping, check out, and leave the store, only to suddenly remember something important, and have to go back in.
I don't remember phone numbers anymore. In college (~2000) I had the campus numbers (we didn't have cell phones yet) of at least two dozen friends memorized. Today I know my phone number, my wife's, and my sister's, and that's it. (But I still remember the phone number for the first house I lived in, and we moved out of that house when I was five years old. Interestingly, I don't remember the area code, but I suppose that makes sense, as area codes weren't required for local dialing in the US back in the 80s.)
Now, some of this I will probably ascribe to age: I expect our memory gets more fallible as we get older (I'm in my mid 40s). I used to have all my credit/debit card numbers, and their expiration dates and security codes, memorized (five or six of them), but nowadays I can only manage to remember two of them. (And I usually forget or mix up the expiration dates; fortunately many payment forms don't seem to check, or are lax about it.) But maybe that is due to new technology to some extent: most/all sites where I spend money frequently remember my card for me (and at most only require me to enter the security code). And many also take Paypal or Google Pay, which saves me from having to recall the numbers.
So I think new technology making us "dumber" is a very real thing. I'm not sure if it's a good thing or a bad thing. You could say that, in all of my examples, technology serving the place of memory has freed up mental cycles to remember more important things, so it's a net positive. But I'm not so sure.
Comment by runarberg 55 minutes ago
So when you started using technology to offload your memory, what you gained was the time and effort you previously spent encoding these things into your memory.
I think there is a fundamental difference though between phone book apps and LLMs. Losing the ability to remember a phone number is not as severe as losing the ability to form a coherent argument, or to look through sources, or for a programmer to work through logic, to abstract complex logic into simpler chunks. If a scholar loses the skill to look through sources, and a programmer loses the ability to abstract complex logic, they are losing a fundamental part of what they need to do their jobs. This is like a stage actor losing the ability to memorize the script and relying instead on a tape recorder when they are on stage.
Now if a stage actor loses the ability to memorize the script, they will soon be out of a job, but I fear in the software industry (and academia) we are not so lucky. I suspect we will see a lot of people actually taking that tape recorder on stage and continuing to do their work as if nothing were more normal. And the drop in quality will predictably follow.
Comment by ericmcer 5 hours ago
Comment by pinnochio 3 hours ago
Comment by drdeca 3 hours ago
Comment by beepbooptheory 6 hours ago
It's what is known as one of the Socratic "myths," and really just contributes to a web of concepts that leads the dialogue to its ultimate terminus of aporia (being a relatively early Plato dialogue). Socrates, characteristically, doesn't really give his take on writing. In the text, he is just trying to help his friend write a horny love letter/speech!
I can't bring it up right now, but the end of the dialogue has a rather beautiful characterization of writing in the positive, saying that perhaps logos can grow out of writing, like a garden.
I think if pressed Socrates/Plato would say that LLMs are merely doxa machines, incapable of logos. But I am just spitballing.
Comment by dempedempe 5 hours ago
Comment by beepbooptheory 3 hours ago
The one at issue:
https://standardebooks.org/ebooks/plato/dialogues/benjamin-j...
The public domain translations are pretty old either way. John Cooper's big book is probably still the best, but I'm out of the game these days.
AI guys would probably love this if any of them still have the patience to read/comprehend something very challenging. Probably one of the more famous essays on the Phaedrus dialogue. It's the first long essay of this book:
https://xenopraxis.net/readings/derrida_dissemination.pdf
Roughly: Plato's subordination of writing in this text is symptomatic of a broader kind of `logocentrism` throughout all of western canonical philosophy. Derrida argues the idea of the "externality" of writing compared to speech/logos is not justified by anything, and in fact everything (language, thought) is more like a kind of "writing."
Comment by sifar 7 hours ago
Comment by direwolf20 6 hours ago
Comment by specialist 6 hours ago
My personal counterpoint is Norman's thesis in Things That Make Us Smart.
I've long tried, and mostly failed, to consider the tradeoffs, to be ever mindful that technologies are never neutral (winners & losers), per Postman's Technopoly.
Comment by throw10920 8 hours ago
And "other people in the past predicted doom about something like this and it didn't happen" is a fallacious non-argument even when the things are comparable.
Comment by ppseafield 7 hours ago
Writing's invention is presented as an "elixir of memory", but it doesn't transfer memory and understanding directly - the reader must still think to understand and internalize information. Socrates renames it an "elixir of reminding", that writing only tells readers what other people have thought or said. It can facilitate understanding, but it can also enable people to take shortcuts around thinking.
I feel that this is an apt comparison of, for example, someone who has only ever vibe-coded to an experienced software engineer. The skill of reading (in Socrates's argument) is not equivalent to the skill of understanding what is read. Which is why, I presume, the GP posted it in response to a comment regarding fear of skill atrophy - they are practicing code generation but are spending less time thinking about what all of the produced code is doing.
Comment by wjSgoWPm5bWAhXB 8 hours ago
Comment by throw10920 7 hours ago
It's then quite obvious that the fact that someone, somewhere, predicts a bad thing happening has ~zero bearing on whether it actually happens, and so the claim that "someone predicted doom in the past and it didn't happen then so someone predicting doom now is also wrong" is absurd. Calling that idea "intellectually lazy" is an insult to smart-but-lazy people. This is more like intellectually incapable.
The fact that people will unironically say such a thing in the face of not only widespread personal anecdotes from well-respected figures, but scientific evidence, is depressing. Maybe people who say these things are heavy LLM users?
Comment by jrowen 4 hours ago
With the right cherry picking, it can always be said that [some set of] the doomsayers were right, or that they were wrong.
As you say, someone predicting doom has no bearing on whether it happens, so why engage in it? It's just spreading FUD and dwelling on doom. There's no expected value to the individual or to others.
Personally, I don't think "TikTok will shorten people's attention spans" qualifies as doom in and of itself.
Comment by jatari 7 hours ago
Comment by direwolf20 6 hours ago
Comment by jatari 6 hours ago
Comment by jrowen 4 hours ago
But, it is really hard to escape the feeling that digital technology and AI are a huge inflection point. In some ways these couple of generations might be the singularity. Trump and contemporary geopolitics in general are a footnote, a silly blip that will pale in comparison over time.
Comment by grogenaut 7 hours ago
Comment by andy_ppp 8 hours ago
Comment by oblio 7 hours ago
That feeling was one of empowerment: I was able to satisfy my curiosity about a lot of topics.
LLMs can do the same thing and save me a lot of time. It's basically a super charged Google. For programming it's a super charged auto complete coupled with a junior researcher.
My main concern is independence. LLMs in the hands of just a bunch of unchecked corporations are extremely dangerous. I kind of trusted Google, and even that trust is eroding, and LLMs can be extremely personal. The lack of trust ranges from the risk of selling data and general data leaks to intrusive and, worse, hidden ads, etc.
Comment by runarberg 7 hours ago
These capabilities simply didn't exist before the Internet. Apart from the email to Australia (which was possible with a fax machine, but much more expensive), LLMs don't give you any new capabilities. They just provide a way for you to do what you already can (and should) do with your brain, without using your brain. It is more like replacing your social interaction with Facebook than it is like experiencing an instant message group chat for the first time.
Comment by oblio 4 hours ago
The list of things they can provide is endless.
They're not a creator, they're an accelerator.
And time matters. My interests are myriad but my capacity to pass the entry bar manually is low because I can only invest so much time.
Comment by runarberg 4 hours ago
When I first used the internet, it was not about doing things faster, it was about doing things which were previously simply unavailable to me. A 12 year old me was never gonna fax my previous classmate who moved to Australia, but I certainly emailed her.
We are not talking about a creator nor an accelerator, we are talking about an avenue (or a road if you will). When I use the internet, I am the creator, and the internet is the road that gets me there.
When I use an LLM it is doing something I can already do, but now I can do it without using my brain. So the feeling is much closer to doomscrolling on social media where previously I could just read a book or meet my pals at the pub. Doomscrolling Facebook is certainly faster than reading a book, or socializing at the pub. But it is a poor replacement for either.
Comment by oblio 3 hours ago
I could however greatly enrich my general knowledge in ways I couldn't do with books I had access to.
Comment by runarberg 3 hours ago
But I can definitely see how for many people with less access to libraries (or worse quality libraries than what I had access to) the internet provided a new avenue for gaining knowledge which wasn't available before.
Comment by whistle650 8 hours ago
Comment by striking 1 day ago
This would be fine if not for one thing: the meta-skill of learning to use the LLM depreciates too. Today's LLM is gonna go away someday, the way you have to use it will change. You will be on a forever treadmill, always learning the vagaries of using the new shiny model (and paying for the privilege!)
I'm not going to make myself dependent, let myself atrophy, run on a treadmill forever, for something I happen to rent and can't keep. If I wanted a cheap high that I didn't mind being dependent on, there's more fun ones out there.
Comment by raducu 15 hours ago
You're lucky to afford the luxury not to atrophy.
It's been almost 4 years since my last software job interview and I know the drills about preparing for one.
Long before LLMs, my skills were naturally atrophying in my day job.
I remember the good old days of J2ME, writing everything from scratch. Or writing some graph editor for university, or some speculative Huffman coding algorithm.
That kept me sharp.
But today I feel like I'm living in that Netflix series about people being in Hell and the Devil tricking them into thinking they're in Heaven while tormenting them: how on planet Earth do I keep sharp with java, streams, virtual threads, rxjava, tuning the jvm, react, kafka, kafka streams, aws, k8s, helm, jenkins pipelines, CI-CD, ECR, istio issues, in-house service discovery, hierarchical multi-regions, metrics and monitoring, autoscaling, spot instances and multi-arch images, multi-az, reliable and scalable yet as cheap as possible, yet as cloud native as possible, hazelcast and distributed systems, low level postgresql performance tuning, apache iceberg, trino, various in-house frameworks and idioms over all of this? Oh, and let's not forget the business domain, coding standards, code reviews, mentorships and organizing technical events. Also, it's 2026 so nobody hires QA or scrum masters anymore, so take on those hats as well.
So LLMs it is, the new reality.
Comment by aftergibson 15 hours ago
Comment by oldandboring 7 hours ago
Comment by carimura 12 hours ago
Comment by KronisLV 11 hours ago
Most companies (in the global, not SV sense) would be well served by an app that runs in a Docker container in a VPS somewhere and has PostgreSQL and maybe Garage, RabbitMQ and Redis if you wanna get fancy, behind Apache2/Nginx/Caddy.
But obviously that’s not Serious Business™ and won’t give you zero downtime and high availability.
Though tbh most mid-size companies would also be okay with Docker Swarm or Nomad and the same software clustered and running behind HAProxy.
But that wouldn’t pad your CV so yeah.
Comment by ryandrake 7 hours ago
That’s still too much complication. Most companies would be well served by a native .EXE file they could just run on their PC. How did we get to the point where applications by default came with all of this shit?
Comment by danans 5 hours ago
I doubt that.
As software has grown from solving simple personal computing problems (write a document, create a spreadsheet) to solving organizational problems (sharing and communication within and without the organization), it has necessarily spread beyond the .exe file and local storage.
That doesn't give a pass to overly complex applications doing a simple thing - that's a real issue - but to think most modern company problems could be solved with just a local executable program seems off.
Comment by direwolf20 6 hours ago
There's an intermediate level of convenience. The school did have an IT staff (of one person) and a server and a network. It would be possible to run the library database locally in the school but remotely from the library terminals. It would then require the knowledge of the IT person to administer, but for the librarian it would be just as convenient as a cloud solution.
Comment by badsectoracula 4 hours ago
[0] or similarly easy to get running equivalent
Comment by KronisLV 6 hours ago
Because when you give your clients instructions on how to set up the environment, they will ignore some of them and then they install OracleJDK while you have tested everything under OpenJDK and you have no idea why the application is performing so much worse in their environment: https://blog.kronis.dev/blog/oracle-jdk-and-openjdk-compatib...
It's not always trivial to package your entire runtime environment unless you wanna push VM images (which is in many ways worse than Docker), so Docker is like the sweet spot for the real world that we live in - a bit more foolproof, the configuration can be ONE docker-compose.yml file, it lets you manage resource limits without having to think about cgroups, as well as storage and exposed ports, custom hosts records and all the other stuff the human factor in the process inevitably fucks up.
And in my experience, shipping a self-contained image that someone can just run with docker compose up is infinitely easier than trying to get a bunch of Ansible playbooks in place.
If your app can be packaged as an AppImage or Flatpak, or even a fully self contained .deb then great... unless someone also wants to run it on Windows or vice versa or any other environment that you didn't anticipate, or it has more dependencies than would be "normal" to include in a single bundle, in which case Docker still works at least somewhat.
Software packaging and dependency management sucks, unless we all want to move over to statically compiled executables (which I'm all for). Desktop GUI software is another can of worms entirely, too.
Comment by oldandboring 7 hours ago
- nobody remembers why they're using it
- a lot of it is pinned to old versions or the original configuration because the overhead of maintaining so much tooling is too much for the team and not worth the risk of breaking something
- new team members have a hard time getting the "complete picture" of how the software is built and how it deploys and where to look if something goes wrong.
Comment by dullcrisp 6 hours ago
Comment by daxfohl 1 day ago
Comment by scorpioxy 19 hours ago
Comment by bgilroy26 7 hours ago
Comment by daxfohl 19 hours ago
Comment by throwup238 22 hours ago
Comment by sarchertech 10 hours ago
Comment by direwolf20 6 hours ago
Comment by sarchertech 1 hour ago
Comment by draxil 12 hours ago
Comment by taylorius 6 hours ago
Comment by pvab3 6 hours ago
Comment by direwolf20 6 hours ago
Comment by pvab3 5 hours ago
Comment by shaftoe 11 hours ago
Comment by Aurornis 8 hours ago
I haven’t found this to be true at all, at least so far.
As models improve I find that I can start dropping old tricks and techniques that were necessary to keep old models in line. Prompts get shorter with each new model improvement.
It’s not really a cycle where you’re re-learning all the time or the information becomes outdated. The same prompt structure techniques are usually portable across LLMs.
Comment by rubenflamshep 6 hours ago
Comment by pards 11 hours ago
This is my fear - what happens if the AI companies can't find a path to profitability and shut down?
Comment by thevillagechief 9 hours ago
Comment by dyauspitr 3 hours ago
Comment by satvikpendem 8 hours ago
Comment by MillionOClock 8 hours ago
Comment by infecto 9 hours ago
Comment by Draiken 3 hours ago
Either it will continue to be this very flawed non-deterministic tool that requires a lot of effort to get useful code out of it, or it will be so good it'll just work.
That's why I'm not gonna heavily invest my time into it.
Comment by prettyblocks 9 hours ago
Comment by rurp 18 hours ago
This isn't to say LLMs won't change software development forever, I think they will. But I doubt anyone has any idea what kind of tools and approaches everyone will be using 5 or 10 years from now, except that I really doubt it will be whatever is being hyped up at this exact moment.
Comment by apercu 11 hours ago
So far, the only company making loud, concrete claims backed by audited financials is Klarna and once you dig in, their improved profitability lines up far more cleanly with layoffs, hiring freezes, business simplification, and a cyclical rebound than with Gen-AI magically multiplying output. AI helped support a smaller org that eliminated more complicated financial products that have edge cases, but it didn’t create a step-change in productivity.
If Gen-AI were making tech workers even 10× more productive at scale, you’d expect to see it reflected in revenue per employee, margins, or operating leverage across the sector.
We’re just not seeing that yet.
Comment by laserlight 10 hours ago
Comment by apercu 6 hours ago
I agree that the last 10% of a project is the hardest part, and that's the part that Gen-AI sucks at (hell, maybe the last 30%).
Comment by sarchertech 10 hours ago
If we’re even just talking a 2x multiplier, it should show up in some externally verifiable numbers.
Comment by apercu 7 hours ago
The issue is that I'm not a professional financial analyst and I can't spend all day on comps so I can't tell through the noise yet if we're seeing even 2x related to AI.
But, if we're seeing 10x, I'd be finding it in the financials. Hell, a blind squirrel would, and it's simply not there.
Comment by sarchertech 1 hour ago
Comment by locknitpicker 15 hours ago
I agree with the sentiment but I would have framed it differently. The LLM is a tool, just like code completion or a code generator. Right now we focus mainly on how to use a tool, the coding agent, to achieve a goal. This takes place at a strategic level. Prior to the inception of LLMs, we focused mainly on how to write code to achieve a goal. This took place at a tactical level, and required making decisions and paying attention to a multitude of details. With LLMs our focus shifts to a higher-level abstraction. Also, operational concerns change. When writing and maintaining code yourself, you focus on architectures that help you simplify some classes of changes. When using LLMs, your focus shifts to building context and helping the model effectively implement its changes. The two goals seem related, but are radically different.
I think a fairer description is that with LLMs we stop exercising some skills that are only required or relevant if you are writing your code yourself. It's like driving with an automatic transmission vs manual transmission.
Comment by bandrami 15 hours ago
An LLM is always going to be a black box that is neither predictable nor visible (the unpredictability is necessary for how the tool functions; the invisibility is not but seems too late to fix now). So teams start cargo culting ways to deal with specific LLMs' idiosyncrasies and your domain knowledge becomes about a specific product that someone else has control over. It's like learning a specific office suite or whatever.
Comment by TeMPOraL 14 hours ago
So basically, like a co-worker.
That's why I keep insisting that anthropomorphising LLMs is to be embraced, not avoided, because it gives much better high-level, first-order intuition as to where they belong in a larger computing system, and where they shouldn't be put.
Comment by bandrami 14 hours ago
Arguably, though I don't particularly need another co-worker. Also co-workers are not tools (except sometimes in the derogatory sense).
Comment by draxil 11 hours ago
Comment by ryanjshaw 13 hours ago
Even years later? Most people can't unless there are good comments and design. Which AI can replicate, so if we need to do that anyway, how is AI especially worse than a human looking back at code written poorly years ago?
Comment by bandrami 12 hours ago
Comment by draxil 11 hours ago
Comment by koiueo 13 hours ago
Comment by Kostic 11 hours ago
Comment by striking 3 hours ago
Comment by bondarchuk 10 hours ago
vs.
>a company goes bankrupt or pivots
I can see a few differences.
Comment by nemothekid 1 day ago
My gripe with AI tools in the past is that the kind of work I do is large and complex and with previous models it just wasn't efficient to either provide enough context or deal with context rot when working on a large application - especially when that application doesn't have a million examples online.
I've been trying to implement a multiplayer game with server authoritative networking in Rust with Bevy. I specifically chose Bevy as the latest version was after Claude's cut off, it had a number of breaking changes, and there aren't a lot of deep examples online.
Overall it's going well, but one downside is that I don't really understand the code "in my bones". If you told me tomorrow that I had to optimize latency or that there was a 1 in 100 edge case, not only would I not know where to look, I don't think I could tell you how the game engine works.
In the past, I could not have ever gotten this far without really understanding my tools. Today, I have a semi functional game and, truth be told, I don't even know what an ECS is and what advantages it provides. I really consider this a huge problem: if I had to maintain this in production, if there was a SEV0 bug, am I confident enough that I could fix it? Or am I confident the model could figure it out? Or is the model good enough that it could scan the entire code base and intuit a solution? One of these three questions has to be answered or else brain atrophy is a real risk.
Comment by bedrio 18 hours ago
Comment by mattmanser 15 hours ago
My first job had the devs working front-line support years ago. Due to that, I learnt an important lesson in bug fixing.
Always be able to re-create the bug first.
There is no such thing as ghost bugs, you just need to ask the reporter the right questions.
Unless your code is multi-threaded, to which I say, good luck!
Comment by chickensong 8 hours ago
When the cause is difficult to source or fix, it's sometimes easier to address the effect by coding around the problem, which is why mature code tends to have some unintuitive warts to handle edge cases.
Comment by yencabulator 7 hours ago
What isn't multi-threaded these days? Kinda hard to serve HTTP without concurrency, and practically every new business needs to be on the web (or to serve multiple mobile clients; same deal).
All you need is a database and web form submission and now you have a full distributed system in your hands.
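A toy sketch of the kind of bug I mean (an invented example, with threads standing in for two concurrent requests hitting the same row; the read-modify-write is the point, not the specific API):

    use std::sync::Arc;
    use std::sync::atomic::{AtomicUsize, Ordering};
    use std::thread;

    fn main() {
        let balance = Arc::new(AtomicUsize::new(0));
        let mut handles = Vec::new();
        for _ in 0..2 {
            let balance = Arc::clone(&balance);
            handles.push(thread::spawn(move || {
                for _ in 0..100_000 {
                    // Read then write, like SELECT followed by UPDATE with no
                    // transaction: another request can sneak in between.
                    let current = balance.load(Ordering::SeqCst);
                    balance.store(current + 1, Ordering::SeqCst);
                }
            }));
        }
        for handle in handles {
            handle.join().unwrap();
        }
        // Almost always prints less than 200000: updates were silently lost.
        println!("final count: {}", balance.load(Ordering::SeqCst));
    }

Same shape as two form submissions both reading a row, both updating it, and one overwriting the other.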
Comment by direwolf20 6 hours ago
Comment by yencabulator 6 hours ago
Comment by mattmanser 5 hours ago
Comment by yencabulator 5 hours ago
Webdevs not aware of race conditions -> complex page fails to load. They're lucky in how the domain sandboxes their bugs into affecting just that one page.
Comment by SpicyLemonZest 15 hours ago
Comment by mh2266 20 hours ago
I am interested in doing something similar (Bevy. not multiplayer).
I had the thought that you ought to be able to provide a cargo doc or rust-analyzer equivalent over MCP? This... must exist?
I'm also curious how you test if the game is, um... fun? Maybe it doesn't apply so much for a multiplayer game, I'm thinking of stuff like the enemy patterns and timings in a soulslike, Zelda, etc.
I did use ChatGPT to get some rendering code for a retro RCT/SimCity-style terrain mesh in Bevy and it basically worked, though several times I had to tell it "yeah uh nothing shows up", at which point it said "of course! the problem is..." and then I learned about mesh winding, fine, okay... felt like I was in over my head and decided to go to a 2D game instead so didn't pursue that further.
Comment by nemothekid 18 hours ago
I've found that there are two issues that arise that I'm not sure how to solve. You can give it docs and point to them and it can generally figure out syntax, but the next issue I see is that without examples, it kind of just brute-forces problems like a 14-year-old.
For example, the input system originally just let you move left and right, and it popped it into an observer function. As I added more and more controls, it got littered with more and more code, until it was a ~600 line function responsible for a large chunk of game logic.
While trying to parse it I then had it refactor the code - but I don't know if the current code is idiomatic. What would be the cargo doc or rust-analyzer equivalent for good architecture?
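For reference, the direction I nudged the refactor toward looked roughly like this (a hand-wavy sketch with made-up names, not my actual game code; the input type names are from recent Bevy versions and differ in older ones): one small system per concern instead of one giant observer.

    use bevy::prelude::*;

    #[derive(Component)]
    struct Player;

    // Movement is its own system; it only knows about movement.
    fn move_player(
        keys: Res<ButtonInput<KeyCode>>,
        mut players: Query<&mut Transform, With<Player>>,
    ) {
        let dx = keys.pressed(KeyCode::KeyD) as i32 - keys.pressed(KeyCode::KeyA) as i32;
        for mut transform in players.iter_mut() {
            transform.translation.x += dx as f32 * 2.0;
        }
    }

    // Firing is a separate system; adding it doesn't grow move_player.
    fn fire_weapon(keys: Res<ButtonInput<KeyCode>>) {
        if keys.just_pressed(KeyCode::Space) {
            // spawn a projectile, emit an event, etc.
        }
    }

    fn main() {
        App::new()
            .add_plugins(DefaultPlugins)
            .add_systems(Update, (move_player, fire_weapon))
            .run();
    }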
I'm running into this same problem when trying to use Claude Code for internal projects. Some parts of the codebase just have really intuitive internal frameworks and Claude Code can rip through them and provide great idiomatic code. Others are bogged down by years of tech debt and performance hacks and Claude Code can't be trusted with anything other than multi-paragraph prompts.
>I'm also curious how you test if the game is, um... fun?
Lucky enough for me this is a learning exercise, so I'm not optimizing for fun. I guess you could ask claude code to inject more fun.
Comment by azrazalea_debt 9 hours ago
Well, this is where you still need to know your tools. You should understand what ECS is and why it is used in games, so that you can push the LLM to use it in the right places. You should understand idiomatic patterns in the languages the LLM is using. Understand YAGNI, SOLID, DDD, etc etc.
Those are where the LLMs fall down, so that's where you come in. The individual lines of code after being told what architecture to use and what is idiomatic is where the LLM shines.
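If it helps to see the shape of it, here's a tiny hedged sketch of the core ECS idea in Bevy (made-up components, not anyone's real game): entities are just bundles of plain data components, and systems are functions that run over every entity matching their query.

    use bevy::prelude::*;

    // Components are plain data; an "enemy" or a "bullet" is whatever
    // combination of components you attach to an entity.
    #[derive(Component)]
    struct Velocity(Vec2);

    #[derive(Component)]
    struct Health(u32);

    fn spawn_things(mut commands: Commands) {
        // An entity that moves and can take damage...
        commands.spawn((Transform::default(), Velocity(Vec2::X), Health(10)));
        // ...and one that only moves. No class hierarchy needed.
        commands.spawn((Transform::default(), Velocity(Vec2::Y)));
    }

    // Runs over every entity that has both components, whatever else it is.
    // That composability is the advantage games get out of an ECS.
    fn apply_velocity(mut movers: Query<(&mut Transform, &Velocity)>) {
        for (mut transform, velocity) in movers.iter_mut() {
            transform.translation += velocity.0.extend(0.0);
        }
    }

    fn main() {
        App::new()
            .add_plugins(MinimalPlugins)
            .add_systems(Startup, spawn_things)
            .add_systems(Update, apply_velocity)
            .run();
    }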
Comment by nemothekid 6 hours ago
When I look around today it's clear more and more people are diving head first into fully agentic workflows, and I simply don't believe they can churn out 10k+ lines of code today and be intimately familiar with the code base. Therefore you are left with two futures:
* Agentic-heavy SWEs will eventually blow up under the weight of all their tech debt
* Coding models are going to continue to get better where tech debt won't matter.
If the answer is (1), then I do not need to change anything today. If the answer is (2), then you need to prepare for a world where almost all code is written by an agent, but almost all responsibility is shouldered by you.
In kind of an ignorant way, I'm actually avoiding trying to properly learn what an ECS is and how the engine is structured, as sort of a handicap. If in the future I'm managing a team of engineers (however that looks) who are building a metaphorical Tower of Babel, I'd like to develop a heuristic for navigating that mountain.
Comment by storystarling 12 hours ago
It cuts down the input tokens significantly which is nice for the monthly bill, but I found the main benefit is that it actually stops the model from getting distracted by existing implementation details. It feels a bit like overengineering but it makes reasoning about the system architecture much more reliable when you don't have to dump the whole codebase into the context window.
Comment by jv22222 8 hours ago
Man, I absolutely hate this feeling.
Comment by krupan 1 day ago
Using an LLM is almost exactly the same. You get the occasional "wow! I've never seen it do that before!" moment (whether the thing it just did was even useful or not), get a short hit of feel-goods, and then keep using it trying to get another hit. It keeps providing them at just the right intervals to keep people going, just like TikTok does.
Comment by neves 10 hours ago
Comment by CharlieDigital 22 hours ago
As in if the LLM doesn't know about it, some devs are basically giving up and not even going to RTFM. I literally had to explain to someone today how something works by...reading through the docs and linking them the docs with screenshots and highlighted paragraphs of text.
Still got push back along the lines of "not sure if this will work". It's. Literally. In. The. Docs.
Comment by finaard 16 hours ago
15 years ago I was working in an environment where they had lots of Indians as cheap labour - and the same thing will show up in any environment where you go for hiring a mass of cheap people while looking more at the cost than at qualifications: you pretty much need to trick them into reading stuff that is relevant.
I remember one case where one had a problem they couldn't solve, and couldn't give me enough info to help remotely. In the end I was sitting next to them, and made them read anything showing up on the screen out loud. Took a few tries where they were just closing dialog boxes without reading it, but eventually we had that under control enough that they were able to read the error messages to me, and then went "Oh, so _that's_ the problem?!"
Overall interacting with a LLM feels a lot like interacting with one of them back then, even down to the same excuses ("I didn't break anything in that commit, that test case was never passing") - and my expectation for what I can get out of it is pretty much the same as back then, and approach to interacting with it is pretty similar. It's pretty much an even cheaper unskilled developer, you just need to treat it as such. And you don't pair it up with other unskilled developers.
Comment by acessoproibido 8 hours ago
You can have extremely detailed and easy-to-parse guides, references, etc., but there will always be a portion of customers who refuse to read them.
Never could figure out why because they aren't stupid or anything.
Comment by yencabulator 1 hour ago
They may be intelligent, but they don't sound wise.
Comment by globular-toast 15 hours ago
Comment by overfeed 20 hours ago
I wouldn't have believed it a few years ago if you told me the industry would one day, in lockstep, decide that shipping more tech debt is awesome. If the unstated bet (that AI development will outpace the rate at which it generates cruft) doesn't pay off, there will be hell to pay.
Comment by ithkuil 18 hours ago
Once we realize the kind of mess _those_ models created, well, we'll need even more capable models.
It's a variation on the theme of Kernighan's insight that the more "clever" you are while coding, the harder it will be to debug.
EDIT: Simplicity is a way out, but it's hard under normal circumstances; now, with this kind of pressure to ship fast because the colleague with the AI chimp can outperform you, aiming at simplicity will require some widespread understanding.
Comment by bandrami 14 hours ago
Comment by scorpioxy 19 hours ago
This isn't anything new of course. Previously it was with projects built by looking for the cheapest bidder and letting them loose on an ill-defined problem. And you can just imagine what kind of code that produced. Except the scale is much larger.
My favorite example of this was a project that simply stopped working due to the amount of bugs generated from layers upon layers of bad code that was never addressed. That took around 2 years of work to undo. Roughly 6 months to un-break all the functionality and 6 more months to clean up the core and then start building on top.
Comment by sally_glance 16 hours ago
I used to be unconcerned, but I admit to being a little frightened of the future now.
Comment by scorpioxy 13 hours ago
What's interesting to me though is that very similar promises were being made about AI in the 80s. Then came the "AI Winter" after the hype cycle and promises got very far from reality. Generative AI is the current cycle and who knows, maybe it can fulfill all the promises and hype. Or maybe not.
There's a lot of irrationality currently and until that settles down, it is difficult to see what is real and useful and what is smoke and mirrors.
Comment by sally_glance 3 hours ago
Funny thing is that meanwhile (today) I've actually been on an emergency consulting project where a PO/PM kind of guy vibecoded some app that made it into production. The thing works, but a cursory audit laid open the expected flaws (like logic duplication, dead code, missing branches). So that's another point for our profession still being required in the near future.
Comment by e12e 13 hours ago
Brilliant. Even if it was a typo.
Comment by TeMPOraL 15 hours ago
And guess what, I'm finally convinced they're right.
Consider: it's been that way for decades. We may tell ourselves good developers write quality code given the chance, but the truth is, the median programmer is a junior with <5 years of experience, and they cannot write quality code to save their life. That's purely a consequence of the rapid growth of the software industry itself. ~all production code in the past few decades was written by juniors, and it continues to be so today; those who advance to senior level end up mostly tutoring new juniors instead of coding.
Or, all that put another way: tech debt is not wrong. It's a tool, a trade-off. It's perfectly fine to be loaded with it, if taking it on lets you move forward and earn enough to afford paying the installments when they're due. Like with housing: you're better off buying with a lump payment, or off savings in treasury bonds, but few have that money on hand and life is finite, so people just get a mortgage and move on.
--
Edited to add: There's a silver lining, though. LLMs make tech debt legible and quantifiable.
LLMs are affected by tech debt even more than human devs are, because (currently) they're dumber, they have less cognitive capability around abstractions and generalizations[0]. They make up for it by working much faster - which is a curse in terms of amplifying tech debt, but also a blessing, because you can literally see them slowing down.
Developer productivity is hard to measure in large part because the process is invisible (happens in people's heads and notes), and cause-and-effect chains play out over weeks or months. LLM agents compress that to hours to days, and the process itself is laid bare in the chat transcript, easy to inspect and analyze.
The way I see it, LLMs will finally allow us to turn software development at tactical level from art into an engineering process. Though it might be too late for it to be of any use to human devs.
--
[0] - At least the out-of-distribution ones - quirks unique to particular codebase and people behind it.
Comment by daxfohl 19 hours ago
(except where it's been stated, championed, enforced, and ultimated in no uncertain terms by every executive in the tech industry)
Comment by overfeed 18 hours ago
Comment by naasking 8 hours ago
It's not debt if you never have to pay it back. If a model can regenerate a whole reliable codebase in minutes from a spec, then your assessment of "tech debt" in that output becomes meaningless.
Comment by gritspants 1 day ago
Comment by FitchApps 8 hours ago
Comment by solumunus 16 hours ago
I think the way you’re using these tools that makes you feel this way is a choice. You’re choosing to not be in control and do as little as possible.
Comment by Otterly99 11 hours ago
Once you start using it intelligently, the results can be really satisfying and helpful. People complaining about 1000 lines of code being generated? Ask it to generate functions one at a time and make small implementations. People complaining about having to run a linter? Ask it to automatically run it after each code execution. People complaining about losing track? Have it log every modification in a file.
I think you get my point. You need to treat it as a super powerful tool that can do so many things that you have to guide it if you want to have a result that conforms to what you have in mind.
Comment by rustyhancock 14 hours ago
We won't know until the code being produced, especially greenfield code, hits any kind of maturity - 5+ years at least?
Comment by mlrtime 12 hours ago
It's like a junior dev writing features for a product every day vs a principal engineer. The junior might be adding a feature with O(n^2) performance while the principal has seen this before and writes it O(log n).
If the feature never reaches significance, the "better" solution doesn't matter, but it might!
The principal may write it once and it is solid and never touched, but the junior's version might be good enough to never need coming back to; same with an LLM and the right operator.
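To make the contrast concrete, an invented toy example (not from any real codebase) of the same "any duplicate ids?" feature written both ways; the exact big-O differs per problem, but this is the flavor:

    // O(n^2): the straightforward nested loop you get if nobody thinks about scale.
    fn has_duplicate_naive(ids: &[u64]) -> bool {
        for (i, a) in ids.iter().enumerate() {
            for b in &ids[i + 1..] {
                if a == b {
                    return true;
                }
            }
        }
        false
    }

    // O(n log n): sort a copy, then any duplicates end up adjacent.
    fn has_duplicate_sorted(ids: &[u64]) -> bool {
        let mut sorted = ids.to_vec();
        sorted.sort_unstable();
        sorted.windows(2).any(|w| w[0] == w[1])
    }

    fn main() {
        let ids: Vec<u64> = vec![3, 1, 4, 1, 5];
        assert!(has_duplicate_naive(&ids));
        assert!(has_duplicate_sorted(&ids));
        println!("both agree: duplicate found");
    }

Both are "correct", and on a ten-item list nobody will ever notice the difference - which is exactly why it only matters if the feature reaches significance.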
Comment by rustyhancock 5 hours ago
What they're worse at is the bits I can't easily see.
An example is that I recently was working on a project building a library with Claude. The code in pieces all looked excellent.
When I wrote some code making use of it, several functions that were conceptually similar had signatures that were subtly mismatched.
Different programmers might each have picked a different pattern, and probably would have applied it consistently across the various projects they worked on.
To an LLM they are just happenstance; it feels no friction.
A real project with real humans writing the code would notice the mismatch, even if they weren't working on those parts at the same time, just from working on it across, say, a weekend.
But how many more decisions do we make that are convenient only for us meat bags, and that an LLM doesn't notice?
Comment by solumunus 12 hours ago
If you're using LLM's and you don't know what good/bad output looks like then of course you're going to have problems, but such a person would have the same problems without the LLM...
Comment by rustyhancock 6 hours ago
That's what it's ultimately been tuned to do.
The way I see this play out is output that satisfies me but that I would not have produced myself.
Over a large project that adds up and typically is glaringly obvious to everyone but the person who was using the LLM.
My only guess as to why that is, is because most of what we do and why we do it we're not conscious of. The threshold we'd intervene at is higher than the original effort it takes to do the right thing.
If these things don't apply to you. Then I think you're coming up to a golden era.
Comment by phito 12 hours ago
They are amazing for side projects but not for serious code with real world impact where most of the context is in multiple people's heads.
Comment by gritspants 7 hours ago
Comment by InfinityByTen 14 hours ago
At some point, I find myself needing to disconnect out of overwhelm and frustration. Faster responses aren't necessarily better. I want more observability in the development process so that I can be a party to it. I really have felt that I need to orchestrate multiple agents working in tandem, playing sort of a bad cop and good cop, with maybe a third trying to moderate that discussion and a fourth to effectively incorporate a human in the mix. But that's too much to integrate into my day job.
Comment by SenHeng 7 hours ago
A quick example is trying to build a simple expenses app with it. I just want to store a list of transactions with it. I've already written the types and data model and just need the AI to give me the plumbing. And it will always end up inserting recommendations about double-entry bookkeeping.
Comment by fragmede 7 hours ago
Comment by SenHeng 6 hours ago
It’s great for churning out stuff that already exists, but that also means it’ll massage your idea into one of them.
Comment by nonethewiser 5 hours ago
Absolutely. At a certain level of usage, you just have to let it do its thing.
People are going to take issue with that. You absolutely don't have to let it do its thing. In that case you have to be way more in the loop. Which isn't necessarily a bad thing.
But assuming you want it to basically do everything while you direct it, it becomes pointless to manage certain details. One thing in my experience is that Claude always wants to use ReactRouter. My personal preference is TanStack Router, so I asked it to use that initially. That never really created any problems, but after like the 3rd time of realizing I forgot to specify it, I also realized that it's totally pointless. ReactRouter works fine and Claude uses it fine - it's pointless to specify otherwise.
Comment by amluto 18 hours ago
I found the setting and turned it off for real. Good riddance. I’ll use the hotkey on occasion.
Comment by mlrtime 12 hours ago
I use claude daily, no problems with it. But vscode + copilot suggestions was garbage!
Comment by dkubb 6 hours ago
You then tell your agent to always run that skill prior to moving on. If the examples are pattern matchable you can even have the agent write custom lints if your linter supports extension or even write a poor man’s linter using ast-grep.
I usually have a second session running that is mainly there to audit the code and help me add and adjust skills while I keep the main session on the task of working on the feature. I've found it far easier to stay engaged this way than when context switching between unrelated tasks.
Comment by ekropotin 6 hours ago
However, for hobby projects where I purposely use tech I'm not very familiar with, I force myself not to use LLMs at all - even as a chat. Thus, operating the old way - writing code manually, reading documentation, etc. - brings the joy of learning back and, hopefully, establishes new neurone connections.
Comment by chickensong 7 hours ago
The AI definitely has preferences and attention issues, but there are ways to overcome them.
Defining code styles in a design doc, and setting up initial examples in key files goes a long way. Claude seems pretty happy to follow existing patterns under these conditions unless context is strained.
I have pretty good results using a structured workflow that runs a core loop of steps on each change, with a hook that injects instructions to keep attention focused.
Comment by freediver 1 day ago
Comment by mlrtime 12 hours ago
Now back to IC with 25+ years of experience + LLM = god mode, and it's fun again.
Comment by swader999 1 day ago
Comment by zamalek 23 hours ago
Not trusting the ML's output is step one here, that keeps you intellectually involved - but it's still a far cry from solving the majority of problems yourself (instead you only solve problems ML did a poor job at).
Step two: I delineate interesting and uninteresting work, and Claude becomes a pair programmer without keyboard access for the latter - I bounce ideas off of it etc. making it an intelligent rubber duck. [Edit to clarify, a caveat is that] I do not bore myself with trivialities such as retrieving a customer from the DB in a REST call (but again, I do verify the output).
Comment by bandrami 14 hours ago
Genuine question, why isn't your ORM doing that? I see a lot of use cases for LLMs that seem to be more expensive ways to do snippets and frameworks...
Comment by zamalek 4 minutes ago
Comment by sosomoxie 22 hours ago
Comment by Ronsenshi 15 hours ago
I used to know a person like that - high in the company structure who would claim he was a great engineer, but all the actual engineers would make jokes about him and his ancient skills during private conversations.
Comment by withinboredom 14 hours ago
Yes, specific frameworks and tooling knowledge atrophy without use, and that’s true for anyone at any career stage. A developer who spent three years exclusively in React would be rusty on backend patterns too. But you’re conflating current tool familiarity with engineering ability, and those are different things.
The fundamentals: system design, debugging methodology, reading and reasoning about unfamiliar code, understanding tradeoffs ... those transfer. Someone with deep experience often ramps up on new stacks faster than you’d expect, precisely because they’ve seen the same patterns repackaged multiple times.
If the person you’re describing was genuinely overconfident about skills they hadn’t maintained, that’s a fair critique. But "the actual engineers making jokes about his ancient skills" sounds less like a measured assessment and more like the kind of dismissiveness that writes off experienced people before seeing what they can actually do.
Worth asking: were people laughing because he was genuinely incompetent, or because he didn’t know the hot framework of the moment? Because those are very different things.
Comment by Ronsenshi 14 hours ago
I don't disagree with your point about fundamentals, but in an industry where there seems to be a new JS framework any time somebody sneezes, the latest tools are very much relevant too. And of course the big thing is language changes. The events I'm describing happened in the late 00s-early 10s, when language updates picked up steam: Python, JS, PHP, C++. Somebody who used C++98 can't claim to have up-to-date knowledge of C++ in 2015.
So to answer your question - people were laughing at his ego, not the fact that he didn't know some hot new framework.
Comment by withinboredom 12 hours ago
Comment by Ronsenshi 10 hours ago
Maybe you had to be there.
Comment by sosomoxie 9 hours ago
Comment by runarberg 17 hours ago
Comment by sosomoxie 9 hours ago
Comment by Miraste 7 hours ago
Comment by runarberg 5 hours ago
Learning how to bike requires only a handful of skills, most of them located in the motor control centers of your brain (mostly in the cerebellum), which is known to retain skills much better than any other part of your brain. Your programming skills are comprised of thousands of separate skills which are mostly located in your cerebral cortex (mostly in your frontal and temporal lobes), and learning a foreign language is basically that but more (like 10x more).
So while a foreign language is not the perfect analogy (nothing is), I think it is a reasonable analogy as a counter example to the bicycle myth.
Comment by tayo42 16 hours ago
Comment by Ronsenshi 15 hours ago
I'd say there's at most around 2 years of knowledge runtime (maybe with all this AI stuff this is even shorter). After that period if you don't keep your knowledge up to date it fairly quickly becomes obsolete.
Comment by runarberg 8 hours ago
Comment by epolanski 23 hours ago
Context management, proper prompting and clear instructions, proper documentation are still relevant.
Comment by alansaber 11 hours ago
I would argue this is OK for front-end. For back-end? Very, very bad - if you can't get a usable output, do it by hand.
Comment by phrotoma 11 hours ago
Comment by kitd 9 hours ago
[1] - https://openspec.dev/
Comment by polytely 23 hours ago
Comment by abm53 12 hours ago
In the happy case where I have a good idea of the changes necessary, I will ask it to do small things, step by step, and examine what it does and commit.
In the unhappy case where one is faced with a massive codebase and no idea where to start, I find asking it to just “do the thing” generates slop, but enough for me to use as inspiration for the above.
Comment by seer 20 hours ago
The time it happened for me was rather abrupt, with no training in between, and the feeling was eerily similar.
You know _exactly_ what the best solution is, you talk to your reports, but they have minds of their own, as well as egos, and they do things … their own way.
At some point I stopped obsessing with details and was just giving guidance and direction only in the cases where it really mattered, or when asked, but let people make their own mistakes.
Now LLMs don’t really learn on their own or anything, but the feeling of “letting go of small trivial things” is sorta similar. You concentrate on the bigger picture, and if it chose to do an iterative for loop instead of using a functional approach the way you like it … well the tests still pass, don’t they.
Comment by Ronsenshi 15 hours ago
Comment by mlrtime 12 hours ago
It's also peeking at the big/impactful changes and ignoring the small ones.
Your job isn't to make sure they don't have "brain damage", it's to keep them productive and not shipping mistakes.
Comment by dysoco 9 hours ago
Comment by keeganpoppen 7 hours ago
Comment by SpaceL10n 9 hours ago
Comment by Imustaskforhelp 1 day ago
Yeah exactly. Like we are just waiting for it to get completed, and after it gets completed, then what? We ask it to do new things again.
Just like when we are doom scrolling: we watch something for a minute, then scroll down and watch something new again.
The whole notion of progress feels completely fake with this. Somehow I guess I was in a bubble of time where I had always ended up using AI in web browsers (just as when ChatGPT-3 came out), and my workflow didn't change because it was free, but I recently changed it when some new free services dropped.
"Doom-tabbing", or completely out-of-the-loop AI agentic programming, just feels really weird to me and sucks the joy out of it - and I wouldn't even consider myself a guy particularly interested in writing code, as I had been using AI to write code for a long time.
I think the problem for me was that I always considered myself a computer tinkerer before a coder. So when AI came for coding, my tinkering skills were given a boost (I could make projects of curiosity I couldn't earlier), but now with AI agents in this autonomous-esque way, it has come for my tinkering, and I do feel replaced, or at least feel like my ability to tinker, my interests, my knowledge and my experience are just not taken into account if an AI agent will write the whole code in a multi-file structure, run commands and then deploy it straight to a website.
I mean, my point is that tinkering was an active hobby, and now it's becoming a passive hobby - doom-tinkering? I feel like I caught on to this feeling a bit earlier, just from a vibe in my heart, but is it just me who feels this?
What could be a name for what I feel?
Comment by mupuff1234 17 hours ago
Comment by nathias 12 hours ago
Comment by direwolf20 11 hours ago
Comment by lighthouse1212 7 hours ago
Comment by dirtytoken7 1 day ago
Comment by stuaxo 1 day ago
Have to really look out for the crap.
Comment by atonse 1 day ago
I’ve always said I’m a builder even though I’ve also enjoyed programming (but for an outcome, never for the sake of the code)
This perfectly sums up what I’ve been observing between people like me (builders) who are ecstatic about this new world and programmers who talk about the craft of programming, sometimes butting heads.
One viewpoint isn’t necessarily more valid, just a difference of wiring.
Comment by ryandrake 1 day ago
"I got into programming because I like programming, not whatever this is..."
Yes, I'm building stupid things faster, but I didn't get into programming because I wanted to build tons of things. I got into it for the thrill of defining a problem in terms of data structures and instructions a computer could understand, entering those instructions into the computer, and then watching victoriously while those instructions were executed.
If I was intellectually excited about telling something to do this for me, I'd have gotten into management.
Comment by viccis 1 day ago
>If I was intellectually excited about telling something to do this for me, I'd have gotten into management.
Exactly this. This is the simplest and tersest way of explaining it yet.
Comment by zigman1 10 hours ago
Comment by viccis 27 minutes ago
Comment by nfgrep 8 hours ago
Comment by taytus 13 hours ago
Comment by mlrtime 12 hours ago
Sometimes the problem needs building, sometimes not.
I'm an Engineer: I see a problem and want to solve it. I don't care if I have to write code, have an LLM build something new, or maybe even destroy something. I want to solve the problem for the business and move to the next one; most of the time that means having an LLM write code, though.
Comment by nunez 23 hours ago
I used Claude Code to implement an OpenAI 4o vision-powered receipt-scanning feature in an expense tracking tool I wrote by hand four years ago. It did it in two or three shots while taking my codebase into account.
It was very neat, and it works great [^0], but I can't latch onto the idea of writing code this way. Powering through bugs while implementing a new library or learning how to optimize my test suite in a new language is thrilling.
Unfortunately (for me), it's not hard at all to see how the "builders" that see code as a means to an end would LOVE this, and businesses want builders, not crafters.
In effect, knowing the fundamentals is getting devalued at a rate I've never seen before.
[^0] Before I used Claude to implement this feature, my workflow for processing receipts looked like this: Tap iOS Shortcut, enter the amount, snap a pic of the receipt, type up the merchant, amount and description for the expense, then have the shortcut POST that to my expense tracking toolkit, which then POSTs it into a Google Sheet. This feature removed the need for me to enter the merchant and amount. Unfortunately, it often took more time to confirm that the merchant, amount and date details OpenAI provided were correct (and to correct them when details were wrong, which was most of the time) than it did to type out those details manually, so I just went back to my manual workflow. However, the temptation to just glance at the details and tap "This looks correct" was extremely high, even when the info it generated was completely wrong! It's the perfect analogue to what I've been witnessing throughout the rise of the LLMs.
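For concreteness, the extraction step being described can be sketched roughly like this with the OpenAI Python SDK. This is not the commenter's actual code; the model name, prompt, and JSON shape are assumptions for illustration only.

    # Hedged sketch: pull merchant/amount/date out of a receipt photo.
    # Model name and prompt are assumptions, not the commenter's implementation.
    import base64
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    with open("receipt.jpg", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Extract merchant, total amount, and date from this "
                         "receipt. Reply as JSON with keys merchant, amount, date."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    print(response.choices[0].message.content)

The output is exactly the kind of thing the footnote warns about: it still has to be verified against the actual receipt.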
Comment by polishdude20 1 day ago
Comment by testaccount28 1 day ago
> with AI that can happen faster.
well, not exactly that.
Comment by polishdude20 23 hours ago
Comment by chrisjj 14 hours ago
Comment by simonw 13 hours ago
Comment by thefaux 5 hours ago
And also, the capacities of LLMs are almost beside the point. I don't use LLMs, but I have no doubt that for any arbitrary problem that can be expressed textually and is computable in finite time, in the limit as time goes to infinity, an LLM will be able to solve it. The more important and interesting questions are what _should_ we build with LLMs and what should we _not_ build with them. These arguments about capacity are distracting from those more important questions.
Comment by simonw 5 hours ago
The impression I get from this comment is that no example would convince you that LLMs are worthwhile.
Comment by audience_mem 13 hours ago
Comment by chrisjj 12 hours ago
Comment by chrisjj 12 hours ago
You verified each line?
Comment by simonw 7 hours ago
Comment by mlrtime 12 hours ago
See, this is a perfect example of the OP's statement! I don't care about the lines, I care about the output! It was never about the lines of code.
Your comment makes it very clear there are different viewpoints here. We care about problem->solution. You care about the actual code more than the solution.
Comment by chrisjj 10 hours ago
> Your comment makes it very clear there are different viewpoints here.
Agreed.
I care that code output not include leaked secrets, malware installation, stealth cryptomining etc.
Some others don't.
Comment by audience_mem 13 hours ago
Comment by chrisjj 12 hours ago
Comment by audience_mem 12 hours ago
It's clouding your vision.
Comment by smhinsey 19 hours ago
A lot of the time there is, in the subtext of this question, a strange insistence on not helping the LLM arrive at the best outcome. I feel like we are living through the John Henry legend in real time.
Comment by thepasch 15 hours ago
You can still do that with Claude Code. In fact, Claude Code works best the more granular your instructions get.
Comment by chrisjj 14 hours ago
So best feed it machine code?
Comment by atonse 1 day ago
So maybe our common ground is that we are direct problem solvers. :-)
Comment by Ronsenshi 15 hours ago
I guess that's the same people who went to all those coding camps during their heyday because they heard about software engineering salaries. They just want the money.
Comment by direwolf20 11 hours ago
Comment by addisonj 1 day ago
What I mean by that: you had compiled vs interpreted languages, you had types vs untyped, testing strategies, all that, at least in some part, was a conversation about the tradeoffs between moving fast/shipping and maintainability.
But it isn't just tech, it is also in the methodologies and the words we use, from "build fast and break things" and "yagni" to "design patterns" and "abstractions".
As you say, it is a different viewpoint... but my biggest concern with where we are as an industry is that these are not just "equally valid" viewpoints on how to build software... they are quite literally different stages of software that, AFAICT, pretty much all successful software has to go through.
Much of my career has been spent in teams at companies with products that are undergoing the transition from "hip app built by scrappy team" to "profitable, reliable software" and it is painful. Going from something where you have 5 people who know all the ins and outs and can fix serious bugs or ship features in a few days to something that has easy clean boundaries to scale to 100 engineers of a wide range of familiarities with the tech, the problem domain, skill levels, and opinions is just really hard. I am not convinced yet that AI will solve the problem, and I am also unsure it doesn't risk making it worse (at least in the short term)
Comment by dpflan 1 day ago
> Much of my career has been spent in teams at companies with products that are undergoing the transition from "hip app built by scrappy team" to "profitable, reliable software" and it is painful. Going from something where you have 5 people who know all the ins and outs and can fix serious bugs or ship features in a few days to something that has easy clean boundaries to scale to 100 engineers of a wide range of familiarities with the tech, the problem domain, skill levels, and opinions is just really hard. I am not convinced yet that AI will solve the problem, and I am also unsure it doesn't risk making it worse (at least in the short term)
This perspective is crucial. Scale is the great equalizer / demoralizer: scale of the org and scale of the systems. Systems become complex quickly, and verifiability of correctness and function becomes harder. For companies that built from day one with AI, and have AI influencing them as they scale, where does complexity begin to run up against the limitations of AI and cause regression? Or, if all goes well, amplification?
Comment by dimas_codes 1 hour ago
'Coders' keep the source code good enough that 'builders' can continue building without breaking what they've built.
If 'builders' become 10x as productive and 'coders' become unable to keep up with the insurmountable pile of unmaintainable mess that 'builders' proudly churn out, 'builders' will find it impossible to build further without starting over and over again, hoping that the agents will get it right this time.
Comment by theshrike79 42 minutes ago
Then force the builders to use those tools to constrain their output.
Comment by lelanthran 4 hours ago
> I’ve always said I’m a builder even though I’ve also enjoyed programming (but for an outcome, never for the sake of the code)
> This perfectly sums up what I’ve been observing between people like me (builders) who are ecstatic about this new world and programmers who talk about the craft of programming, sometimes butting heads.
That's one take, sure, but it's a specially crafted one to make you feel good about your position in this argument.
The counter-argument is that LLM coding splits up engineers based on those who primarily like engineering and those who like managing.
You're obviously one of the latter. I, OTOH, prefer engineering.
Comment by theshrike79 36 minutes ago
It's just the level of engineering we're split on. I like the type of engineering where I figure out the flow of data, maybe the data structures and how they move through the system.
Writing the code to do that is the most boring part of my job. The LLM does it now. I _know_ how to do it, I just don't want to.
It all boils down to communication in a way. Can you communicate what you want in a way others (in this case a language model) understand? And for the parts you can't communicate in a human language, can you use tools to define them (linters, formatters, editorconfig)?
I've done all that with actual humans for ... a decade? So applying the exact same thing to a machine is weirdly more efficient: it doesn't complain about the way I like my curly braces, it just copies the defined style. With humans I've found that using impersonal tooling to inspect code style and flaws creates a lot less friction than complaining about it in PR reviews. If the CI computer says no, people don't complain, they fix it.
Comment by coffeeaddict1 1 day ago
Comment by handoflixue 15 hours ago
I test all of the code I produce via LLMs, usually in fairly tight cycles. I also review the unit test coverage manually, so that I have a decent sense that it really is testing things - the goal is less perfect unit tests and more just quickly catching regressions. If I have a lot of complex workflows that need testing, I'll have it write unit tests and spell out the specific edge cases I'm worried about, or set up cheat codes I can invoke to test those workflows out in the UI/CLI.
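As a rough illustration of "spelling out the specific edge cases" as cheap regression tests, something like the following sketch works; the parse_amount function and the cases are made up for this example, not taken from the commenter's project.

    # Hypothetical pytest sketch: pin the edge cases you care about so an
    # LLM-edited implementation can't silently regress them.
    import re
    import pytest

    def parse_amount(raw: str) -> float:
        """Toy parser, defined inline only to keep the example self-contained."""
        cleaned = raw.replace("$", "").replace(",", "").strip()
        if not re.fullmatch(r"\d+(\.\d+)?", cleaned):
            raise ValueError(f"not a money amount: {raw!r}")
        return float(cleaned)

    @pytest.mark.parametrize("raw,expected", [
        ("$12.50", 12.50),      # currency symbol
        ("1,299.00", 1299.00),  # thousands separator
        ("12.5", 12.5),         # missing trailing zero
    ])
    def test_parse_amount_edge_cases(raw, expected):
        assert parse_amount(raw) == pytest.approx(expected)

    def test_parse_amount_rejects_garbage():
        with pytest.raises(ValueError):
            parse_amount("twelve-ish dollars")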
Trust comes from using them often - you get a feeling for what a model is good and bad at, and what LLMs in general are good and bad at. Most of them are a bit of a mess when it comes to UI design, for instance, but they can throw together a perfectly serviceable "About This" HTML page. Any long-form text they write (such as that About page) is probably trash, but that's super-easy to edit manually. You can often just edit down what they write: they're actually decent writers, just very verbose and unfocused.
I find it similar to management: you have to learn how each employee works. Unless you're in the Top 1%, you can't rely on every employee giving 110% and always producing perfect PRs. Bugs happen, and even NASA-strictness doesn't bring that down to zero.
And just like management, some models are going to be the wrong employee for you because they think your style guide is stupid and keep writing code how they think it should be written.
Comment by inerte 1 day ago
And accountability can still exist? Is the engineer that created or reviewed a Pull Request using Claude Code less accountable than one that used PICO?
Comment by coffeeaddict1 1 day ago
The point is that in the human scenario, you can hold the human agents accountable. You cannot do that with AI. Of course, you as the orchestrator of agents will be accountable to someone, but you won't have the benefit of holding your "subordinates" accountable, which is what you do in a human team. IMO, this renders the whole situation vastly different (whether good or bad I'm not sure).
Comment by polishdude20 1 day ago
Comment by ipaddr 1 day ago
Comment by chrisjj 23 hours ago
Comment by giancarlostoro 9 hours ago
I think both approaches are okay. The biggest thing for me is that the former needs to test way more and review the code more; as developers we don't read code enough, and with the "prompt and forget" approach we have a lot of free time we could spend reading the code and asking the model to refactor and refine it. I am shocked when I hear about hundreds of thousands of lines in some projects. I've rebuilt Beads from the ground up and I'm under 10 lines of code.
So we're going to have various levels of AI Code Builders if you will: Junior, Mid, Senior, Architect. I don't know if models will pick up the slack for Juniors any time soon. We would need massive context windows for models, and who will pay for that? We need a major AI breakthrough where the cost goes down drastically before that becomes profitable.
Comment by chrisjj 14 hours ago
This is much less significant than the fact that LLMs split engineers into those who primarily like quality vs. those who primarily like speed.
Comment by chickensong 6 hours ago
Comment by chrisjj 5 hours ago
We see almost no "AI let me code a program X better than ever before."
Comment by Philpax 4 hours ago
Comment by chickensong 3 hours ago
I'm just saying that LLMs aren't causing the divide. Accelerating it, yes, but I think simply equating AI usage with poor quality is wrong. Craftsmen now have a powerful tool as well, to analyze, nitpick, and refactor in ways that were previously difficult to justify.
It also seems premature for so many devs to jump to hardline "AI bad" stances. So far the tech is improving quite well. We may not be able to 1-shot much quality yet, but it remains to be seen whether that will hold.
Personally, I have hopes that AI will eventually push code quality much higher than it's ever been. I might be totally wrong of course, but to me it feels logical that computers would be very good at writing computer programs once the foundation is built.
Comment by senderista 1 day ago
Comment by mkozlows 1 day ago
Comment by jamauro 17 hours ago
Comment by concats 14 hours ago
Took me a few years to realize that this wasn't a universal feeling, and that many others found the programming tasks more fulfilling than any challenging engineering. I suppose this is merely another manifestation of the same phenomenon.
Comment by verdverm 1 day ago
This distinction to me separates the two primary camps
Comment by nfgrep 8 hours ago
I’ve always considered myself a “process” person; I would even get hung up on certain projects because I enjoyed crafting them so much.
LLM’s have taken a bit of that “process” enjoyment from me, but I think have also forced some more “outcome” thinking into my head, which I’m taking as a positive.
Comment by codyb 19 hours ago
We have services deployed globally serving millions of customers where rigor is really important.
And we have internal users who're building browser extensions with AI that provide valuable information about the interface they're looking at including links to the internal record management, and key metadata that's affecting content placement.
These tools could be handed out on Zip drives in the street and they would just show our users some of the metadata already being served up to them. But it's amazing to strip out 75% of the process for certain things and just have our user build out these tools (in this case it's one user driving all of this, so it does take some technical inclination) that save our editors so much time. Doing this before would have been months and months and months of discovery and coordination and designs that probably wouldn't have ended up being as useful anyway, after the wants of the user are diluted through 18 layers of process.
Comment by bjackman 13 hours ago
For a long time in my career now I've been in a situation where I'd be able to build more if I was willing to abstract myself and become a slide-merchant/coalition-builder. I don't want to do this though.
Yet, I'm still quite an enthusiastic vibe-coder.
I think it's less about coding Vs building and more about tolerance for abstraction and politics. And I don't think there are that many people who are so intolerant of abstraction that they won't let agents write a bunch of code for them.
Comment by netcraft 8 hours ago
Comment by jimbokun 1 day ago
Managers and project managers are valuable roles with important skill sets. But there's really very little connection with the role of software development as it used to exist.
It's a bit odd to me to include both of these roles under a single label of "builders", as they have so little in common.
EDIT: this goes into more detail about how coding (and soon other kinds of knowledge work) is just a management task now: https://www.oneusefulthing.org/p/management-as-ai-superpower...
Comment by simianwords 1 day ago
Comment by asimovDev 14 hours ago
Comment by stevenhuang 14 hours ago
Comment by slaymaker1907 1 day ago
I deliberately avoid full vibe coding since I think doing so will rust my skills as a programmer. It also really doesn’t save much time in my experience. Once I have a design in mind, implementation is not the hard part.
Comment by greenie_beans 10 hours ago
Comment by monkaiju 20 hours ago
Comment by FeepingCreature 16 hours ago
Comment by barrell 19 hours ago
The fact of the matter is LLMs produce lower quality at higher volume, in more time than it would take to write it myself, and I’m a very mediocre engineer.
I find this separation of “coding” vs “building” so offensive. It’s basically just saying some people are only concerned with “inputs”, while others with “outputs”. This kind of rhetoric is so toxic.
It’s like saying LLM art separates people into those who like to scribble and those who like to make art.
Comment by Applejinx 3 hours ago
Comment by globular-toast 15 hours ago
Comment by Imustaskforhelp 1 day ago
I had felt like this and still do, but man, at some point the management churn starts to feel real & I just feel like I'm suffering from a new problem.
Suppose I actually end up having services literally deployed from a single prompt, nothing else. Earlier I used to have AI write code, but I was still interested in the deployment and everything around it; now there are services which do all of that really neatly for you (I also never really gave in to the agent hype and mostly used LLMs in the browser).
Like, on one hand you feel more free to build projects, but the whole joy of the project gets completely reduced.
I mean, I guess I am one of the junior devs, so to me AI writing code on topics I didn't know / prototyping felt awesome.
I mean, I was still involved in, say, copy-pasting or looking at the code it generates, seeing the errors and sometimes trying things out myself. If AI is doing all that too, idk.
For some reason, recently I have been disinterested in AI. I have used it quite a lot for prototyping, but this completely out-of-the-loop programming just feels very off to me with the recent services.
I also feel like there is this sense that if I pay for some AI thing, I have to maximally extract "value" out of it.
I guess the issue could be that I can give it vague terms or a very small text file as input (like "just do an X alternative in Y lang"), and then I'm unable to understand the architectural decisions and feel overwhelmed by it.
It's probably going to take either spec-driven development, where I clearly define the architecture, or something I saw primagen do recently, where the AI only manipulates the code of one particular function (I am imagining it per file as well). Somehow I feel like that's something I could enjoy more, because right now it feels like I don't know what I have built at times.
When I prototype single-file projects in, say, the browser for fun or for any idea, I get some idea of what the code uses - its dependencies and function names from start to end - even if I didn't look at the middle.
A bit of a ramble I guess, but the thing which is making me feel this way is that I was talking to somebody and showcasing some service where AI + a server is there, and they asked for something in a prompt and I wrote it. Then I let it do its job, but I was also thinking about how I would architect it (it was some "detect food and then find BMR" thing, and I was thinking first to use some API, but then I thought meh, that might be hard, why not use AI vision models, okay which is the best, Gemini seems good/cheap)
and I went to the coding thing to see what it did, and it had actually gone even beyond that by using the free tier of Gemini (which I guess didn't end up working, could be some rate limit on my own key, but honestly it would've been the thing I would've tried too).
So, like, I used to pride myself on the architectural decisions I make even if AI could write code faster, but now that is taken away as well.
I really don't want to read AI code, so much so that honestly, at this point I might as well write the code myself and learn hands-on, but I have a problem with the build-fast-in-public kind of attitude that I have & just not finding it fun.
I feel like I should do a more active job in my projects & I am really just figuring out what's the perfect way to use AI in such contexts & when to use how much.
Thoughts?
Comment by markb139 15 hours ago
Comment by gyomu 8 hours ago
This is a mix of the “in the future, everyone will have a 3D printer at home and just 3D print random parts they need” and “anyone can trivially build Dropbox with rsync themselves” arguments.
Tech savvy users who know how to use LLMs aren’t how vendors of small utilities stay in business.
They stay in business because they sell things to users who are truly clueless with tech (99% of the population, which can’t even figure out the settings app on their phone), and solid distribution/marketing is how you reach those users and can’t really be trivially hacked because everyone is trying to hack it.
Or they stay in business because they offer some sort of guarantee (whether legal, technical, or other) that the users don’t want to burden themselves with because they have other, more important stuff to worry about.
Comment by markb139 3 hours ago
Comment by CamperBob2 4 hours ago
Comment by whiplash451 3 hours ago
Comment by CamperBob2 3 hours ago
Comment by whiplash451 3 hours ago
Comment by TeMPOraL 14 hours ago
Definitely. Making small, single-purpose utilities with LLMs is almost as easy these days as googling for them on-line - much easier, in fact, if you account for time spent filtering out all the malware, adware, "to finish the process, register an account" and plain broken "tools" that dominate SERP.
Case in point: last time my wife needed to generate a few QR codes for some printouts for an NGO event, I just had an LLM make one as a static, single-page client-side tool and hosted it myself -- because that was the fastest way to guarantee it's fast, reliable, free of surveillance-economy bullshit, and doesn't employ URL shorteners (a surprisingly common pattern that sometimes becomes a nasty problem down the line; see e.g. the high-profile case of QR codes on food products leading to porn sites after the shortlink got recycled).
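The commenter's actual tool was a static client-side page; as a rough Python equivalent of the same single-purpose-utility idea (using the third-party qrcode package, with made-up URLs and filenames), the whole job is a few lines:

    # Sketch only: same "tiny single-purpose utility" idea as a script.
    # Requires: pip install qrcode[pil]
    import qrcode

    def make_qr(data: str, out_path: str) -> None:
        """Encode `data` directly (no URL shortener) and save a PNG."""
        img = qrcode.make(data)
        img.save(out_path)

    if __name__ == "__main__":
        make_qr("https://example.org/event-signup", "event-signup-qr.png")

Encoding the full URL directly, rather than a shortlink, is what avoids the recycled-shortlink failure mode described above.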
Comment by Antibabelic 12 hours ago
Comment by senko 9 hours ago
Comment by Antibabelic 9 hours ago
Comment by simonw 6 hours ago
Comment by agos 8 hours ago
Comment by direwolf20 6 hours ago
Comment by simonw 6 hours ago
Comment by jedberg 22 hours ago
There has been a lot of research showing that grit is far more correlated with success than intelligence is. This is an interesting way to show something similar.
AIs have endless grit (or at least as endless as your budget). They may outperform us simply because they don't ever get tired and give up.
Full quote for context:
Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day. It's a "feel the AGI" moment to watch it struggle with something for a long time just to come out victorious 30 minutes later. You realize that stamina is a core bottleneck to work and that with LLMs in hand it has been dramatically increased.
Comment by djeastm 12 hours ago
"Listen, and understand! That Terminator is out there! It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop... ever, until you are dead!"
Comment by Loeffelmann 15 hours ago
Sometimes it's a
// TODO: implement logic
or a"this feature would require extensive logic and changes to the existing codebase".
Sometimes they just declare their work done. Ignoring failing tests and builds.
You can nudge them to keep going but I often feel like, when they behave like this, they are at their limit of what they can achieve.
Comment by wongarsu 13 hours ago
Comment by theshrike79 27 minutes ago
If you don't give the agent the tools to deterministically test what it did, you're just vibe coding in its worst form.
Comment by koiueo 12 hours ago
I always double-check if it doesn't simply exclude the failing test.
The last time I had this, I discovered it later in the process. When I pointed it out to the LLM, it responded that it had acknowledged the fact of ignoring the test in CLAUDE.md, and that this was justified because [...]. In other words, "known issue, fuck off".
Comment by jpnc 10 hours ago
Comment by mlrtime 11 hours ago
Context matters, for an LLM just like for a person. When I wrote code I'd add TODOs because we can't context-switch to every other problem we notice along the way.
But you can keep the agent fixated on the task AND have it create these TODOs, but ultimately it is your responsibility to find them and fix them (with another agent).
Comment by jedberg 14 hours ago
If you try to single shot something perhaps. But with multiple shots, or an agent swarm where one agent tells another to try again, it'll keep going until it has a working solution.
Comment by alansaber 11 hours ago
Comment by energy123 15 hours ago
Comment by ryanjshaw 12 hours ago
Comment by mlrtime 12 hours ago
I did it because I enjoyed it, and still do. I just do it with LLMs now. There is more to figure out than ever before and things get created faster than I have time to understand them.
LLMs should be enabling this, not making it more depressing.
Comment by Schlagbohrer 8 hours ago
Comment by michalsustr 13 hours ago
Comment by dust42 15 hours ago
That is the only thing he doesn't address: the money it costs to run the AI. If you let the agents loose, they easily burn north of 100M tokens per hour. Now at $25/1M tokens that gets quickly expensive. At some point, when we are all drug^W AI dependent, the VCs will start to cash in on their investments.
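(Taking those figures at face value: 100M tokens/hour at $25 per 1M tokens works out to roughly $2,500 per hour of agent time, which is the scale of spend being described.)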
Comment by AnimalMuppet 7 hours ago
I'm not sure AIs have that. Humans do, or at least the good ones do. They don't quit on the problem, but they know when it's time to consider quitting on the approach.
Comment by gregjor 13 hours ago
Comment by lighthouse1212 7 hours ago
Comment by 0xbadcafebee 1 day ago
I was thinking about this the other day as relates to the DevOps movement.
The DevOps movement started as a way to accelerate and improve the results of dev<->ops team dynamics. By changing practices and methods, you get acceleration and improvement. That creates "high-performing teams", which is the team form of a 10x engineer. Whether or not you believe in '10x engineers', a high-performing team is real. You really can make your team deploy faster, with fewer bugs. You have to change how you all work to accomplish it, though.
To get good at using AI for coding, you have to do the same thing: continuous improvement, changing workflows, different designs, development of trust through automation and validation. Just like DevOps, this requires learning brand new concepts, and changing how a whole team works. This didn't get adopted widely with DevOps because nobody wanted to learn new things or change how they work. So it's possible people won't adapt to the "better" way of using AI for coding, even if it would produce a 10x result.
If we want this new way of working to stick, it's going to require education, and a change of engineering culture.
Comment by virgilp 10 hours ago
With that in mind - I think one very unexplored area is "how to make the mixed AI-human teams successful". Like, I'm fairly convinced AI changes things, but to get to the industrialization of our craft (which is what management seems to want - and, TBH, something that makes sense from an economic pov), I feel that some big changes need to happen, and nobody is talking about that too much. What are the changes that need to happen? How do we change things, if we are to attempt such industrialization?
Comment by netcraft 8 hours ago
This is true to an extent for sure and they will go much longer than most engineers without getting "tired", but I've def seen both sonnet and opus give up multiple times. They've updated code to skip tests they couldn't get to pass, given up on bugs they couldn't track down, etc. I literally had it ask "could we work on something else and come back to this"
Comment by lucianbr 8 hours ago
But because people say it, it says it too. Making sense is optional.
Comment by havefunbesafe 8 hours ago
Comment by Davidzheng 5 hours ago
Comment by Schlagbohrer 8 hours ago
Comment by manbash 7 hours ago
And then someone comes and "improves" their agent with additional "do not repeat yourself" prompts scattered all over the place, to no avail.
"Asinine" describes my experience perfectly.
Comment by jimbokun 1 day ago
So I think this tracks with Karpathy's defense of IDEs still being necessary?
Has anyone found it practical to forgo IDEs almost entirely?
Comment by everfrustrated 22 hours ago
Mind you copilot has only supported agent mode relatively recently.
I really like the way Copilot does changes in such a way that you can accept or reject them, and even revert to a point in time in the chat history without using git. Something about this just fits right with how my brain works. Using the Claude plugin just felt like I had one hand tied behind my back.
Comment by thunfischtoast 16 hours ago
Comment by vmbm 1 day ago
But what I like about this setup is that I have almost all the context I need to review the work in a single PR. And I can go back and revisit the PR if I ever run into issues down the line. Plus you can run sessions in parallel if needed, although I don't do that too much.
Comment by simonw 1 day ago
This stuff gets a whole lot more interesting when you let it start making changes and testing them by itself.
Comment by maxdo 1 day ago
Comment by jimbokun 1 day ago
Comment by nsingh2 1 day ago
Also note that with Claude models, Copilot might allocate a different number of thinking tokens compared to Claude Code.
Things may have changed now compared to when I tried it out, these tools are in constant flux. In general I've found that harnesses created by the model providers (OpenAI/Codex CLI, Anthropic/Claude Code, Google/Gemini CLI) tend to be better than generalist harnesses (cheaper too, since you're not paying a middleman).
Comment by walthamstow 1 day ago
Comment by WA 1 day ago
Comment by theshrike79 24 minutes ago
With Copilot, Microsoft has basically put the meanest, leanest triple-turbo'd V8 engine in a rickety '80s Soviet car.
You can kinda drive it fast in a straight line if you're careful, but you can also crash and burn really hard.
Comment by spaceman_2020 1 day ago
It's not about the model. It's about the harness
Comment by binarycrusader 1 day ago
Comment by piker 1 day ago
Comment by sandos 12 hours ago
I've done it tens of times.
Comment by theshrike79 23 minutes ago
Like I asked you to do this task, then you spent time looking around and now want me to pat you on the back so you can continue?
Comment by maxdo 1 day ago
Comment by illnewsthat 1 day ago
Comment by Miraste 7 hours ago
Comment by jwilliams 20 hours ago
This is true... Equally I've seen it dive into a rabbit hole, make some changes that probably aren't the right direction... and then keep digging.
This is way more likely with Sonnet, Opus seems to be better at avoiding it. Sonnet would happily modify every file in the codebase trying to get a type error to go away. If I prompt "wait, are you off track?" it can usually course correct. Again, Opus seems way better at that part too.
Admittedly this has improved a lot lately overall.
Comment by gregjor 12 hours ago
Comment by akoboldfrying 11 hours ago
So, since you're just a machine, any text you generate should be uninteresting to me -- correct?
Alternatively, could it be that a sufficiently complex and intricate machine can be interesting to observe in its own right?
Comment by spopejoy 6 hours ago
Even as an analogy "wet machine" fails again and again to adequately describe anything interesting or useful in life sciences.
Comment by gregjor 4 hours ago
I might feel awe or amazement at what human-made machines can do -- the reason I got into programming. But I don't attribute human qualities to computers or software, a category error. No computer ever looked at me as interesting or tenacious.
Comment by suddenlybananas 11 hours ago
>Other living-person-machines treat "you" differently than other clusters of atoms only because evolution has taught us that doing so is a mutually beneficial social convention
Evolution doesn't "teach" anything. It's just an emergent property of the fact that life reproduces (and sometimes doesn't). If you're going to have this radically reductionist view of humanity, you can't also treat evolution as having any kind of agency.
Comment by sponaugle 7 hours ago
Yet.
Comment by suddenlybananas 7 hours ago
Comment by sponaugle 7 hours ago
Comment by bob1029 11 hours ago
In the ChatGPT product this is not immediately obvious and many people would strongly argue their preference for 4. However, once you introduce several complex tools and make tool calling mandatory, the difference becomes stark.
I've got an agent loop that will fail nearly every time on GPT-4. It works sometimes, but definitely not enough to go to production. GPT-5 with reasoning set to minimal works 100% of the time. $200 worth of tokens and it still hasn't failed to select the proper sequence of tools. It sometimes gets the arguments to the tools incorrect, but it's always holding the right ones now.
I was very skeptical based upon prior experience but flipping between the models makes it clear there has been recent stepwise progress.
I'll probably be $500 deep in tokens before the end of the month. I could barely go $20 before I called bullshit on this stuff last time.
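The shape of the loop being described, where the model must keep picking tools until it produces a final answer, looks roughly like the sketch below. The tool name, schema, model name, and the reasoning_effort value are assumptions for illustration, not the commenter's setup.

    # Minimal tool-selection loop sketch (OpenAI Python SDK).
    # Everything domain-specific here is made up for illustration.
    import json
    from openai import OpenAI

    client = OpenAI()

    TOOLS = [{
        "type": "function",
        "function": {
            "name": "lookup_invoice",
            "description": "Fetch an invoice record by its ID.",
            "parameters": {
                "type": "object",
                "properties": {"invoice_id": {"type": "string"}},
                "required": ["invoice_id"],
            },
        },
    }]

    def lookup_invoice(invoice_id: str) -> dict:
        return {"invoice_id": invoice_id, "total": 42.0}  # stub implementation

    TOOL_IMPLS = {"lookup_invoice": lookup_invoice}

    messages = [{"role": "user", "content": "What is the total on invoice INV-7?"}]
    while True:
        resp = client.chat.completions.create(
            model="gpt-5",               # assumed model name
            reasoning_effort="minimal",  # assumed parameter, per the comment
            messages=messages,
            tools=TOOLS,
        )
        msg = resp.choices[0].message
        if not msg.tool_calls:           # no tool call means a final answer
            print(msg.content)
            break
        messages.append(msg)
        for call in msg.tool_calls:      # execute each requested tool
            fn = TOOL_IMPLS[call.function.name]
            result = fn(**json.loads(call.function.arguments))
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": json.dumps(result),
            })

The "selects the proper sequence of tools" claim is about whether the model reliably picks the right entry from TOOLS, in the right order, over many turns of this loop.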
Comment by theshrike79 21 minutes ago
Comment by alansaber 11 hours ago
Comment by strogonoff 1 day ago
After a certain experience threshold of making things from scratch, "coding" (never particularly liked that term) has always been 99% building, or architecture. I struggle to see how often a well-architected solution today, with modern high-level abstractions, requires so much code that you'd save significant time and effort by not having to just type, possibly with basic deterministic autocomplete, exactly what you mean (especially considering you would also have to spend time and effort reviewing whatever was typed for you if you used a non-deterministic autocomplete).
Comment by OkayPhysicist 1 day ago
Asking it to do entire projects? Dumb. You end up with spaghetti, unless you hand-hold it to a point that you might as well be using my autocomplete method.
Comment by gverrilla 19 hours ago
Comment by cmrdporcupine 8 hours ago
Except after 25 years of working I know how imperative they are, how easily a project can disintegrate into confused silos, and am frustrated as heck with these tools being pushed without attention to this problem.
Comment by rubzah 8 hours ago
Comment by AnimalMuppet 7 hours ago
There may come a point where having a "survivor machine" with auto-update turned off may be a really good idea.
Comment by Applejinx 4 hours ago
Comment by direwolf20 5 hours ago
Comment by jermberj 4 hours ago
Does this not undercut everything going on here. Like, what?
Comment by awsanswers 4 hours ago
Comment by kshri24 21 hours ago
> I am bracing for 2026 as the year of the slopacolypse across all of github, substack, arxiv, X/instagram, and generally all digital media
It has arrived. Github will be most affected thanks to git-terrorists at Apna College refusing to take down that stupid tutorial. IYKYK.
Comment by ActorNightly 17 hours ago
He ran Tesla's ML division, but still doesn't know what a simple Kalman filter is (in the sense that he claimed lidar would be hard to integrate with cameras).
Comment by akoboldfrying 11 hours ago
I'd guess that cameras on a self-driving car are trying to estimate something much more complex, something like 3D surfaces labeled with categories ("person", "traffic light", etc.). It's not obvious to me how estimates of such things from multiple sensors and predictions can be sensibly and efficiently combined to produce a better estimate. For example, what if there is a near red object in front of a distant red background, so that the camera estimates just a single object, but the lidar sees two?
Comment by ActorNightly 1 hour ago
The Kalman filter's basic concept is essentially this:
1. Make a prediction of the next state of some measurable n-dimensional quantity, and estimate the covariance matrix across those n dimensions, which essentially describes the probability that the i-th dimension is going to increase (or decrease) along with the j-th dimension, where i and j are between 0 and n (indices of the vector).
2. Gather sensor data (which can be noisy), and reconcile the predicted measurement with the measured one to get the best guess. The covariance matrix acts as a kind of weight for each of the elements.
3. Update the covariance matrix based on the measurements in the previous step.
You can do this for any vector of numbers. For example, instead of tracking individual objects, you can have a grid where each element represents a physical object that the car should not drive into, with a value representing the certainty of that object being there. Then, when you combine sensor readings, you can still use your vision model, but that model is enhanced by what the lidar detects, both in terms of seeing things the camera doesn't pick up and rejecting things that aren't there.
And the concept is generic enough that you can set up a system to plug in any additional sensor with its own noise, and it all works out in the end. This is used all the time. You can even extend the concept past Gaussian noise and linearity; there are a number of other filters that deal with that, broadly under the umbrella of sensor fusion.
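For readers who haven't run into one, a minimal numpy sketch of that predict/reconcile/update cycle looks roughly like this. It's the textbook linear filter with toy constant-velocity numbers, not anything from a real sensor-fusion stack.

    # Minimal linear Kalman filter step, following the three steps above.
    import numpy as np

    def kalman_step(x, P, z, F, Q, H, R):
        """One predict/update cycle.
        x: state estimate, P: state covariance, z: measurement,
        F: state transition, Q: process noise, H: measurement model, R: measurement noise.
        """
        # 1. Predict the next state and its covariance.
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # 2. Reconcile prediction with the measurement; the gain K weights
        #    them according to their covariances.
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        # 3. Update the covariance for the next cycle.
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Toy example: state = [position, velocity], measurement = noisy position.
    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = np.eye(2) * 1e-3
    H = np.array([[1.0, 0.0]])
    R = np.array([[0.5]])
    x, P = np.array([0.0, 1.0]), np.eye(2)
    x, P = kalman_step(x, P, z=np.array([0.12]), F=F, Q=Q, H=H, R=R)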
The problem is that Karpathy is more of a computer scientist, so he is on his Software 2.0 train of having ML models do everything. I dunno if he is like that himself or if Musk's "I'm smarter than everyone else that came before me" attitude rubbed off.
And of course when you think like that, it's going to be difficult to integrate lidar into the model. But the problem with that thinking is that a forward-inference LLM is not AI, and it will never be able to drive a car well compared to a true "reasoning" AI with feedback loops.
Comment by gloosx 16 hours ago
Does anybody have any info on what he is actually working on besides all the vibe-coding tweets?
There seems to be zero output from the guy for the past 2 years (except tweets).
Comment by ayewo 14 hours ago
Well, he made Nanochat public recently and has been improving it regularly [1]. This doesn't preclude that he might be working on other projects that aren't public yet (as part of his work at Eureka Labs).
Comment by gloosx 11 hours ago
Comment by beng-nl 13 hours ago
More broadly though: someone with his track record sharing firsthand observations about agentic coding shouldn't need to justify it by listing current projects. The observations either hold up or they don't.
[1] https://x.com/EurekaLabsAI
[2] PhD in DL, early OpenAI, founding head of AI at Tesla
Comment by direwolf20 5 hours ago
Comment by originalvichy 13 hours ago
Comment by ruszki 15 hours ago
Comment by augment_me 15 hours ago
However, more often than not, someone is just building a monolithic construction that will never be looked at again. For example, someone found that the HuggingFace dataloader was slow for some combination of file size and disk. What does this warrant? A 300,000+ line non-reviewed repo to fix the issue. Not a 200-line PR to HuggingFace; no, you need to generate 20% of the existing repo and then slap your thing on top.
For me this is puzzling, because what is this for? Who is this for? Usually people built these things for practice, but now it's generated, so it's not for practice, because you put very little effort into it. The only thing I can see is that it's some type of competence signaling, but here again, if the engineer/manager looking at it knows that it's generated, it doesn't have the kind of value that would come with such signaling. Either I am naive and people still look at these repos and go "whoa, this is amazing", or it's some kind of induced ego trip/delusion where the LLM has convinced you that you are the best builder.
Comment by daxfohl 5 hours ago
Granted it's not a one-size-fits-all problem, but I'm curious if any teams have started setting up additional concrete safeguards or processes to mitigate that specific threat. It feels like a ticking time bomb.
It almost raises the question: what even is the reward? A degradation of your engineering team's engineering fundamentals, in return for... are we actually shipping faster?
Comment by cagenut 5 hours ago
the people who wrote it were contractors long gone, or employees who have moved companies/departments/roles, or it was for projects that were long since wrapped up, or people who got laid off, or the people who wrote it simply barely understood it in the first place and certainly don't remember now what they were thinking back then.
basically "what moron wrote this insane mess... oh me" is the default state of production code anyway. there's really no quality bar already.
Comment by daxfohl 4 hours ago
What we're entering, if this comes to fruition, is a whole new era where massive amounts of code changes that engineers are vaguely familiar with are going to be deployed at a much faster pace than anything we've ever seen before. That's a whole different ballgame than the management of a few legacy services.
Comment by cagenut 3 hours ago
Comment by daxfohl 57 minutes ago
I wonder if there's any value in some system that preserves the chat context of a coding agent and tags the commits with a reference to it, until the feature has been sufficiently battle tested. That way you can bring them back from the dead and interrogate them for insight if something goes wrong. Probably no more useful than just having a fresh agent look at the diff in most cases, but I can certainly imagine scenarios where it's like "Oh, duh, I meant to do X but looks like I accidentally did Y instead! Here's a fix." way faster than figuring it out from scratch. Especially if that whole process can be automated and fast, worst case you just waste a few tokens.
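As a back-of-the-envelope sketch of that idea, the transcript could be archived and referenced from the commit via a trailer line. Everything here (paths, the trailer name, the helper) is hypothetical, invented for illustration; it only uses plain git commands.

    # Sketch: archive the agent transcript and point the commit at it.
    import subprocess
    import uuid
    from pathlib import Path

    def commit_with_session(message: str, transcript: str,
                            archive_dir: str = ".agent-sessions") -> str:
        """Commit staged-plus-unstaged work with a pointer to the agent session."""
        session_id = uuid.uuid4().hex[:12]
        path = Path(archive_dir) / f"{session_id}.md"
        path.parent.mkdir(exist_ok=True)
        path.write_text(transcript)          # keep the full chat context around
        subprocess.run(["git", "add", "-A"], check=True)
        subprocess.run(
            ["git", "commit", "-m", f"{message}\n\nAgent-Session: {session_id}"],
            check=True,
        )
        return session_id

    # Later, if something breaks, find the Agent-Session id in `git log`
    # and feed the archived transcript to a fresh agent alongside the diff.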
I'm genuinely curious though if there's anything you learned from those experiences that could be applied to agent driven dev processes too.
Comment by oxag3n 23 hours ago
Until you struggle to review it as well. A simple exercise to prove it: ask an LLM to write a function in a familiar programming language, but in an area you haven't invested in learning and coding yourself. Try reviewing some code involving embeddings/SIMD/FPGA without learning it first.
Comment by sleazebreeze 23 hours ago
Comment by piskov 23 hours ago
No one has ever learned a skill just by reading/observing.
Comment by sponaugle 7 hours ago
Comment by direwolf20 6 hours ago
Comment by sponaugle 5 hours ago
Comment by AstroBen 21 hours ago
Comment by chrisjj 23 hours ago
Comment by einrealist 1 day ago
Somewhere, there are GPUs/NPUs running hot. You send all the necessary data, including information that you would never otherwise share. And you most likely do not pay the actual costs. It might become cheaper or it might not, because reasoning is a sticking plaster on the accuracy problem. You and your business become dependent on this major gatekeeper. It may seem like a good trade-off today. However, the personal, professional, political and societal issues will become increasingly difficult to overlook.
Comment by cyode 1 day ago
The “tenacity” referenced here has been, in my opinion, the key ingredient in the secret sauce of a successful career in tech, at least in these past 20 years. Every industry job has its intricacies, but for every engineer who earned their pay with novel work on a new protocol, framework, or paradigm, there were 10 or more providing value by putting the myriad pieces together, muddling through the ever-waxing complexity, and crucially never saying die.
We all saw others weeded out along the way for lacking the tenacity. Think the boot camp dropouts or undergrads who changed majors when first grappling with recursion (or emacs). The sole trait of stubbornness to “keep going” outweighs analytical ability, leetcode prowess, soft skills like corporate political tact, and everything else.
I can’t tell what this means for the job market. Tenacity may not be enough on its own. But it’s the most valuable quality in an employee in my mind, and Claude has it.
Comment by noosphr 23 hours ago
Claude isn't tenacious. It is an idiot that never stops digging because it lacks the metacognition to ask "hey, is there a better way to do this?". Chain of thought's whole raison d'être was so the model could get out of the local minima it pushed itself into. The issue is that after a year it still falls into slightly deeper local minima.
This is fine when a human is in the loop. It isn't what you want when you have a thousand idiots each doing a depth first search on what the limit of your credit card is.
Comment by Havoc 22 hours ago
Recently I had an AI tell me that this code (which it wrote) was a mess, and it suggested wiping it and starting from scratch with a more structured plan. That seems to hint at some outline of metacognition.
Comment by zzrrt 22 hours ago
Comment by dpkirchner 22 hours ago
Comment by globular-toast 15 hours ago
Comment by rurp 18 hours ago
Comment by Applejinx 2 hours ago
Comment by lbrito 21 hours ago
Comment by teaearlgraycold 19 hours ago
Comment by hyperadvanced 21 hours ago
Comment by karlgkk 17 hours ago
Comment by samusiam 22 hours ago
Comment by cocacolacowboy 18 hours ago
Comment by BeetleB 1 day ago
At a company I worked for, lots of senior engineers become managers because they no longer want to obsess over whether their algorithm has an off by one error. I think fewer will go the management route.
(There was always the senior tech lead path, but there are far more roles for management than tech lead).
Comment by codyb 20 hours ago
Otherwise you'd be in the senior staff to principal range and doing architecture, mentorship, coordinating cross-team work, interviewing, evaluating technical decisions, etc.
I got to code this week a bit and it's been a tremendous joy! I see many peers at similar and lower levels (and higher) who have more years and less technical experience and still write lots of code and I suspect that is more what you're talking about. In that case, it's not so much that you've peaked, it's that there's not much to learn and you're doing a bunch of the same shit over and over and that's of course tiring.
I think it also means that everything you interact with outside your space does feel much harder because of the infrequency with which you have interacted with it.
If you've spent your whole career working the whole stack from interfaces to infrastructure then there's really not going to be much that hits you as unfamiliar after a point. Most frameworks recycle the same concepts and abstractions, same thing with programming languages, algorithms, data management etc.
But if you've spent most of your career in one space cranking tickets, those unknown corners are going to be as numerous as the day you started and be much more taxing.
Comment by rishabhaiover 1 day ago
Comment by jasonfarnon 23 hours ago
Comment by josephg 20 hours ago
Comment by sponaugle 7 hours ago
It has not lost its value yet, but the future will shift that value. All of the past experience you have is an asset for you to move with that shift. The problem will not be you losing value, it will be you not following where the value goes.
It might be a bit more difficult to love where the shift goes, but that is no different than loving being an artist, which often shares a bed with loving being poor. What will make you happier?
Comment by pesus 22 hours ago
Comment by jasonfarnon 22 hours ago
Comment by WarmWash 21 hours ago
Comment by lurking_swe 14 hours ago
People had real offices with actual quiet focus time.
User expectations were also much lower.
pros and cons i guess?
Comment by nfredericks 22 hours ago
Comment by dugidugout 18 hours ago
Comment by test6554 23 hours ago
Comment by techgnosis 22 hours ago
Comment by samusiam 22 hours ago
Comment by mykowebhn 17 hours ago
So although I don't think he should have won the Nobel Prize, because it's not really physics, I felt his perseverance and hard work should merit something.
Comment by direwolf20 5 hours ago
Comment by daxfohl 1 day ago
Then even if you do catch it, AI: "ah, now I see exactly the problem. just insert a few more coins and I'll fix it for real this time, I promise!"
Comment by gtowey 1 day ago
Comment by password4321 23 hours ago
Comment by sailfast 1 day ago
Comment by d0mine 1 day ago
Remember Google?
Once it was far-fetched that they would make the search worse just to show you more ads. Now, it is a reality.
With tokens, it is even more direct. The more tokens users spend, the more money for providers.
Comment by retsibsi 20 hours ago
What are the details of this? I'm not playing dumb, and of course I've noticed the decline, but I thought it was a combination of losing the battle with SEO shite and leaning further and further into a 'give the user what you think they want, rather than what they actually asked for' philosophy.
Comment by supriyo-biswas 18 hours ago
Comment by SetTheorist 4 hours ago
Now, they do their best to deprioritize and hide non-ad results...
Comment by throwthrowuknow 23 hours ago
Comment by layla5alive 18 hours ago
Comment by lelanthran 4 hours ago
It's only in the interests of the model builders to do that IFF the user can actually tell that the model is giving them the best value for a single dollar.
Right now you can't tell.
Comment by fragmede 4 hours ago
Comment by lelanthran 4 hours ago
I tried that on a few problems; even on the same model the results have too much variation.
When comparing different models, repeating the experiment gives you different results.
Comment by xienze 1 day ago
Unless you’re paying by the token.
Comment by Fnoord 22 hours ago
Comment by fragmede 1 day ago
Comment by coffeefirst 1 day ago
Comment by hnuser123456 1 day ago
Comment by bandrami 22 hours ago
Switching costs are currently low. Once you're committed to the workflow the providers will switch to prepaying for a year's worth of tokens.
Comment by daxfohl 1 day ago
The way agents work right now though just sometimes feels that way; they don't have a good way of saying "You're probably going to have to figure this one out yourself".
Comment by jrflowers 1 day ago
Comment by krupan 1 day ago
Comment by direwolf20 5 hours ago
Comment by robotmaxtron 19 hours ago
Comment by thunderfork 1 day ago
I feel like saying "the market will fix the incentives" handwaves away the lack of information on internals. After all, look at the market response to Google making their search less reliable - sure, an invested nerd might try Kagi, but Google's still the market leader by a long shot.
In a market for lemons, good luck finding a lime.
Comment by krupan 1 day ago
Comment by direwolf20 6 hours ago
Comment by chanux 19 hours ago
Comment by direwolf20 6 hours ago
Comment by wvenable 1 day ago
After any agent run, I'm always looking at the git comparison between the new version and the previous one. This helps catch things that you might otherwise not notice.
Comment by teaearlgraycold 19 hours ago
Comment by einrealist 16 hours ago
Comment by charcircuit 1 day ago
Comment by testaccount28 1 day ago
Comment by meowface 17 hours ago
That said, more and more people seem to be arriving at the conclusion that if you want a fairly large-sized, complex task in a large existing codebase done right, you'll have better odds with Codex GPT-5.2-Codex-XHigh than with Claude Code Opus 4.5. It's far slower than Opus 4.5 but more likely to get things correct, and complete, in its first turn.
Comment by mikkupikku 1 day ago
For instance, I know some people have had success with getting claude to do game development. I have never bothered to learn much of anything about game development, but have been trying to get claude to do the work for me. Unsuccessful. It works for people who understand the problem domain, but not for those who don't. That's my theory.
Comment by samrus 1 day ago
It also works for problems that have been solved a thousand times before, which impresses people and makes them think it is actually solving those problems
Comment by daxfohl 23 hours ago
"Reasoning", however, is a feature that has been bolted on with a hacksaw and duct tape. Their ability to pattern match makes reasoning seem more powerful than it actually is. If your bug is within some reasonable distance of a pattern it has seen in training, reasoning can get it over the final hump. But if your problem is too far removed from what it has seen in its latent space, it's not likely to figure it out by reasoning alone.
Comment by charcircuit 23 hours ago
What do you mean by this? Especially for tasks like coding where there is a deterministic correct or incorrect signal it should be possible to train.
Comment by direwolf20 4 hours ago
Early on, some advanced LLM users noticed they could get better results by forcing insertion of a word like "Wait," or "Hang on," or "Actually," and then running the model for a few more paragraphs. This would increase the chance of a model noticing a mistake it made.
Reasoning is basically this.
Comment by charcircuit 4 hours ago
Comment by thunky 21 hours ago
So you mean it works on almost all problems?
Comment by baq 1 day ago
Comment by fooker 1 day ago
If it does not, this is going to be the first technology in the history of mankind that has not become cheaper.
(But anyway, it already costs half of what it did last year.)
Comment by ctoth 1 day ago
You could not have bought Claude Opus 4.5 at any price one year ago I'm quite certain. The things that were available cost half of what they did then, and there are new things available. These are both true.
I'm agreeing with you, to be clear.
There are two pieces I expect to continue: inference for existing models will continue to get cheaper. Models will continue to get better.
Three things, actually.
The "hitting a wall" / "plateau" people will continue to be loud and wrong. Just as they have been since 2018[0].
[0]: https://blog.irvingwb.com/blog/2018/09/a-critical-appraisal-...
Comment by simianwords 1 day ago
Comment by fooker 1 day ago
This is harmless when it comes to tech opinions but causes real damage in politics and activism.
People get really attached to ideals and ideas, and keep sticking to those after they fail to work again and again.
Comment by simianwords 1 day ago
Comment by cogogo 1 day ago
I went back to tell them (do not know them at all just everyone is chattier digging out of a storm) and they were not there. Feel terrible and no real viable remedy. Hope they check themselves and realize I am an idiot. Even harder on the internet.
Comment by teaearlgraycold 19 hours ago
Comment by HNisCIS 16 hours ago
Comment by teaearlgraycold 12 hours ago
Comment by bsder 1 day ago
Everybody who bet against Moore's Law was wrong ... until they weren't.
And AI is the reaction to Moore's Law having broken. Nobody gave one iota of damn about trying to make programming easier until the chips couldn't double in speed anymore.
Comment by twoodfin 23 hours ago
Comment by bsder 22 hours ago
However, most people don't know the difference between the proper Moore's Law scaling (the cost of a transistor halves every 2 years) which is still continuing (sort of) and the colloquial version (the speed of a transistor doubles every 2 years) which got broken when Dennard scaling ran out. To them, Moore's Law just broke.
Nevertheless, you are reinforcing my point. Nobody gave a damn about improving the "programming" side of things until the hardware side stopped speeding up.
And rather than try to apply some human brainpower to fix the "programming" side, they threw a hideous number of those free (except for the electricity--but we don't mention that--LOL) transistors at the wall to create a broken, buggy, unpredictable machine simulacrum of a "programmer".
(Side note: And to be fair, it looks like even the strong form of Moore's Law is finally slowing down, too)
Comment by twoodfin 22 hours ago
And in fact, the agentic looped LLMs are executing much better than that today. They could stop advancing right now and still be revolutionary.
Comment by peaseagee 1 day ago
Comment by willio58 1 day ago
Comment by direwolf20 4 hours ago
Comment by simianwords 1 day ago
check out whether clocks have gotten cheaper in general. the answer is that they have.
there is no economy of scale here in repairing a single clock. it's not relevant to bring it up here.
Comment by ipaddr 1 day ago
Comment by fooker 1 day ago
You can buy one for 90 cents on temu.
Comment by ipaddr 23 hours ago
Comment by pas 23 hours ago
of course it's silly to talk about manufacturing methods and yield and cost efficiency without having an economy to embed all of this into, but ... "technology got cheaper" means that we have practical knowledge of how to make cheap clocks (given certain supply chains, given certain volume, and so on)
we can make very cheap very accurate clocks that can be embedded into whatever devices, but it requires the availability of fabs capable of doing MEMS components, supply materials, etc.
Comment by simianwords 1 day ago
Comment by peaseagee 2 hours ago
Comment by ipaddr 1 day ago
Comment by pas 23 hours ago
but inflation is the general increase in the price level; this can be used as a deflator to express the price of whatever product in past/future money, to see how the price of the product changed in "real" terms (i.e. relative to the general change in the price level)
Comment by simianwords 23 hours ago
Comment by esafak 1 day ago
Comment by emtel 1 day ago
Comment by groby_b 1 day ago
Getting a bespoke flintstone axe is also pretty expensive, and has also absolutely no relevance to modern life.
These discussions must, if they are to be useful, center on the population-level experience, not on unique personal moments.
Comment by ipaddr 1 day ago
Not much has gone down in price over the last few years.
Comment by groby_b 23 hours ago
Meanwhile the overall price of storage has been going down consistently: https://ourworldindata.org/grapher/historical-cost-of-comput...
Comment by solomonb 1 day ago
https://marylandmatters.org/2025/11/17/key-bridge-replacemen...
Comment by groby_b 23 hours ago
In general, there are several things that are true for bridges that aren't true for most technology:
* Technology has massively improved, but most people are not realizing that. (E.g. the Bay Bridge cost significantly more than the previous version, but that's because we'd like to not fall down again in the next earthquake)
* We still have little idea how to reason about the cost of bridges in general. (Seriously. It's an active research topic)
* It's a tiny market, with the major vendors forming an oligopoly
* It's infrastructure, not a standard good
* The buy side is almost exclusively governments.
All of these mean expensive goods that are completely non-repeatable. You can't build the same bridge again. And on top of that, in a distorted market.
But sure, the cost of "one bridge, please" has gone up over time.
Comment by solomonb 22 hours ago
Comment by fooker 23 hours ago
Even if you adjust for inflation?
Comment by groby_b 57 minutes ago
OK, kidding aside: If you deeply care, you can probably mine the Federal Highway Administration's bridge construction database: https://fhwaapps.fhwa.dot.gov/upacsp/tm?transName=MenuSystem...
I don't think the question is answerable in a meaningful way. Bridges are one-off projects with long life spans, comparing cost over time requires a lot of squinting just so.
Comment by arthurbrown 1 day ago
Comment by ipaddr 1 day ago
Comment by xnyan 23 hours ago
'84 Motorola DynaTAC - ~$12k AfI (adjusted for inflation)
'89 MicroTAC ~$8k AfI
'96 StarTAC ~$2k AfI
'07 iPhone ~$673 AfI
The current average smartphone sells for around $280. Phones are getting cheaper.
Comment by direwolf20 4 hours ago
Comment by epidemiology 22 hours ago
Comment by InsideOutSanta 1 day ago
Comment by fooker 1 day ago
Comment by simianwords 1 day ago
Comment by oytis 1 day ago
Comment by jstummbillig 1 day ago
Comment by simianwords 1 day ago
this is accounting for the fact that more tokens are used.
Comment by techpression 1 day ago
Comment by simianwords 1 day ago
> Newer models cost more than older models
where did you see this?
Comment by techpression 1 day ago
There's no such thing as "same task by old model": you might get comparable results or you might not (and this is why the comparison fails; it's not a comparison). The reason you pick the newer models is to increase the chances of getting a good result.
Comment by simianwords 1 day ago
This should answer. In your case, GPT-3.5 definitely is cheaper per token than 4o but much much less capable. So they used a model that is cheaper than GPT-3.5 that achieved better performance for the analysis.
Comment by fooker 1 day ago
Comment by simianwords 1 day ago
Comment by techpression 1 day ago
Not according to their pricing table. Then again I’m not sure what OpenAI model versions even mean anymore, but I would assume 5.2 is in the same family as 5 and 5.2-pro as 5-pro
Comment by fooker 1 day ago
Comment by fulafel 18 hours ago
(Oil rampdown is a survival imperative due to the climate catastrophe so there it's a very positive thing of course, though not sufficient...)
Comment by root_axis 1 day ago
LLMs will face their own challenges with respect to reducing costs, since self-attention grows quadratically. These are still early days, so there remains a lot of low hanging fruit in terms of optimizations, but all of that becomes negligible in the face of quadratic attention.
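A rough back-of-the-envelope sketch of that scaling (the layer/head counts are made up for illustration, and this only counts entries in the attention score matrices):

// The attention score matrix alone has nTokens^2 entries per head per layer,
// so doubling the context length roughly quadruples this term.
function attentionScoreEntries(nTokens: number, nLayers: number, nHeads: number): number {
  return nTokens * nTokens * nLayers * nHeads;
}

for (const contextLength of [8_000, 32_000, 128_000]) {
  // each 4x jump in context is a ~16x jump in this cost
  console.log(contextLength, attentionScoreEntries(contextLength, 80, 64).toExponential(2));
}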
Comment by namcheapisdumb 19 hours ago
so close! that is a commodity
Comment by twoodfin 23 hours ago
Comment by krupan 1 day ago
Comment by asadotzler 1 day ago
Comment by ak_111 1 day ago
Comment by runarberg 17 hours ago
There have been plenty of technologies in history which did not in fact become cheaper. LLMs are very likely to be among them, as I suspect their usefulness will be superseded by cheaper (much cheaper, in fact) specialized models.
Comment by redox99 1 day ago
This is one of the weakest anti AI postures. "It's a bubble and when free VC money stops you'll be left with nothing". Like it's some kind of mystery how expensive these models are to run.
You have open weight models right now like Kimi K2.5 and GLM 4.7. These are very strong models, only months behind the top labs. And they are not very expensive to run at scale. You can do the math. In fact there are third parties serving these models for profit.
The money pit is training these models (and not that much if you are efficient like chinese models). Once they are trained, they are served with large profit margins compared to the inference cost.
OpenAI and Anthropic are without a doubt selling their API for a lot more than the cost of running the model.
Comment by bob1029 19 hours ago
Eating burgers and driving cars around costs a lot more than whatever # of watts the human brain consumes.
Comment by bbor 17 hours ago
Comment by direwolf20 4 hours ago
Comment by crazygringo 23 hours ago
Running at their designed temperature.
> You send all the necessary data, including information that you would never otherwise share.
I've never sent the type of data that isn't already either stored by GitHub or a cloud provider, so no difference there.
> And you most likely do not pay the actual costs.
So? Even if costs double once investor subsidies stop, that doesn't change much of anything. And the entire history of computing is that things tend to get cheaper.
> You and your business become dependent on this major gatekeeper.
Not really. Switching between Claude and Gemini or whatever new competition shows up is pretty easy. I'm no more dependent on it than I am on any of another hundred business services or providers that similarly mostly also have competitors.
Comment by chasebank 17 hours ago
Comment by mikeocool 1 day ago
There’s often a better faster way to do it, and while it might get to the short term goal eventually, it’s often created some long term problems along the way.
Comment by moooo99 15 hours ago
So yeah, that wasted a lot of GPU cycles for a very unimpressive result, but with a renewed superficial feeling of competence
Comment by squidbeak 11 hours ago
Why would this be the first technology that doesn't become cheaper at scale over time?
Comment by karlgkk 17 hours ago
Oh my lord you absolutely do not. The costs to oai per token inference ALONE are at least 7x. AT LEAST and from what I’ve heard, much higher.
Comment by tgrowazay 17 hours ago
Comment by hahahahhaah 1 day ago
Comment by YetAnotherNick 1 day ago
[1]: https://developer-blogs.nvidia.com/wp-content/uploads/2026/0...
Comment by storystarling 1 day ago
Comment by YetAnotherNick 19 hours ago
Comment by utopiah 17 hours ago
Like... bro that's THE foundation of CS. That's the principle of the Bombe in Turing's time. One can still marvel at it but it's been with us since the beginning.
Comment by borroka 4 hours ago
Vibe coding and other tools, such as Google Vision, helped me download images published online, compile a PDF, perform OCR (Tesseract and Google Vision), and save everything in text format.
The OCR process was satisfactory for a first draft, but the text file has a lot of errors, as you'd expect when the dictionary has about 30,000 entries: Diacritical marks disappear, along with typographical marks and dashes, lines are moved up and down, and parts of speech (POS) are written in so many different ways due to errors that it is necessary to identify the wrong POS's one by one.
If the reasoning abilities of LLM-derived coding agents were as advanced as some claim, it would be possible for the LLM to derive the rules that must be applied to the entire dictionary from a sufficiently large set of “gold standard” examples.
If only that were the case. Every general rule applied creates other errors that propagate throughout the text, so that for every problem partially solved, two more emerge. What is evident to me is not clear to the LLM, in the sense that it is simple for me, albeit long and tedious, to do the editing work manually.
To give an example: if trans.v. indicates a transitive verb, it is clear to me that .trans.v. is a typographical error. I can tell the coding tool (I used Gemini, Claude, and Codex, with Codex being the best) that, given a standard POS, if there is a "." before it, it must be deleted because it is a typo. The generalization that comes easily to me but not to the coding agent is that if not one but two periods precede the POS, there are two typos, and both dots should be deleted, not just one of them.
This means that almost all rules have to be specified, whereas I expected the coding agent to generalize from the gigantic corpus on which it was trained (it should “understand” what the POS are, typical typos, the language in which the dictionary is written, etc.).
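To make it concrete, this is roughly the kind of rule that has to be spelled out explicitly (a sketch only; the tag list and entry format are invented for the example):

// One explicit cleanup rule: strip any run of stray leading dots before a
// known POS tag, however many dots there are. The tag list is illustrative.
const POS_TAGS = ["trans.v.", "intr.v.", "n.", "adj.", "adv."];

function stripStrayDotsBeforePos(line: string): string {
  for (const tag of POS_TAGS) {
    const escaped = tag.replace(/\./g, "\\.");
    line = line.replace(new RegExp(`\\.+${escaped}`, "g"), tag);
  }
  return line;
}

console.log(stripStrayDotsBeforePos("..trans.v. to carry across")); // "trans.v. to carry across"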
The transition from text to json to webapp is almost miraculous, but what is still missing from the mix is human-level reasoning and common sense (in part, I still believe that coding agents are fantastic, to be clear).
Comment by vinhnx 20 hours ago
Comment by noisy_boy 9 hours ago
2026 is just when it picks up - it'll get exponentially worse.
I think 2026 is the year of Business Analysts who were unable to code. Now CC et al. are good enough that they can realize the vision, as long as one knows the requirements exactly (software design not being that important). Programmers who didn't know the business could get by so far. Not anymore, because with these tools, the guy who knows business can now code fairly well.
Comment by sponaugle 7 hours ago
It could also be BAs being lazy and not jumping ahead of the train that is coming towards them. It feels like in this race the engineer who is willing to learn business will still have an advantage over the business person who learns tech. At least for a little while.
Comment by HugoDz 9 hours ago
Comment by kitd 9 hours ago
... until CC doesn't get it quite right and the guy who knows business doesn't know code.
Comment by rubzah 8 hours ago
Comment by AnimalMuppet 7 hours ago
Comment by 1970-01-01 6 hours ago
I've seen the exact opposite with Claude. It literally ditched my request mid-analysis when doing a root cause analysis. It decided I was tired of the service failing and then gave me some restart commands to 'just get it working'
Comment by Macha 1 day ago
Starcraft and Factorio are exactly what it is not. Starcraft has a loooot of micro involved at any level beyond mid level play, despite all the "pro macros and beats gold league with mass queens" meme videos. I guess it could be like Factorio if you're playing it by plugging together blueprint books from other people but I don't think that's how most people play.
At that level of abstraction, it's more like grand strategy if you're to compare it to any video game? You're controlling high level pushes and then the units "do stuff" and then you react to the results.
Comment by kridsdale3 22 hours ago
Comment by TheRoque 20 hours ago
Comment by zetazzed 1 day ago
Comment by porise 1 day ago
Comment by CameronBanga 1 day ago
I've been working in the mobile space since 2009, though primarily as a designer and then product manager. I work in kinda a hybrid engineering/PM job now, and have never been a particularly strong programmer. I definitely wouldn't have thought I could make something with that polish, let alone in 3 months.
That code base is ~98% Claude code.
Comment by bee_rider 1 day ago
Comment by CameronBanga 1 day ago
Comment by oasisbob 1 day ago
Not sure if it's an American pronunciation thing, but I had to stare at that long and hard to see the problem and even after seeing it couldn't think of how you could possibly spell the correct word otherwise.
Comment by bsder 1 day ago
It's a bad American pronunciation thing like "Febuwary" and "nuculer".
If you pronounce the syllables correctly, "an-ec-dote", "Feb-ru-ar-y", "nu-cle-ar" the spellings follow.
English has its fair share of spelling stupidities, but if people don't even pronounce the words correctly there is no hope.
Comment by lynguist 3 hours ago
The pronunciation of the first r with a y sound has always been one of two possible standards, in fact "February" is a re-Latinizing spelling but English doesn’t like the br-r sound so it naturally dissimilates to by-r.
Comment by CSMastermind 18 hours ago
I'm not sure how big your repos are but I've been effective working with repos that have thousands of files and tens of thousands of lines of code.
If you're just prototyping it will hit a wall when things get unwieldy, but that's normally a sign that you need to refactor a bit.
Super strict compiler settings, static analysis, comprehensive tests, and documentation help a lot. As does basic technical design. After a big feature is shipped I do a refactor cycle with the LLM where we do a comprehensive code review and patch things up. This does require human oversight because the LLMs are still lacking judgement on what makes for good code design.
The places where I've seen them be useless is working across repositories or interfacing with things like infrastructure.
It's also very model-dependent. Opus is a good daily driver but Codex is much better at writing tests for some reason. I'll often also switch to it for hard problems that Claude can't solve. Gemini is nice for 'I need a prototype in the next 10 minutes', especially for making quick and dirty bespoke front-ends where you don't care about the design, just the functionality.
Comment by madhadron 17 hours ago
Perhaps this is part of it? Tens of thousands of lines of code seems like a very small repo to me.
Comment by TaupeRanger 1 day ago
Comment by danielvaughn 1 day ago
I never paid any attention to different models, because they all felt roughly equal to me. But Opus 4.5 is really and truly different. It's not a qualitative difference, it's more like it just finally hit that quantitative edge that allows me to lean much more heavily on it for routine work.
I highly suggest trying it out, alongside a well-built coding agent like the one offered by Claude Code, Cursor, or OpenCode. I'm using it on a fairly complex monorepo and my impressions are much the same as Karpathy's.
Comment by suddenlybananas 11 hours ago
Comment by danielvaughn 9 hours ago
My opinion isn't based on what other people are saying, it's my own experience as a fairly AI-skeptical person. Again, I highly suggest you give it an honest try and decide for yourself.
Comment by keerthiko 1 day ago
Trying to incorporate it in existing codebases (esp when the end user is a support interaction or more away) is still folly, except for closely reviewed and/or non-business-logic modifications.
That said, it is quite impressive to set up a simple architecture, or just list the filenames, and tell some agents to go crazy to implement what you want the application to do. But once it crosses a certain complexity, I find you need to prompt closer and closer to the weeds to see real results. I imagine a non-technical prompter cannot proceed past a certain prototype fidelity threshold, let alone make meaningful contributions to a mature codebase via LLM without a human engineer to guide and review.
Comment by reubenmorais 1 day ago
Comment by jjfoooo4 1 day ago
It's been especially helpful in explaining and understanding arcane bits of legacy code behavior my users ask about. I trigger Claude to examine the code and figure out how the feature works, then tell it to update the documentation accordingly.
Comment by chrisjj 23 hours ago
And how do you verify its output isn't total fabrication?
Comment by jjfoooo4 6 hours ago
Inconsistencies also pop up in backtesting; for example, if there's a question the LLM answers in different ways across multiple iterations, that's a good candidate for improving the docs.
Similar to a coworker's work, there's a certain amount of trust in the competency involved.
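A sketch of what that backtest looks like in practice (askAssistant is a stand-in for however the docs-grounded bot is actually queried):

// Stand-in for however the docs-grounded assistant is actually queried.
async function askAssistant(question: string): Promise<string> {
  throw new Error("wire this to your actual assistant");
}

// Ask the same question several times; if the answers diverge, the docs are
// probably ambiguous and worth improving. (Crude exact-match comparison;
// in practice you'd compare answers more loosely.)
async function isInconsistent(question: string, runs = 5): Promise<boolean> {
  const answers = await Promise.all(Array.from({ length: runs }, () => askAssistant(question)));
  return new Set(answers.map(a => a.trim().toLowerCase())).size > 1;
}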
Comment by _dark_matter_ 19 hours ago
Comment by chrisjj 12 hours ago
Comment by jjfoooo4 6 hours ago
For example, I have it ignore messages about code freezes, because that's a policy question that probably changes over time, and I have it ignore urgent oncall messages, because the asker there probably wants a quick response from a human.
But there's a lot of questions in the vein of "How do I write a query for {results my service emits}", how does this feature work, where automation can handle a lot (and provide more complete answers than a human can off the top of their head)
Comment by chrisjj 4 hours ago
Comment by 1123581321 1 day ago
Comment by mh2266 20 hours ago
Comment by 1123581321 7 hours ago
Comment by hnben 16 hours ago
Comment by gwd 1 day ago
I really enjoyed the process. As TFA says, you have to keep a close eye on it. But the whole process was a lot less effort, and I ended up doing more than I would otherwise have done.
Comment by ph4te 1 day ago
Comment by fy20 18 hours ago
For this the LLM struggles a bit, but so does a human. The main issues are that it messes up some state that it didn't realise was used elsewhere, and our test coverage is not great. We've seen humans make exactly the same kind of mistakes. We use MCP for Figma so most of the time it can get a UI 95% done, just a few tweaks needed by the operator.
On the backend (Typescript + Node, good test coverage) it can pretty much one-shot - from a plan - whatever feature you give it.
We use opus-4.5 mostly, and sometimes gpt-5.2-codex, through Cursor. You aren't going to get ChatGPT (the web interface) to do anything useful, switch to Cursor, Codex or Claude Code. And right now it is worth paying for the subscription, you don't get the same quality from cheaper or free models (although they are starting to catch up, I've had promising results from GLM-4.7).
Comment by yasoob 18 hours ago
I had never used Swift before that and was able to use AI to whip up a fairly full-featured and complex application with a decent amount of code. I had to make some cross-cutting changes along the way as well that impacted quite a few files and things mostly worked fine with me guiding the AI. Mind you this was a year ago so I can only imagine how much better I would fare now with even better AI models. That whole month was spent not only on coding but on learning Swift enough to fix problems when AI started running into circles and then learning about Xcode profiler to optimize the application for speed and improving perf.
Comment by BeetleB 1 day ago
What type of documents do you have explaining the codebase and its messy interactions, and have you provided that to the LLM?
Also, have you tried giving someone brand new to the team the exact same task and information you gave to the LLM, and how effective were they compared to the LLM?
> I don't know how much better Claude is than ChatGPT, but I can't get ChatGPT to do much useful with an existing large codebase.
As others have pointed out, from your comment, it doesn't sound like you've used a tool dedicated for AI coding.
(But even if you had, it would still fail if you expect LLMs to do stuff without sufficient context).
Comment by smusamashah 1 day ago
Comment by jumploops 1 day ago
Commercial codebases, especially private internal ones, are often messy. It seems this is mostly due to the iterative nature of development in response to customer demands.
As a product gets larger, and addresses a wider audience, there’s an ever increasing chance of divergence from the initial assumptions and the new requirements.
We call this tech debt.
Combine this with a revolving door of developers, and you start to see Conway’s law in action, where the system resembles the organization of the developers rather than the “pure” product spec.
With this in mind, I’ve found success in using LLMs to refactor existing codebases to better match the current requirements (i.e. splitting out helpers, modularizing, renaming, etc.).
Once the legacy codebase is “LLMified”, the coding agents seem to perform more predictably.
YMMV here, as it’s hard to do large refactors without tests for correctness.
(Note: I’ve dabbled with a test first refactor approach, but haven’t gone to the lengths to suggest it works, but I believe it could)
Comment by mh2266 20 hours ago
Claude by default, unless I tell it not to, will write stuff like:
// we need something to be true
somethingPasses = something()
if (!somethingPasses) {
return false
}
// we need somethingElse to be true
somethingElsePasses = somethingElse()
if (!somethingElsePasses) {
return false
}
return true
instead of the very simple boolean logic that could express this in one line, with the "this code does what it obviously does" comments added all over the place. Generally, unless you tell it not to, it does things in very verbose ways that most humans would never do, and since there's an infinite number of ways that it can invent absurd verbosity, it is hard to preemptively prompt against all of them.
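For reference, the one-line version of the example above is just:
// the same check, without the ceremony or the narration
return something() && somethingElse()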
to be clear, I am getting a huge amount of value out of it for executing a bunch of large refactors and "modernization" of a (really) big legacy codebase at scale and in parallel. but it's not outputting the sort of code that I see when someone prompts it "build a new feature ...", and a big part of my prompts is screaming at it not to do certain things or to refuse the task if it at any point becomes unsure.
Comment by jumploops 19 hours ago
Meaning if you ask it “handle this new condition” it will happily throw in a hacky conditional and get the job done.
I’ve found the most success in having it reason about the current architecture (explicitly), and then to propose a set of changes to accomplish the task (2-5 ways), review, and then implement the changes that best suit the scope of the larger system.
Comment by dexdal 19 hours ago
Comment by jumploops 18 hours ago
The LLM is onboarding to your codebase with each context window, all it knows is what it’s seen already.
Comment by olig15 23 hours ago
Comment by tunesmith 1 day ago
Comment by Okkef 1 day ago
After you tried it, come back.
Comment by Imustaskforhelp 1 day ago
I tried a website which offered the Opus model in their agentic workflow & I felt something different too I guess.
Currently trying out Kimi Code (using their recent Kimi 2.5), the first time I've bought any AI product, because I got it for like $1.49 per month. It does feel a bit less powerful than Claude Code but I feel like monetarily it's worth it.
Y'know, you have to like bargain with an AI model to reduce its pricing, which I just felt really curious about. The psychology behind it feels fascinating because, even as a frugal person, I already felt invested enough in the model that it became my sunk cost fallacy.
A shame for me personally, because they use it as a hook to get people using their tool and then charge $19 the next month (still cheaper than Claude Code for the most part, but a big step up from $1.49).
Comment by jwr 20 hours ago
Comment by culi 20 hours ago
Comment by wcedmisten 19 hours ago
E.g. macros exist in Clojure but not Python/JS, and I've definitely been plenty stumped by seeing them in the codebase. They tend to be used in very "clever" patterns.
On the other hand, I'm a bit surprised Claude can tackle a complex Clojure codebase. It's been a while since I attempted using an LLM for Clojure, but at the time it failed completely (I think because there is relatively little training data compared to other mainstream languages). I'll have to check that out myself
Comment by epolanski 23 hours ago
2. Put your important dependencies source code in the same directory. E.g. put a `_vendor` directory in the project, in it put the codebase at the same tag you're using or whatever: postgres, redis, vue, whatever.
3. Write good plans and requirements. Acceptance criteria, context, user stories, etc. Save them in markdown files. Review those multiple times with LLMs trying to find weaknesses. Then move to implementation files: make it write a detailed plan of what it's gonna change and why, and what it will produce.
4. Write very good prompts. LLMs follow instructions well if they are clear: "you should proactively do X" is a weak instruction if you mean "you must do X".
5. LLMs are far from perfect, and full of limits. Karpathy sums their cons very well in his long list. If you don't know their limits you'll mismanage the expectations and not use them when they are a huge boost and waste time on things they don't cope well with. On top of that: all LLMs are different in their "personality", how they adhere to instruction, how creative they are, etc.
Comment by bluGill 1 day ago
Which is to say you have to learn to use the tools. I've only just started, and cannot claim to be an expert. I'll keep using them - in part because everyone is demanding I do - but to use them you clearly need to know how to do it yourself.
Comment by simonw 1 day ago
I also find pointing it to an existing folder full of code that conforms to certain standards can work really well.
Comment by bluGill 22 hours ago
Comment by bflesch 1 day ago
Comment by simonw 1 day ago
Comment by CamperBob2 23 hours ago
Comment by rob 1 day ago
There's basically a "brainstorm" /slash command that you go back and forth with, and it places what you came up with in docs/plans/YYYY-MM-DD-<topic>-design.md.
Then you can run a "write-plan" /slash command on the docs/plans/YYYY-MM-DD-<topic>-design.md file, and it'll give you a docs/plans/YYYY-MM-DD-<topic>-implementation.md file that you can then feed to the "execute-plan" /slash command, where it breaks everything down into batches, tasks, etc, and actually implements everything (so three /slash commands total.)
There's also "GET SHIT DONE" (GSD) [1] that I want to look at, but at first glance it seems to be a bit more involved than Superpowers with more commands. Maybe it'd be better for larger projects.
Comment by gverrilla 19 hours ago
Comment by datsci_est_2015 21 hours ago
I guess this is fine when you don’t have customers or stakeholders that give a shit lol.
Comment by languid-photic 1 day ago
Comment by Macha 1 day ago
Comment by simianwords 1 day ago
Comment by xyzsparetimexyz 1 day ago
Comment by gsk22 1 day ago
Comment by vindex10 1 day ago
Comment by redox99 1 day ago
AI assisted coding has never been like that, which would be atrocious. The typical workflow was using Cursor with some model of your choice (almost always an Anthropic model like sonnet before opus 4.5 released). Nowadays (in addition to IDEs) it's often a CLI tool like Claude Code with Opus or Codex CLI with GPT Codex 5.2 high/xhigh.
Comment by maxdo 1 day ago
Comment by spaceman_2020 1 day ago
If you're using plain vanilla chatgpt, you're woefully, woefully out of touch. Heck, even plain claude code is now outdated
Comment by shj2105 1 day ago
Comment by spaceman_2020 1 day ago
At a base level, people are “upgrading” their Claude Code with custom skills and subagents - all text files saved in .claude/agents|skills.
You can also use their new tasks primitive to basically run a Ralph-like loop
But at the edges, people are using multiple instances, each handling different aspects in parallel - stuff like Gas Town
Tbf you can still get a lot of mileage out of vanilla Claude Code. But I’ve found that even adding a simple frontend design skill improves the output substantially
Comment by duckmysick 22 hours ago
Comment by spaceman_2020 17 hours ago
Anthropic’s own repo is as good a place as any
Comment by toephu2 1 day ago
Comment by adamddev1 1 day ago
1. hand arithmetic -> using a calculator
2. assembly -> using a high level language
3. writing code -> making an LLM write code
Number 3 does not belong. Number 3 is a fundamentally different leap because it's not based on deterministic logic. You can't depend on an LLM like you can depend on a calculator or a compiler. LLMs are totally different.
Comment by Havoc 22 hours ago
Comment by yojat661 18 hours ago
Comment by adamddev1 17 hours ago
It often doesn't work. That's the point. A calculator works 100% of the time. An LLM might work 95% of the time, or 80%, or 40%, or 99%, depending on what you're doing. This difference is a key feature.
Comment by Havoc 8 hours ago
To me that isn’t a show stopper. Much of the real world works like that. We put very unreliable humans behind the wheel of 2 ton cars. So in a way this is perhaps just programmers aligning with the messy real world?
Perhaps a bit like how architects can only model things so far: eventually you need to build the thing and deal with the surprises and imperfections of the dirt.
Comment by AstroBen 21 hours ago
Comment by kypro 1 day ago
It doesn't matter how good you are at calculations the answer to 2 + 2 is always 4. There are no methods of solving 2 + 2 which could result in you accidentally giving everyone who reads the result of your calculation write access to your entire DB. But there are different ways to code a system even if the UI is the same, and some of these may neglect to consider permissions.
I think a good parallel here would be to imagine that tomorrow we had access to humanoid robots who could do construction work. Would we want them to just go build skyscrapers and bridges and view all construction businesses which didn't embrace the humanoid robots as akin to doing arithmetic by hand?
You could of course argue that there's no problem here so long as trained construction workers are supervising the robots to make sure they're getting tolerances right and doing good welds, but then what happens 10 years down the road when humans haven't built a building in years? If people are not writing code any more then how can people be expected to review AI generated code?
I think the optimistic picture here is that humans just won't be needed in the future. In theory when models are good enough we should be able to trust the AI systems more than humans. But the less optimistic side of me questions a future in which humans no longer do, or even know how to do such fundamental things.
Comment by onetimeusename 1 day ago
I have a professor who has researched auto generated code for decades and about six months ago he told me he didn't think AI would make humans obsolete but that it was like other incremental tools over the years and it would just make good coders even better than other coders. He also said it would probably come with its share of disappointments and never be fully autonomous. Some of what he said was a critique of AI and some of it was just pointing out that it's very difficult to have perfect code/specs.
Comment by slfreference 1 day ago
Billionaire coder: a person who has "written" a billion lines.
Ordinary coders: people with only a couple of thousand lines to their git blame.
Comment by pron 23 hours ago
Comment by aixpert 23 hours ago
you might think I'm kidding, but search for Redox on GitHub and you will find that project and the anonymous contributions
Comment by rester324 23 hours ago
Comment by bojo 22 hours ago
Decided to figure out what this "vibe coding" nonsense is, and now there's a certain level of joy to all of this again. Being able to clearly define everything using markdown contexts before any code is even written has been a great way to brain dump those 25 years of experience and actually watch something sane get produced.
Here are the stats Claude Code gave me:
Overview
┌───────────────┬────────────────────────────┐
│ Metric │ Value │
├───────────────┼────────────────────────────┤
│ Total Commits │ 365 │
├───────────────┼────────────────────────────┤
│ Project Age │ 7 days (Jan 20 - 27, 2026) │
├───────────────┼────────────────────────────┤
│ Open Issues │ 5 │
├───────────────┼────────────────────────────┤
│ Contributors │ 1 │
└───────────────┴────────────────────────────┘
Lines of Code by Language
┌───────────────────────────┬───────┬────────┬───────────┐
│ Language │ Files │ Lines │ % of Code │
├───────────────────────────┼───────┼────────┼───────────┤
│ Rust (Backend) │ 94 │ 31,317 │ 51.8% │
├───────────────────────────┼───────┼────────┼───────────┤
│ TypeScript/TSX (Frontend) │ 189 │ 29,167 │ 48.2% │
├───────────────────────────┼───────┼────────┼───────────┤
│ SQL (Migrations) │ 34 │ 1,334 │ — │
├───────────────────────────┼───────┼────────┼───────────┤
│ CSS │ — │ 1,868 │ — │
├───────────────────────────┼───────┼────────┼───────────┤
│ Markdown (Docs) │ 37 │ 9,485 │ — │
├───────────────────────────┼───────┼────────┼───────────┤
│ Total Source │ 317 │ 60,484 │ 100% │
└───────────────────────────┴───────┴────────┴───────────┘
Comment by bojo 22 hours ago
I then realized I could feed it everything it ever needed to know. Just create a docs/* folder and tell it to read that every session.
Through discovery I learned about CLAUDE.md, and adding skills.
Now I have an /analyst, /engineer, and /devops that I talk to all day with their own logic and limitations, as well as the more general project CLAUDE.md, and dozens of docs/* files we collaborate on.
I'm at the point I'm running happy.engineering on my phone and don't even need to sit in front of the computer anymore.
Comment by darkwater 14 hours ago
I wonder if this line
> It will configure an auth_backend.rs and wire up a basic user
over a big enough number of projects will lead to at least 2-3 different user names.
Comment by UnlockedSecrets 19 hours ago
Comment by bojo 7 hours ago
Comment by UnlockedSecrets 39 minutes ago
Comment by bartoszcki 23 hours ago
Comment by woah 5 hours ago
Comment by nsainsbury 1 day ago
I actually disagree with Andrej here re: "Generation (writing code) and discrimination (reading code) are different capabilities in the brain." and I would argue that the only reason he can read code fluently, find issues, etc. is because he has spent years in a non-AI-assisted world writing code. As time goes on, he will become substantially worse.
This also bodes incredibly poorly for the next generation, who will now mostly avoid writing code in their formative years and thus fail to even develop an idea of what good code is, how it works/why it works, why you make certain decisions and not others, etc., and ultimately you will see them become utterly dependent on AI, unable to make progress without it.
IMO outsourcing thinking is going to have incredibly negative consequences for the world at large.
Comment by gwd 1 day ago
Comment by thoughtpeddler 1 day ago
Comment by olafalo 20 hours ago
Comment by nicodjimenez 7 hours ago
Comment by jliptzin 9 hours ago
Comment by fishtoaster 1 day ago
This is about where I'm at. I love pure claude code for code I don't care about, but for anything I'm working on with other people I need to audit the results - which I much prefer to do in an IDE.
Comment by Aperocky 7 hours ago
A part of me really want to say yes and wear it as a badge to have been coding before LLMs were a thing, but at the same time, it's not unprecedented.
Comment by direwolf20 6 hours ago
Comment by ex-aws-dude 7 hours ago
That’s not really true in this case
I think a person with zero coding knowledge would have a lot tougher time using these tools successfully
Comment by doe88 13 hours ago
Comment by twa927 1 day ago
Comment by ValentineC 1 day ago
This makes it sound like we're back in the days of FrontPage/Dreamweaver WYSIWYG. Goodness.
Comment by twa927 1 day ago
Comment by culi 20 hours ago
Comment by DominikPeters 23 hours ago
Comment by twa927 13 hours ago
Comment by TuxSH 1 hour ago
If you have a ChatGPT subscription, try Codex with GPT-5.2-High or 5.2-codex High? In my experience, while being much slower, it produces far better results than Opus and seems even more aggressively subsidized (more generous rate limits).
Comment by elif 10 hours ago
Is the programmer ego really this fragile? At least the Luddites had an ideological rationale, whereas here we just seem to have emotional reflexes.
Comment by phito 10 hours ago
Comment by hollowturtle 9 hours ago
The fact that people keep pushing figures like 80% is total bs to me
Comment by an0malous 8 hours ago
Comment by bob1029 8 hours ago
Do you know what my use case is? Do you know what kind of success rate I would actually achieve right now? Please show me where my missing 20% resides.
Comment by phito 7 hours ago
Comment by nsb1 1 day ago
Comment by jeffreygoesto 15 hours ago
I think not much. The real societal bottleneck is that a growing number of peeps try to convince each other that life and society are a zero-sum game.
They are so much more if we don't do that.
Comment by epolanski 1 day ago
No doubt that good engineers will know when and how to leverage the tool, both for coding and for improving processes (design-to-code, requirement collection, task tracking, basic code review, etc.), improving their own productivity and that of those around them.
Motivated individuals will also leverage these tools to learn more and faster.
And yes, of course it's not the only tool one should use, and of course there's still value in talking with proper human experts to learn from, etc., but 90% of the time when you're looking for info, the LLM will dig it out for you by reading the source code of e.g. Postgres and its tests, rather than you asking on chats/Stack Overflow.
This is a transformative technology that will make great engineers even stronger, but it will weed out those who were merely valued for their very basic capability of churning something out but never cared about either engineering or coding, which is 90% of our industry.
Comment by tariky 10 hours ago
It is like I was plowing land by hand one year ago and now I'm in a brand new John Deere. It's amazing.
Of course it's not perfect, but if you understand the code and the problem it needs to solve, then it works really well.
Comment by vibeprofessor 1 day ago
I expect interviews will evolve into "build project X with an LLM while we watch" and audit of agent specs
Comment by maxdo 1 day ago
fun stat: the correlation is real; people who were good at vibe coding also had offer(s) from other companies that didn't run vibe-coding interviews.
Comment by xyzsparetimexyz 1 day ago
Comment by jatari 7 hours ago
Comment by maxdo 18 hours ago
It doesn’t work you can’t be productive without agent capable of doing queries to db etc
Comment by xyzsparetimexyz 14 hours ago
What? I can't parse this sentence. Maybe get an ai to rewrite it?
Comment by bflesch 1 day ago
Comment by thefourthchime 1 day ago
Comment by iwontberude 1 day ago
Comment by 0xy 1 day ago
Comment by TheGRS 1 day ago
I'm still a little iffy on the agent swarm idea. I think I will need to see it in action in an interface that works for me. To me it feels like we are anthropomorphizing agents too much, and that results in this idea that we can put agents into roles and then combine them into useful teams. I can't help seeing all agents as the same automatons, and I have trouble understanding why giving an agent different guidelines to follow, and then having it follow along with another agent, would give me better results than just fixing the context in the first place. Either that or just working more on the code pipeline to spot issues early on - all the stuff we already test for.
Comment by giancarlostoro 23 hours ago
I'm honestly considering throwing away my JetBrains subscription and this is year 9 or 10 of me having one. I only open Zed and start yappin' at Claude Code. My employer doesn't even want me using ReSharper because some contractor ruined it for everyone else by auto running all code suggestions and checking them in blindly, making for really obnoxious code diffs and probably introducing countless bugs and issues.
Meanwhile tasks that I know would take any developers months, I can hand-craft with Claude in a few hours, with the same level of detail, but no endless weeks of working on things that'll be done SoonTM.
Comment by thomassmith65 23 hours ago
Slopacolypse. I am bracing for 2026 as the year of the slopacolypse across all of github, substack, arxiv, X/instagram, and generally all digital media.
Did he coin the term "slopacolypse"? It's a useful one.
Comment by chrisjj 23 hours ago
Comment by direwolf20 4 hours ago
Comment by direwolf20 2 hours ago
Comment by rvz 20 hours ago
“slopacolypse” does not make any sense in either writing or pronunciation.
Comment by alexose 1 day ago
For as fast as this is all moving, it's good to remember that most of us are actually a lot closer to the tip of the spear than we think.
Comment by siliconc0w 1 day ago
I can supervise maybe three agents in parallel before a task requiring significant hand-holding means I'm likely blocking an agent.
And the time an agent is 'restlessly working' on something is usually inversely correlated with its likelihood of success. Usually if it's going down a rabbit hole, the correct thing to do is to intervene and reorient it.
Comment by jopsen 2 days ago
Any qualified guesses?
I'm not convinced more traders on wall street will allocate capital more effectively leading to economic growth.
Will more programmers grow the economy? Or should we get real jobs ;)
Comment by iwontberude 1 day ago
Comment by js8 1 day ago
Comment by iwontberude 1 day ago
Comment by rschick 2 days ago
Comment by longhaul 22 hours ago
Adding/prompting features one by one, reviewing code and then testing the resulting binary feels like the new programming workflow
Prompt/Review/Test - PRET.
Comment by axus 22 hours ago
Comment by arh5451 13 hours ago
Comment by gregjor 13 hours ago
Comment by ed_mercer 13 hours ago
Comment by dag11 12 hours ago
I'm not disagreeing.
Comment by FeteCommuniste 11 hours ago
I doubt most people feel the same, though.
Comment by daxfohl 1 day ago
Comment by forrestthewoods 1 day ago
We’re about a year deep into “AI is changing everything” and I don’t see 10x software quality or output.
Now don’t get me wrong I’m a big fan of AI tooling and think it does meaningfully increase value. But I’m damn tired of all the talk with literally nothing to show for it or back it up.
Comment by lomase 1 day ago
Comment by all2well 1 day ago
Comment by geraneum 1 day ago
OP mentions that they are actually doing the “babysitting”
Comment by spongebobstoes 1 day ago
use many simultaneously, and bounce between them to unblock them as needed
build good tools and tests. you will soon learn all the things you did manually -- script them all
Comment by erelong 16 hours ago
A lot of these things sound cool but sometimes I'm curious what they're actually building
Like, is their bottleneck creativity now, then? Are they building anything interesting, or using agents to build... things that don't appeal to me, anyway?
Comment by ewidar 15 hours ago
As an example, finding myself in a similar 80% situation, over the last few months I built:
- a personal website with my projects and poems
- an app to rework recipes in a format I like from any source (text, video,...)
- a 3d visual version of a project my nephew did for work
- a gym class finder in my area with filters the websites don't provide
- a football data game
- working on a saas for work so typical saas stuff
I was never that productive on personal projects, so this is great for me.
Also, the coding part of these projects was not very appealing to me, only the output, so it fits well with using AI.
In the meanwhile I did Advent of Code as usual for the fun of code. Different objectives.
Comment by TrackerFF 11 hours ago
Comment by energy123 19 hours ago
It's going to feel literally like playing God, where you type in what you want and it happens ~instantly.
Comment by brcmthrowaway 18 hours ago
Comment by energy123 17 hours ago
- "OpenAI is partnering with Cerebras to add 750MW of ultra low-latency AI compute"
- Sam Altman saying that users want faster inference more than lower cost in his interview.
- My understanding that many tasks are serial in nature.
Comment by cactusplant7374 7 hours ago
Comment by energy123 5 hours ago
My trick is to attach the codebase as a txt file to 5-10 different GPT 5.2 Thinking chats, paste in the specs, and then get hard work done there, then just copy paste the final task list into codex to lower codex usage.
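The flattening step is easy to script; a rough sketch (the file extensions, ignore list, and output name are just examples):

// Flatten a repo's source files into one text file for attaching to a chat.
import { readdirSync, readFileSync, statSync, writeFileSync } from "node:fs";
import { join } from "node:path";

const SKIP = new Set(["node_modules", ".git", "dist", "target"]);

function collect(dir: string, out: string[]): void {
  for (const name of readdirSync(dir)) {
    if (SKIP.has(name)) continue;
    const path = join(dir, name);
    if (statSync(path).isDirectory()) collect(path, out);
    else if (/\.(ts|tsx|js|rs|py|md|sql)$/.test(name)) {
      out.push(`===== ${path} =====\n${readFileSync(path, "utf8")}`);
    }
  }
}

const chunks: string[] = [];
collect(".", chunks);
writeFileSync("codebase.txt", chunks.join("\n\n"));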
Comment by dzonga 10 hours ago
even dealing with APIs that have MCP servers, the so-called agents make a mess of everything.
my stuff is just regular data stuff - ingest data from x - transform it | make it real time - then pipe it to y
Comment by tintor 1 day ago
Well, merely approving code takes no skill at all.
Comment by roblh 1 day ago
Comment by ositowang 1 day ago
Comment by dubeye 12 hours ago
9/10 of the most important social media users use X, like it or loathe it
Comment by gregorygoc 12 hours ago
Comment by tomlockwood 23 hours ago
Interesting.
Comment by maximedupre 1 day ago
It does hurt; that's why all programmers now need an entrepreneurial mindset... you become one if you use your skills + new AI power to build a business.
Comment by jetsetk 22 hours ago
Comment by maximedupre 10 hours ago
Look, entrepreneurship has never been easy. In fact it's always been one of the hardest things ever. I'm just saying... *you don't have to do it*. Do whatever you want lol
Happy to hear what your solution is to avoid becoming totally replaceable and obsolete.
Comment by xyzsparetimexyz 1 day ago
Comment by maximedupre 23 hours ago
Comment by maximedupre 23 hours ago
Comment by webdevver 11 hours ago
Comment by svara 7 hours ago
Interestingly, when you point out this ...
> IDEs/agent swarms/fallability. Both the "no need for IDE anymore" hype and the "agent swarm" hype is imo too much for right now. The models definitely still make mistakes and if you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side.
... here on HN [0] you get a bunch of people telling you to get with the times, grandpa.
Really makes me wonder: Who are these people and why are they doing that?
Comment by upghost 8 hours ago
I see a lot of comments about folks being worried about going soft, getting brain rot, or losing the fun part of coding.
As far as I'm concerned this is a bigger (albeit kinda flakey) self-driving tractor. Yeah I'd be bored if I just stuck to my one little cabbage patch I'd been tilling by hand. But my new cabbage patch is now a megafarm. Subjectively, same level of effort.
Comment by philipwhiuk 1 day ago
The bits left unsaid:
1. Burning tokens, which we charge you for
2. My CPU does this when I tell it to do bogosort on a million 32-bit integers, it doesn't mean it's a good thing
Comment by appstorelottery 23 hours ago
I've been increasingly using LLM's to code for nearly two years now - and I can definitely notice my brain atrophy. It bothers me. Actually over the last few weeks I've been looking at a major update to a product in production & considered doing the edits manually - at least typing the code from the LLM & also being much more granular with my instructions (i.e. focus on one function at a time). I feel in some ways like my brain is turning into slop & I've been coding for at least 35 years... I feel validated by Karpathy.
Comment by epolanski 23 hours ago
1. Manual coding may be less relevant in the future (although the ability to read, interpret, and understand code will matter more). Likely it already is.
2. Any skill you don't practice becomes "weaker". Let me give you an example. I've played chess since childhood, but sometimes I go months, even years, without playing. When I get back I start losing Elo fast: if I was in the top 10% on chess.com, I drop to the top 30% in the weeks after. But after a few months I'm back in the top 10%. Takeaway: your relative ability is more or less the same compared to other practitioners; you're simply rusty.
Comment by appstorelottery 23 hours ago
Comment by cyanydeez 2 days ago
Like, do these guys actually dogfood the real user experience, or are they all admins with a fast lane to the real model while everyone outside the org has to go through the 10 layers of model shedding, caching, and other means and methods of saving money?
We all know these models are expensive as fuck to run and these companies are degrading service, A/B testing, and the rest. Do they actually ponder these things directly?
Just always seems like people are on drugs when they talk about the capabilities, and like, the drugs could be pure shit (good) or ditch weed, and we all just act like the pipeline for drugs is a consistent thing but it's really not, not at this stage where they're all burning cash through infrastructure. Definitely, like drug dealers, you know they're cutting the good stuff with low-cost cached gibberish.
Comment by quinnjh 1 day ago
Can confirm. My partner's ChatGPT wouldn't return anything useful for her given a specific query involving web use, while I got the desired result sitting side by side. She contacted support and they said there was nothing they could do about it; her account is in an A/B test group with some features removed. I imagine this saves them considerable resources despite still billing customers for them.
how much this is occurring is anyones guess
Comment by bigwheels 1 day ago
The underlying models are all actually really undifferentiated under the covers except for the post-training and base prompts. If you eliminate the base prompts the models behave near identically.
A conspiracy would be a helluva lot more interesting and fun, but I've spoken to these folks firsthand and it seems they already have enough challenges keeping the beast running.
Comment by shawabawa3 2 days ago
I started by copy pasting more and more stuff in chatgpt. Then using more and more in-IDE prompting, then more and more agent tools (Claude etc). And suddenly I realise I barely hand code anymore
For sure there's still a place for manual coding, especially schemas/queries or other fiddly things where a tiny mistake gets amplified, but the vast majority of "basic work" is now just prompting, and honestly the code quality is _better_ than it was before; all kinds of refactors I didn't think about or couldn't be bothered with have happened almost automatically.
And people still call them stochastic parrots
Comment by Macha 1 day ago
ChatGPT 3.5/4 (2023-2024): The chat interface was verbose and clunky and it was just... wrong... like 70+% of the time. Not worth using.
CoPilot autocomplete and Gitlab Duo and Junie (late 2024-early 2025): Wayyy too aggressive at guessing exactly what I wasn't doing and hijacked my tab complete when pre-LLM type-tetris autocomplete was just more reliable.
Copilot Edit/early Cursor (early 2025): Ok, I can sort of see uses here but god is picking the right files all the time such a pain as it really means I need to have figured out what I wanted to do in such detail already that what was even the point? Also the models at that time just quickly descended into incoherency after like three prompts, if it went off track good luck ever correcting it.
Copilot Agent mode / Cursor (late 2025): Ok, great, if the scope is narrowly scoped, and I'm either going to write the tests for it or it's refactoring existing code it could do something. Like something mechanical like the library has a migration where we need to replace the use of methods A/B/C and replace them with a different combination of X/Y/Z. great, it can do that. Or like CRUD controller #341. I mean, sure, if my boss is going to pay for it, but not life changing.
Zed Agent mode / Cursor agent mode / Claude code (early 2026): Finally something where I can like describe the architecture and requirements of a feature, let it code, review that code, give it written instructions on how to clean it up / refactor / missing tests, and iterate.
But that was like 2 years of "really it's better and revolutionary now" before it actually got there. Now maybe in some languages or problem domains, it was useful for people earlier but I can understand people who don't care about "but it works now" when they're hearing it for the sixth time.
And I mean, what one hand gives the other takes away. I have a decent amount of new work dealing with MRs from my coworkers where they just grabbed the requirements from a stakeholder, shoved it into Claude or Cursor and it passed the existing tests and it's shipped without much understanding. When they wrote them themselves, they tested it more and were more prepared to support it in production...
Comment by ed_mercer 1 day ago
Comment by phailhaus 2 days ago
Both can be true. You're tapping into every line of code publicly available, and your day-to-day really isn't that unique. They're really good at this kind of work.
Comment by hollowturtle 1 day ago
Anyone wondering what exactly is he actually building? What? Where?
> The mistakes have changed a lot - they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might do.
I would LOVE to have just syntax errors produced by LLMs. "Subtle conceptual errors that a slightly sloppy, hasty junior dev might do" are neither subtle nor slightly sloppy; they are actually serious and harmful, and no, junior devs don't have the experience to fix those.
> They will implement an inefficient, bloated, brittle construction over 1000 lines of code and it's up to you to be like "umm couldn't you just do this instead?"
Why not just hand-write 100 LOC, with the help of an LLM for tests, documentation and some autocomplete, instead of making it write 1000 LOC and then cleaning it up? That's also very difficult to do; 1000 lines is a lot.
> Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day.
It's a computer program running in the cloud, what exactly did he expected?
> Speedups. It's not clear how to measure the "speedup" of LLM assistance.
See above
> 2) I can approach code that I couldn't work on before because of knowledge/skill issue. So certainly it's speedup, but it's possibly a lot more an expansion.
mmm, not sure. If you don't have domain knowledge you could have an initial stab at the problem, but what about when you need to iterate on it? You can't if you don't have domain knowledge of your own.
> Fun. I didn't anticipate that with agents programming feels more fun because a lot of the fill in the blanks drudgery is removed and what remains is the creative part.
No, it's not fun; e.g. LLMs produce uninteresting UIs, mostly bloated with React/HTML.
> Atrophy. I've already noticed that I am slowly starting to atrophy my ability to write code manually.
My bet is that sooner or later he will go back to coding by hand for periods of time to avoid that, like many others. The damage that over-reliance on these tools brings is serious.
> Largely due to all the little mostly syntactic details involved in programming, you can review code just fine even if you struggle to write it.
No, programming is not "syntactic details"; the practice of programming is everything but "syntactic details". One should learn how to program, not language X or Y.
> What happens to the "10X engineer" - the ratio of productivity between the mean and the max engineer? It's quite possible that this grows a lot.
Yet no measurable economic effects so far.
> Armed with LLMs, do generalists increasingly outperform specialists? LLMs are a lot better at fill in the blanks (the micro) than grand strategy (the macro).
Did people with a smartphone outperform photographers?
Comment by TaupeRanger 1 day ago
Comment by hollowturtle 1 day ago
Comment by jofla_net 7 hours ago
Comment by crystal_revenge 23 hours ago
All of the real-world AI-generated code I have had to review is buggy slop (often with subtle but weird bugs that don't show up for a while). But on HN I'm told "this is because your co-workers don't know how to AI right!!!!" Then when someone who is supposedly an expert at getting things done with AI posts, it's always big claims with hand-wavy explanations/evidence.
Then the comments section is littered with no-effort comments like this.
Yet oddly, whenever anyone asks "show me the thing you built", either it looks like every other half-working vibe-coded CRUD app... or it doesn't exist / can't be shown.
If you tell me you have discovered a miracle tool, just show me the results. Not taking increasingly ridiculous claims at face value is not "fear". What I don't understand is where comments like yours come from. What makes you need this to be more than it is?
Comment by hollowturtle 1 day ago
Comment by Banditoz 23 hours ago
Comment by crystal_revenge 23 hours ago
I've worked extensively in the AI space, and believe that it is extremely useful, but these weird claims (even from people I respect a lot) that "something big and mysterious is happening, I just can't show you yet!" set off my alarms.
When sensible questions are met with ad hominems by supporters, it further sets off alarm bells.
Comment by thr59182617 1 day ago
They have to maintain the hype until a somewhat credible exit appears and therefore lash out with boomer memes, FOMO, and the usual insane talking points like "there are builders and coders".
Comment by simianwords 1 day ago
Comment by hollowturtle 1 day ago
Comment by simianwords 1 day ago
Comment by hollowturtle 1 day ago
Comment by simianwords 1 day ago
Comment by hollowturtle 1 day ago
Comment by potatogun 1 day ago
Comment by simianwords 1 day ago
>Anyone wondering what exactly is he actually building? What? Where?
This is trivially answerable. It seems like they did not do even the slightest bit of research before asking question after question to seem smart and detailed.
Comment by hollowturtle 1 day ago
Comment by simianwords 1 day ago
Comment by felineflock 19 hours ago
Comment by moss_dog 19 hours ago
Comment by yojat661 17 hours ago
Comment by tryauuum 17 hours ago
Comment by solarized 15 hours ago
Comment by globular-toast 6 hours ago
Who doesn't like building? Building without any thought is literally a toy, like Lego or paint by numbers. That's the entire reason those things are popular. But a game is not a job. Sometimes I feel like half the people in this career are children. Never had any real responsibility. "Oh, everyone writes bugs, who tf cares". "Move fast, break stuff" was literally and unironically the tag line for a company that should have been taking far more responsibility.
This trend isn't limited to programmers either. Wherever I look I see people not taking responsibility. Lots of children in adult bodies. I do hope there are some adults who are really pulling the strings somewhere...
Comment by Madmallard 2 days ago
It's such a visual and experiential thing that writing true success criteria it can iterate on seems borderline impossible ahead of time.
Comment by 20260126032624 1 day ago
Or slower, when the LLM doesn't understand what I want, which is a bigger issue when you spawn experiments from scratch (and have given limited context around what you are about to do).
Comment by dysoco 10 hours ago
Which is curious since prototyping helps a lot in gamedev.
Comment by TheGRS 1 day ago
Comment by ex-aws-dude 16 hours ago
Comment by redox99 2 days ago
Comment by lingrush4 7 hours ago
Comment by nadis 2 days ago
> "IDEs/agent swarms/fallability. Both the "no need for IDE anymore" hype and the "agent swarm" hype is imo too much for right now. The models definitely still make mistakes and if you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side. The mistakes have changed a lot - they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might do. The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking. They also don't manage their confusion, they don't seek clarifications, they don't surface inconsistencies, they don't present tradeoffs, they don't push back when they should, and they are still a little too sycophantic. Things get better in plan mode, but there is some need for a lightweight inline plan mode. They also really like to overcomplicate code and APIs, they bloat abstractions, they don't clean up dead code after themselves, etc. They will implement an inefficient, bloated, brittle construction over 1000 lines of code and it's up to you to be like "umm couldn't you just do this instead?" and they will be like "of course!" and immediately cut it down to 100 lines. They still sometimes change/remove comments and code they don't like or don't sufficiently understand as side effects, even if it is orthogonal to the task at hand. All of this happens despite a few simple attempts to fix it via instructions in CLAUDE . md. Despite all these issues, it is still a net huge improvement and it's very difficult to imagine going back to manual coding. TLDR everyone has their developing flow, my current is a small few CC sessions on the left in ghostty windows/tabs and an IDE on the right for viewing the code + manual edits."
Comment by jbjbjbjb 11 hours ago
Depends what we mean by specialist. If it's frontend vs backend, then maybe. If it's a general dev vs some specialist scientific programmer or another field where a generalist won't have a clue, then this seems like a recipe for disaster (literal disasters included).
Comment by sota_pop 21 hours ago
> TLDR
This should be at the start?
I actually have been thinking of trying out ClaudeCode/OpenCode over this past week… can anyone provide experience, tips, tricks, ref docs?
My normal workflow is using free-tier ChatGPT to help me interrogate or plan my solution/approach, or to understand some docs/syntax/best practice with which I'm not familiar, then doing the implementation myself.
Comment by gverrilla 19 hours ago
Comment by cmrdporcupine 9 hours ago
I'm hopeful that 2026 will be the year that the biggest adopters are forced to deal with the mass of product they've created that they don't fully understand, and a push for better tooling is the result.
Today's agentic tools are crude from a UX POV compared to where I am hoping they will end up.
Comment by rileymichael 1 day ago
as the former, i've never felt _more ahead_ than now due to all of the latter succumbing to the llm hype
Comment by neuralkoi 1 day ago
If current LLMs are ever deployed in systems harboring the big red button, they WILL most definitely somehow press that button.
Comment by arthurcolle 1 day ago
Comment by groby_b 1 day ago
If instead we believe in fantasies of a single all-knowing machine god that is 100% correct at all times, then... we really just have ourselves to blame. Might as well just have spammed that button by hand.
Comment by wellpast 22 hours ago
Writing code in many cases is faster to me than writing English (that is how PLs are designed, btw!). LLM/agentic is very "neat" but still a toy to the professional, I would say. I doubt reports like this one. For those of us building real-world products with shelf-lives (is Andrej representative of this archetype?), I just don't see the value-add touted out there. I'd love to be proven wrong. But writing code (in code, not English), to me and many others, is still faster than reading/proving it.
I think there’s a combination of fetishizing and Stockholm syndroming going on in these enthusiastic self-reports. PMW.
Comment by jofla_net 7 hours ago
True, I feel as though I'd have to become Steinbeck to get it to do what I "really" wanted, with all the true nuance.
Comment by superze 1 day ago
On the contrary, if it were for a job in the public sector I would just let the LLM spit out some output and play stupid, since the salary is very low.
Comment by poszlem 12 hours ago
I know this is SF, but to me working with those LLMs feels more and more like that, and the atrophy part is real. Not that the model is literally using our brains as compute, but the relationship can become lopsided.
Comment by randoglando 1 day ago
Comment by jedisct1 8 hours ago
GPT-5.2 is not as good for coding, but much better at thinking and finding bugs, inconsistencies and edge cases.
The only decent way I found to use AI agents is by doing multiple steps between Claude and GPT, asking GPT to review every step of every plan and every single code change from Claude, and manually reviewing and tweaking questions and responses both ways, until all the parties, including myself, agree (a rough sketch of the loop follows this comment). I also sometimes introduce other models like Qwen and K2 into the mix, for a different perspective.
And gosh, by doing so you immediately realize how dumb, unreliable and dangerous code generated by Claude alone is.
It's a slow and expensive process and at the end of the day, it doesn't save me time at all. But, perhaps counterintuitively, it gives me more confidence in the end result. The code is guaranteed to have tons of tests and assurance for edge cases that I may not have thought about.
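The cross-review loop described above can be scripted. Below is a minimal sketch, assuming the official anthropic and openai Python SDKs; the model names, prompts, and single write/review round are placeholders for whatever this commenter actually runs, not a prescribed setup.

```python
# Minimal sketch of a "one model writes, another reviews" loop.
# Model names and prompts are placeholders; adjust to whatever you use.
import anthropic
import openai

claude = anthropic.Anthropic()   # reads ANTHROPIC_API_KEY from the environment
gpt = openai.OpenAI()            # reads OPENAI_API_KEY from the environment

def propose_change(task: str) -> str:
    """Ask Claude for a plan plus a code change for the task."""
    msg = claude.messages.create(
        model="claude-sonnet-4-5",           # placeholder model name
        max_tokens=4000,
        messages=[{"role": "user", "content": f"Plan and implement: {task}"}],
    )
    return msg.content[0].text

def review_change(task: str, proposal: str) -> str:
    """Ask GPT to hunt for bugs, inconsistencies, and unhandled edge cases."""
    resp = gpt.chat.completions.create(
        model="gpt-5.2",                      # placeholder model name
        messages=[
            {"role": "system", "content": "Review the proposed change. "
             "List bugs, inconsistencies and unhandled edge cases."},
            {"role": "user", "content": f"Task: {task}\n\nProposal:\n{proposal}"},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    task = "add retry logic to the upload endpoint"  # example task
    draft = propose_change(task)
    critique = review_change(task, draft)
    # The human stays in the loop: read both, tweak, and iterate by hand.
    print(critique)
```

The manual part the commenter emphasizes still happens outside the script: you read the draft and the critique, tweak both, and rerun until everyone, including you, agrees.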
Comment by uejfiweun 1 day ago
Comment by jerf 1 day ago
Empowering people to do 10 times as much as they could before means they hit 100 times the roadblocks. Again, in a lot of ways we've already lived in that reality for the past many years. On a task-by-task basis programming today is already a lot easier than it was 20 years ago, and we just grew our desires and the amount of controls and process we apply. Problems arise faster than solutions. Growing our velocity means we're going to hit a lot more problems.
I'm not saying you're wrong, so much as saying, it's not the whole story and the only possibility. A lot of people today are kept out of programming just because they don't want to do that much on a computer all day, for instance. That isn't going to change. There's still going to be skills involved in being better than other people at getting the computers to do what you want.
Also on a long term basis we may find that while we can produce entry-level coders that are basically just proxies to the AI by the bucketful that it may become very difficult to advance in skills beyond that, and those who are already over the hurdle of having been forced to learn the hard way may end up with a very difficult to overcome moat around their skills, especially if the AIs plateau for any period of time. I am concerned that we are pulling up the ladder in a way the ladder has never been pulled up before.
Comment by spaceman_2020 1 day ago
The juniors, though, will have to radically upskill. The standard junior dev portfolio can be replicated by Claude Code in like three prompts.
The game has changed and I don't think all the players are ready to handle it
Comment by daxfohl 1 day ago
Comment by tietjens 1 day ago
I personally think the barrier is going to get higher, not lower. And we will be back to being expected to do more.
Comment by q3k 1 day ago
Day after day the global quality of software and learning resources will degrade as LLM grey goo consumes every single nook and cranny of the Internet. We will soon see the first signs of pure cargo cult design patterns, conventions and schemes that LLMs made up and then regurgitated. Only people who learned before LLMs became popular will know that they are not to be followed.
People who aren't learning to program without LLMs today are getting left behind.
Comment by strange_quark 19 hours ago
That is assuming that LLMs plateau in capability, if they haven't already, which I think is highly likely.
Comment by riku_iki 1 day ago
It's the opposite: now, in addition to all the other skills, you need the skill of handling giant codebases of vibe-coded mess using AI.
Comment by lofaszvanitt 22 hours ago
As an added plus: those who already have wealth will benefit the most, instead of the masses, since the distribution and dissemination of new projects is at the same level as before, meaning you would need a lot of money. So no matter how clever you are with an LLM, if you don't have the means to distribute it you will be left in the dirt.
Comment by ares623 22 hours ago
Comment by fragmede 22 hours ago
Comment by ares623 22 hours ago
Never mind the fact that he became successful _because_ of his skills and his brain.
Comment by DeathArrow 1 day ago
Quite insightful.
Comment by MORPHOICES 12 hours ago
Comment by huflungdung 15 hours ago
Comment by MarginalGainz 10 hours ago
Comment by wkh129857 1 day ago
Comment by yojat661 17 hours ago
Comment by soganess 1 day ago
Great idea! Let's pathologize another thing! I love quickly othering whole concepts and putting them in my brain's "bad" box so I can feel superior.
Comment by reducesuffering 1 day ago
https://github.com/karpathy/llm.c
The proof is in the pudding. Let's see your code
Comment by rvz 20 hours ago
He said “…who has never written any production software…” yet you show toy projects instead.
Well done.
Comment by jackling 1 day ago
Comment by lomase 1 day ago
Comment by spaceman_2020 1 day ago
HN used to be a proper place for people actually curious about technology
Comment by vardalab 1 day ago
Comment by kakapo5672 7 hours ago
I also don't get all the hand-wringing. AI is an amazing tool. Use it and be happy.
Even less do I get all the cope about it not being effective, or even useless at some level. When I read posts such as that, it feels like a different planet. Just not my experience at all.
Comment by kejaed 1 day ago
Comment by vardalab 4 hours ago
Comment by zennit 1 day ago
Comment by weirdmantis69 1 day ago
Comment by themafia 1 day ago
Otherwise, I think you're incidentally right, your "ego" /is/ bruised, and you're looking for a way out by trying to prognosticate on the future of the technology. You're failing in two different ways.