Believe the Checkbook
Posted by rg81 1 day ago
Comments
Comment by RandallBrown 1 day ago
It always surprises me that this isn't obvious to everyone. If AI wrote 100% of the code that I do at work, I wouldn't get any more work done because writing the code is usually the easy part.
Comment by skybrian 1 day ago
A shift to not writing code (which is apparently sometimes possible now) and managing AI agents instead is a pretty major industry change.
Comment by gopher_space 1 day ago
It's like how every job requires math if you make it far enough.
Comment by keyle 23 hours ago
Comment by linhns 1 day ago
Comment by trollbridge 20 hours ago
Comment by phantasmish 1 day ago
Imperfectly fixing obvious problems in our processes could gain us 20%, easy.
Which one are we focusing on? AI. Duh.
Comment by saghm 9 hours ago
If they could write exactly what I wanted but faster, I'd probably stop writing code any other way, because that would be a free win with no downside, even if the win were small! They don't write exactly what I want, though, so the tradeoff is whether the time they save me writing is lost to the extra time debugging code they wrote rather than my own. It's not clear to me that the code produced by an LLM right now is close enough to correct, often enough, for this to be a net increase in efficiency for me. Most of the arguments I've seen for investing more of my own time into learning these tools seem to be based on extrapolating trends up to this point, but it's still not clear to me that they'll become good enough to reach a positive ROI for me any time soon. Maybe if the effort to start using them more heavily were lower I'd be willing to try, but from what I can tell, it would take a decent amount of work just to get to the point where I'm producing anything close to what I'm currently producing, and I don't really see the point of doing that while it's still an open question whether the remaining gap will ever close.
Comment by RealityVoid 4 hours ago
Never is a very strong word. I'm not a terribly fast typist but I intentionally trained to be faster because at times I wanted to whip out some stuff and the thought of typing it all out just annoyed me since it took too long. I think typing speed matters and saying it doesn't is a lie. At the very least if you have a faster baseline then typing stuff is more relaxing instead of just a chore.
Comment by lolc 1 hour ago
Comment by Quothling 1 day ago
All I had to do was write a two-line prompt and accept the pull request. It probably took 10 minutes out of my day, most of which was the people I was helping explaining what they thought was wrong. I think it might've taken me all day if I'd had to go through all the code and the documentation and fix it myself. It might even have taken a couple of days, because I probably would've made it less insane.
For other tasks, like when I'm working on embedded software using AI would slow me down significantly. Except when the specifications are in German.
Comment by xnx 1 day ago
Comment by add-sub-mul-div 1 day ago
Comment by aaroninsf 1 day ago
All OSS has been ingested, and all the discussion in forums like this about it, and the personal blog posts and newsletters about it; and the bug tracking; and the pull requests, and...
and training etc. is only going to get better at filtering out what is "best."
Comment by al_borland 1 day ago
At best, what I find online are basic day-1 tutorials and proof-of-concept stuff. None of it could be used in production, where we actually need to handle errors and possible failure situations.
Comment by jmtulloss 21 hours ago
Comment by ewoodrich 20 hours ago
They barely provide anything that qualifies as documentation, and only under NDA, for lock-in reasons/laziness (an ERPish sort of thing narrowly designed for the specific sector, and more or less in a duopoly).
The difficulty in developing solutions is 95% understanding business processes/requirements. I suspect this kind of thing becomes more common the further you get from a "software company" into specific industry niches.
Comment by add-sub-mul-div 1 day ago
Comment by bibimsz 1 day ago
Comment by gowld 1 day ago
How many hours per week did you spend coding on your most recent project? If you could do something else during that time, and the code still got written, what would you do?
Or are you saying that you believe you can't get that code written without spending an equivalent amount of time describing your judgments?
Comment by kibwen 1 day ago
So reducing the part where I go from abstract system to concrete implementation only saves me time spent typing, while at the same time decoupling me from understanding whether the code actually implements the system I have in mind. To recover that coupling, I need to read the code and understand what it does, which is often slower than just typing it myself.
And to even express the system to the code generator in the first place still requires me to mentally bridge the gap between the goal and the system that will achieve that goal, so it doesn't save me any time there.
The exceptions are things where I literally don't care whether the outputs are actually correct, or they're things that I can rely on external tools to verify (e.g. generating conformance tests), or they're tiny boilerplate autocomplete snippets that aren't trying to do anything subtle or interesting.
Comment by ryandrake 1 day ago
Yes, there is artistry, craftsmanship, and "beautiful code" which shouldn't be overlooked. But I believe that beautiful code comes from solid ideas, and that ugly code comes from flawed ideas. So, as long as the (human-constructed) idea is good, the code (whether it is human-typed or AI-generated) should end up beautiful.
Comment by RunSet 1 day ago
Comment by fragmede 1 day ago
Comment by RandallBrown 1 day ago
My judgement is built into the time it takes me to code. I think I would spend the same amount of time exercising it while reviewing the AI code to make sure it isn't doing something silly (even if it does technically work).
A friend of mine recently switched jobs from Amazon to a small AI startup where he uses AI heavily to write code. He says it's improved his productivity 5x, but I don't really think that's the AI. I think it's (mostly) the lack of bureaucracy in his small 2 or 3 person company.
I'm very dubious about claims that AI can improve productivity so much because that just hasn't been my experience. Maybe I'm just bad at using it.
Comment by fragmede 1 day ago
Comment by integralid 17 hours ago
Comment by ehutch79 1 day ago
Comment by jgeada 1 day ago
Speed of typing code is not all that different from the speed of typing English, even accounting for the volume expansion of English -> <favorite programming language>. And then, of course, there is the new extra cost of reading and understanding whatever code the AI wrote.
Comment by ctoth 1 day ago
Okay, you've switched to English. The speed of typing the actual tokens is just about the same but...
The standard library is FUCKING HUGE!
Every concept that you have ever read about? Every professional term, every weird thing that gestures at a whole chunk of complexity/functionality ... Now, if I say something to my LLM like:
> Consider the dimensional twins problem -- how're we gonna differentiate torque from energy here?
I'm able to ... "from physics import Torque, Energy, dimensional_analysis" And that part of the stdlib was written in 1922 by Bridgman!
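The `physics` module in the comment is imaginary, but the dimensional-twins problem it gestures at can be sketched in a few lines. Everything here (the exponent-tuple encoding, treating angle as an extra pseudo-dimension) is an illustration of the idea, not any real library's API:

```python
# "Dimensional twins": torque and energy share the same base dimensions
# (M * L^2 * T^-2), so classic dimensional analysis cannot tell them apart.
# One common workaround is to track angle as an extra pseudo-dimension,
# treating torque as energy per radian.

# Dimensions encoded as exponent tuples: (mass, length, time, angle)
ENERGY = (1, 2, -2, 0)   # joule = kg * m^2 / s^2
TORQUE = (1, 2, -2, -1)  # N*m, treated here as joules per radian

# Ignoring the angle column, the two collapse into the same dimension:
assert ENERGY[:3] == TORQUE[:3]

# With the pseudo-dimension, they become distinguishable:
assert ENERGY != TORQUE
```

Real unit libraries (e.g. Pint) hit the same ambiguity, which is why some of them offer an opt-in angle dimension.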
Comment by JoshTriplett 20 hours ago
And extremely buggy, and impossible to debug, and does not accept or fix bug reports.
AI is like an extremely enthusiastic junior engineer that never learns or improves in any way based on your feedback.
I love working with junior engineers. One of the best parts about working with junior engineers is that they learn and become progressively more experienced as time goes on. AI doesn't.
Comment by integralid 17 hours ago
And come on: AI definitely will become better as time goes on.
Comment by JoshTriplett 17 hours ago
Comment by rootusrootus 1 day ago
I guess we find out which software products just need to be 'good enough' and which need to match the vision precisely.
Comment by layer8 1 day ago
It’s sort of the opposite: You don’t get to the proper judgement without playing through the possibilities in your mind, part of which is accomplished by spending time coding.
Comment by scott_w 1 day ago
Comment by zmj 22 hours ago
Comment by placebo 17 hours ago
Comment by verbify 17 hours ago
The point is still valid, although I've seen it made many times over.
Comment by mk12 19 hours ago
Comment by duskdozer 15 hours ago
But at this point I'm not confident that I'm catching all the LLM-generated text, or that I'm not flagging false positives.
Comment by integralid 17 hours ago
Unlikely. AI keeps improving, and we are already at the point where real people are accused of being AI.
Comment by marbro 6 hours ago
Comment by neilv 1 day ago
Clever pitch. Don't alienate all the people who've hitched their wagons to AI, but push valuing highly-skilled ICs as an actionable leadership insight.
Incidentally, strategy and risk management sound like a pay grade bump may be due.
Comment by zamadatix 1 day ago
> Everyone’s heard the line: “AI will write all the code; engineering as you know it is finished... The Bun acquisition blows a hole in that story.”
But what the article actually discusses, and demonstrates by its end, is that the aspects of engineering beyond writing the code are where the value of human engineers lies at this point. To me that doesn't seem like an example of a revealed preference. If you take it back to the first part of the original quote above, it's just a different wording: AI is the code writer and engineering is something different.
I think what the article really means to argue against is the claim/conclusion "because AI can generate lots of code, we don't need any kind of engineer," but that's just not what the quote they chose to set out against is saying. Without changing that claim, the acquisition of Bun is not really a counterexample; Bun had simply already changed the way they do engineering, so that the AI wrote the code and the engineers did the other things.
Comment by croes 1 day ago
And what about vibe coding? The whole point and selling point of many AI companies is that you don’t need experience as a programmer.
So they sell something that isn’t true, it’s not FSD for coding but driving assistance.
Comment by imron 1 day ago
The house of the feeble minded: https://www.abelard.org/asimov.php
Comment by zamadatix 1 day ago
Comment by fwip 1 day ago
Comment by zamadatix 8 hours ago
> Tighten the causal claim: “AI writes code → therefore judgment is scarce”
As one of the first suggestions, so it's not something inherent to whether the article used AI in some way. Regardless, I care less about how the article got written and more about what conclusions really make sense.
Comment by fwip 1 day ago
> The Bun acquisition blows a hole in that story.
> That contradiction is not a PR mistake. It is a signal.
> The bottleneck isn’t code production, it is judgment.
> They didn’t buy a pile of code. They bought a track record of correct calls in a complex, fast-moving domain.
> Leaders don’t express their true beliefs in blog posts or conference quotes. They express them in hiring plans, acquisition targets, and compensation bands.
Not to mention the gratuitous italics-within-bold usage.
Comment by JSR_FDED 21 hours ago
I don’t know if HN has made me hyper-sensitized to AI writing, but this is becoming unbearable.
When I find myself thinking “I wonder what the prompt was they used?” while reading the content, I can’t help but become skeptical about the quality of the thinking behind the content.
Maybe that’s not fair, but it’s the truth. Or put differently “Fair? No. Truthful? Yes.”. Ugh.
Comment by conductr 1 day ago
Technically, there’s still a horse buggy whip market, an abacus market, and probably anything else you think technology consumed. It’s just a minuscule fraction of what it once was.
Comment by marcosdumay 1 day ago
All the last productivity multipliers in programming led to increased demand. Do you really think the market is saturated now? And what saturated it is one of the least impactful "revolutionary" tools we got in our profession?
Keep in mind that looking at statistics won't lead to any real answer, everything is manipulated beyond recognition right now.
Comment by conductr 19 hours ago
Also I do hold a belief that most tech companies are taking a cost/labor reduction strategy for a reason, and I think that’s because we’re closing a period of innovation. Keeping the lights on, or protecting their moats, requires less labor.
Comment by 9rx 11 hours ago
This AI craze swooped in at the right time to help hold up the industry and is the only thing keeping it together right now. We're quickly building all the low-hanging fruit for it, keeping many developers busy (though not like it used to be), but there isn't much of that fruit to build. LLMs don't have the breadth of need that previous computing revolutions had. Once we've added chat interfaces to everything, which is far from a Herculean task, the low-hanging fruit will be gone. That's quite unlike previous revolutions, where we effectively had to build all the software from scratch, not just slap some lipstick on existing software.
If we want to begin to relive the past, we need a new hardware paradigm that needs all the software rewritten for it again. Not an impossible thought, but all the low-hanging hardware directions have also been picked at this point so the likelihood of that isn’t what it used to be either.
Comment by marcosdumay 6 hours ago
They didn't. But it may be a relevant point that all of that was slow enough to spread that we can't clearly separate them.
Anyway, the idea that any one of those large markets is at saturation point requires some data. AFAIK, anything from mainframe software to phones has (relatively) exploded in popularity every time somebody made them cheaper, so that is a claim that all of those just changed (too recently to measure), without any large thing to correlate them.
> That's quite unlike previous revolutions where we had to build all the software from scratch
We have rewritten everything from scratch exactly once since high-level languages were created in the 70s.
Comment by faxmeyourcode 1 day ago
> Everyone’s heard the line: “AI will write all the code; engineering as you know it is finished.”
Software engineering pre-LLMs will never, ever come back. Lots of folks are not understanding that. What we're doing at the end of 2025 looks so much different than what we were doing at the end of 2024. Engineering as we knew it a year or two ago will never return.
Comment by maccard 1 day ago
I use AI as a smart autocomplete. I've tried multiple tools on multiple models, and I still _regularly_ have it dump absolute nonsense into my editor: in the best case it's gone off on a tangent, but in the most common case it's assumed something (often directly contradicting what I've asked it to do), gone with it, and lost the plot along the way. Of course, when I correct it, it says "you're right, X doesn't exist so we need to do X"…
Has it made me faster? Yes. Has it changed engineering? Not even close. There's absolutely no world where I would trust what I've seen out of these tools to run in the real world, even with supervision.
Comment by geitir 17 hours ago
Comment by bonesss 16 hours ago
In startups I’ve competed against companies with 10x and 100x the resources and manpower on the same systems we were building. The amount of code they theoretically could push wasn’t helping them, they were locked to the code they actually had shipped and were in a downwards hiring spiral because of it.
Comment by maccard 14 hours ago
Comment by hapless 1 day ago
Comment by recursive 1 day ago
Comment by jollyllama 1 day ago
Comment by TheCraiggers 1 day ago
Comment by kubb 1 day ago
I can’t see how buying a runtime for the sake of Claude Code makes sense.
Comment by drcode 1 day ago
This argument requires us to believe that AI will just asymptote and not get materially better.
Five years from now, I don't think anyone will make these kinds of acquisitions anymore.
Comment by nitwit005 1 day ago
I assume this is at least partially a response to that. They wouldn't buy a company now if it would actually happen that fast.
Comment by 0x3f 1 day ago
That's not what asymptote means. Presumably what you mean is the curve levelling off, which it already is.
Comment by SoftTalker 1 day ago
Comment by 0x3f 1 day ago
Comment by bigstrat2003 1 day ago
It hasn't gotten materially better in the last three years. Why would it do so in the next three or five years?
Comment by bitwize 1 day ago
Comment by Rakshath_1 1 day ago
Comment by barfoure 1 day ago
I don’t know why the acquisition happened, or what the plans are. But it did happen, and for this we don’t have to suspend disbelief. I don’t doubt Anthropic has plans that they would rather not divulge. This isn’t a big stretch of imagination, either.
We will see how things play out, but people are definitely being displaced by AI software doing work, and people are productive with these tools. I know I am. The user counts of Claude Code, Gemini, and ChatGPT don't lie, so let's not kid ourselves.