Opus 4.5 is the first model that makes me fear for my job
Posted by nomilk 1 day ago
Comments
Comment by jchw 1 day ago
I'm honestly not complaining about the model releases, though. Despite their shortcomings, they are extremely useful. I've found Gemini 3 to be an extremely useful learning aid, as long as I don't blindly trust its output, and if you're trying to learn, you really ought not to do that anyway. (Despite what people and benchmarks say, I've already caught some random hallucinations; it still feels like you're likely to run into them on a regular basis. Not a huge problem, but, you know.)
Comment by techblueberry 1 day ago
Comment by krackers 1 day ago
Comment by Forgeties79 1 day ago
Comment by pogue 1 day ago
Crypto was just that, a pure grift where they were creating something out of nothing and rugpulling when the hype was highest.
AI is actually creating something: it's generating replacements for artists, for creatives, for musicians, for writers, for programmers. It's literally capable of generating something from _almost_ nothing. Of course, you have to factor in energy usage etc., but the end user sees none of that. They type a request and it generates an output.
It may be easily identifiable slop today, but it's getting better and better at a RAPID rate. We all need to recognize this.
I don't know what to do with the knowledge that it's coming for our jobs. Adapt or die? I don't know...
Comment by krackers 1 day ago
The common thread is that there's no nuanced discussion to be found, technical or otherwise. It's topics optimized for viral engagement.
Comment by pogue 1 day ago
I see what you're saying; that's a different aspect entirely. I don't know how much people are making from viral posts on Twitter (or fb?) from that kind of thing.
But outside of those specific platforms, there's quite a bit of discussion on Reddit, and this site has had some of the best. The good tech sites like Ars, Verge, Wired, and The Register all have excellent, realistic coverage of what's going on.
I think if you're only seeing hype, I'd ask where you're looking. And on the flip side, there's the very anti-AI crowd, who I'm sure might be getting that same kind of reach with their target audience by preaching the evils & immorality of it.
Comment by techblueberry 1 day ago
Comment by pogue 21 hours ago
Comment by prymitive 1 day ago
Comment by odla 1 day ago
Comment by heavyset_go 1 day ago
Comment by channel_t 1 day ago
Comment by epolanski 1 day ago
Otherwise, with all due respect, there's very little of value to learn in that subreddit.
Comment by channel_t 1 day ago
Comment by unsupp0rted 22 hours ago
Comment by quantumHazer 1 day ago
1) it’s not impartial
2) it’s useless hype commentary
3) it’s literally astroturfing at this point
Comment by heavyset_go 1 day ago
Comment by hecanjog 1 day ago
Comment by yellow_lead 1 day ago
In threads where I see an example of what the author is impressed by, I'm usually not impressed. So when I see something like this, where the author doesn't give any examples, I also assume Claude did something unimpressive.
Comment by nharada 1 day ago
Comment by markus_zhang 1 day ago
Pick anything else and you have a far better chance of falling back on a manual process, a legal wall, or whatever else AI cannot replace easily.
Good job boys and girls. You will be remembered.
Comment by pton_xd 1 day ago
Prompting an AI just doesn't have the same feeling, unfortunately.
Comment by sunshowers 1 day ago
The document is human-crafted and human-reviewed, and it primarily targets humans. The fact that it works for machines is a (pretty neat) secondary effect, but not really the point. And the document sped up the act of doing the refactors by around 5x.
The whole process was really fun! It's not really vibe coding at that point (I continue to be relatively unimpressed by vibe coding beyond a few hundred lines of code). It's closer to old-school waterfall-style development, though with much quicker iteration cycles.
Comment by cmarschner 1 day ago
It brings the “what to build” question front and center, while “how to build it” has become much, much easier and more productive.
Comment by markus_zhang 1 day ago
Same thing for science. I don't mind if AI could solve all those problems, as long as they can teach me. Those problems are already "solved" by the universe anyway.
Comment by Hamuko 1 day ago
There's so much half-working AI-generated code everywhere that I'd feel ashamed if I had to ever meet our customers.
I think the thing that gives me the most value is code review. So basically I first review my code myself, then have Claude review it, and then submit it for someone else to approve.
Comment by markus_zhang 1 day ago
Maybe it's just because my side projects are fairly elementary.
And I agree that AI is pretty good at code review, especially if the code contains complex business logic.
Comment by skybrian 1 day ago
Comment by agumonkey 1 day ago
Comment by harrall 1 day ago
The commonality of people working on AI is that they ALL know software. They make a product that solves the thing that they know how to solve best.
If all lawyers knew how to write code, we'd see more legal AI startups. But lawyers and coders are not a common overlap, surely nowhere near as common as SWEs and coders.
Comment by agumonkey 1 day ago
Comment by neoromantique 1 day ago
Comment by Lionga 1 day ago
Good job AI fanboys and girls. You will be remembered when this fake hype is over.
Comment by markus_zhang 1 day ago
Comment by sidibe 1 day ago
I don't really see why anywhere near the number of great jobs this industry has had will be justifiable in a year. The only comfort is that all the other industries will be facing the same issue, so accommodations will have to be made.
Comment by markus_zhang 1 day ago
Damn it, I’m only 40+, so I still need to work more or less 15 years even if we live frugally.
Comment by heckintime 1 day ago
Comment by agumonkey 1 day ago
Comment by heckintime 1 day ago
Comment by exabrial 1 day ago
Comment by jsheard 1 day ago
https://www.reddit.com/r/ClaudeAI/comments/1pe6q11/deep_down...
https://www.reddit.com/r/ClaudeAI/comments/1pb57bm/im_honest...
https://www.reddit.com/r/ChatGPT/comments/1pm7zm4/ai_cant_ev...
https://www.reddit.com/r/ArtificialInteligence/comments/1plj...
https://www.reddit.com/r/ArtificialInteligence/comments/1pft...
https://www.reddit.com/r/AI_Agents/comments/1pb6pjz/im_hones...
https://www.reddit.com/r/ExperiencedDevs/comments/1phktji/ai...
https://www.reddit.com/r/csMajors/comments/1pk2f7b/ (cached title: Your CS degree is worthless. Switch over. Now.)
Comment by quantumHazer 1 day ago
Comment by bachmeier 1 day ago
Comment by crystal_revenge 1 day ago
> Taking longer than usual. Trying again shortly (attempt 1 of 10)
> ...
> Taking longer than usual. Trying again shortly (attempt 10 of 10)
> Due to unexpected capacity constraints, Claude is unable to respond to your message. Please try again soon.
I guess I'll have to wait until later to feel the fear...
Comment by wdb 1 day ago
Comment by diavelguru 1 day ago
Comment by giancarlostoro 1 day ago
Comment by uniclaude 1 day ago
That’s a reason why I can’t believe the benchmarks, and why I also believe open source models (claiming 200k but realistically struggling past 40k) aren’t just a bit behind SOTA in actual software dev, but very far behind.
This is not true for all software, but there are types of systems or environments where it’s abundantly clear that Opus (or anything with a sub 1m window) won’t cut it, unless it has a very efficient agentic system to help.
I’m not talking about dumping an entire code base in the context, I’m talking about clear specs, some code, library guidelines, and a few elements to allow the LLM to be better than a glorified autocomplete that lives in an electron fork.
Sonnet still wins easily.
Comment by orwin 1 day ago
It's definitely more useful than I was in the first 5 years of my professional career, though, so for people who don't improve fast, or for average new grads, this can be a problem.
Comment by Aperocky 1 day ago
If I was only writing code, the fear would be completely justified.
Comment by themafia 1 day ago
> do not know what's coming for us in the next 2-3 years, hell, even next year might be the final turning point already.
What is this based on? Research? Data? Gut feeling?
> but how long will it be until even that is not needed anymore?
You just answered that. 2 to 3 years, hell, even next year, maybe.
> it also saddens me knowing where all of this is heading.
If you know where this is heading why are you not investing everything you have in these companies? Isn't that the obvious conclusion instead of wringing your hands over the loss of a coding job?
It invents a problem, provides a timeline, immediately questions itself, and then confidently prognosticates without any effort to explain the information used to arrive at this conclusion.
What am I supposed to take from this? Other than that people are generally irrational when contemplating the future?
Comment by gtowey 1 day ago
Comment by kami8845 1 day ago
Comment by bgwalter 1 day ago
"The overwhelming consensus in this thread is that OP's fear is justified and Opus represents a terrifying leap in capability. The discussion isn't about if disruption is coming, but how severe it will be and who will survive."
My fellow Romans, I come here not to discuss disruption, but to survive!
Comment by Aayush28260 1 day ago
Comment by iSloth 1 day ago
Comment by int32_64 1 day ago
qwen3-coder blew me away.
Comment by AndyKelley 1 day ago
Comment by simonw 1 day ago
Right: if you expect your job as a software developer to be effectively the same shape in a year or two, you're in for a bad time.
But humans can adapt! Your goal should be to evolve with the tools that are available. In a couple of years' time you should be able to produce significantly more, better code, solving more ambitious problems and making you more valuable as a software professional.
That's how careers have always progressed: I'm a better, faster developer today than I was two years ago.
I'll worry for my career when I meet a company that has a software roadmap that they can feasibly complete.
Comment by th0ma5 1 day ago
Comment by terabytest 1 day ago
Comment by scosman 1 day ago
Comment by outside1234 1 day ago
Something doesn't square about this picture: either this is the best thing since sliced bread and it should be wildly profitable, or ... it's not, and it's losing a lot of money because they know there isn't a market at a breakeven price.
Comment by simonw 1 day ago
They have several billion dollars of annual revenue already.
Comment by throw310822 1 day ago
Comment by outside1234 1 day ago
If OpenAI is only going to be profitable (aka has an actual business model) if other companies aren't training a competitive model, then they are toast. Which is my point. They are toast.
Comment by ben_w 1 day ago
In principle, I mean. Obviously there's a sense in which it doesn't matter if they only get fined for cross-subsidising/predatory pricing/whatever *after* OpenAI et al run out of money.
I do think this is a bubble and I do expect most or all the players to fail, but that's because I think they're in an all-pay auction and may be incentivised to keep spending way past the break-even point just for a chance to cut their losses.
Comment by outside1234 1 day ago
Comment by ben_w 23 hours ago
But as a gut-check, even if all the people not complaining about it are getting use out of any given model, does this justify the ongoing cost of training new models?
If you could delete the ongoing training costs of new models from all the model providers, all of them look a lot healthier.
I guess I have a question about your earlier comment:
> Google is always going to be training a new model and are doing so while profitable.
While Google is profitable, or while the training of new models is profitable?
Comment by paulddraper 1 day ago
Comment by _wire_ 1 day ago