Claude Code Daily Benchmarks for Degradation Tracking
Posted by qwesr123 3 hours ago
Comments
Comment by ofirpress 2 hours ago
Comment by Davidzheng 2 hours ago
Comment by botacode 1 hour ago
They don't have to be malicious operators in this case. It just happens.
Comment by bgirard 52 minutes ago
It doesn't have to be malicious. If my workflow is to send a prompt once and hopefully accept the result, then degradation matters a lot. If degradation is causing me to silently get worse code output on some of my commits it matters to me.
I care about -expected- performance when picking which model to use, not optimal benchmark performance.
Comment by Aurornis 15 minutes ago
The non-determinism means that even with a temperature of 0.0, you can’t expect the outputs to be the same across API calls.
In practice people tend to anchor on the best results they’ve experienced and view anything else as degradation, when it may just be randomness in either direction from the prompts. When you’re getting good results you assume it’s normal; when things feel off you think something abnormal is happening. Rerun the exact same prompts and context with temperature 0 and you might get a different result.
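A minimal sketch of what that looks like, assuming the Anthropic Python SDK (the model id here is just a placeholder):

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def ask(prompt: str) -> str:
        resp = client.messages.create(
            model="claude-opus-4-5",   # placeholder model id
            max_tokens=256,
            temperature=0.0,           # nominally "deterministic" sampling
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text

    a = ask("Refactor this function to remove the nested loop: ...")
    b = ask("Refactor this function to remove the nested loop: ...")
    print(a == b)  # can be False: batching and kernel selection on the serving
                   # side make identical requests non-reproducible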
Comment by novaleaf 13 minutes ago
Comment by altcognito 46 minutes ago
Comment by chrisjj 10 minutes ago
Comment by FL33TW00D 15 minutes ago
e.g.:

    if batch_size > 1024: kernel_x()
    else: kernel_y()
Comment by pertymcpert 10 minutes ago
Comment by megabless123 2 hours ago
Comment by exitb 1 hour ago
Comment by codeflo 1 hour ago
Comment by TedDallas 1 hour ago
“… To state it plainly: We never reduce model quality due to demand, time of day, or server load. …”
So according to Anthropic they are not tweaking quality settings due to demand.
Comment by rootnod3 1 hour ago
And according to Meta, they always give you ALL the data they have on you when requested.
Comment by entropicdrifter 41 minutes ago
However, the request form is on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard'.
Comment by groundzeros2015 34 minutes ago
Comment by AlexandrB 17 minutes ago
Comment by chrisjj 9 minutes ago
Comment by cmrdporcupine 56 minutes ago
I've seen sporadic drops in reasoning skills that made me feel like it was January 2025, not 2026 ... inconsistent.
Comment by quadrature 14 minutes ago
Comment by root_axis 25 minutes ago
Comment by cmrdporcupine 22 minutes ago
these things are by definition hard to reason about
Comment by mcny 1 hour ago
Sure, I'll take a cup of coffee while I wait (:
Comment by lurking_swe 1 hour ago
at least i would KNOW it’s overloaded and i should use a different model, try again later, or just skip AI assistance for the task altogether.
Comment by direwolf20 1 hour ago
Comment by denysvitali 1 hour ago
Comment by chrisjj 1 hour ago
Comment by bpavuk 1 hour ago
welcome to Silicon Valley, I guess. everything from Google Search to Uber is fraud. Uber is a classic example of this playbook, even.
Comment by copilot_king 1 hour ago
Comment by rootnod3 1 hour ago
Comment by sh3rl0ck 44 minutes ago
Comment by awestroke 1 hour ago
Comment by vidarh 1 hour ago
Comment by seunosewa 9 minutes ago
Comment by chrisjj 1 hour ago
Comment by kingstnap 53 minutes ago
> How do I know which model Gemini is using in its responses?
> We believe in using the right model for the right task. We use various models at hand for specific tasks based on what we think will provide the best experience.
Comment by chrisjj 7 minutes ago
Comment by Wheaties466 1 hour ago
Comment by chrisjj 1 hour ago
Comment by cmrdporcupine 2 hours ago
I don't know if they do this or not, but the nature of the API is such that you could absolutely load balance this way. The context sent at each point is not, I believe, "sticky" to any server.
TL;DR: you could get a "stupid" response and then a "smart" response within a single session because of heterogeneous quantization / model behaviour in the cluster.
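A rough sketch of the idea (the pools and quantization labels are hypothetical, purely to illustrate that a stateless request can land anywhere):

    import random

    # Hypothetical serving pools; not Anthropic's actual setup.
    BACKENDS = [
        {"pool": "a", "quant": "bf16"},  # full-precision replicas
        {"pool": "b", "quant": "fp8"},   # quantized replicas (hypothetical)
    ]

    def route(messages):
        # Each request carries the full conversation, so there's no session
        # affinity: turn N and turn N+1 can land on different pools.
        return random.choice(BACKENDS)

    # Two consecutive turns of the "same" session:
    print(route([{"role": "user", "content": "write the function"}]))
    print(route([{"role": "user", "content": "now fix the bug"}]))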
Comment by epolanski 1 hour ago
Comment by cmrdporcupine 1 hour ago
Comment by mohsen1 2 hours ago
How do you pay for those SWE-bench runs?
I am trying to run a benchmark but it is too expensive to run enough runs to get a fair comparison.
Comment by ofirpress 2 hours ago
Comment by Dolores12 2 hours ago
Comment by Deklomalo 2 hours ago
Comment by epolanski 2 hours ago
Comment by sejje 1 hour ago
Comment by plagiarist 55 minutes ago
Comment by mohsen1 2 hours ago
Thanks!
Comment by seunosewa 1 hour ago
Comment by GoatInGrey 1 hour ago
Comment by rootnod3 1 hour ago
"You can't measure my Cloud Service's performance correctly if my servers are overloaded"?
"Oh, you just measured me at bad times each day. On only 50 different queries."
So, what does that mean? I have to pick specific times during the day for Claude to code better?
Does Claude Code have office hours basically?
Comment by copilot_king 1 hour ago
Yes. Now pay up or you will be replaced.
Comment by rootnod3 1 hour ago
Comment by cedws 2 hours ago
Comment by bredren 2 hours ago
It’s a terrific idea to provide this. An Isitdownorisitjustme for LLMs would be the canary in the coal mine that could at least inform the multitude of discussion threads about suspected dips in performance (beyond HN).
What we could also use is similar stuff for Codex, and eventually Gemini.
Really, the providers themselves should be running these tests and publishing the data.
Availability status information alone is no longer sufficient to gauge service delivery, because the service is by nature non-deterministic.
Comment by chrisjj 1 hour ago
Are you suggesting result accuracy varies with server load?
Comment by epolanski 2 hours ago
Comment by dana321 1 hour ago
Aha, so the models do degrade under load.
Comment by antirez 2 hours ago
1. The percentage drop is too low, and it oscillates, going up and down (a rough sanity check below).
2. A baseline of Sonnet 4.5 (the obvious substitute for when their GPUs are busy with the next training run) should be established, to see whether Opus at some point drops to Sonnet level. This was not done, but if it were we would likely see a much sharper decline on certain days / periods; the graph would look dominated by a "square wave" shape.
3. There are much better explanations for this oscillation: A) They have multiple checkpoints and are A/B testing; CC asks you for feedback about the session. B) Claude Code itself gets updated, and the exact tool versions the agent can use change. Part of it is the natural variability from token sampling that makes runs non-equivalent (sometimes the model makes suboptimal decisions compared to T=0) as well as non-deterministic, but that is the price to pay for some variability.
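On point 1, a back-of-the-envelope check of what ~50 tasks per run can resolve (illustrative numbers, not taken from the benchmark):

    from math import sqrt

    # Is a daily pass rate of 33/50 distinguishable from a 72% baseline,
    # given only ~50 tasks per run? (Illustrative numbers.)
    def z_score(passes, n, baseline):
        p_hat = passes / n
        se = sqrt(baseline * (1 - baseline) / n)  # normal approximation
        return (p_hat - baseline) / se

    print(f"z = {z_score(33, 50, 0.72):.2f}")  # about -0.94; |z| < 1.96, i.e.
                                               # within ordinary sampling noise

A six-point daily dip on 50 tasks is well inside what random variation alone can produce.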
Comment by levkk 1 hour ago
Comment by emp17344 48 minutes ago
Comment by warkdarrior 1 hour ago
Comment by GoatInGrey 55 minutes ago
Comment by eterm 1 hour ago
Why January 8? Was that an outlier high point?
IIRC, Opus 4.5 was released in late November.
Comment by pertymcpert 8 minutes ago
Comment by littlestymaar 1 hour ago
Comment by eterm 1 hour ago
A benchmark like this ought to start fresh from when it is published.
I don't entirely doubt the degradation, but the choice of where they went back to feels a bit cherry-picked to demonstrate the value of the benchmark.
Comment by littlestymaar 1 hour ago
If anything, it's consistent with the fact that they very likely didn't have data earlier than January 8th.
Comment by littlestymaar 1 hour ago
How do you define “too low”? They make sure to communicate the statistical significance of their measurements; what's the point if people can just claim it's “too low” based on personal vibes…
Comment by dmos62 57 minutes ago
Comment by Dowwie 2 hours ago
Comment by preuceian 2 hours ago
Comment by gordonhart 50 minutes ago
Comment by mrbananagrabber 2 hours ago
Comment by ctxc 2 hours ago
It's not my fault, they set high standards!
Comment by Trufa 2 hours ago
Comment by sejje 1 hour ago
It's the only time cussing worked, though.
Comment by mhl47 1 hour ago
Comment by smotched 2 hours ago
Comment by silverlight 2 hours ago
It was probably 3x faster than usual. I got more done in the next hour with it than I do in half a day usually. It was definitely a bit of a glimpse into a potential future of “what if these things weren’t resource constrained and could just fly”.
Comment by yoavsha1 2 hours ago
Comment by cmrdporcupine 2 hours ago
Comment by nlh 35 minutes ago
Comment by svdr 2 hours ago
Comment by dajonker 2 hours ago
Comment by 9cb14c1ec0 8 minutes ago
Comment by kilroy123 1 hour ago
This last week it seems way dumber than before.
Comment by eli 1 hour ago
Anthropic does not exactly act like they're constrained by infra costs in other areas, and noticeably degrading a product when you're in tight competition with 1 or 2 other players with similar products seems like a bad place to start.
I think people just notice the flaws in these models more the longer they use them. Aka the "honeymoon-hangover effect," a real pattern that has been shown in a variety of real world situations.
Comment by Roark66 41 minutes ago
Comment by rustyhancock 1 hour ago
Ultimately I can understand that if a new model is coming in without as much optimization, it'll add pressure on the older models to achieve the same result.
Nice plausible deniability for a convenient double effect.
Comment by YetAnotherNick 2 hours ago
Comment by kittikitti 1 minute ago
Comment by qwesr123 3 hours ago
Comment by goldenarm 3 hours ago
The larger monthly scale should be the default, or you should get more samples.
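Rough intuition for why (illustrative numbers, assuming a pass rate somewhere around 70%):

    from math import sqrt

    # 95% margin of error on a measured pass rate near 70%, vs. number of samples.
    for n in (50, 200, 450, 1500):
        moe = 1.96 * sqrt(0.7 * 0.3 / n)
        print(f"n = {n:4d}  ->  +/- {moe:.1%}")

At ~50 tasks a day, swings of several points are expected even if nothing changed (roughly ±13 points at n=50 vs. about ±2 at n=1500); aggregating to a month shrinks the error bars enough to separate drift from noise.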
Comment by zacmps 2 hours ago
Comment by goldenarm 2 hours ago
Comment by wendgeabos 28 minutes ago
Comment by parquor 57 minutes ago
On HN a few days ago there was a post suggesting that Claude gets dumber throughout the day: https://bertolami.com/index.php?engine=blog&content=posts&de...
Comment by drc500free 25 minutes ago
Comment by jampa 1 hour ago
"You have a bug in line 23." "Oh yes, this solution is bugged, let me delete the whole feature." That one-line fix, which I could make even with ChatGPT 3.5, just can't happen. Workflows that I use and that are very reproducible start to flake and then fail.
After a certain number of tokens per day, it becomes unusable. I like Claude, but I don't understand why they would do this.
Comment by arcanemachiner 1 hour ago
Comment by chrisjj 49 minutes ago
More: they probably don't know whether they've got a good answer 100% of the time.
It is interesting to note that this trickery is workable only where the best answers are sufficiently poor. Imagine they ran almost any other kind of online service, such as email, stock prices or internet banking. Occasionally delivering only half the emails would trigger a customer exodus. But if normal service lost a quarter of emails, they'd have only customers who'd likely never notice half missing.
Comment by DanielHall 1 hour ago
Comment by stared 1 hour ago
I would be curious to see how it fares against a constant harness.
There were threads claiming that Claude Code got worse with 2.0.76, with some people going back to 2.0.62. https://github.com/anthropics/claude-code/issues/16157
So it would be wonderful to measure these.
Comment by Jcampuzano2 1 hour ago
I wouldn't be surprised if the thing this is actually benchmarking is just Claude Code's constant system prompt changes.
I wouldn't really trust this to be able to benchmark Opus itself.
Comment by WhitneyLand 1 hour ago
I would suggest adding some clarification to note that longer measures like the 30 pass rate are raw data only, while the statistically-significant labels apply only to the change.
Maybe something like: "Includes all trials; significance labels apply only to confidence in the change vs. baseline."
Comment by copilot_king 1 hour ago
TikTok used to give new uploaders a visibility boost (i.e., an inflated number of likes and comments) on their first couple of uploads, to get them hooked on the service.
In Anthropic/Claude's case, the strategy is (allegedly) to give new users access to the premium models on sign-up, and then increasingly cut the product with output from cheaper models.
Comment by chrisjj 31 minutes ago
Anthropic did sell a particular model version.
Comment by elmean 37 minutes ago
Comment by sd9 46 minutes ago
If this measure were hardened up a little, it would be really useful.
It feels like an analogue to an employee’s performance over time - you could see in the graphs when Claude is “sick” or “hungover”, when Claude picks up a new side hustle and starts completely phoning it in, or when it’s gunning for a promotion and trying extra hard (significant parameter changes). Pretty neat.
Obviously the anthropomorphising is not real, but it is cool to think of the model’s performance as being a fluid thing you have to work with, and that can be measured like this.
I’m sure some people, most, would prefer that the model’s performance were fixed over time. But come on, this is way more fun.
Comment by beardsciences 3 hours ago
Comment by chrisjj 29 minutes ago
Is exacerbating this issue ... if the load theory is correct.
Comment by rplnt 1 hour ago
Comment by sciencejerk 2 hours ago
Comment by observationist 1 hour ago
They should be transparent and tell customers that they're trying to not lose money, but that'd entail telling people why they're paying for service they're not getting. I suspect it's probably not legal to do a bait and switch like that, but this is pretty novel legal territory.
Comment by Trufa 2 hours ago
Comment by Uehreka 2 hours ago
Comment by emp17344 46 minutes ago
Comment by giwook 2 hours ago
Comment by observationist 1 hour ago
Just ignore the continual degradation of service day over day, long after the "infrastructure bugs" have reportedly been solved.
Oh, and I've got a bridge in Brooklyn to sell ya, it's a great deal!
Comment by alias_neo 1 hour ago
Forgive me, but as a native English speaker, this sentence says exactly one thing to me: We _do_ reduce model quality, just not for these listed reasons.
If they don't do it, they could put a full stop after the fifth word and save some ~~tokens~~ time.
Comment by Topfi 1 hour ago
Very simple queries, even those easily answered via regular web searching, have begun to consistently fail to produce accurate results with Opus 4.5, despite the same prompts previously yielding accurate results.
One task I had already considered fully saturated, since most recent releases had no issues solving it, was requesting a list of material combinations for fabrics used in bag construction that utilise a specific fabric base. In the last two weeks, Claude has consistently and reproducibly provided results which deviate from the requested fabric base, making them inaccurate in a way that a person less familiar with the topic may not notice instantly. There are other queries of this type, on topics I am nerdily familiar with to a sufficient degree to notice such deviations from the prompt (motorcycle-history queries, for example), so I can say this behaviour isn't limited to fabrics and bag construction.
Looking at the reasoning traces, Opus 4.5 even writes down the correct information, yet somehow provides an incorrect final output anyways.
What makes this so annoying is that in coding tasks, with extensive prompts that require far greater adherence to very specific requirements in a complex code base, Opus 4.5 does not show such a regression.
I can only speculate about what may lead to such an experience, but for non-coding tasks I have seen regressions in Opus 4.5, whereas for coding I have not. Not saying there is none, but I wanted to point it out, since such discussions are often primarily focused on coding, where I find it can be easier to see potential regressions where there are none, as a project goes on and tasks become inherently more complex.
My coding benchmarks are a series of very specific prompts modifying a few existing code bases in some rather obscure ways, with which I regularly check whether a model severely deviates from what I'd seen previously. Each run starts with a fresh code base and some fairly simple tasks, then gets increasingly complex, with the later prompts not yet solved by any LLM I have tested.
Partly that setup originated from my subjective experience with LLMs early on: a lot of things worked very well, but as a project went on and I tried more involved things the model struggled with, I felt like the model was overall worse, when in reality what had changed were simply the requirements and task complexity, since the project had grown and the easier tasks were already done.
In this type of testing, Opus 4.5 this week got as far as, and provided results as good as, it did in December. Of course, past regressions were limited to specific users, so I am not saying that no one is experiencing reproducible regressions in code output quality, merely that I cannot reproduce them in my specific suite.
Comment by dudeinhawaii 33 minutes ago
I didn't "try 100 times" so it's unclear if this is an unfortunate series of bad runs on Claude Code and Gemini CLI or actual regression.
I shouldn't have to benchmark this sort of thing but here we are.
Comment by epolanski 1 hour ago
Comment by PlatoIsADisease 25 minutes ago
They were fighting an arms race that was getting incredibly expensive, realized they could get away with spending less on electricity, and there was nothing the general population could do about it.
Grok/Elon was left out of this because he would leak this idea at 3am after a binge.
Comment by IshKebab 2 hours ago
Doesn't really work like that. I'd remove the "statistically significant" labelling because it's misleading.
Comment by fragebogen 2 hours ago
Comment by embedding-shape 2 hours ago
Comment by jzig 2 hours ago
Comment by embedding-shape 2 hours ago
Comment by ghm2199 2 hours ago
I would imagine a sort of hybrid with the qualities of volunteer efforts like Wikipedia, new problems like Advent of Code, and benchmarks like this. The goal? To study, through collective effort, the effects of usage across the many areas where AI is used.
[MedWatch](https://www.fda.gov/safety/medwatch-fda-safety-information-a...)
[VAERS](https://www.cdc.gov/vaccine-safety-systems/vaers/index.html)
[EudraVigilance](https://www.ema.europa.eu/en/human-regulatory-overview/resea...)
Comment by esafak 1 hour ago
Comment by fernvenue 1 hour ago
Comment by taf2 1 hour ago
Comment by sroerick 2 hours ago
Comment by copilot_king 1 hour ago
TikTok used to give new uploaders a visibility boost (i.e., an inflated number of likes and comments) on their first couple of uploads, to get them hooked on the service.
In Anthropic/Claude's case, the strategy is (allegedly) to give new users access to the premium models on sign-up, and then increasingly cut the product with output from cheaper models.
Of course, your suggestion (better service for users who know how to speak Proper English) would be the cherry on top of this strategy.
From what I've seen on HackerNews, Anthropic is all-in on social media manipulation and social engineering, so I suspect that your assumption holds water.
Comment by arcanemachiner 1 hour ago
Comment by copilot_king 1 hour ago
Comment by arcanemachiner 22 minutes ago
Comment by turnsout 2 hours ago
I've been using CC more or less 8 hrs/day for the past 2 weeks, and if anything it feels like CC is getting better and better at actual tasks.
Edit: Before you downvote, can you explain how the model could degrade WITHOUT changes to the prompts? Is your hypothesis that Opus 4.5, a huge static model, is somehow changing? Master system prompt changing? Safety filters changing?
Comment by FfejL 2 hours ago
Is CC getting better, or are you getting better at using it? And how do you know the difference?
I'm an occasional user, and I can definitely see improvements in my prompts over the past couple of months.
Comment by rob 2 hours ago
For me I've noticed it getting nothing but better over the past couple months, but I've been working on my workflows and tooling.
For example, I used to use plan mode and would put everything in a single file and then ask it to implement it in a new session.
Switching to the 'superpowers' plugin with its own skills to brainstorm and write plans and execute plans with batches and tasks seems to have made a big improvement and help catch things I wouldn't have before. There's a "get shit done" plugin that's similar that I want to explore as well.
The code output always looks good to me for the most part though, and I've never thought that it's getting dumber or anything, so I feel like a lot of the improvements I see are down to a skill issue on my part as I try to use everything. Obviously it doesn't help that there's a new way to do things every two weeks though.
Comment by turnsout 2 hours ago
My initial prompting is boilerplate at this point, and looks like this:
(Explain overall objective / problem without jumping to a solution)
(Provide all the detail / file references / past work I can think of)
(Ask it "what questions do you have for me before we build a plan?")
And then go back and forth until we have a plan.
Compared to my work with CC six months ago, it's just much more capable, able to solve more nuanced bugs, and less likely to generate spaghetti code.
Comment by arcanemachiner 1 hour ago
Comment by billylo 2 hours ago
Comment by gpm 2 hours ago
Comment by billylo 2 hours ago
Thumbs up or down? (could be useful for trends)
Usage growth from the same user over time? (as an approximation)
Tone of user responses? ("Don't do this...", "this is the wrong path...", etc.)
Comment by turnsout 2 hours ago
Comment by fragebogen 2 hours ago
Comment by maximgeorge 1 hour ago