The beginning of scarcity in AI
Posted by gmays 17 hours ago
Comments
Comment by keiferski 6 hours ago
The companies that are entirely AI-dependent may need to raise prices dramatically as AI prices go up. Not being dependent on LLMs for your fundamental product’s value will be a major advantage, at least in pricing.
Comment by andersmurphy 6 hours ago
It's similar to how, if you know what you're doing, you can manage a simple VPS and scale a lot more cost-effectively than something like Vercel.
In a saturated market, margins are everything. You can't necessarily afford to be giving all your margins to Anthropic and Vercel.
Comment by prox 3 hours ago
There might always be LLMs, but the dependence is an interesting topic.
Comment by Cthulhu_ 1 hour ago
But I'm also afraid / certain that LLMs are able to figure out legacy code (as long as enough fits in their context window), so it's tenuous at best.
Also, funny you mentioned HTML / CSS because for a while (...in the 90's / 2000's) it looked like nobody needed to actually learn those because of tools like Dreamweaver / Frontpage.
Comment by zozbot234 4 hours ago
It's not that clear. Sure, hardware prices are going up due to the extremely tight supply, but AI models are also improving quickly to the point where a cheap mid-level model today does what the frontier model did a year ago. For the very largest models, I think the latter effect dominates quite easily.
Comment by CodingJeebus 1 minute ago
Your company may have the resources to effectively shift to cheaper models without service degradation, but your AI tooling vendors might not. If you pay for 5 different AI-driven tools, that's 5 different ways your upstream costs may increase that you'll need to pass on to customers as well.
Comment by lelanthran 21 minutes ago
> It's not that clear. Sure, hardware prices are going up due to the extremely tight supply, but AI models are also improving quickly to the point where a cheap mid-level model today does what the frontier model did a year ago.
I agree; I got some coding value out of Qwen for $10/m (unlimited tokens); a nice harness (and some tight coding practices) narrows the gap between SOTA and six-month-old second-tier models.
If I can get 80% of the way to Anthropic's or OpenAI's SOTA models for $10/m with unlimited tokens, guess what I am going to do...
Comment by satvikpendem 10 minutes ago
Comment by bcjdjsndon 2 hours ago
Comment by zozbot234 2 hours ago
Comment by chewz 1 hour ago
Inference prices dropped like 90 percent in that time (a combination of cheaper models, implicit caching, service levels, different providers and other optimizations).
Quality went up. Quantity of results went up. Speed went up.
The service level that we provide to our clients went up massively and justified better deals. Headcount went down.
What's not to like?
Comment by oeitho 49 minutes ago
Sadly, this is already happening.
Comment by WarmWash 42 minutes ago
Comment by accrual 1 hour ago
I think more specifically not being dependent on someone else's LLM hardware. IMO having OSS models on dedicated hardware could still be plenty viable for many businesses, granted it'll be some time before future OSS reaches today's SOTA models in performance.
Comment by Cthulhu_ 1 hour ago
On a small scale that's a tragedy, but there are plenty of analysts who predict an economic crash and recession because there are trillions invested in this technology.
Comment by michaelbuckbee 5 hours ago
Comment by bjornroberg 1 hour ago
Comment by onion2k 1 hour ago
Or they'll price the true cost in from the start, and make massive profits until the VC subsidies end... I know which one I'd do.
Comment by andersmurphy 20 minutes ago
Comment by muppetman 5 hours ago
This is the “Building my entire livelihood on Facebook, oh no what?” all over again.
Oh no sorry I forgot, your laptop's LLM can draw a potato, let me invest in you.
Comment by lioeters 3 hours ago
Comment by rybosworld 1 hour ago
> We just had a realization during a demo call the other day
These tools have been around for years now. As they've improved, dependency on them has grown. How is any organization only just realizing this?
That's like only noticing the rising water level once it starts flooding the second floor of the house.
Comment by keiferski 8 minutes ago
Comment by sevenzero 2 hours ago
Comment by keiferski 2 hours ago
And I don't really mean new businesses that are entirely built around LLMs, rather existing ones that pivoted to be LLM-dependent – yet still have non-LLM-dependent competitors.
Comment by sevenzero 2 hours ago
Comment by bdangubic 2 hours ago
Comment by sevenzero 2 hours ago
Comment by finaard 5 hours ago
It's just another instance of cloud dependency, and people should've learned something from that over the last two decades.
Comment by keiferski 3 hours ago
So we thought, hmm, “wonder if they are increasing prices to deal with AI costs,” and then projected that into a future where costs go up.
We don’t have this dependence ourselves, so this seems to be a competitive advantage for us on pricing.
Comment by strife25 4 hours ago
Comment by anonyfox 3 hours ago
So it's all a house of cards now, and the moment the bubble bursts is when local open inference has closed the gap. Looks like Chinese and smaller players are already going hard in this direction.
Comment by zozbot234 2 hours ago
Many users will also seek to go local as insurance against rug pulls from the proprietary models side (We're not quite sure if the third-party inference market will grow enough to provide robust competition), but ultimately if you want to make good utilization of your hardware as a single user you'll also be pushed towards mostly running long batch tasks, not realtime chat (except tiny models) or human-assisted coding.
Comment by michaelje 5 hours ago
Hyperscalers are spending a fortune so we think AI = API, but renting intelligence is a business model, not a technical inevitability.
Shameless link to my post on this: https://mjeggleton.com/blog/AIs-mainframe-moment
Comment by sidewndr46 2 hours ago
Comment by dmazin 17 hours ago
* harness design
* small models (both local and not)
I think there is tremendous low hanging fruit in both areas still.
Comment by com2kid 16 hours ago
The US has a problem of too much money leading to wasteful spending.
If we go back to the 80s/90s, remember OS/2 vs Windows. OS/2 had more resources, more money behind it, more developers, and they built a bigger system that took more resources to run.
Mac vs Lisa. Mac team had constraints, Lisa team didn't.
Unlimited budgets are dangerous.
Comment by tasoeur 6 hours ago
Comment by coldtea 5 hours ago
Comment by busfahrer 5 hours ago
Can you elaborate on this? Is this something that companies would train themselves?
Comment by phist_mcgee 7 hours ago
Comment by aldanor 2 hours ago
As a recent example in the AI space itself: China had scarce GPU resources (quite obvious why) => the DeepSeek training team had to invent some wheels and jump through some hoops => some of those methods have since become 'industry standard' and been adopted by western labs, who are now jumping through the same hoops despite enjoying massive compute resources, for the sake of added efficiency.
Comment by cesarvarela 16 hours ago
Comment by lpcvoid 8 hours ago
Comment by drra 7 hours ago
Comment by dataviz1000 16 hours ago
Comment by Ifkaluva 16 hours ago
Comment by dataviz1000 15 hours ago
> Users should re-tune their prompts and harnesses accordingly.
I read this in the press release and my mind thought it meant test harness. Then there was a blog post about long-running harnesses with a section about testing, which led to a little more confusion.
Yes, the word 'harness' is consistently used in the context as a wrapper around the LLM model not as 'test harness'.
Comment by dboreham 7 hours ago
Comment by ElFitz 6 hours ago
Basically a clever wrapper around the Anthropic / OpenAI / whatever provider api or local inference calls.
Comment by codybontecou 16 hours ago
Comment by christkv 8 hours ago
Comment by KaiserPro 3 hours ago
Infra is always limited, even at hyperscalers. This leads to a bunch of tools for caching, profiling and generally getting performance up, not to mention bin-packing and all sorts of other "obvious" things.
Comment by sph 26 minutes ago
Not bad for a coffee break of effort.
Comment by losvedir 1 hour ago
I think maybe infra is limited only at hyperscalers. For the rest of us, it's just a question of how much capacity we want to rent from the hyperscalers.
It's kind of a recent cloud-native mindset, since back in the day when you ran your own hardware, scaling and capacity were always top of mind. Looks like AI compute might be like that again, for the time being.
Comment by malshe 3 hours ago
Comment by wg0 16 hours ago
Whoever is running and selling their own models with inference is invested to the last dime available in the market.
Those valuations are already ridiculously high, be it Anthropic or OpenAI, to the tune of a couple of trillion dollars easily if combined.
All that investment is seeking return. Correct me if I'm wrong.
Developers and software companies are the only serious users because they (mostly) review output of these models out of both culture and necessity.
Anywhere else? Other fields? There, these models aren't as useful, and revenue from software companies is by no means going to bring returns on trillion-dollar valuations. Correct me if I'm wrong.
To make matters worse, there's a hole in the bucket in the form of open weight models. When squeezed further, software companies would either deploy open weight models or resort to writing code by hand, because that's a very skilled and hardworking tribe; they've been doing this all their lives, and whole careers are built on it. Correct me if I'm wrong.
Eventually, ROI might not be what VCs expect, and constant losses might lead to bankruptcies. All that data center build-out would suddenly be looking for someone to rent its compute capacity, and the result would be dime-a-dozen open weight model providers with generous usage tiers, capitalizing on compute whose bankrupt owners can no longer use it and want to liquidate as much of it as possible to recoup their investment.
EDIT: Typos
Comment by solenoid0937 8 hours ago
Anthropic's is far more reasonable.
It makes no sense to lump these two companies together when talking about valuation. They have completely different financial dynamics
Comment by wg0 8 hours ago
Comment by ElFitz 6 hours ago
I onboarded marketing on a premium team Claude seat yesterday. And one of our sales vibecoded an internal tool in the last three weeks using Claude Code that they now use every day. I wouldn’t have imagined it a month ago. We still had to take care of deployment for him, but things are moving fast.
Comment by solenoid0937 8 hours ago
Comment by drra 7 hours ago
Comment by wg0 6 hours ago
Note - this is just the revenue not the profit. No salaries, no compute paid for. Just plain revenue. Profit would be way less.
But even that - if we take it to $24 billion/year and we take a 10x multiple, the company is barely valued at $240 billion. Let's be generous and double it to $480 billion, and then round up to $500 billion for a nice round number.
Far far from the $800 billion valuation Anthropic is looking at.
Only a matter of time.
EDIT: Fixed math
Comment by steveklabnik 2 hours ago
Comment by solenoid0937 41 minutes ago
Comment by classified 2 hours ago
Shush, don't tell that to the AI coding acolytes.
Comment by christkv 8 hours ago
Comment by sdevonoes 5 hours ago
Comment by siliconc0w 1 hour ago
Comment by 0xbadcafebee 32 minutes ago
The scarcity isn't long-term. Like all manufactured products, they'll ramp up production and flood the market with hardware, people will buy too much, market will drop. Boom and bust.
We're also still in the bubble. Eventually markets will no longer bear the lack of productivity/profit (as AI isn't really that useful) and there will be divestment and more hardware on the market as companies implode. Nobody is making 10x more from AI, they are just investing in it hoping for those profits which so far I don't think anyone has seen, other than in the companies selling the AI to other companies.
But more importantly, the models and inference keeps getting more efficient, so less hardware will do more in the future. We already have multiple models good enough for on-device small-scale work. In 5 years consumer chips and model inference will be so good you won't need a server for SOTA. When that happens, most of the billions invested in SOTA companies will disappear overnight, which'll leave a sizeable hole in the market.
Comment by latentframe 1 hour ago
Comment by frigg 2 hours ago
Comment by 2001zhaozhao 15 hours ago
(note: I don't expect this to actually happen until the AI gets good enough to either nearly entirely replace humans or solve cooperation, but the long term trend of scarce AI will go towards that direction)
Comment by ttul 9 hours ago
Comment by henry2023 16 hours ago
Comment by jakeinspace 16 hours ago
Comment by odo1242 16 hours ago
Comment by thelastgallon 8 hours ago
Comment by jerf 54 minutes ago
Comment by ElFitz 6 hours ago
It’s still a useful proxy for resource allocation and viability.
Comment by tucnak 3 hours ago
Comment by ElFitz 2 hours ago
While we could reason in "performance / watt" and "performance / people", "performance / whatever other resource involved", and "performance / opportunity cost of allocating these resources to this use case and not another", "performance / whatever unit of stable-ish currency" is a convenient and often "good enough" approximation that somewhat encapsulates them all.
A simplification, like any model, but still useful.
Comment by thelastgallon 9 hours ago
Comment by hvb2 7 hours ago
Years is like a lifetime for AI at this point...
Comment by dyauspitr 28 minutes ago
Comment by thelastgallon 6 hours ago
This is true of nearly everything (except money). I'm not sure of the point you are trying to make.
Comment by Miraste 16 hours ago
Comment by CuriouslyC 16 hours ago
Comment by leptons 16 hours ago
Comment by odo1242 16 hours ago
What does this mean? I didn't understand the analogy.
Comment by digitalsushi 14 hours ago
Comment by leptons 9 hours ago
Comment by thelastgallon 9 hours ago
Comment by ElFitz 6 hours ago
Comment by 1828838383 3 hours ago
Comment by utopiah 7 hours ago
It's one thing to "sell" free or symbolically cheap stuff, it's another to have an actual client who will do the math and compare expenditure vs actually delivered value.
Comment by classified 2 hours ago
Which means that the hype production will be driven up another few notches to make people doubt their rational findings and keep them in irrational territory just a tad longer. Every minute converts to dollars spent on tokens.
Comment by tim333 6 hours ago
I thought there'd been a shortage of cheap GPUs since ChatGPT took off and also before that in various crypto booms. I'm not sure it's a new thing.
Comment by the_gipsy 4 hours ago
Comment by chatmasta 3 hours ago
And that’s not considering the software innovation that can happen in the meantime.
Comment by Bengalilol 2 hours ago
Regarding "innovation", I agree with your idea. I even think that the major innovation will be to transpose models locally, using reduced infrastructures that will still be sufficient for the majority of use cases.
Comment by vessenes 17 hours ago
For instance, at some point, could Coreweave field a frontier team as it holds back 10% of its allocations over time? Pretty unusual situation.
Comment by dist-epoch 16 hours ago
Comment by vessenes 12 hours ago
Comment by bcjdjsndon 2 hours ago
Comment by com2kid 16 hours ago
Open Weight models are 6 months to a year behind SOTA. If you were building a company a year ago based on what AI could do then, you can build a company today with models that run locally on a user's computer. Yes that may mean requiring your customers to buy Macbooks or desktops with Nvidia GPUs, but if your product actually improves productivity by any reasonable amount, that purchase cost is quickly made up for.
I'll argue that for anything short of full computer control or writing code, the latest Qwen model will do fine. Heck, you can get a customer service voice chat bot running in 8GB of VRAM plus a couple of gigs more for the ASR and TTS engines, and it'll be more capable than the chatbots that hundreds of millions were spent on back when they were powered by GPT 4.x.
This is like arguing the age of personal computing was over because there weren't enough mainframes for people to telnet into.
It misses the point. Yes deployment and management of personal PCs was a lot harder than dumb terminal + mainframe, but the future was obvious.
Comment by space_fountain 16 hours ago
Comment by rstuart4133 12 hours ago
I'd be surprised if it isn't true for your use cases. If you give GLM-5.1 and Opus 4.6 the same coding task, they will both produce code that passes all the tests. In both cases the code will be crap, as no model I've seen produces good code. GLM-5.1 is actually slightly better at following instructions exactly than Opus 4.6 (but maybe not 4.7 - as that's an area they addressed).
I've asked GLM-5.1 and Opus 4.6 to find a bug caused by a subtle race condition (the race condition leads to a number being 15172580 instead of 15172579 after about 3 months of CPU time). Both found it, in a similar amount of time. Several senior engineers had stared at the code for literally days and didn't find it.
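For anyone wondering what that class of bug looks like, here's a minimal, hypothetical sketch of an unsynchronized counter race (not the actual code from that incident, just the general shape of the problem):

/* Hypothetical illustration only: two threads bump a shared counter
 * without synchronization, so a read-modify-write occasionally gets
 * lost and the final value is off by a small amount. Build with -lpthread. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                /* shared, unprotected */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                      /* not atomic: load, add, store */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}

Run it enough times and the total will occasionally be short, even though every individual run "looks" fine while you're staring at the code.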
There is no doubt the models do vary in performance at various tasks, but we are talking the difference between Ferrari vs Mercedes in F1. While the differences are undeniable, this isn't the F1. Things take a year to change there. The performance of the models from Anthropic and OpenAI literally change day by day, often not due to the model itself but because of the horsepower those companies choose to give them on the day, or them tweaking their own system prompts. You can find no end of posts here from people screaming in frustration the thing that worked yesterday doesn't work today, or suddenly they find themselves running out of tokens, or their favoured tool is blocked. It's not at all obvious the differences between the open-source models and the proprietary ones are worse than those day to day ones the proprietary companies inflict on us.
Comment by frodowtf2 8 hours ago
I'm wondering if you have actually used Claude Code, because the results are not as catastrophic as you describe them.
Comment by rstuart4133 8 hours ago
if (foo == NULL) {
    log_the_error(...);
    goto END;
}

END:
free(foo);
If you don't know C: in older versions that can be a catastrophic failure. (The issue is so serious that in modern C `free(NULL)` is a no-op.) If it's difficult to get a `foo == NULL` without extensive mocking (this is often the case), most programmers won't do it, so it won't be caught by unit tests. The LLMs almost never get unit test coverage up high enough to catch issues like this without heavy prompting. But that's the least of it. The models (all of them) are absolutely hopeless at DRY'ing out the code, and when they do, they turn it into spaghetti, because they seem almost oblivious to isolation boundaries even when those are spelt out to them.
None of this is a problem if you are vibe coding, but you can only do that when you're targeting a pretty low quality level. That's entirely appropriate in some cases of course, but when it isn't you need heavy reviews from skilled programmers. No senior engineer is going to stomach the repeated stretches of almost the "same but not quite" code they churn out.
You don't have to take my word for it. Try asking Google "do llm's produce verbose code".
Comment by random_human_ 7 hours ago
Comment by rstuart4133 6 hours ago
`free(NULL)` is harmless in C89 onwards. As I said, programmers freeing NULL caused so many issues they changed the API. It doesn't help that `malloc(0)` returns NULL on some platforms.
If you are writing code for an embedded platform with some random C compiler, all bets on what `free(NULL)` does are off. That means a cautious C programmer who doesn't know who will be using their code never allows NULL to be passed to `free()`.
In general, most good C programmers are good because they suffer a sort of PTSD from the injuries the language has inflicted on them in the past. If they aren't avoiding passing NULL to `free()`, they haven't suffered long enough to be good.
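A minimal sketch of the defensive habit I mean (illustrative only; the macro name is made up):

#include <stdlib.h>

/* Cautious pattern: never rely on free(NULL), and null the pointer
 * afterwards so an accidental second free becomes a harmless no-op. */
#define SAFE_FREE(p)          \
    do {                      \
        if ((p) != NULL) {    \
            free(p);          \
            (p) = NULL;       \
        }                     \
    } while (0)

int main(void)
{
    char *buf = malloc(64);
    SAFE_FREE(buf);   /* safe even if malloc failed and buf is NULL */
    SAFE_FREE(buf);   /* no-op on the second call, not a double free */
    return 0;
}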
Comment by lelanthran 4 hours ago
If your compiler chokes on `free(NULL)` you have bigger problems that no LLM (or human) can solve for you: you are using a compiler that was last maintained in the 80s!
If your C compiler doesn't adhere to the very first C standard published, the problem is not the quality of the code that is written.
> If they aren't avoiding passing NULL to `free()`, they haven't suffered long enough to be good.
I dunno; I've "suffered" since the mid-90s, and I will free NULL, because it is legal in the standard, and because I have not come across a compiler that does the wrong thing on `free(NULL)`.
Comment by random_human_ 6 hours ago
Comment by rstuart4133 3 hours ago
Oh yes, you probably will see errors elsewhere. If you are lucky it will happen immediately. But often enough millions of executed instructions later, in some unrelated routine that had its memory smashed. It's not "fun" figuring out what happened. It could be nothing - bit flips are a thing, and once you get the error rate low enough the frequency of bit flips and bugs starts to converge. You could waste days of your time chasing an alpha particle.
I saw the author of curl post some of this code here a while back. I immediately recognised the symptoms. Things like:
if (NULL == foo) { ... }
Every 2nd line was code like that. If you are wondering, he wrote `(NULL == foo)` in case he dropped an `=`, so it became `(NULL = foo)`. The second version is a syntax error, whereas `(foo = NULL)` is a runtime disaster. Most of it was unjustified, but he could not help himself. After years of dealing with C, he wrote code defensively - even if it wasn't needed. C is so fast and the compilers so good the coding style imposes little overhead. Rust is popular because it gives you a similar result to C, but you don't need to have been beaten by 10 years of pain in order to produce safe Rust code. Sadly, it has other issues. Despite them, it's still the best C we have right now.
Comment by incrudible 6 hours ago
I always found myself writing verbose copypasta code first, then compress it down based on the emerging commonalities. I think doing it the other way around is likely to lead to a worse design. Can you not tell the LLM to do the same? Honest question.
Comment by rstuart4133 5 hours ago
I do pretty much the same thing, which is to say I "write code using a brain dump", "look for commonalities that tickle the neurons", then "refactor". Lather, rinse, and repeat until I'm happy.
> Can you not tell the LLM to do the same?
You can tell them until you're blue in the face. They ignore you.
I'm sure this is a temporary phase. Once they solve the problem, coding will suffer the same fate as blacksmiths making nails. [0] To solve it they need to satisfy two conflicting goals - DRY the code out, while keeping interconnections between modules to a minimum. That isn't easy. In fact it's so hard people who do it well and can do it across scales are called senior software engineers. Once models master that trick, they won't be needed any more.
By "they" I mean "me".
[0] Blacksmiths could produce 1,000 or so a day, but it must have been a mind-numbing day even if it paid the bills. Then automation came along, and produced them at over a nail per second.
Comment by lelanthran 4 hours ago
I found it exceptionally good, because:
a) The agent doesn't need to read the implementation of anything - you can stuff the entire project's headers into the context and the LLM can have a better bird's-eye view of what is there and what is not, and what goes where, etc.
and
b) Enforcing Parse, don't Validate using opaque types - the LLM writing a function that uses a user-defined composite datatype has no knowledge of the implementation, because it read only headers.
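A hedged sketch of what I mean by (b), split across two tiny files with made-up names (an Email type here, but any domain type works the same way):

/* email.h -- all an agent needs in context: the opaque type and the
 * functions that produce/consume it. (Names invented for illustration.) */
#ifndef EMAIL_H
#define EMAIL_H

typedef struct Email Email;   /* opaque: layout lives only in email.c */

/* Parse, don't Validate: NULL on bad input; a non-NULL Email is
 * guaranteed well-formed, so downstream code never re-checks it. */
Email *email_parse(const char *raw);
const char *email_domain(const Email *e);
void email_free(Email *e);

#endif /* EMAIL_H */

/* email.c -- the only file that knows the layout or does the checking. */
#include <stdlib.h>
#include <string.h>
#include "email.h"

struct Email { char *raw; size_t at; };   /* at = index of '@' in raw */

Email *email_parse(const char *raw)
{
    const char *at = raw ? strchr(raw, '@') : NULL;
    if (at == NULL || at == raw || at[1] == '\0')
        return NULL;                      /* reject up front */
    Email *e = malloc(sizeof *e);
    if (e == NULL)
        return NULL;
    size_t len = strlen(raw) + 1;
    e->raw = malloc(len);
    if (e->raw == NULL) { free(e); return NULL; }
    memcpy(e->raw, raw, len);
    e->at = (size_t)(at - raw);
    return e;
}

const char *email_domain(const Email *e) { return e->raw + e->at + 1; }

void email_free(Email *e)
{
    if (e != NULL) { free(e->raw); free(e); }
}

Code generated against email.h alone can't poke at the fields even by accident, and everything downstream only ever handles values that already passed email_parse.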
Comment by com2kid 16 hours ago
Write code? No. Use frontier models. They are subsidized and amazing and they get noticeably better every few months.
Literally anything else? Smaller models are fine. Classifiers, sentiment analysis, editing blog posts, tool calling, whatever. They can go through documents and extract information, summarize, etc. When making a voice chat system a while back I used a cheap open weight model and just asked it "is the user done speaking yet" by passing transcripts of what had been spoken so far, and this was 2 years ago and a crappy cheap low weight model. Be creative.
I wouldn't trust them to do math, but you can tool call out to a calculator for that.
They are perfectly fine at holding conversations. Their weights aren't large enough to have every book ever written contained in them, or the details of every movie ever made, but unless you need that depth and breadth of knowledge, you'll be fine.
Comment by space_fountain 14 hours ago
Comment by com2kid 14 hours ago
Open weight models have those same issues. They are otherwise fine.
You can hook them up to a vector DB and build a RAG system. They can answer simple questions and converse back and forth. They have thinking modes that solve more complex problems.
They aren't going to discover new math theorems but they'll control a smart home and manage your calendar.
Comment by dyauspitr 23 minutes ago
Comment by dist-epoch 16 hours ago
Comment by ethan_smith 27 minutes ago
Comment by Bengalilol 2 hours ago
I know it may sound ridiculous, but it could actually become a way to break away from the business models that have been developed over the past few decades. Broadly speaking, this even amounts to saying that the biggest victims of AI could be the companies that bet on AI as a service.
I know my vision is way too idealistic, but I'm coming to imagine that a human brain, although less efficient in the long run, remains a reliable way to control the resulting costs, and could even turn out to be more advantageous and more readily available than its silicon-based counterpart.
Comment by 20after4 2 hours ago
Comment by piokoch 7 hours ago
Comment by stupefy 17 hours ago
Comment by vessenes 17 hours ago
Also - turbine blades limit power, according to Elon.
Between them, we cannot build chip fabs past a certain rate, and we cannot stand up the datacenters to run these desired chips past a certain rate. Different people believe one or the other is the 'true' current bottleneck. The turbine supply chain scaling looks much more tractable -- EUV is essentially the most complicated production process humans have ever devised.
Comment by utopiah 7 hours ago
- clean room, itself needing the infrastructure for it (size, airCo, filtering, electricity) and the staff to run and maintain that basically empty space
- wafers to "print" on, so that's a lot of water and logistics to manipulate them (so infrastructure for clean water and all chemicals), also with dedicated staff
- finally, staff who would be able to design something significantly better than NVIDIA, Intel, Broadcom, IBM, etc. while (and arguably that's the trickiest part IMHO) being able to get it good enough at a scale that can be manufactured from their own fab.
so I'm wondering who can afford this kind of setup that can only then make use of ASML machines.
Comment by Marazan 6 hours ago
Fabs are some of the most complex chemical engineering sites (dealing with some of the most dangerous substances) in the world. So don't underestimate the complexity of this part.
Comment by utopiah 2 hours ago
Comment by andai 16 hours ago
Comment by Tanjreeve 8 hours ago
Comment by ls612 17 hours ago
Comment by vessenes 17 hours ago
Comment by juliansimioni 16 hours ago
Comment by Miraste 16 hours ago
Comment by mattas 17 hours ago
If I am a grocery store that pays $1 for oranges and sells them for $0.50, I can't say, "I don't have enough oranges."
Comment by FloorEgg 17 hours ago
'If I am a grocery store that pays $1 for oranges and sells them for $0.50, I can't say, "I don't have enough oranges."'
How about 'if I'm a grocery store and I see no limit on demand for oranges at $.50 but they are currently $1, I can say 'if oranges were cheaper I could sell orders of magnitude more of them'.
Buying oranges for $1 and selling for $0.5 is an investment into acquiring market share and customer relationships and a gamble on the price of oranges falling in the future.
Comment by 0x3f 16 hours ago
The whole setup rests on this, and it seems mythical to me. These guys have basically equivalent products at this point.
Comment by eloisant 2 hours ago
Comment by lelanthran 4 hours ago
It's a delusion that customers are going to remain with the behemoths when a Qwen model run by an independent is $10/m, unlimited usage.
This is not a market that can be locked-in with network effects, and the current highly-invested players have no moat.
Comment by deepseasquid 1 hour ago
But labs aren't buying oranges; they're buying the only orchard on the island, hoping it yields a fruit no one's grown yet. Burning $1B to net $500M isn't "I have too few oranges." It's "I'm betting the farm I'll find a new one."
Both can be irrational. They're irrational in different ways.
Comment by TeMPOraL 16 hours ago
Comment by earthnail 17 hours ago
Comment by 0x3f 16 hours ago
Comment by vessenes 17 hours ago
"I built a ship to go to the Indies and bring back tea."
"Bro, the ship cost 100,000 pounds sterling and only brought back 50,000 pounds of tea. I don't care if you paid 12,500 pounds for the tea itself, you're losing money."
There is a very rational reason labs are spending everything they can get for more compute right now. The tea (inference) pays 60%+ margins. And that is rising. And that number is AFTER hyperscalers make their margins. There is an immense amount of profit floating around this system, and strategics at the edge believe they can build and control the demand through combined spend on training and inference in the proper ratios.
Comment by SpicyLemonZest 16 hours ago
Could they be accurate? Sure, I think people who claim this is impossible are overconfident. But I would encourage anyone who assumes they must be right to read a history of the Worldcom scandal. It's really quite easy for a person who wants to be making money (or an LLM who's been instructed to "run the accounts make no mistakes"!) to incorrectly categorize costs as capital investments when nobody's watching carefully.
Comment by mystraline 12 minutes ago
How convenient, especially since everything has some LLM slop interaction.
But that rug isn't going to pull itself!
Comment by czk 16 hours ago
Comment by itmitica 16 hours ago
It remains to be seen what new wave of AI system or systems will replace it, making the whole current architecture obsolete.
Meanwhile, they are milking it, in the name of scarcity.
Comment by byyoung3 16 hours ago
Comment by eloisant 2 hours ago
Comment by yalogin 16 hours ago
Comment by i_think_so 9 hours ago
One person replies "yes". Another replies "no".
This concludes our press conference.
<3 HN
Comment by stronglikedan 16 hours ago
Comment by dist-epoch 16 hours ago
Comment by isawczuk 17 hours ago
There are still 2-3 years before ASIC LLM inference catches up.
Comment by observationist 16 hours ago
It won't make sense for ASIC LLMs to manifest until things start to plateau, otherwise it'll be cheaper to get smarter tokens on the cloud for almost all use cases.
That said, a 10 trillion parameter model on a bespoke compute platform overcomes a lot of efficiency and FOOM aspects of the market fit, so the angle is "when will models that can be run on an asic be good enough that people will still want them for various things even if the frontier models are 10x smarter and more efficient"
I think we're probably a decade of iteration on LLMs out, at least, and the entire market could pivot if the right breakthrough happens - some GPT-2 moment demonstrating some novel architecture that convinces the industry to make the move could happen any time now.
Comment by vessenes 17 hours ago
Comment by Morromist 16 hours ago
It's like being back in 1850 and you build the world's first amusement park, where the rides are free or very cheap. People are like, "Amusement parks are the next big thing since steamboats!" And tons of other rich people start to build huge amusement parks everywhere. The people who are skilled at making amusement park rides will increase their prices, and since the first amusement parks are free so they can get the public in the habit of going to them, demand will be huge.
But how sustainable is that? Well, obviously we know from history that amusement parks did, in fact, take over the world and most people spent virtually all their time and money at amusement parks - I think the Crimean War was even fought over some religious-based theme park in Israel - until moving pictures came out, so it worked out for them. But for AI?
Comment by LogicFailsMe 4 hours ago
Comment by throwaway290 6 hours ago
Comment by PessimalDecimal 2 hours ago
Comment by paulddraper 16 hours ago
1. Supply can scale. You can point to COVID/supply-chain shocks, but the problem there is temporary changes. No one spins up a whole fab to address a 3 month spike. Whereas AI is not a temporary demand change.
2. Models are getting more efficient. DeepSeek V3 was 1/10th the cost of contemporary ChatGPT. Open weight models get more runnable or smarter every month. Cutting edge is always cutting edge, but if scarcity is real, model selection will adjust to fit it.
Comment by Lapalux 17 hours ago
Comment by hemangjoshi37a 4 hours ago
Comment by SadErn 17 hours ago