AI was not invented, it arrived
Posted by fcpguru 21 hours ago
Comments
Comment by nospice 19 hours ago
And second, this article is almost certainly AI-written, so the joke is on us for engaging with it.
Comment by visarga 19 hours ago
Comment by yannyu 19 hours ago
It's a shallow, post-hoc, mystic rationalization that ignores all the work in multiple fields that actually converged to get us to this point.
Comment by danaris 18 hours ago
What AI out there now is coming up with ideas for articles?
Comment by realitydrift 20 hours ago
At scale, any compression system faces a tradeoff between entropy and fidelity. As these models absorb more language and feedback, meaning doesn’t just get reproduced, it slowly drifts. Concepts remain locally coherent while losing alignment with their original reference points. That’s why hallucination feels like the wrong diagnosis. The deeper issue is long-run semantic stability, not one-off mistakes.
The arrival moment wasn’t when the system got smarter, but when it became a dominant mediator of meaning and entropy started accumulating faster than humans could notice.
Comment by qlm 19 hours ago
Comment by echelon 19 hours ago
It's the result of stochastic hill climbing of a vast reservoir of talented people, industry, and science. Each pushing the frontiers year by year, building the infra, building the connective tissue.
We built the collection of requirements that enabled it through human curiosity, random capitalistic process, boredom, etc. It was gaming GPUs, for goodness' sake, that enabled the scale-up of the algorithms. You can't get more serendipitous than that. (Perhaps some of the post-WWII/Cold War tech even better qualifies for random hill-climbing luck. Microwave ovens, MRI machines, etc.)
Machine learning is inevitable in a civilization that has evolved intelligence, industrialization, and computation.
We've passed all the hard steps to this point. Let's see what's next. Hopefully not the great filter.
Comment by hnhg 19 hours ago
Comment by echelon 19 hours ago
Maybe you give it to the authors of a few papers, but even then you'll struggle to capture even a fraction of the necessary preconditions.
The successes also rely on observing the failures and the alternative approaches. Do we throw out their credit as well?
The list would be longer than the human genome paper.
Comment by qlm 19 hours ago
Comment by throw310822 19 hours ago
Compute and transformers are a substratum, but the stuff that developed on it through training isn't made according to our design.
Comment by tim333 14 hours ago
And the headline is vague enough that you could read many meanings into it.
My take would be to go back to Turing: he could see that AI was likely in the future, and that the output of a Turing-complete system is essentially a mathematical function; we just needed the algorithms and hardware to crank through it, which he thought we might have some 50 years on, but it has taken nearer 75.
The "intelligence did not get installed. It condensed" stuff reads like LLM slop.
Comment by tomxor 19 hours ago
Not really, it's called discovery, aka science.
This weird framing is just perpetuating the idea of LLMs being some kind of magic pixie dust. Stop it.
Comment by cubefox 19 hours ago
Comment by kreetx 19 hours ago
Sure, you don't know what the exact constellation of a trained model will be upfront. But similarly, you don't know what, e.g., the average age of some group of people is until you compute it.
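The averaging analogy above can be made concrete with a minimal sketch (the ages below are made-up illustrative values): the result is fully determined by the inputs, yet unknown until the computation is actually run — just as a trained model's weights are determined by data, architecture, and seed, yet unknown before training.

```python
# A deterministic computation whose result is fixed entirely by its
# inputs, but which you still don't "know" until you run it.
ages = [23, 31, 47, 19, 35]  # hypothetical group of people

average = sum(ages) / len(ages)
print(average)  # 31.0
```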
Comment by cubefox 19 hours ago
Comment by kreetx 59 minutes ago
Comment by visarga 19 hours ago
Comment by cubefox 19 hours ago
Comment by littlestymaar 19 hours ago
When we built nuclear power plants we had no idea what really mattered for safety or maintenance, or even what day-to-day operations would be like, and we discovered a lot as we ran them (which is why we have been able to extend their lifetimes well beyond what they were originally planned for).
Same for airplanes: there's tons of empirical knowledge about them, and people are still trying to build better models of why the things that work do work the way they do (a former roommate of mine did a PhD on modeling combustion in jet engines, and she told me how many of the details were unknown, despite the technology having been widely used for the past 70 years).
By the way, this is the fundamental reason why waterfall often fails: we generally don't understand something well enough before we have built it and used it extensively.
Comment by cubefox 18 hours ago
ML model ≈ bird
Comment by tptacek 19 hours ago
Comment by raincole 19 hours ago
Hell, people said Lisp is an "AI programming language."
The lesson here might be that people say unhinged things about the new technology they hype for.
Comment by happytoexplain 19 hours ago
X is not Y. It's X.
Comment by tptacek 19 hours ago
Comment by phplovesong 19 hours ago
Comment by myhf 19 hours ago
Comment by beders 19 hours ago
The author probably just means LLMs. And that's really all you need to know about the quality of this article.
Comment by rdiddly 18 hours ago
Comment by empiko 19 hours ago
No AI researcher from 2010 would have predicted that the transformer architecture (if we could send them its description back in time), SGD, and Web crawling could lead to very coherent and useful LMs.
Comment by kreetx 19 hours ago
Comment by tomrod 19 hours ago
Comment by amelius 19 hours ago
This all happened without anyone even looking for a way to create intelligence.
The biggest step in AI was the invention of the artificial neural network. However, it is still a copy of nature's work, and in fact you could argue that even the inventor is nature's work. So there's a big argument in favor of "it arrived".
Comment by tomrod 18 hours ago
We invented AI. That the structure of a neuron inspired one subsystem architecture framework offers nothing essentialist or sacrosanct to the whole enterprise.
Sticks were our first clubs, but we don't limit our design and engineering for tools or weapons to the nature of trees. We extract good principles and invent the form as well as, often, the function.
Comment by qlm 19 hours ago
I recently bought whey protein powder that doesn't come from milk. It was synthesized by human-engineered microbes. Did this invention "arrive"?
Comment by kreetx 19 hours ago
Comment by Rikudou 19 hours ago
Granted, I only managed to read two and a half paragraphs before deciding it's not worth my time, but the argument that we didn't teach it irony is bullshit: we did exactly that by feeding it text containing irony.
Comment by echelon 19 hours ago
Individual researchers and engineers are pushing forward the field bit by bit, testing and trying, until the right conditions and circumstances emerge to make it obvious. Connections across fields and industries enable it.
Now that the salient has emerged, everyone wants to control it.
Capital battles it out for the chance to monopolize it.
There's a chance that the winner(s) become much bigger than the tech giants of today. Everyone covets owning that.
The battle to become the first multi-trillionaire is why so much money is being spent.
Comment by fromMars 19 hours ago
I think the framing is dead on.
Comment by bgwalter 19 hours ago
After everyone has been exposed to the patterns, idioms and mistakes of the parrots only the most determined (or monetarily invested) people are still impressed.
Emergence? Please, just because something has blinkenlights and humming fans does not mean it's intelligent.
Comment by throw310822 19 hours ago
Comment by bgwalter 19 hours ago
[1] They steal it though to produce bad imitations.
Comment by throw310822 19 hours ago
I don't think so, have you tried?
Comment by kreetx 19 hours ago
Comment by throw310822 19 hours ago
"After everyone has been exposed to the patterns, idioms and mistakes of the parrots only the most determined (or monetarily invested) people are still impressed."
Claude: Cynical, dismissive, condescending.
Comment by kreetx 5 hours ago
Comment by rpdillon 16 hours ago
* Rather than the curious "What is it good at? What could I use it for?", we instead get "It's not better than me!". That lacks insight and intentionally sidesteps the point that it has utility for a lot of people who need coding work done.
* Using a bad analogy protected by scare quotes to make an invalid point that suggests a human would be able to argue with a photocopier or a philosophical treatise. It's clearly the case that humans can only argue with an LLM, due to the interactive nature of the dialogue.
* The use of the word "steal" to indicate theft of material when training AI models, again intentionally conflating theft with copyright infringement. But even that suggestion is not accurate: model training is currently considered fair use, and court findings were already trending in this direction. So even the suggestion that it's copyright infringement doesn't hold water. Piracy of material would invalidate that, but that's not what's happening in the case of bgwalter's code, I don't expect. I expect bgwalter published their code online and it was scraped.
Agree with the sibling comment, posting Claude's assessment that mirrors this analysis. Dismissive and cynical is a good way to put it.
Comment by ripped_britches 19 hours ago
Hold my beer
Comment by sleepybrett 19 hours ago