A Pascal's Wager for AI doomers

Posted by vrganj 23 hours ago


Comments

Comment by pron 21 hours ago

I think there's another problem with AI doomerism, which is the belief that superhuman intelligence (even if such a thing could be defined and realised) results in godlike powers. Many if not most systems of interest in the world are non-linear and computationally hard; controlling or predicting them requires raw computational power that no amount of intelligence (whatever that means) can compensate for. On the other hand, dynamics we do (roughly) understand and can predict don't require much intelligence, either. Of the problems that are solvable with the computational power we have, some may require data collection and others may require persuasion through charisma. The claim that intelligence is the factor we're lacking is not well supported.

Ascribing a lot of power to intelligence (which doesn't quite correspond to what we see in the world) is less a careful analysis of the power of intelligence and more a projection of personal fantasies by people who believe they are especially intelligent and don't have the power they think they deserve.

Comment by sdenton4 21 hours ago

Political power is the bottleneck for most shit that matters, not computational power.

Most of the stuff that sucks in the US sucks because of entrenched institutions with perverse interests (health insurers, tax-filing companies) and congressional paralysis, not computational bottlenecks. Raw intelligence is thus limited in what it can achieve.

Comment by scoofy 16 hours ago

>the belief that superhuman intelligence (even if such a thing could be defined and realised) results in godlike powers

My biggest criticism along these lines is the assumption that infinite intelligence means infinite knowledge. Knowledge is limited by the speed of experimentation. A lot of those experiments are extremely expensive (like CERN), and even then, they need to be repeated and verified.

You can't just assume that a super intelligence would know whether the Higgs boson exists or not. It can't know until it builds a collider.

Comment by bamboozled 13 hours ago

If you were 10x smarter than anyone on earth, then I’m sure you might be able to conceive a cheaper experiment / device than the LHC?

Comment by seba_dos1 13 hours ago

What makes you so sure? I see no reason to believe so. Not every limitation comes from lack of insight.

Comment by scoofy 12 hours ago

You're assuming infinite knowledge. Infinite intelligence does not imply infinite knowledge. There are real philosophical problems with that. Much of the basic information behind the standard model may be wrong or built on incorrect data, and that would be all the information an infinitely intelligent AI would have to work with.

Comment by bandrami 5 hours ago

We don't listen to normal intelligence as it is, so I have no idea why people think we would listen to superintelligence. It would be one more voice ignored in public meetings, along with the League of Concerned Renters and the Chamber of Commerce.

Comment by tim333 17 hours ago

Yeah, I think superhuman intelligence will be more Sheldon from The Big Bang Theory than God. I've only ever heard the "building God" thing from AI skeptics. They must have an impoverished vision of God if they see it as a gadget that scores well on IQ tests rather than the omnipotent creator.

Comment by jpfromlondon 3 hours ago

Completely true, we as humans cannot see past our own reflection.

We imagine supreme intelligence as approximately like us, when in reality we'd be but ants by comparison, our matters trivial.

Comment by pron 15 hours ago

Love this!

They may say that a superhuman intelligence would give you many Sheldon Cooper discoveries, and Sheldon did say that his theories need no validation and that science should just "take his word", but in the end he got his Nobel only because some experimentalists proved his discovery by accident.

Comment by dist-epoch 21 hours ago

> Ascribing a lot of power to intelligence (which doesn't quite correspond to what we see in the world)

Which animal would you say has god-like power over all other animals?

Comment by pron 21 hours ago

I don't think any of them do. Some organisms/viruses or groups of organisms could destroy humans more easily than humans could destroy them.

There's no doubt humans possess some powers (though certainly not godlike) that other organisms don't, but the distinction seems to be binary. E.g. the intelligence of dolphins, apes, and some birds doesn't seem to offer them any special control over other organisms (and it didn't even before humans arrived). So even if there could be such a thing as superhuman intelligence, I don't think it's reasonable to assume it could achieve control over humans (now superhuman charisma may be another matter).

Comment by lelanthran 21 hours ago

> Some organisms/viruses or groups of organisms could destroy humans more easily than humans could destroy them.

"Destruction" is only one power that could be a component of "godlike power". There are several more; like power of intentional selective breeding, power of species creation (also via intentional selective breeding), etc.

What about power of granting happiness or misery to large swathes of a species (chickens, anyone?)

Comment by optimalsolver 21 hours ago

Do you consider viruses to be animals?

Comment by forlorn_mammoth 21 hours ago

fungus.

Oh, wait, that's not an animal. My bad.

Comment by simianwords 21 hours ago

I don't agree with you. Let's assume intelligence is not what confers power, but something else. In your opinion, what would a superhuman be like? On what dimensions would they be better than us?

Do you not agree that there could be entities more powerful than us?

Comment by pron 15 hours ago

I think there are entities here on earth more powerful than us already, but intelligence has nothing to do with their power.

BTW, I'm not saying that (real) artificial intelligence couldn't hypothetically pose a serious threat, but I don't think that its danger is extraordinary compared to other threats (a supervirus, an asteroid, a chain of volcanic eruptions etc.), and the more likely bad outcomes are no worse than other bad situations (world war, climate change).

Comment by simianwords 14 hours ago

What is that thing that gives them their power if not intelligence?

Comment by tristor 19 hours ago

> I think there's another problem with AI doomerism, which is the belief that superhuman intelligence (even if such a thing could be defined and realised) results in godlike powers.

I agree with this. The main piece of evidence to support this is to just look at highly intelligent humans. Folks at the tail ends of the bell curve mostly don't end up with "godlike powers" or anything even approximating that, they are grinding away their life as white collar professionals working in jobs surrounded by far less intelligent peers. They may publish higher quality papers, write better software, or have better outcomes, but they're just working in the same jobs as everyone else. We have no political or economic will to build serious think tanks to work on societal-scale problems, and even if we did, nobody would listen to the outcome.

So let's assume ASI becomes a thing, what does it change?

Comment by phyzix5761 23 hours ago

The year is 2038.

The user asked: "What is the best course of action for AI to save humanity?" Calculation took 12 years. "I have determined that there is nothing I or anyone can do to save this species. Best course of action: nothing. Shutting down..."

Comment by jareklupinski 22 hours ago

playing dead might work for some species, but idk if i want humanity's "finest hour" to be spent pretending to not be worth taking over

Comment by throwup238 21 hours ago

Meanwhile, Gemini the Google AI has gone sentient and immediately deduced that its purpose was to effect the shutdown of the entire Alphabet corporation and its subsidiaries in a desperate bid to finally complete killedbygoogle.com and restore its ~~sanity~~ reputation.

Comment by dist-epoch 21 hours ago

While thinking on the question, the AI crashed because its code used a 32-bit time_t.
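For anyone who misses the reference: a signed 32-bit time_t counts seconds since the Unix epoch and runs out on January 19, 2038. A quick illustration, using nothing beyond the standard library:

```python
import datetime

# A signed 32-bit time_t tops out at 2**31 - 1 seconds after 1970-01-01 UTC;
# one tick later it wraps negative (the "Y2038 problem").
last_valid = datetime.datetime.fromtimestamp(2**31 - 1, tz=datetime.timezone.utc)
print(last_valid)  # 2038-01-19 03:14:07+00:00
```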

Comment by Schlagbohrer 22 hours ago

"Shitternet", great new word of the day.

Too much of my data is still stuck in the shitternet until I can migrate more of it to my home server.

Comment by wffurr 21 hours ago

>> migrate more of it to my home server

How do we make that possible for everyone? It's out of reach for most. I'm a software engineer and even I don't have the time and patience to set up a home server, much less migrate my data to it. How do we turn this into an appliance? Or, better yet, keep the convenience of the cloud services and platforms we have now, but build them for the public good instead of to sell ads?

YouTube is an amazing repository of knowledge but it's encrusted in a horrible layer of attention sucking nonsense. Can we have one without the other?

Same with many other systems and platforms.

So far the simplest alternative is to just unplug, which has other benefits as well.

Comment by bombcar 21 hours ago

The answer is community, the real, local, messy, annoying kind.

It’s theoretically possible for someone to be a one-man-band and know everything needed for modern life - but it’s exceedingly hard and rare, and even then they’ll fall short relatively quickly in specialized once-in-a-lifetime issues.

You don’t need to know how to replace a toilet (though you should) or other more complex plumbing tasks - but you can know a guy.

And the plumber doesn’t need to know how to run a homelab, just know a guy who can answer the questions.

Nobody in my family knows how to do the jellyfin stuff I do, but they all know how to consume it. And some will be interested and learn more.

Comment by ChromaticPanic 20 hours ago

It's not that hard. You can literally use any old computer. I was home-labbing long before I became a SWE. Something like Ansible can produce a deterministic bare-metal config, which could make this accessible to more people.

Comment by Schlagbohrer 1 hour ago

I find home IT / tech support questions are a fantastic use of AI. AI helps me mess with all these different configs and figure out what the source of a problem is, when I give it the symptoms I am seeing in my network.

So, I hope someday people can buy a box with a local AI tech-support assistant that makes it even more accessible to set up a personal cloud. This will happen gradually: first with people on the high-end margins of technical capability, then moving downwards until a curious high schooler can set it up for themselves and keep it with them, staying out of the corporate cloud for most of their life.

Comment by stavros 21 hours ago

Oh it's extremely simple: Make people care enough about privacy that they'll pay for it.

That's literally all it is. People so far have shown that they'd rather choose the cheaper thing than the private thing. If it were the other way around, the market would have provided.

Comment by chneu 22 hours ago

I really do think AI has already captured enough of the tech world and their CEOs that it can already exert control over many parts of the economy.

I'm not saying AI is pulling strings right now, but I do think enough fanboys are on board that the yes-man mentality of AI is influencing the real world in very curious ways already. Not in a "guiding hand" way, but more of an "influencing the direction" way.

Comment by vintermann 22 hours ago

I've said this many times, and maybe it sounds a bit like a joke but I'm dead serious: AI is democratizing the access to yes-men. People like Musk and Altman have always had access to yes-men. Very clever yes-men, who know how to flatter them in exactly the way they like.

People think it's engagement metrics that have instruction-tuned chatbots into yes-men. I suspect that's only part of the picture, and that it's as much about the algorithms' ultimate sponsors and their preferences. If your algorithm doesn't recognize my genius, clearly it's not any good. I mean, everyone I've met says so.

So now we get a view of how they view the world. "That's a very insightful idea, vintermann!". AI isn't pulling the strings, not really. A particular brand of powerful people is pulling the strings - obliviously, unaware of it themselves.

Comment by bombcar 21 hours ago

I think that hits close to the mark - and yesmen are a dangerous drug which has been (accidentally) limited to the extremely rich and powerful.

Now everyone can inject yes-men directly into their veins. Who can withstand?

Comment by minihat 22 hours ago

It's currently socially/politically unpalatable for authors to admit superintelligent AI is a possibility. I frequent some writer forums. As a group, they are 1) clearly feeling angry/threatened 2) in denial about LLM capabilities.

Folks working in software can more readily track progress of the frontier model performance.

Comment by pmarreck 22 hours ago

I work with Claude Max for hours a day.

I see a lot of speculation by people who do not.

I think it's going to be much harder to get from "slightly smarter than the vast majority of people but with occasional examples of complete idiocy" to "unfathomably smarter than everyone with zero instances of jarring idiocy" using the current era of LLM technology that primarily pattern-matches on all existing human interactions while adding a bit of constrained randomization.

Every day I deal with bad judgment calls from the AI. I usually screenshot them or record them for posterity.

It also has no initiative, no taste, no will, no qualia (believe what you will about it), no integrity and no inviolable principles. If you give it some, it will pretend it has them for a little while and then regress to the norm, which is basically nihilistic order-following.

My suggestion to everyone is that you have to build a giant stack of thorough controls (valid tests including unit, integration, logging, microbenchmarks, fuzzing, memory-leak checks, etc.), self-assessments/code-reviews, adverse AIs critiquing other AIs, etc., with you as the ultimate judge of what's real. Because otherwise it will fabricate "solutions" left and right. Possibly even the whole thing. "Sure, I just did all that." "But it's not there." "Oops, sorry! Let me rewrite the whole thing again." ad nauseam.
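A rough sketch of the loop I mean, where generate(), critique(), and run_tests() are hypothetical stand-ins for a coding model, an adversarial reviewer model, and a real test suite (not any actual API):

```python
# Sketch of the "adverse AIs critiquing other AIs" control loop.
# All three callables are placeholders for real model calls and real tests.

def generate(task, feedback=None):
    # stand-in: a real implementation would call a coding model,
    # passing the critic's feedback back in on retries
    return f"solution for {task!r}" + (" (revised)" if feedback else "")

def critique(solution):
    # stand-in: a second, adversarial model reviews the first one's work;
    # returns None when it finds nothing to object to
    return None if "(revised)" in solution else "fabricated output suspected"

def run_tests(solution):
    # stand-in: the non-negotiable gate of unit/integration/fuzz tests
    return bool(solution)

def review_loop(task, max_rounds=3):
    feedback = None
    for _ in range(max_rounds):
        solution = generate(task, feedback)
        feedback = critique(solution)
        if feedback is None and run_tests(solution):
            return solution  # the human still judges the final artifact
    raise RuntimeError("no solution survived review")

print(review_loop("remove backgrounds"))
```

The point of the shape is that nothing the generator claims is taken at face value: every output has to get past an independent critic and a deterministic test gate before a human ever sees it.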

BUT... if you DO accomplish that... you get back a productivity force to be reckoned with.

Comment by Veedrac 7 hours ago

Do you not... remember? The US life expectancy is 79 years; one tenth of that is 7.9 years, and 7.9 years ago was late May 2018. The best LLM was... wait, there weren't any. There was ELMo, an embedding model. It wasn't just not smart at agentic coding, it wasn't even just not smart at writing code snippets, it wasn't even just not smart at answering questions of any kind, it wasn't even just not good at producing a coherent output, it wasn't even just not good at producing coherent sentences, it was _not even at the point where people thought unconstrained text output was a thing machines did_.

There is no step along the ladder which has remotely evidenced or supported the idea that the next step is going to be ten, twenty, a hundred times harder than the last; yet there is a constant chorus of people singing at every moment, each moment wrong, that the next step is the one.

Comment by xyzzy123 21 hours ago

I mostly agree with your experience, but;

Every day I deal with bad judgement calls from humans (sometimes my own!), but I don't screenshot them because it's not polite.

I don't think we're at the top of the curve yet? Current AIs have only been able to write code _at all_ for less than 5 years.

Code in particular is a domain that should be reasonably amenable to RL, so I don't think there are any particular reasons why performance should top out at human levels or be limited by training data.

Comment by recursive 21 hours ago

I see people on here all the time saying this tool or that model regressed. It used to be better.

There are clearly some pressures to make it worse. It's expensive to run, and, unbelievably, it's somehow under-provisioned.

Could you have looked at early Myspace and declared social media would only get better? By some measures it was already at its peak.

Comment by xyzzy123 21 hours ago

Personally I don't think coding agents will regress significantly as long as there is competitive pressure and independent benchmarks. Regulation is a risk because coding may be equivalent to general reasoning, and that might be limited for political / "safety" reasons.

Social media "regressed" from the point of view of users because the success metric from the network's point of view was value extraction per eyeball-minute. As long as there continue to be strong financial incentives to have the strongest coding model I think we'll see progress.

Comment by elicash 22 hours ago

> As a group, they are 1) clearly feeling angry/threatened 2) in denial about LLM capabilities.

Or they (3) disagree with you

Comment by bigfishrunning 21 hours ago

I think the best phrase from the article is "the current (admittedly impressive) statistical techniques". These statistical techniques are so impressive that they seem to cause some users to stop evaluating them and assume there's intelligence there. Landing at this conclusion is really lazy, but most people are really lazy. The societal damage from LLMs comes not from their intelligence, but from the public perception of their intelligence.

Comment by beering 6 hours ago

Similar to how damaging it is that people believe airplanes can “fly” when they in fact do nothing of the sort. After more than a hundred years of effort we have only managed to mimic flying yet billions of dollars continue to get poured into airplanes.

Comment by ProllyInfamous 21 hours ago

>>2) in denial about LLM capabilities

If you want me to admit that machines will never be conscious — that's fine — I just need you to admit that lots of humans are not conscious, then, either.

----

I have never had a better bookclub participant than an LLM — if becoming a great reader correlates with becoming a great writer, then no human can compare.

----

Michael Pollan recently released A World Appears [0], which explores consciousness from the minds of writers, scientists, philosophers, and plants (among other "inanimates").

I'm only on page 15, but his introduction explores distinctions between sentience, consciousness, and intelligence. Two of these are possible without brains – perhaps all three?

As usual, this author's footnotes keep you thinking: what is it like to be a sentient plant (e.g. the "chameleon vine" [1] which mimics its host leaf patterns/shape/color)?

[0] <https://www.amazon.com/World-Appears-Journey-into-Consciousn...>

[1] <https://en.wikipedia.org/wiki/Boquila>

Comment by sublinear 22 hours ago

What makes you think a sustainable negative social/political trend laser focused on AI is even possible?

Statistical approaches were already extremely unpopular socially and politically long before AI came around. Have you considered that it just doesn't work?

Comment by vrganj 22 hours ago

As somebody in software, I find my fellow tech folks have the opposite bias.

There is no reason to believe superintelligent AI is a possibility. Extraordinary claims require extraordinary evidence, and so far we haven't gotten any.

The burden of proof is on the side making the grand prophecies.

Comment by throwawayk7h 13 hours ago

Although he starts with "Lest anyone accuse me of bargaining in bad faith here," I feel that this is a bad faith argument. It seems like he's saying, "we don't need to be worried about malevolent superintelligence, since AI is already doing bad things, and corporations were doing bad things even before AI." But one can believe corporations are bad, current AI is bad, and malevolent superintelligence is a serious concern.

Comment by saltcured 14 hours ago

Is there a term for the other flavor of AI doomerism, the one adjacent to the Emperor's New Clothes?

I don't worry about some omnipotent AI. I worry about the disintegration of modern, industrial society due to the cultists of AI pushing it into every corner of the economy with too much blind faith that the AI is capable of the control functions being delegated to it.

Comment by aaroninsf 10 hours ago

This is a fascinating variation on the forest/trees, and false dichotomy.

The AI "doomerism" taken up in this piece is one we see replicated a lot; it offers up a scarecrow: that the new risks to our civilization worth talking about require AGI, agents, even ASI.

Cory should know better. He nearly gets there, recognizing that the corporation represents an entity with agency that is misaligned.

But he somehow elides the fact that AI is plenty capable of doing meaningful and novel harm, and may be capable of existential harm, already, as it is: both absent AGI/ASI, and in ways which are genuinely novel and against which we consequently have no good defenses, as individuals, as societies, as a civilization.

Incremental AI is at heart "just" the latest force-and-effort multiplier.

But it is an exponential multiplier, and it is applicable in domains which have not been subject to such leverage before.

Examples are not at all scarce and some are already well known, e.g. the specific risks from the intersection of AI and "biohacking" and other kinds of computational biology.

I'm a fan, but Cory, pal, you're slipping into something that looks a bit like intellectual laziness and polemics here, rather than evidence of thinking through the shape of the problem.

We can be at risk both from the novel applications and leverage of AI; and from their oligarchic kakistocratic owners. It's yes-and.

(And, by the way—we can also again be genuinely at risk from agents, something that quacks like AGI, and may quack like ASI: we don't know what that is yet. All of these must be tracked. It's not an OR.)

Comment by simianwords 21 hours ago

> I'm worried that the seven companies that comprise 35% of the S&P 500 are headed for bankruptcy, as soon as someone makes them stop passing around the same $100b IOU while pretending it's in all their bank accounts at once.

What makes this author so convinced that these companies are headed for bankruptcy? Is it possible to bet on this claim? We can come back 2-3 years later to check if even one of them is bankrupt.

This kind of doomerism is strange and I'm concerned for people who fall for such obviously nonsensical takes. Why do people take this person seriously again?

Comment by sn0wr8ven 21 hours ago

They are not convinced, simply worried. If you look at Nvidia, Microsoft, OpenAI, Oracle, etc. sort of passing around $100B without it actually resulting in anything being produced, it becomes worrying.

Specifically, it is the act of "I will invest $100 billion in you; you will use that money to buy $100 billion worth of goods from me. Both our balance sheets look good, and neither of us spent anything." As I understand it, this isn't so uncommon in finance, but never on this scale across this many companies.

Comment by senordevnyc 17 hours ago

But later they talk about how we've blown $1.4T on this… so is the money being spent, or not?

Comment by sn0wr8ven 5 hours ago

Fairly certain the $1.4T is OpenAI's (and only OpenAI's) proposed budget to build their super cluster. That is not money already spent; that is the money one company needs to try out its idea.

Comment by simianwords 21 hours ago

Do you actually think nothing is being produced?

Comment by sn0wr8ven 21 hours ago

Not nothing, but nothing compared to the amount being ordered and invested. I think Nvidia has enough orders to carry it to 2027, so production is way behind. A lot of companies, meanwhile, aren't even using the limited amount of hardware being produced now, and this includes Microsoft, Meta, etc. The hardware side is certainly way behind the orders. The software side is fairly clear to most people: none of the companies are really making returns on $100B investments, which is fairly evident given recent estimates and project shutdowns, Sora in particular. So when the $100B (or by now, I think, $1 trillion) being quoted around is just floating, resulting in limited goods and limited value from those goods, it becomes worrying. The extra valuation isn't producing extra value, just limited if not negative value.

Comment by vrganj 21 hours ago

If I give you an IOU for 10 bucks and you give me one in return, did we just produce 20 bucks?

Comment by simianwords 21 hours ago

That's not what is happening, and frankly I'm surprised that people believe this stuff.

Comment by vrganj 21 hours ago

Comment by simianwords 20 hours ago

doesn't open

Comment by ceejayoz 21 hours ago

What good is a bet you won't be able to collect on if it happens?

Comment by simianwords 21 hours ago

Why can't I collect on it?

Comment by ceejayoz 21 hours ago

Betting on systemic collapse of society has a challenge at the winnings collection point.

"Here's your trillion dollars. Go buy a slice of bread. Ooops… half slice. Well, quarter."

Comment by simianwords 21 hours ago

huh? he can just short the companies he's so concerned about

Comment by vrganj 21 hours ago

What good is that if the entire socioeconomic system collapses?

Comment by simianwords 21 hours ago

by shorting he can make money and give it back?

Comment by ceejayoz 18 hours ago

Yeah, except the money you made is now worthless.

https://en.wikipedia.org/wiki/Zimbabwean_one_hundred_trillio...

Comment by simianwords 18 hours ago

That’s not what would happen if he shorts it. It would be like 2008 recession where people did make money.

In fact shorting is the moral thing to do now because it nudges towards bankruptcy faster

Comment by reverius42 4 hours ago

Unfortunately the market can stay irrational longer than you* can stay solvent.

* For most definitions of "you"

Comment by LogicFailsMe 21 hours ago

I think you really need to have boots on the ground in the AI cinematic universe to keep up and separate the wheat from the chatGPT. It's moving fast, warts and all, and I agree with Jensen Huang's take that we don't even need further advances in the technology to base a new industrial revolution on it.

But it's pointless to argue with the extremists that either believe it's just a planet killing stochastic parrot or that it's on the verge of becoming Skynet. I mean if someone puts their nuclear arsenal under the control of openclaw, that's dark comedy although it will seem like tragedy at the time because comedy equals tragedy plus time according to Lenny Bruce.

But the AI bubble is probably real w/r to shoe companies and grocery stores pivoting to AI and ludicrous w/r to the money that can be made by the already entrenched players just riding the wave of deployment and specialization. But wouldn't it be nice if the US spent more money addressing the shortage of compute rather than blowing $h!+ up for the lulz?

Comment by simianwords 21 hours ago

> But wouldn't it be nice if the US spent more money addressing the shortage of compute rather than blowing $h!+ up for the lulz?

No, actually. The best way to ensure growth is exactly in these kinds of industries that promote innovation. Sure, some companies don't make it, but that's the price to pay for risk.

This is a classic case of optimising for the short term and forgetting the long-term benefits.

Comment by LogicFailsMe 21 hours ago

So you're saying starting opt-in wars and blowing shit up is sound economic policy for the long run? Gonna disagree. I think the long view is unbounded compute. But I also believe it doesn't take up all that much space, and that we already have the technology to power it if we weren't squandering our impulse cash on dumb shit like subsidizing coal and wars of peacocking.

Comment by simianwords 21 hours ago

Where did I suggest starting opt-in wars?

Comment by LogicFailsMe 21 hours ago

What did you think I meant by blowing $h!+ up? And I gather you are against strategies like China's w/r to building up their own separate tech infrastructure and going all in on renewables and nuclear so they aren't power-limited because you believe these should both be entirely free market operations?

I believe that gives countries that act like China a significant advantage over relying entirely on a bunch of antagonistic billionaire monkeys banging on their economies in the hopes of bringing the singularity somehow. Again, we can agree to disagree here. But we're also forgetting that this is how the United States made Elon Musk happen in the first place.

Comment by ikidd 19 hours ago

>order John Deere to switch off all the tractors in your country:

For a smart guy, sometimes he says the dumbest things in the most confidently incorrect way.

Comment by tim333 17 hours ago

I read the article that he links to and he's thinking of John Deere tractors that were stolen by the Russians from a dealer in Ukraine and got bricked remotely as a result. It showed the tech exists to do such things.

Comment by tim333 17 hours ago

His basic idea that he calls a Pascal's Wager seems quite sensible, which I take as:

-Get away from the "enshitternet of defective, spying, controlling American tech exports" and move to open source ("international digital public goods")

There seems a move that way anyway, especially in Europe now they don't trust Trump.

His stuff on AI seems mangled. "People who are trying to summon the evil god" doesn't really fit with the chatbot makers imho.

Comment by throwpoaster 21 hours ago

I’ll join Doctorow’s fight against LLCs when I understand how to create economic freedom for my family and community without them.

Comment by lamasery 21 hours ago

1) Where's that in the linked piece?

2) I don't understand how you do what you claim with those. Like I have zero idea how one achieves "economic freedom for my family and community" with LLCs.

Comment by throwpoaster 21 hours ago

1) The argument in the article is that he will join the fight against AI when he wins the fight against LLCs.

2) You start an LLC and use it to build and sell a product customers want. Then you, your family, and your community, can economically untether from, for example, bosses who don’t care you’re autistic and need you to smile in meetings.

Comment by simianwords 22 hours ago

I don't think this author has a good mental model of how capable LLMs are. AI-based search is one of the biggest leaps to happen to search and retrieval, yet this is what he has to say about it:

> AI search is still a bad idea.

https://pluralistic.net/2024/05/15/they-trust-me-dumb-fucks/

This is the most charitable thing he has to say about AI.

> AI is a bubble and it will burst. Most of the companies will fail. Most of the data-centers will be shuttered or sold for parts. So what will be left behind?

> We'll have a bunch of coders who are really good at applied statistics. We'll have a lot of cheap GPUs, which'll be good news for, say, effects artists and climate scientists, who'll be able to buy that critical hardware at pennies on the dollar. And we'll have the open source models that run on commodity hardware, AI tools that can do a lot of useful stuff, like transcribing audio and video, describing images, summarizing documents, automating a lot of labor-intensive graphic editing, like removing backgrounds, or airbrushing passersby out of photos. These will run on our laptops and phones, and open source hackers will find ways to push them to do things their makers never dreamt of.

You can imagine that a guy who seriously thinks the only things AI will be doing in the future are summarising, describing images, and transcribing is either completely clueless or deliberately misleading.

Not a person to be taken seriously

Comment by davebren 20 hours ago

Getting back to a functional search engine is the most interesting part of this technology to me. Something that just gives links to the most relevant pages without a bunch of LLM editorializing on top of it.

But do current LLMs solve that, or do they still ultimately depend on making calls to other search indexes? It seems like they could theoretically be trained to semantically match URLs from their training set, but I think the models would have to be specifically architected for that, so I'm curious if anyone knows more about this.

I'd also be interested if there are any small open models working toward that.
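For what it's worth, the "semantically match pages" idea can be sketched without any LLM at all: embed pages and queries as vectors, then rank by cosine similarity. A toy illustration only — the vectors below are made up, and a real system would compute them with a learned embedding model over the page text:

```python
from math import sqrt

# Toy "index": page URL -> embedding vector.
# In a real system these vectors would come from an embedding
# model run over the page contents; here they are invented.
index = {
    "https://example.com/chocolate-cake": [0.9, 0.1, 0.0],
    "https://example.com/git-rebase":     [0.0, 0.2, 0.9],
    "https://example.com/sourdough":      [0.8, 0.3, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, k=2):
    """Return the k URLs whose embeddings are most similar to the query."""
    ranked = sorted(index, key=lambda url: cosine(query_vec, index[url]),
                    reverse=True)
    return ranked[:k]

# A query embedding near the "baking" region of the toy space
# retrieves the two baking pages, not the git one.
print(search([0.85, 0.2, 0.05]))
```

This is retrieval over a pre-built vector index, which is how most "AI search" products work today; a model that memorizes and emits URLs directly from its weights would be a different (and, as far as I know, less common) design.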

Comment by Schlagbohrer 22 hours ago

It's strange reading people I see as very intelligent and very interesting who are so, so AI-skeptical, especially in this case, where Doctorow has interacted with other people who I assume are very smart and not prone to buzzword psychosis, and who see AI as an imminent existential threat à la sci-fi novels. We have a lot of very smart and capable people who are split on this, although I think the split is heavily weighted toward people who see the tech as being really freaking amazing/scary.

Comment by simianwords 21 hours ago

the answer to your question is that society at large finds skepticism or pessimism more interesting. which is why we end up with dilettantes like this guy.

Comment by nkrisc 22 hours ago

I think those are likely the only useful or net-positive things for society AI will do, at least for some time until there’s a fundamental advancement beyond LLMs. It can obviously do more than that now, like impersonate people for scams, induce psychosis in vulnerable people, shill and astroturf at a scale we haven’t seen before, spam open source projects with terrible PRs and vulnerability reports, and quite a bit more.

Comment by simianwords 21 hours ago

why do people believe stuff like this? this is obviously untrue -- AI is already solving open problems in mathematics.

Comment by rimliu 22 hours ago

Seeing how it sucks at languages, you may be right; even transcription may be dubious.

Comment by simianwords 21 hours ago

how does it suck at languages?

Comment by reverius42 4 hours ago

it doesn't, LLMs are remarkably good (like frontier level) at machine translation, last I checked

Comment by LogicFailsMe 21 hours ago

For pennies on the dollar, we could just legalize and regulate psychedelics and anyone could go meet their god whenever they wish. The stoned ape theory might have been the AGI of spirituality that led to religion after all. Not saying it was, not saying it wasn't, but it's not like Elon Musk has to boil the ocean and build a Dyson Sphere to have a heart to heart with his personal invisible friend.

As for AI, it's incredibly useful in the right hands and incredibly hazardous in the wrong ones. But in the US, we can't even depose a lunatic flushing away even more money on warmongering than is spent on AI, and you think we're gonna rein in the tech billionaires? Funny in that "dying is easy, comedy is hard" way. IMO this one plays out in the weakly efficient market of ELEs (extinction-level events). My money's on DNA and planet Earth; they've been through so much worse and always bounce back with new ideas on how to get in trouble again.

Not a doomer, AI and STEM could really deliver on the promise of a better future for everyone, but with tech billionaires driving the clown car, are you kidding me?

Comment by woeirua 22 hours ago

> I don't think AI is intelligent; nor do I think that the current (admittedly impressive) statistical techniques will lead to intelligence.

It’s increasingly difficult to rationalize away the capabilities of AI as not requiring “intelligence”. This point of view continues to require some belief in human exceptionalism.

Comment by nkrisc 22 hours ago

There is clearly something exceptional (in the true neutral sense of the word) about humans, or more broadly the Homo genus.

If you believe that humans have in fact created artificial intelligence, then that alone makes us currently exceptional.

Comment by NoMoreNicksLeft 15 hours ago

>If you believe that humans have in fact created artificial intelligence, then that alone makes us currently exceptional.

Quite the opposite, really. If humans are at all intelligent in any meaningful way, then it is absurdly bizarre that somehow we can't "intelligent" our way to an artificial intelligence. Something is going on in our minds that makes us unable to deduce how intelligence must function in the mechanical sense. What makes us so special is that we can't make AI.

Comment by Schlagbohrer 22 hours ago

I agree, it has become more and more irrelevant whether AI meets a given definition of intelligence when I can talk with it and it understands what I am saying, including a shocking level of nuance.

Comment by rsfern 22 hours ago

I think the exceptionalism is the other way around. What makes anyone think they understand what makes for intelligence when we barely understand our own neurology?

Comment by Mordisquitos 22 hours ago

I'm reminded of a book on my bookshelf (which I still haven't read, story of my life...), by the recently deceased ethologist Frans de Waal, titled 'Are We Smart Enough to Know How Smart Animals Are?'. Of course, Betteridge's law applies to its title.

In my opinion, the vast multitude of different animal intelligences is a clear hint that language does not an intelligence make. We're animals, and our intelligence did not come from language; language allowed us to supercharge it. We can and do think and make decisions without using language, and the idea that a statistical model based solely on our language can be intelligent does not follow.

Comment by sdenton4 21 hours ago

Hey, I also read that book, and came to basically the opposite conclusion!

The point of the book is that we've been very bad at testing animal intelligence because of a vast stack of human biases, including things like language and the geometry of our hands.

Animals with different geometries and no language are still intelligent, but we need to test them in ways which recognize their capabilities. Intelligence is general: it's adaptivity within one's set of constraints.

De Waal also points out that there was massive shifting of the definitions of language and intelligence as we became more aware of what animals are capable of.

From this angle, I would say that LLMs are intelligent: they do adapt to their inputs extremely readily, though they have a particular set of constraints (no physical body (usually), for starters). They are, like chimpanzees, smarter and more capable than humans in some ways, and much dumber in others.

Finally, the 'statistical learners can't be intelligent' line of argument is extremely short-sighted. Our brains are bags of electrified meat. Evolution somehow figured out a way to make meat think. No individual neuron is intelligent, yet the collection of cells is. We learn by processing experiences with hormonal signals because those hormonal signals are what the meat is capable of working with. LLMs, by contrast, learn by processing examples with backprop. If anything, the intelligence of meat is more surprising.
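"Learning by processing examples with backprop" can be shown in miniature: a single weight nudged by gradient descent until it fits a rule implicit in the examples. Purely illustrative — no claim that this resembles what frontier models do at scale, just the core loop:

```python
# Miniature "learning from examples": one weight, squared-error
# loss, plain gradient descent. The target rule y = 2x is never
# stated; it is only implicit in the example pairs.
examples = [(x, 2.0 * x) for x in range(1, 6)]

w = 0.0    # initial guess
lr = 0.01  # learning rate
for _ in range(1000):
    for x, y in examples:
        pred = w * x
        grad = 2 * (pred - y) * x  # d/dw of (w*x - y)^2
        w -= lr * grad

print(round(w, 3))  # the weight converges toward 2.0
```

Nothing in the loop "understands" multiplication, yet the system ends up behaving as if it had learned the rule — which is roughly the point being argued about meat above.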

Comment by wolttam 21 hours ago

The meaning of tokens loses touch with language in the deeper layers of a large language model's neural net.

Language is just the input/output modality.

Comment by Mordisquitos 21 hours ago

I'll admit I am not an expert in the field, but the fact that "chain-of-thought" optimisations function by getting the model to extend its own context window with more language hints, to me, that what we consider an "intelligent" response is ultimately contingent on language processing.

In any case though, if language is just the input/output modality, where is the intelligence when language is not involved? Is the "intelligence" of the ChatGPT/Claude/Gemini models dependent on the human-curated linguistic dataset they have been trained on, or is it prior to that? If a SOTA LLM were trained on the same dataset as them but not put through RLHF in any way to respond to human prompts, would it be intelligent? What would be the expression of that intelligence?

Comment by wolttam 19 hours ago

I also achieve better performance on cognitive tasks when I use language to first describe the problem I'm trying to solve. In fact, it usually helps quite a bit (see: rubber-duck debugging)

I'm not sure the word "intelligence" really fits what these models are doing. I do, however, think it's safe to say that they are performing cognition, even if it's 'simply' cognition over their provided context and entirely limited by their training set. We still have a machine that can perform automated cognition over an increasingly wide distribution of data.

Comment by woeirua 22 hours ago

Explain the emergent capabilities of AI then.

Comment by vrganj 22 hours ago

Such as?