The Abstraction Fallacy: Why AI Can Simulate but Not Instantiate Consciousness

Posted by LopRabbit 19 hours ago

Comments

Comment by in-silico 13 hours ago

From my observations, there are generally four camps in the machine consciousness discussion:

1. People who haven't really thought about it, and assume they're conscious because they talk like a human.

2. People who haven't really thought about it, and assume they can't be conscious because humans are obviously somehow special. This appears to be the largest group, and is linked to our religiously rooted culture in which human exceptionalism is the default.

Those first two groups comprise the majority of people, and are not worth engaging with.

3. People who have thought about it, and concluded that they might be conscious, usually for computationalist/functionalist reasons. This is the group I place myself in.

4. People who have thought about it, and concluded that they can't be conscious, usually for biological-naturalist reasons. This seems to be the predominant group on Hacker News (among those who discuss it).

Comment by sunrunner 13 hours ago

I'm not sure I'd agree that people in groups 1 and 2 aren't worth engaging with.

The interesting thing to do in both cases is to look at the 'they talk like a human' and 'are obviously somehow special' parts and separate out the ideas of language, intelligence (memory, fluidity, abstract reasoning), _aliveness_ (as a biological process), and finally metacognition and theory of mind. Then see whether their idea of consciousness as a super-bundle of the above (which is how I assume a lot of default ideas about consciousness work) actually sticks, or whether it falls apart once beings can have a subset of those properties but not all of them.
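
To make that concrete, here's a toy sketch of the test I mean (the property assignments are obviously arbitrary and just for illustration, not a serious taxonomy):

```python
# Toy version of the "super-bundle" test: consciousness defined as a strict
# bundle of separable properties. Property sets below are purely illustrative.
BUNDLE = {"language", "intelligence", "aliveness", "metacognition", "theory_of_mind"}

beings = {
    "adult human": {"language", "intelligence", "aliveness",
                    "metacognition", "theory_of_mind"},
    "cat": {"intelligence", "aliveness", "metacognition"},
    "LLM": {"language", "intelligence"},
    "bacterium": {"aliveness"},
}

for name, props in beings.items():
    if BUNDLE <= props:  # set containment: has every bundled property
        print(f"{name}: conscious under the bundle definition")
    else:
        print(f"{name}: only {sorted(props)} -- the bundle definition strains here")
```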

Also, I nominate myself to be in the 'People who have thought about it and are becoming more doubtful that I myself am conscious, and the question might be moot.' group.

Comment by in-silico 13 hours ago

I'm curious about your doubting your own consciousness, given that "we humans are conscious" is pretty axiomatic to its definition and one of the few pieces that most people agree on.

Comment by Kim_Bruning 13 hours ago

Take a look at Daniel Dennett, for starters!

If you're looking for one of the genuine angles on this:

Consciousness is horrendously under-defined, to the point that some people say something like "you know, at this point I figure we'd be better off not having this word at all."

Some days that's me, with a headache.

Comment by in-silico 13 hours ago

So it's more of a semantic argument than an actual rejection of the idea that you experience qualia/sentience/something?

Comment by reverius42 12 hours ago

Dennett's whole thing is the rejection of qualia. See https://web.ics.purdue.edu/~drkelly/DennettQuiningQualia1988...

Comment by Kim_Bruning 12 hours ago

You'd have to define those terms operationally first, somehow, before I could give you an honest reply. Most people can't (and those who do disagree with each other), which suggests something structural.

[It can be done. But it'll be dirty]

Comment by joquarky 12 hours ago

What exactly is the "you" in your sentence?

Comment by thfuran 10 hours ago

What about group 5: Actually, we're just simulating consciousness too.

Comment by FloorEgg 8 hours ago

Assuming 3. Maybe in order to reproduce human-level consciousness one would need to treat at least most human cells as neurons, and reconstruct all the diversity of neuron types and their signalling mechanisms.

If human consciousness is reproducible, maybe we will long underestimate the depth and diversity of the machinery it uses to model reality.

Comment by kbelder 9 hours ago

I would place myself in 3, with the caveat that I don't think any current LLMs or other programs/datasets/relationships are close to conscious. It's certainly possible in the future, though.

Atoms arranged into a brain generate consciousness. There's no reason to think atoms in other arrangements can't. Brains aren't magic, just well optimized.

Comment by in-silico 7 hours ago

What would have to change about future systems to make you think they're conscious in a way that modern systems aren't?

That is to say, what evidence would you need from a system in order to think that it's conscious?

Comment by joquarky 12 hours ago

Yep, #2 feels like geocentrism all over again.

Comment by Kim_Bruning 13 hours ago

Am I the only person who is confused by there being a philosophy called "biological naturalism" which is not the science of biology?

Comment by Nevermark 10 hours ago

“Natural” is a word often used in opposition to science.

It really has 1000 meanings. Usually whatever the speaker wants it to mean.

Comment by LeCompteSftware 12 hours ago

As someone who places themselves in #4, at some point the people in #3 need to accept a bit of scientific humility. The reason we are "biological naturalists" is that we can point to hundreds of thousands of conscious species on planet Earth which are not humans, and whose consciousness clearly has nothing to do with an ability to say "Forsooth, I am a conscious thinking being." AI folks have been ignoring this since Alan Turing! And it's not a coincidence that humanity has yet to build a robot which is convincingly smarter than a cockroach.

If you grant that humans are conscious, then surely domestic cats are as well. It is simply irrational to talk about Claude's "consciousness" without actually engaging with this: cats, humans, pigeons, fish, etc etc all share some common features we associate with consciousness (I don't mean sensory awareness, I mean the fuzzy cognitive concept). Claude really does not. In fact Claude doesn't even have much in common with uncontacted hunter-gatherers! Claude imitates the solipsism of formally educated human philosophers.

It is uncharitable and curmudgeonly but totally scientific to dismiss people in camp #3 as unserious and not worth engaging with: they ignore scientific criticism and don't provide any themselves, it's just a mishmash of sci-fi-adjacent philosophy. There's nothing "functional" about ignoring animals and there's nothing scientific about waving your hands and saying "computationalism." That's certainly how I feel. I know this isn't a very nice comment. But I am so sick of AI folks thinking they can ignore animals and still have an honest conversation about machine consciousness. It's just sci-fi ghost stories.

Comment by Kim_Bruning 12 hours ago

Oh dear, just a short while after me saying I was confused by the term too.

Are you sure you're a <biological naturalist>? [1] Which is to say, do you adhere to Searle's position about syntax not leading to semantics?

Or is it more like: You're scientifically inclined, and thus you accept Ethology[2] or Neuroscience[3] as being empirically rigorous studies of animal behavior and cognition respectively?

Incidentally, Alan Turing's 1950 imitation game paper [4] was actually pretty ethological if you look it up. He immediately replaces the question "can machines think" with a more practical operationalization: the famous imitation game.

[1] https://en.wikipedia.org/wiki/Biological_naturalism

[2] https://en.wikipedia.org/wiki/Ethology

[3] https://en.wikipedia.org/wiki/Neuroscience

[4] https://en.wikipedia.org/wiki/Computing_Machinery_and_Intell...

Comment by Kim_Bruning 11 hours ago

(ps. A quick search gives me the impression <biological naturalism> arguably rejects much of biology's findings on animal cognition. My mail is in my user description if you'd like me to dig up the relevant literature for you.)

Comment by reverius42 12 hours ago

What is the evidence that non-human animals have the "fuzzy cognitive concept" we call consciousness, but Claude "really does not"?

I personally have not been ignoring animal consciousness in how I think about the possibility of AI consciousness and I don't see how animals having consciousness means that AI can't.

Comment by in-silico 11 hours ago

What about robots? Not necessarily humanoid robots, but the classic RL demonstrations that can scurry around and achieve simple goals?

In the computational functionalist argument, the thing that we share with cats, pigeons, and robots (and in some ways Claude) is the fact that we react to our environment in a way that requires computation.

I myself lean (without confidence) towards weak panpsychism, where a lot of things, from humans down through cats, fish, and trees to bacteria, are in some way sentient. We all have in common a computationally driven sense/"think"/act cycle, and that is where sentience derives from.
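
To be explicit about the cycle I mean, here's a minimal toy loop (every function body is an arbitrary stand-in, not a claim about any real system):

```python
import random

# Bare sense -> "think" -> act cycle: the skeleton shared (on this view) by
# bacteria, cats, and language models alike. Everything here is a stand-in.
def sense(env):
    return env["signal"]                      # observe the world

def think(observation, memory):
    memory.append(observation)                # integrate into internal state
    return "approach" if observation > 0 else "avoid"

def act(action, env):
    env["signal"] += 1 if action == "approach" else -1

env, memory = {"signal": random.choice([-1, 1])}, []
for _ in range(5):
    act(think(sense(env), memory), env)

print(memory)  # the agent's accumulated "experience" of its environment
```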

Comment by mstank 17 hours ago

Glad to see Searle's Chinese Room mentioned early on in the paper. "Syntax is not sufficient for semantics," no matter how much compute we throw at the problem.

My very amateur view is that until the underlying compute architecture and substrate resembles artificial biology more than silicon, we won't get there.

The latest advances in AI have given me even more appreciation for biology and evolution. It's incredible what the human brain can do with about 20 watts of power, barely enough to power a lightbulb, compared to what it takes to run even our most basic LLMs.

Comment by Kim_Bruning 13 hours ago

Hofstadter and Dennett have taken great pains to try to debunk Searle. No love lost in that corner of the philosophical world.

Comment by diablozzq 16 hours ago

Consciousness is a property of human biology, and quite clearly not a prerequisite for intelligence.

I say "clearly" because at some point we reach proof by construction. As in, we have already built intelligence, because the system already completes tasks that require intelligence.

We are so far into what would have been science fiction five years ago, and the goalposts have moved accordingly.

For anyone who disagrees, I challenge you to prove deep learning systems cannot solve <task with specific outcome humans can solve but not AI> given sufficient data and compute.

I think the strongest sign that we already have true intelligence is that no one has built a benchmark that AI cannot solve.

Yes, our current robotics lags AI, so we don’t have the equivalent of the human body to give our deep learning systems. Thus, it’s expected AI will be limited in physical scenarios.

Second, hallucinations are present in humans. We are highly biased to ignore all the misspoken words in everyday life as we have error correction built into normal conversations. How often do you have to have someone repeat or rephrase something?

It just doesn’t make sense to me.

It’s like there are people out there whose belief systems are incompatible with this tech existing.

Sure, it has limitations due to training data. It has limitations with no physical body. It cannot combine training and inference the same way a human does. But none of those are measures of intelligence or required to be intelligent.

Comment by joquarky 12 hours ago

I only disagree with your first sentence:

> Consciousness is a property of humans biology

You're assuming consciousness is a product of biology rather than attracted to biology.

Comment by lukev 16 hours ago

"intelligence" is not well defined. LLMs are throwing this into high relief with how "spiky" their capability curve is. Yes, they can solve some crazy hard problems with enough compute and thinking tokens. Yes, they also fall down in the dumbest ways without an ability to self-correct... despite how "smart" they are, human supervision remains absolutely critical for any system of importance.

But I don't think the takeaway is "humans are intelligent and LLMs are not"; it's that our vocabulary for talking about the intersection of language, cognition, and compute is not up to the task.

Comment by diablozzq 14 hours ago

Intelligence was supposedly well defined, but folks kept getting their definitions wrecked by modern LLMs, so we had to move the goalposts.

No true Scotsman fallacy.

Comment by jwpapi 11 hours ago

Challenge: Make money online

Comment by duped 15 hours ago

I cannot express concisely how deeply I disagree with all of this.

It is not just uninteresting that computer programs can be written to accomplish information tasks; it's intellectually dishonest to anthropomorphize machines and algorithms by characterizing it as consciousness.

> no one has built any benchmark that AI cannot solve

"Be human."

Comment by diablozzq 14 hours ago

No one cares if LLMs are humans. They will never be, by definition.

My point still stands.

The crux of my argument is that consciousness is irrelevant to any AI debate. It's not necessary to perform tasks we previously deemed only humans could do.

Comment by Kim_Bruning 15 hours ago

I'm partial to bioinformatics as per Paulien Hogeweg's definition, which explicitly has computation as a property of life.

This approach actually makes testable (and tested) scientific predictions.

This makes Searle-derived papers super weird for me, since from my perspective they seem to disprove the existence of life. (And it makes the name of the philosophy, "biological naturalism", very ironic to me :-P )

(for extra irony, Turing actually went into biology late in his life. See: Turing 1952 "The Chemical Basis of Morphogenesis" )

Comment by kbelder 9 hours ago

I'm disappointed that Searle's paper is still influential, at least out in the general culture. It's nonsense, and at face value, would disprove consciousness in humans unless you accept some mystic indefinable soul into the mix. Or quantum magic, which is just as mystic.

Comment by jwpapi 11 hours ago

I think the question says as much about ourselves as it does about AI: we don't know exactly how our own intelligence and consciousness work, and therefore it's very tough, if not impossible, to compare them to AI intelligence and consciousness.

Are we just autocomplete machines with sufficiently variable pseudo-randomized input?

Comment by tmvphil 12 hours ago

> To fully understand the difference between the embodied robot running an algorithm on a chip and the biological mapmaker, we need to remember that for the latter, subjective experience is a given, not because of abstract information processing, but because of a specific, metabolically constituted physical reality.

Total drivel. Consciousness in biological systems is "a given" because of metabolism?

Comment by jdmoreira 16 hours ago

This is the complete opposite of Hofstadter's "Strange Loop" hypothesis, which intuitively makes much more sense to me.

Comment by defterGoose 12 hours ago

It's the pervasive theme in the book, but never really given a conceptual grounding further than "this sort of looks like recursion or can be modelled circularly so it's a strange loop". The vagueness of it reveals itself as being "more intuitive", because a vaguer pattern will have more matches. I don't remember Hofstadter digressing on whether these loops work "in reverse" either, which is sort of what the author here is denying. Basically positing that f doesn't have a well-defined inverse.

Comment by yogthos 16 hours ago

The paper makes a huge assumption that only thermodynamic constitutions can produce consciousness. The assumption seems completely unsubstantiated, given that thermodynamics is just states, and states are replicable. The whole Chinese Room idea is pure sophistry as well. Both Dennett and Hofstadter address it quite well, in Consciousness Explained and I Am a Strange Loop respectively.

Comment by emp17344 12 hours ago

You know that Dennett and Hofstadter aren’t the beginning and end of Philosophy of Mind, right? Calling Searle’s Room “complete sophistry” is hilariously misguided, considering the vast majority of academic philosophers consider it valid: https://survey2020.philpeople.org/survey/results/5002#

Comment by Kim_Bruning 11 hours ago

You'll need to unpack that survey for us a bit. There's a lot going on and the wording is very terse.

Comment by emp17344 10 hours ago

It’s a large survey of academic philosophers on famous philosophical arguments. In this case, the question is asking whether philosophers agree with Searle and believe the Chinese room does not understand Chinese, or disagree with Searle and believe the room does understand Chinese.

Comment by Kim_Bruning 9 hours ago

I actually agree that the room does not understand Chinese, because that's the only possible thing that could happen in real life.

That doesn't mean I agree with Searle though!

It depends on how the question is asked. Again, the wording is very terse so I can't determine what the people thought they were answering. Possibly you have a better insight?
