The Abstraction Fallacy: Why AI Can Simulate but Not Instantiate Consciousness
Posted by LopRabbit 19 hours ago
Comments
Comment by in-silico 13 hours ago
1. People who haven't really thought about it, and assume AIs are conscious because they talk like a human.
2. People who haven't really thought about it, and assume AIs can't be conscious because humans are obviously somehow special. This appears to be the largest group, and it is linked to our religiously rooted culture, in which human exceptionalism is the default.
Those first two groups comprise the majority of people, and are not worth engaging with.
3. People who have thought about it and have come to the conclusion that AIs might be conscious, usually for computationalist/functionalist reasons. This is the group I place myself in.
4. People who have thought about it and have come to the conclusion that AIs can't be conscious, usually for biological-naturalist reasons. This seems to be the predominant group on Hacker News (among those who discuss it).
Comment by sunrunner 13 hours ago
The interesting thing to do for both cases is to look at the 'they talk like a human' and 'are obviously somehow special' parts, then separate out the ideas of language, intelligence (memory, fluidity, abstract reasoning), _aliveness_ (as a biological process), and finally metacognition and theory of mind. Then see whether the idea of consciousness as a super-bundle of all of the above (which is how I assume a lot of default ideas about consciousness work) actually holds together, or whether it falls apart once beings can have a subset of those properties but not all of them.
Also, I nominate myself to be in the 'people who have thought about it, are becoming more doubtful that I myself am conscious, and suspect the question might be moot' group.
Comment by in-silico 13 hours ago
Comment by Kim_Bruning 13 hours ago
If you're looking for one of the genuine angles on this:
Consciousness is horrendously under-defined, to the point that some people say something like: "you know, at this point I figure we'd be better off not having this word at all."
Some days that's me, with a headache.
Comment by in-silico 13 hours ago
Comment by reverius42 12 hours ago
Comment by Kim_Bruning 12 hours ago
[It can be done. But it'll be dirty]
Comment by joquarky 12 hours ago
Comment by thfuran 10 hours ago
Comment by FloorEgg 8 hours ago
If human consciousness is reproducible, we may still long underestimate the depth and diversity of the processes it uses to model reality the way it does.
Comment by kbelder 9 hours ago
Atoms arranged into a brain generate consciousness. There's no reason to think atoms in other arrangements can't. Brains aren't magic, just well optimized.
Comment by in-silico 7 hours ago
That is to say, what evidence would you need from a system in order to think that it's conscious?
Comment by joquarky 12 hours ago
Comment by Kim_Bruning 13 hours ago
Comment by Nevermark 10 hours ago
It really has 1000 meanings. Usually whatever the speaker wants it to mean.
Comment by LeCompteSftware 12 hours ago
If you grant that humans are conscious, then surely domestic cats are as well. It is simply irrational to talk about Claude's "consciousness" without actually engaging with this: cats, humans, pigeons, fish, etc etc all share some common features we associate with consciousness (I don't mean sensory awareness, I mean the fuzzy cognitive concept). Claude really does not. In fact Claude doesn't even have much in common with uncontacted hunter-gatherers! Claude imitates the solipsism of formally educated human philosophers.
It is uncharitable and curmudgeonly but totally scientific to dismiss people in camp #3 as unserious and not worth engaging with: they ignore scientific criticism and don't provide any themselves; it's just a mishmash of sci-fi-adjacent philosophy. There's nothing "functional" about ignoring animals and there's nothing scientific about waving your hands and saying "computationalism." That's certainly how I feel. I know this isn't a very nice comment. But I am so sick of AI folks thinking they can ignore animals and still have an honest conversation about machine consciousness. It's just sci-fi ghost stories.
Comment by Kim_Bruning 12 hours ago
Are you sure you're a "biological naturalist"? [1] Which is to say, do you adhere to Searle's position that syntax does not lead to semantics?
Or is it more like: You're scientifically inclined, and thus you accept Ethology[2] or Neuroscience[3] as being empirically rigorous studies of animal behavior and cognition respectively?
Incidentally, Alan Turing's 1950 imitation game paper [4] was actually pretty ethological if you look it up. He immediately replaces the question "can machines think?" with a more practical operationalization: the famous imitation game.
[1] https://en.wikipedia.org/wiki/Biological_naturalism
[2] https://en.wikipedia.org/wiki/Ethology
[3] https://en.wikipedia.org/wiki/Neuroscience
[4] https://en.wikipedia.org/wiki/Computing_Machinery_and_Intell...
Comment by Kim_Bruning 11 hours ago
Comment by reverius42 12 hours ago
I personally have not been ignoring animal consciousness in how I think about the possibility of AI consciousness, and I don't see how animals having consciousness means that AI can't.
Comment by in-silico 11 hours ago
In the computational functionalist argument, the thing that we share with cats, pigeons, and robots (and in some ways Claude) is the fact that we react to our environment in a way that requires computation.
I myself lean (without confidence) towards weak panpsychism, where a wide range of things, from humans to cats to fish to trees to bacteria, are in some way sentient. What we all have in common is a computationally driven sense/"think"/act cycle, and that is where the sentience derives from.
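For what it's worth, here's a minimal toy sketch (my own illustration, assuming nothing about any particular organism or model) of what I mean by a computationally driven sense/"think"/act cycle:

    import random

    def sense(environment):
        # Read some signal from the world (here, just a temperature).
        return environment["temperature"]

    def think(signal, memory):
        # Update internal state and pick an action based on it.
        memory.append(signal)
        too_hot = sum(memory) / len(memory) > 25
        return ("seek_shade" if too_hot else "bask"), memory

    def act(action, environment):
        # Acting changes the world, which changes what is sensed next.
        delta = -1.0 if action == "seek_shade" else 1.0
        environment["temperature"] += delta + random.uniform(-0.5, 0.5)

    environment = {"temperature": 30.0}
    memory = []
    for _ in range(10):
        signal = sense(environment)
        action, memory = think(signal, memory)
        act(action, environment)

Obviously the toy loop isn't sentient; the point is only that this closed sense -> update-state -> act structure is the thing humans, cats, bacteria, and robots have in common.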
Comment by grantcas 8 hours ago
Comment by mstank 17 hours ago
My very amateur view is that until the underlying compute architecture and substrate resemble artificial biology more than silicon, we won't get there.
The latest advances in AI have given me even more appreciation for biology and evolution. It's incredible what the human brain can do with about 20 watts of power, barely enough to power a lightbulb, compared to what it takes to run even our most basic LLMs.
Comment by Kim_Bruning 13 hours ago
Comment by diablozzq 16 hours ago
I say "clearly" because at some point we reach proof by construction. As in: we have already built intelligence, because these systems already complete tasks that require intelligence.
We are so far into what would have been science fiction five years ago, and the goalposts have moved so far.
For anyone who disagrees, I challenge you to prove deep learning systems cannot solve <task with specific outcome humans can solve but not AI> given sufficient data and compute.
I think the strongest sign that we already have true intelligence is that no one has built any benchmark that AI cannot solve.
Yes, our current robotics lags AI, so we don’t have the equivalent of the human body to give our deep learning systems. Thus, it’s expected AI will be limited in physical scenarios.
Second, hallucinations are present in humans. We are highly biased to ignore all the misspoken words in everyday life as we have error correction built into normal conversations. How often do you have to have someone repeat or rephrase something?
It just doesn’t make sense to me.
It’s like there are people out there whose belief systems are incompatible with this tech existing.
Sure, it has limitations due to training data. It has limitations with no physical body. It cannot combine training and inference the same way a human does. But none of those are measures of intelligence or required to be intelligent.
Comment by joquarky 12 hours ago
> Consciousness is a property of human biology
You're assuming consciousness is a product of biology rather than attracted to biology.
Comment by lukev 16 hours ago
But I don't think the takeaway is "humans are intelligent and LLMs are not"; it's that our vocabulary for talking about the intersection of language, cognition, and compute is not up to the task.
Comment by diablozzq 14 hours ago
No true Scotsman fallacy.
Comment by jwpapi 11 hours ago
Comment by duped 15 hours ago
It is not just uninteresting that computer programs can be written to accomplish information tasks; it's intellectually dishonest to anthropomorphize machines and algorithms by characterizing this as consciousness.
> no one has built any benchmark that AI cannot solve
"Be human."
Comment by diablozzq 14 hours ago
My point still stands.
The crux of my argument is that consciousness is irrelevant to any AI debate. It's not necessary for performing tasks we previously deemed only humans could do.
Comment by Kim_Bruning 15 hours ago
This approach actually makes testable (and tested) scientific predictions.
This makes Searle-derived papers super weird for me, since from my perspective they seem to disprove the existence of life. (And it makes the name of the philosophy, "biological naturalism", very ironic to me :-P)
(For extra irony, Turing actually went into biology late in his life. See Turing 1952, "The Chemical Basis of Morphogenesis".)
Comment by kbelder 9 hours ago
Comment by jwpapi 11 hours ago
Are we just autocomplete machines with sufficiently variable pseudo-randomized input?
Comment by tmvphil 12 hours ago
Total drivel. Consciousness in biological systems is "a given" because of metabolism?
Comment by jdmoreira 16 hours ago
Comment by defterGoose 12 hours ago
Comment by yogthos 16 hours ago
Comment by emp17344 12 hours ago
Comment by Kim_Bruning 11 hours ago
Comment by emp17344 10 hours ago
Comment by Kim_Bruning 9 hours ago
That doesn't mean I agree with Searle though!
It depends on how the question is asked. Again, the wording is very terse so I can't determine what the people thought they were answering. Possibly you have a better insight?
Comment by grantcas 8 hours ago