What is consciousness? What is sentience?

A whole two pages before the Chinese Room got brought up? Tch.

Now if you put wheels on your AI and place it on a treadmill, does it hard takeoff?

Speaking of which, Blindsight is a pretty good science fiction novel that suggests sentience might not be all that beneficial.

Intelligence describes behavior. Turing recognized this, and argued that intelligence could be defined in terms of the ability to imitate human behavior. In fact, his paper was titled “Computing Machinery and Intelligence”.

But consciousness is not a behavior, and it’s not really addressed by Turing. It’s quite possible that a lot of sentient creatures are not particularly intelligent. Likewise, it’s possible that an extremely intelligent creature or machine is still not sentient.

I don’t believe that it really is possible for those things to be true. I believe consciousness is merely an emergent property of intelligence.

There is a famous thought experiment by John Searle that highlights some of the issues surrounding mind, consciousness, and understanding. I’ll just go ahead and quote the Wiki entry:

Searle’s Chinese room argument has been so thoroughly thrashed at this point that it seems unnecessary to delve into exactly why it’s clearly wrong… Hell, you posted the link to the wiki page, which covers a number of the arguments against it. Searle’s counterarguments are essentially just handwaving, saying, “How can a bunch of little common objects be conscious?!” But the answer is obvious: in a human, a bunch of common chemicals has become conscious. It also ignores the fact that the “bunch of scraps of paper” in Searle’s room includes a fucking magical book that contains all the rules necessary to perfectly answer any possible statement about anything.

That’s ultimately the entire basis for Searle’s argument… it’s effectively an attempt to distract you from seeing the major flaws by obfuscating them as “common objects”. But there’s nothing “common” about the magical translation/answering ruleset he imagines.

If something behaves intelligently, then it’s intelligent, and there’s really nothing beyond that. Anything beyond that, and you’re getting into the realm of a “soul” or some kind of mind that exists separately from the body and the algorithms performed by it. You can certainly believe in such a thing, but there’s no RATIONAL reason to believe in it.

On some level, it comes down to the desire of humans to believe that we are special little snowflakes… That we have some divine element that transcends these meat-bags that we exist as. The desire to believe this only becomes stronger as we learn more about how we work, and it becomes more and more obvious that we’re just really complex organic machines.

You can certainly believe in such a thing, but there’s no RATIONAL reason to believe in it.

Subjective experience is intrinsically different from intelligence. Consider a variant of the Chinese room:

A congenitally blind man has a computer. You send him a picture of a famous place, and he uses a pattern-matching program to find a very similar image on Flickr, and then reads the description. He can do this with sufficient accuracy to be indistinguishable from someone describing the picture.
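For what it’s worth, the blind man’s program is easy enough to sketch. Here is a minimal Python version, assuming a hypothetical local index of captioned images rather than Flickr’s actual API (the file names, captions, and index are made up for illustration):

from PIL import Image  # Pillow

def average_hash(path, size=8):
    # Tiny perceptual hash: downscale, grayscale, threshold at the mean.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    # Number of differing bits between two hashes.
    return bin(a ^ b).count("1")

# Hypothetical pre-built index of famous places: hash -> caption.
INDEX = {
    average_hash("eiffel_tower.jpg"): "The Eiffel Tower at dusk.",
    average_hash("taj_mahal.jpg"): "The Taj Mahal reflected in its pool.",
}

def describe(path):
    # Return the caption of the most visually similar indexed image.
    h = average_hash(path)
    best = min(INDEX, key=lambda k: hamming(h, k))
    return INDEX[best]

The man never sees anything; he just reads out whatever describe() returns.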

Forget about intelligence for a moment. When presented with a picture of a famous place, is the blind man’s subjective experience the same as yours?

Searle’s Chinese room argument has been so thoroughly thrashed at this point

It’s been thrashed, but the matter is hardly settled.

And if the Chinese room argument has been thrashed, then the Turing Test has been crucified. Not long ago, a chatbot convinced a majority of human observers that it was a teenage human merely by avoiding questions and being rude.

If anything, it suggests that a few weird tricks are all it takes to play the imitation game.

You can certainly believe in such a thing, but there’s no RATIONAL reason to believe in it.

Well, I would say that believing that consciousness is an emergent property of intelligence is the most rational position to take.

Otherwise, you are left with the belief that there is some sort of “magic sauce”, or “soul”, that cannot be observed in any empirical manner, but which constitutes your consciousness.

It’s more rational to believe that your consciousness is merely a property of the algorithms being executed by the organic computer which is your body.

Subjective experience is intrinsically different from intelligence.

I don’t believe it is. There is no reason to believe it is.

A congenitally blind man has a computer. You send him a picture of a famous place, and he uses a pattern-matching program to find a very similar image on Flickr, and then reads the description. He can do this with sufficient accuracy to be indistinguishable from someone describing the picture.

Forget about intelligence for a moment. When presented with a picture of a famous place, is the blind man’s subjective experience the same as yours?

No, but you’re making the same mistake as Searle did. You are describing a system, and then ignoring the entirety of the system when trying to describe the “mind”.

The “mind” doing the describing is not simply the blind man. It is the combination of the blind man and the program. Just because one particular subcomponent may not have an “understanding” of what is happening, that doesn’t preclude the combination from understanding.

I can take any individual subcomponent of your body… the lens of your eye, your retina, your optic nerve, the vision center of your brain, the part of your brain where you’ve built up mental models of the things in the world, your speech center, your vocal cords, etc. None of them are conscious of those pictures, on their own. But when you combine them, they form a system which is capable of perceiving and describing those things.

Do the blind man and the computer have an experience that is the same as mine? No. But neither is the experience of another sighted person.

And if the Chinese room argument has been thrashed, then the Turing Test has been crucified. Not long ago, a chatbot convinced a majority of human observers that it was a teenage human merely by avoiding questions and being rude.

Well, I would argue that it didn’t REALLY pass the Turing test, in that it merely passed a finite representation of the test without addressing its core principles. While it demonstrated that in a fixed scenario it is possible to trick humans, I’m not sure it addresses the core notion of the Turing test, which is that a computer is able to mimic human cognitive behavior in the general case. Given enough time, the system would likely have failed and been “caught”; the tricks employed just extended that period of time. It’s akin to the scene in Blade Runner where Deckard is giving the test to Rachael. It takes twice as many questions to figure out she’s not human… but eventually the test succeeds. Now, in that case, it’s not really a test for intelligence or consciousness, as the replicants are pretty clearly both. It’s a test of some sort of human-centric emotional reaction.

Searle’s Chinese room argument, however, has numerous logical holes which allow it to be torn to shreds. It’s simply not a sound argument.

No, you are left with the possibility that our current scientific theories cannot explain consciousness. It is a property that will require a new theory.

It’s hubris to argue that either something can be explained by known physical laws, or is “magic”. For most of our existence, humans have faced phenomena they could not explain with the science of their time.

For example, before we had nuclear physics, there was no way to explain what powered the sun. You might have argued that it was an “emergent property” of a known process like chemical combustion, because the only other possible explanation is “magic”. But both explanations would be wrong. Understanding the sun required the discovery of a type of interaction that nobody anticipated: nuclear fusion.

Not only that, but some types of matter undergo spontaneous nuclear reactions, and some don’t. Not because uranium-238 is a “special snowflake”, but because it is qualitatively different in this respect from some other atoms.

Do the blind man and the computer have an experience that is the same as mine? No. But neither is the experience of another sighted person.

If we are looking at the same picture, describe it the same way, but have different subjective experiences, then what accounts for the difference?

No, you are left with the possibility that our current scientific theories cannot explain consciousness. It is a property that will require a new theory.

It’s hubris to argue that either something can be explained by known physical laws, or is “magic”. For most of our existence, humans have faced phenomena they could not explain with the science of their time.

But believing that there must be some unobserved force driving things is essentially the same as believing in magic.

Your consciousness can be explained as an emergent property of your physical form. There is no reason to believe that it somehow transcends it, other than religious preconceptions of a soul.

If we are looking at the same picture, describe it the same way, but have different subjective experiences, then what accounts for the difference?

The fact that your body is different from mine. Your neocortex contains pathways which reflect the sum of your experiences throughout your life, while mine reflect a totally different set of experiences. Those differences result in different perceptions of new experiences, as every new experience triggers and is linked to memories of the past.

Not true at all.

Until very recently, the Higgs boson was unobserved. But physicists believed that it might be responsible for explaining why things have mass. That doesn’t mean they believed in magic.

The fact that your body is different from mine. Your neocortex contains pathways which reflect the sum of your experiences throughout your life, while mine reflect a totally different set of experiences. Those differences result in different perceptions of new experiences, as every new experience triggers and is linked to memories of the past.

But if internal state can determine subjective experience, then it must be possible that some internal states are associated with minimal or absent subjective experience.

Not true at all.

Until very recently, the Higgs boson was unobserved. But physicists believed that it might be responsible for explaining why things have mass. That doesn’t mean they believed in magic.

But the Higgs boson is essentially a thing which MUST exist, in order for the observed data and their explanations to be correct.

No such proof structure exists for the notion of consciousness you are talking about.

What you are talking about is a soul. Whether you want to use that particular word or not, you are suggesting that some sort of soul exists which is the ultimate root of our consciousness. But you have no actual reason to believe that to be the case.

Where does such a thing exist? What form must it have?

For the Higgs boson, such questions had definite answers. For the soul, they do not.

But if internal state can determine subjective experience, then it must be possible that some internal states are associated with minimal or absent subjective experience.

I’m sorry, you’ll have to go further with this, or better explain its relevance to the conversation. You lost me here.

The data do not depend on any explanation. They are just data. I am conscious. There’s your data. Your only data, pretty much.

You’re right that if the Higgs boson did not exist, then the accepted particle model would have to be substantially revised or thrown out. So what? Models get thrown out all the time. The luminiferous aether was also a thing that MUST exist, in order for the explanations of the 19th century to be correct. But it doesn’t exist, and they weren’t correct.

No such proof structure exists for the notion of consciousness you are talking about.

My notion of consciousness is that it can’t be explained by our current understanding of biology and physics. We have ideas, without any good empirical support.

We also can’t explain what happens in the singularity of a black hole. Or what, if anything, preceded the Big Bang. We have ideas, without any good empirical support.

That doesn’t mean these things will never be understandable, or in the realm of magic. It just means we have to acknowledge our ignorance. We didn’t have the technology to explain the source of sunlight in the 17th century, and we don’t have the technology to explain consciousness now.

My guess is that any real investigation will have to await a method of directly accessing consciousness, i.e. telepathy. Which may or may not even be possible.

But you have no actual reason to believe that to be the case.

And what reason do you have to believe that “emergent properties” are responsible for consciousness?

The usual argument is something like “Humans have massive neural networks. Massive neural networks have emergent properties, which we don’t fully understand … So, what else could be responsible?”

That amounts to “I can’t handle the possibility of more than one unexplained neural phenomenon. They must be one and the same”. Right there with the folks who insist that consciousness comes down to quantum mechanics. It’s barely a step above magic.

What you are talking about is a soul.

Not really. A soul is a supernatural entity, which by definition cannot ever be explained by science. It is also distinct from matter, which is not necessarily true of consciousness.

Just for fun, consider an alternate hypothesis.

The brain generates a particular chemical at areas of neural activity. In small amounts, this chemical is inert. At a sufficient concentration, the chemical undergoes a particular reaction that causes consciousness.

Sound ridiculous? Maybe. But you can replace “generates a chemical” with “generates an electrical signal” to get an analogue of the current “emergent properties” theory.

You lost me here.

Ok.

You started by claiming that anything that could pass a Turing test is conscious. In other words, it’s impossible to have a non-sentient “zombie” that perfectly imitates human behavior. Consciousness can be established merely by examining the input to and output from a black box.

But then you pointed out that my consciousness is not the same as your consciousness, because of what’s inside that box. Even if we act the same, we may have different memories and experiences. When I look at an apple and tell you “this is an apple”, I am subjectively experiencing something different from when you look at it and say the same thing. In that case, it’s possible that a machine could look at an apple, say “this is an apple”, and subjectively experience nearly nothing, or nothing at all.

A more general note on the Turing test: it’s meant only to be a test of intelligent behavior, but it’s so elegant that it’s tempting to apply it to other settings. What if it were impossible to distinguish between someone in love with you, and a machine imitating someone in love with you? Is the machine really in love with you?

But at some point, the Turing method has to break down. Imagine the latest version of Flight Simulator, with a fully rendered cockpit. No, make that cockpit a physical mockup, indistinguishable from a 767. Hydraulics provide haptic feedback, so it even feels like you are flying. Even pilots can’t tell the difference. When you use the sim, are you really flying a plane?

Fast forward a few hundred years, and there is a holographic simulation of the passenger compartment behind you, with AIs that are indistinguishable from human passengers. Are you really in an airplane?

You step out of the cabin, and are surrounded by a simulation of Omaha. You take a simulated cab to your mom’s house and talk to an AI that imitates her perfectly. Indistinguishable from the real thing.

Have you really been to Omaha? Did you really talk to your mom?

Oh good, I’m smoking pot on the porch with a bunch of crunchy philosophy minors at an undergrad kegger again.

If reality is an illusion then that 1.6 GPA I have is totally not real. Also, we have no free will, so there was nothing I could do about it!

My notion of consciousness is that it can’t be explained by our current understanding of biology and physics. We have ideas, without any good empirical support.

But what is your notion of consciousness then? I’m not really certain what you are trying to argue at this point.

Searle’s (incorrect) suggestion was that machines simply could not possibly be conscious. That no matter what they were capable of doing, they would never be conscious because they were just a bunch of unintelligent parts.

But this presupposes that our sentience is not in fact a property of our own bodies, since we are also merely a combination of unintelligent parts. It presupposes some magical soul which cannot be measured empirically, and which cannot be tested for existence. If it were, it would merely be another part.

If something is not actually sentient, despite acting exactly the same way as a sentient being, then how do you know anyone is sentient? What does sentience even mean then, if it has zero impact on your interactions with the world? What would be its purpose? From an evolutionary perspective, why would such a thing exist, if it has ZERO impact on your manifested behavior?

Such a thing either doesn’t exist, or doesn’t matter if it does.

I’ll have to wait till I’m at a proper keyboard to reply to some of the things in this thread, but here’s an interesting piece I stumbled on a few weeks ago. It strongly supports the idea that consciousness is piecemeal.

https://m.facebook.com/notes/blake-ross/aphantasia-how-it-feels-to-be-blind-in-your-mind/10156834777480504/

One of my pinball teammates has aphantasia, evidently. He’s also quite good at math & has a very different kind of intuition about it.

In the meantime, suffice it to say that I strongly agree with krazykrok.

I am arguing that we do not have enough data to know what consciousness is. In particular, we don’t have nearly enough data to conclude that it’s an emergent property of neural activity or a quantum process, to name the current fads.

Searle’s (incorrect) suggestion was that machines simply could not possibly be conscious. That no matter what they were capable of doing, they would never be conscious because they were just a bunch of unintelligent parts.

Not at all.

He argued that you couldn’t prove that a machine is conscious merely by examining its output.

You can still have a conscious machine, provided you know how to build one. But that requires knowing more than how to imitate a human.

But this presupposes that our sentience is not in fact a property of our own bodies, since we are also merely a combination of unintelligent parts. It presupposes some magical soul which cannot be measured empirically, and which cannot be tested for existence. If it were, it would merely be another part.

No, it presupposes that we don’t know what gives our bodies the property of consciousness. It could be an unknown chemical reaction. It could be an unknown nuclear process. It could be an unknown quantum property. It might even turn out to be a network property of neural activity. Or it might be some property of matter that is as unfathomable to us as radioactivity was to Newton.

When we find out what makes our bodies sentient, we will know how to test for it and measure it. And only then will we know if it’s even possible to create a sentient machine.

If something is not actually sentient, despite acting exactly the same way as a sentient being, then how do you know anyone is sentient?

Well, right now you don’t. We just assume that other people are sentient, and then scratch our heads and wonder about everything else.

Just like we wonder whether life exists anywhere besides Earth. Wouldn’t it be silly to apply the Turing test to exoplanets? Would you really conclude that an exoplanet must contain life, since our current observations from light-years away happen to show the same properties as Earth? Because that’s precisely the same reasoning you are using for consciousness.

We simply don’t have the right tools to answer these questions. Not very satisfying, but science never claims it has all the answers. Not yet, at least.

He argued that you couldn’t prove that a machine is conscious merely by examining its output.

No, this isn’t really what he argued.

Searle’s argument was that a machine could never derive semantic meaning from syntactic operations.

He was attacking the notion of strong AI: that you could make a computer that is conscious through a sequence of operations. He didn’t really care about Turing’s test. He argued that no machine, no matter what, could be intelligent. The point of the Chinese room was that it, with its magical set of translation rules, is just a set of rules, and doesn’t really “understand” Chinese.

Ultimately, his argument fails because it kind of presupposes that only humans can understand anything, essentially just falling into circular reasoning. The entire room “understands” Chinese as well as any human.

He simply handwaves this away, saying that anyone who accepts the system argument is buying into some kind of ideology. This statement is hilarious given that it’s clearly Searle himself who is ideologically blind to the obvious logical failure of his thought experiment.

Searle places himself in a room with the magical rules for answering any question in Chinese (he ignores the fact that any such set of rules would be absolutely immense, given that it would have to include not only the rules of translation, but also some kind of rules for working out meaningful answers to any statement or question asked).

He then, in his Chinese room, uses that set of rules to answer questions in Chinese. He then says that he is, in this case, an AI. But he doesn’t understand anything that he is doing, since he is just converting symbols to other symbols. Thus, he concludes, no computer program that is just converting symbols to symbols can ever grasp actual semantic meaning.

But what he failed to understand was that the system of himself AND the magical translation rules understands Chinese just fine. In his room, he isn’t the entirety of the “mind” being tested. He is just one part of it. The fact that he doesn’t understand Chinese doesn’t cast doubt on whether the overall program does, any more than the fact that your foot can’t understand English means that you don’t.

Well, right now you don’t. We just assume that other people are sentient, and then scratch our heads and wonder about everything else.

But you are still assuming that there is some kind of magic special sauce, which is not some physical object performing some function.
And you have no actual reason to think that. You assume that a human is intelligent, despite being given exactly the same kind of evidence.

Wrong. I’ll go ahead and quote him on this subject (my emphasis).

Let me point out that you are implicitly supporting Searle when you mock the existence of “magical translation rules”.

Those rules are literally the source code of an AI, printed out and painstakingly analyzed line by line. If you think it’s impossible to actually write such a book, then it’s impossible to write an AI program that responds in Chinese. If the book is “magic”, then AI source code is magic.

In his room, he isn’t the entirety of the “mind” being tested. He is just one part of it.

Yes, he understood that. And he countered it, long ago:

Suppose the man memorizes the codebase and can perform the necessary calculations by hand whenever someone passes him a note written in Chinese. The book is thrown away.

Now, all by himself, he can respond to any written Chinese statement just as a native speaker would. There is no “system”. Just a man. But would this man understand Chinese in the same way a Chinese speaker would?

But you are still assuming that there is some kind of magic special sauce, which is not some physical object performing some function.

No. I do think there is a physical property that causes consciousness in physical things. But we don’t know what that property is, or what things it applies to.

I am not sure why you keep equating “things we don’t know” with “magic”. It’s not the rational approach.

Wrong. I’ll go ahead and quote him on this subject (my emphasis).

Dude, read the sentences around the one you quoted for clarification of what I meant. That’s why I wrote them. But to further explain: yes, I’m aware that Searle admitted that you could have “some other kind of machine”, but the thrust of his argument was what I described; it hinged on the inability to gain semantic understanding from symbolic processing. But he didn’t explain why this is the case. Or rather, he tried to and failed.

Let me point out that you are implicitly supporting Searle when you mock the existence of “magical translation rules”.

Again, not really when you read the entirety of my argument.

What makes them magical is that he essentially described them as some kind of trivial book of simple rules he could follow. When, in reality, such a ruleset would be an insanely complex algorithm that would likely be impossible for any human to actually follow, certainly in a timely manner.

I actually considered that making light of it was going to trigger the reaction you gave.

Those rules are literally the source code of an AI, printed out and painstakingly analyzed line by line. If you think it’s impossible to actually write such a book, then it’s impossible to write an AI program that responds in Chinese. If the book is “magic”, then AI source code is magic.

What’s impossible is for you to execute that code by hand in a timely manner, and provide answers in a reasonable time.

Yes, he understood that. And he countered it, long ago:

No, he didn’t. He tried to counter it, and failed.

Suppose the man memorizes the codebase and can perform the necessary calculations by hand whenever someone passes him a note written in Chinese. The book is thrown away.

Now, all by himself, he can respond to any written Chinese statement just as a native speaker would. There is no “system”. Just a man. But would this man understand Chinese in the same way a Chinese speaker would?

Yes, clearly… he would understand Chinese. He would have fully internalized this immensely complex processing algorithm. That’s basically what is in your brain. If you have fully internalized that process, you have now learned Chinese. Searle makes the kind of silly suggestion that you have memorized this sort of complex set of rules, but that it’s somehow inside your mind without actually being part of your mind. Again, his counterargument here is kind of a joke.

I mean, aside from all this, you realize that the Chinese room argument isn’t really considered serious at this point, right? That it is pretty much accepted as refuted? Even his counterarguments have been thoroughly countered, and he was left just handwaving them away.

No. I do think there is a physical property that causes consciousness in physical things. But we don’t know what that property is, or what things it applies to.

Yeah, that’s called magic.

You have no empirical evidence to suggest the existence of such a property, or any way to test for its existence.

Further, to reiterate what I pointed out previously: since this magical force apparently does not affect manifested behavior in any way, why does it exist? What purpose does it serve? Why would you have evolved such a thing? Indeed, since it has no impact on behavior at all, HOW could you have evolved it, given that it is only your manifested behavior which actually plays a role in your evolutionary fitness?

This is a thought experiment. Arguing that it’s impractical means you are missing the point.

Anyway, you are assuming that the necessary code must be so complicated that no human can memorize it. Since there is no such code, you don’t have any evidence for this. Maybe a breakthrough in psychology or a specialized new programming language would make it quite easy.

After all, humans can learn a new language in months to years. They can write a simple chatbot in weeks. Why should memorizing a chatbot that uses a foreign language be completely beyond human ability? Step one: when someone says xie xie, you say bu ke qi. And so on.
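To make that concrete, here is a toy version of the memorized rulebook in Python. The phrase pairs are illustrative; a rulebook covering all of Chinese would obviously be astronomically larger:

# Toy "Chinese room" rulebook: pure symbol lookup, no understanding required.
RULES = {
    "xie xie": "bu ke qi",      # "thank you" -> "you're welcome"
    "ni hao": "ni hao",         # "hello" -> "hello"
    "ni hao ma": "wo hen hao",  # "how are you?" -> "I'm fine"
}

def respond(message):
    # The man in the room: match the incoming symbols, emit the listed reply.
    return RULES.get(message, "dui bu qi, wo bu dong")  # "sorry, I don't understand"

Memorizing a table like this is step one; the argument is only over how far the table has to grow.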

But if it helps, just imagine that a really smart Vulcan memorized the code. A Vulcan who never learned Chinese.

What’s impossible is for you to execute that code by hand in a timely manner, and provide answers in a reasonable time.

There is no human that can execute that code by hand in real time, but there is also no computer AI that can mimic a Chinese speaker. If we are only talking about what is currently feasible, then we might as well not talk about AI at all. They are all terrible. End of discussion.

Incidentally, neither the Turing Test nor the Chinese Room demands real time responses. You can imagine sending letters via snail mail, waiting months for a response, and years for the experiment to end. Like an old-school chess game by mail. It doesn’t change the outcome in the slightest.

Yes, clearly… he would understand Chinese. He would have fully internalized this immensely complex processing algorithm. That’s basically what is in your brain. If you have fully internalized that process, you have now learned Chinese.

And yet he wouldn’t understand a word of what he was writing.

In fact (in another argument by Searle) imagine that this man is receiving letters in English and Chinese. He writes back normally in English, and then consults his memorized codebook for Chinese. As a result, he is sending completely contradictory messages, like “I’m voting for Hillary Clinton” in English and “I’m voting for Donald Trump” in Chinese. Without even realizing it.

This is incompatible with “understanding” one’s actions. It’s practically the definition of insanity.

I mean, aside from all this, you realize that the Chinese room argument isn’t really considered serious at this point, right?

It’s not refuted. Not at all.

It’s certainly controversial, and certainly some people think Searle is wrong. But some people think he is right, and some think he is partly right.

For instance, the Churchlands (about as big a name as you find in this field) agree that the Chinese Room does not understand Chinese. Spoiler: they think that this does not necessarily doom other types of AI, and of course the debate does not end there.

You have no empirical evidence to suggest the existence of such a property, or any way to test for its existence.

Huh?? My only claim is that consciousness exists in at least one person. That’s it. I assume you agree. If you disagree, then you don’t believe that consciousness exists at all. What’s the point of your argument?

I don’t make any claims that we can test for its existence. In fact, I explicitly stated that we can’t, at least not now.

I assume it’s a physical property and not magic, but of course I don’t know for sure. Neither do you.

If you believe consciousness exists in machines, then the burden of proof is on you. What is your empirical evidence?

If you propose that consciousness in humans and machines is equivalent, then again the burden of proof is on you. What is your empirical evidence?

What purpose does it serve? Why would you have evolved such a thing?

Let me just stop you here. This is the teleological fallacy.

Not everything that exists serves a purpose or evolved for a purpose.