What is consciousness? What is sentience?

Of course thought experiments typically involve a lot of impractical elements. However, sometimes in the waving away of impracticalities, the thought experiment waves away something important. I think it’s important that the system he describes can’t come within a factor of a million of the performance he describes. Just asserting that it does misleads the intuition.

In the broad strokes, I think Searle is attacking the position that “If, on close inspection, the system acts like it understands X, then it probably does understand X.” So he provides a counterexample of a system that seems to understand X, but once you look “inside”, it clearly doesn’t. I think then that it is relevant that the system he describes could not actually act like it understands X. If the counterexample is impossible, it’s not really a counterexample.

Why not? Because it’s not fast enough? Keep in mind that the Turing test never specified response speed. In fact, the whole premise of a Turing machine is a system that could theoretically be emulated by hand with the same results.

More generally, the human in the Chinese room is obviously a metaphor for a CPU executing a program. If you would impute understanding to a CPU but not to a human doing precisely the same thing, then you are just giving magical powers to a CPU.

This is a thought experiment. Arguing that it’s impractical means you are missing the point.

Well, yes and no. In this case, its impracticality plays a perhaps non-trivial role, since it screws with the success of the system in doing what it is supposed to do (pass the Turing test). However, for the sake of argument, this could be ignored.

However, the reason the impracticality is important is that Searle’s presentation kind of depends upon ignoring it.

When people, correctly, pointed out that the overall system absolutely does understand Chinese, Searle’s only response was that that didn’t make sense, because it was “just a bunch of bits of paper.” This was kind of where he fell into circular logic: he essentially presumes that “the bits of paper” can’t possibly understand anything. But this is essentially the hypothesis he is trying to prove.

The reality is, “the bits of paper” are a freaking mountain of code, as you correctly pointed out. But when you then look at it that way, it highlights the circular logic of Searle’s counterargument.

His whole argument is essentially, “In this room, it’s just me and some paper, and I don’t understand Chinese!” Well sure… in his experiment, he’s largely immaterial. It’s the complex rulebook which contains the knowledge of Chinese, and all of the memory structures necessary to make semantically meaningful answers. Searle never effectively addresses this, beyond criticizing those who believe it as being blinded by their ideology.

And yet he wouldn’t understand a word of what he was writing.

Yes he would. He would understand it as well as anyone.

This is the fault with Searle’s counterargument… He suggests that you could somehow perfectly internalize that complete ruleset, but then not actually “understand” it. That it’d somehow be compartmentalized away from the rest of your brain.

Again, this is effectively dependent upon you not really thinking about what the “bits of paper” that comprise that ruleset actually are. Because they are a supremely complex set of rules about how to process any given statement, and how to create a new answer from a vast memory store of not only grammatical rules but also experiences.

This is, essentially, what would happen to you if you learned Chinese as a human.

The idea that internalization of those rules doesn’t comprise understanding Chinese is essentially just another circular argument. It depends upon you assuming that those rules don’t inherently include understanding, despite the fact that no definition is ever given about what those rules are.

For instance, the Churchlands (about as big a name as you find in this field) agree that the Chinese Room does not understand Chinese. Spoiler: they think that this does not necessarily doom other types of AI, and of course the debate does not end there.

Sorry, but I don’t think that most folks in the fields of cognitive science and AI actually agree with the Chinese Room. The arguments that have been put up against it are absolutely solid, and forced Searle to fall back upon handwaving. Maybe the Churchlands wrote some piece a quarter of a century ago saying they agreed with him, but they don’t comprise a majority opinion here.

Ultimately, you’re free to agree with his handwaving, but that’s really all it is. His counterarguments were clearly refuted, and he was left with absolutely no meaningful response except what was essentially an ad hominem attack on those who had defeated him.

Again, at this point the argument has been so thoroughly hashed out that us repeating it here is kind of pointless. You or anyone else can just go look at the whole argument. It’s not like it’s progressing any further at this point. It’s done.

Huh?? My only claim is that consciousness exists in at least one person. That’s it. I assume you agree. If you disagree, then you don’t believe that consciousness exists at all. What’s the point of your argument?

Yes, I agree that consciousness exists in me. I also believe that it exists in you, because you behave as a conscious being.

I don’t make any claims that we can test for its existence. In fact, I explicitly stated that we can’t, at least not now.

But that’s the thing. That’s what makes your definition of consciousness essentially pointless. You’ve defined it as something which cannot be tested for, and has no impact on the environment. It’s nothing.

I assume it’s a physical property and not magic, but of course I don’t know for sure. Neither do you.

But what is that property? Since it has no impact on your manifested behavior, then what is it?

If you believe consciousness exists in machines, then the burden of proof is on you. What is your empirical evidence?

My definition of consciousness is based upon their manifested behavior. It essentially IS their manifested behavior.

Let me just stop you here. This is the teleological fallacy.

Not everything that exists serves a purpose or evolved for a purpose.

But with your definition of consciousness, it is literally nothing.

It’s an invisible quality that you cannot test for, which does not impact the physical world in any way. It doesn’t change how you behave, and thus doesn’t play any role in evolutionary adaptation.

What IS consciousness in your definition then? Because it seems like in your attempt to say machines can’t have it, you’re left with a quality that has no physical form at all. It essentially exists outside the universe. That’s why it sounds like you’re describing magic, albeit an exceptionally useless and unimportant type of magic that does nothing.

Speed and storage capacity. If the book of rules is in the form of a lookup table of inputs and responses, it needs to be bigger than the observable universe. If it’s more complex, it is still immense (and takes a lot more time). It’s hard to conclude anything from it if the library of rules is the size of Alaska and Searle dies of old age after completing a few sentences.
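To put rough numbers on that (these are back-of-the-envelope assumptions of mine, not anything from Searle’s paper): even a modest cap on input length makes a pure input-to-response lookup table combinatorially absurd.

```python
# Back-of-the-envelope size of a pure lookup-table rule book: one stored
# response per possible input string. The figures below are illustrative
# assumptions, chosen to be conservative, not measurements of anything.

CHARSET = 3000      # assume ~3,000 commonly used Chinese characters
MAX_LEN = 50        # assume inputs of at most 50 characters
ATOMS = 10 ** 80    # rough standard figure for atoms in the observable universe

# Count every possible input of length 1..MAX_LEN
entries = sum(CHARSET ** n for n in range(1, MAX_LEN + 1))

print(f"table entries needed : ~10^{len(str(entries)) - 1}")
print(f"atoms in the universe: ~10^80")
# Roughly 10^173 entries vs. 10^80 atoms: you run out of universe long before
# you run out of rows, and that's before tracking any conversational context.
```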

People’s intuitions, for better or worse, are tied to the scale at which they live their lives. They think of intelligent agents as things you could actually hold a conversation with, and not on a geological timescale. So they take the ordinary definitions of “room” and “conversation” and “bits of paper” and have the correct intuition that this can’t add up to a human-like intellect. But he tells them that it gets the results anyway, and so the intuitive ludicrousness of it is misdirected.

More generally, the human in the Chinese room is obviously a metaphor for a CPU executing a program. If you would impute understanding to a CPU but not to a human doing precisely the same thing, then you are just giving magical powers to a CPU.

Having Searle in the room is misleading, since we know he is a conscious agent and want to attach a more important role in the system than he has. I think it’s possible to conceive of a human-CPU system that would be intelligent in a philosophical sense, but it would be vast, think on geological time scales and would be utterly impotent and helpless in the real world. It’s not something we could relate to. This is not remotely the sort of thing we’d need to model as agents in our lives. We don’t need to think of it as intentional because it doesn’t really do anything at all on the scale of our lives.

Take that process, put it into something the size of a golfball, speed it up by a factor of a few trillion or so and we’d be forced to treat it as a thinking agent with wants and desires.

Having Searle in the room is misleading, since we know he is a conscious agent and want to attach a more important role in the system than he has.

It’s intentionally misleading, I believe.

He’s saying that he’s the AI, in order to distract from the complexity of the rules he’s executing.

In reality, the set of rules is essentially the conscious mind, as he himself is really not doing anything aside from providing the rule-execution engine.

But Searle rejects this, because the rules are just “bits of paper”, and thus cannot be conscious… This is where he falls into circular reasoning. It’s not a sound argument. It’s just “Things cannot be conscious, because things cannot be conscious.”

Yeah. The Chinese Room was a great concept for analysis, and Searle is not an idiot, but it really seems to me he gets lost in his arguments against machine consciousness.

To be fair, after Magnet suggested that there are folks who buy into Searle’s argument, I dug into it a bit more, and there seemingly are folks who do (although I don’t think that’s actually the Churchlands’ position, as they have presented numerous counterarguments against it, such as their “Luminous Room” argument, where they suggest that Searle’s entire argument hinges upon intuition, and his intuition is actually incorrect).

However, as obviously false as Searle’s argument appears to me, there seem to be others like Magnet who feel like it’s obviously correct. I’m really not sure how it is, as his argument seems to have straight up errors in it, from a logical perspective… but I am forced to admit that this perspective is not as universally held as I had believed.

As I pointed out previously, Searle’s final arguments degenerated into an accusation of his critics that they were “under the grip of an ideology.” But the exact same criticism can be squarely leveled at Searle. I’m not sure if much common ground can ultimately be found, if some folks start with the assumption that non-human entities (or at least inorganic entities) simply cannot be conscious. To me it is not a compelling argument, as it has no real proof beyond the fact that we have not witnessed such entities to date. But apparently for some, it is so obvious as to be unassailable.

I find the physiological basis of consciousness to be fascinating. If I were 18 again, I think I’d redirect my career toward neuroscience.

As for the various thought experiments about consciousness, I agree with Dr. Crypt’s comment upthread. Yawn.

No, his response is that you don’t need the bits of paper if someone is willing to memorize their contents. In which case, there is no “system” that understands things independently of its constituents.

Seriously, practically as soon as the paper came out he identified and addressed five counterarguments. You’ve only touched on one of them.

The idea that internalization of those rules doesn’t comprise understanding Chinese is essentially just another circular argument. It depends upon you assuming that those rules don’t inherently include understanding, despite the fact that no definition is ever given about what those rules are.

What? No. The rules tell you to match an input directly to an output. There is no semantic intermediary. You do not ever translate to English, or any other language.

That is literally the point of the argument.

Now you can argue, as some do, that you can’t imagine how a program could imitate a chinese speaker without somehow using a semantic intermediary. But that’s handwaving, not a refutation. Just like most counterarguments starting with “I can’t imagine how …” If you think it’s logically impossible, then prove it.
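For what it’s worth, here is a minimal sketch of the kind of procedure being described, assuming (unrealistically, as discussed upthread) that the rules could be boiled down to a flat symbol-to-symbol table. The entries are invented; the only point is that the operator matches shapes to shapes, and no translation or meaning appears at any step.

```python
# Hypothetical, drastically simplified "rule book": a direct mapping from
# input symbol strings to output symbol strings. The entries are made up
# for illustration; real rules would have to be vastly more complex.

RULES = {
    "你吃饭了吗？": "吃了，谢谢。",
    "你投票给谁？": "我投票给保守派",
}

def operate(symbols: str) -> str:
    """Follow the rules: match the incoming squiggles, emit the listed squoggles."""
    # No step here consults what any symbol means. It is pattern matching only,
    # which is exactly why the operator can do it without knowing Chinese.
    return RULES.get(symbols, "对不起，我不明白。")

print(operate("你投票给谁？"))  # the operator has no idea this exchange is about voting
```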

He suggests that you could somehow perfectly internalize that complete ruleset, but then not actually “understand” it. That it’d somehow be compartmentalized away from the rest of your brain.

No. You wouldn’t understand it, because understanding is explicitly not a part of the ruleset. It’s only meant to imitate Chinese speakers, not help you understand them.

That’s why you can say “I am voting for Hillary Clinton” in English while your instructions tell you to write “我投票给保守派” in Chinese. Or perhaps the rules tell you to say “我投自由派”. Which one would be consistent with your English reply? If you don’t know Chinese, then you have no idea. All you know is that you will be writing out one sequence of characters or another. It’s nonsense to say that you’ve internalized or in any way understand what “我投票给保守派” means.

Suppose the rules you memorized say that the next step is to write “我需要登记投票”. Is that racist? What, you don’t know? Then how can you claim understanding?

My definition of consciousness is based upon their manifested behavior. It essentially IS their manifested behavior.

Well, you can define consciousness as whatever you want. You can define it as the ability to run fast. That’s easy to test for.

But that doesn’t make your definition a good definition. Because the common definition of consciousness is not a behavior. It is self-awareness, a subjective internal state.

That’s not nothing. But yes, internal states are hard to test in other people. Very hard. That’s why consciousness is a hard problem, usually considered among the very hardest. Thankfully, you can at least test for it easily in yourself. It’s a start.

At this point, you have two choices. You can argue that your behavioral definition and the common definition are actually one and the same. But you haven’t made that argument, because you’ve never talked about internal states.

Or you can argue that you don’t have patience for very hard problems. So instead you will redefine consciousness to make it into a relatively easy problem. That’s a bit like saying “Wow it’s really hard to know if there is life on other planets. So let’s just pretend that all Earth-sized planets contain life. Because we can measure that right now.” Ok, go ahead and measure the size. But now you’re missing the whole point of what made this problem interesting.

In the meantime, people are going to go back to wondering about subjective internal states. Whatever you choose to call them.

Searle wasn’t really talking about using a human as a substitute for a CPU in a live Turing test. He was talking about using a human to validate a program that had already passed the Turing test.

As an analogy, we would never play an RTS at 1 frame per minute. But we expect it to have the same output regardless of the nominal frame rate, given appropriate input. In fact, the very definition of a Turing Machine (which includes all digital computers) is basically a machine that produces the same results as a human with a calculator, ticker tape, and a lot of patience.

So if you believe that you have written an AI that can truly “understand” Chinese dialog because it passes the Turing test when it runs at 4000 terahertz, then why shouldn’t a human understand Chinese dialog when he runs through the code manually? Because if the results depend on processing speed, then it’s not really a Turing Machine.
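A toy illustration of that speed-independence point (the single rule and the delay below are invented, purely for the sketch): the output of a program is a function of its input and its rules, not of how fast the steps are carried out.

```python
import time

# The same rule-following procedure, run fast or run slowly by hand.
# The one-entry rule table is an invented placeholder.

RULES = {"你好": "你好！很高兴认识你。"}

def reply(message: str, seconds_per_step: float = 0.0) -> str:
    for pattern, response in RULES.items():
        time.sleep(seconds_per_step)   # a CPU takes nanoseconds; Searle takes minutes
        if message == pattern:
            return response
    return "……"

# Same input, same rules, same output, regardless of execution speed.
assert reply("你好", 0.0) == reply("你好", 0.01)
```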

My definition of consciousness is based upon their manifested behavior. It essentially IS their manifested behavior.

Yeah the whole point of consciousness is that it is not defined as manifested behavior. Virtually the entire worldwide philosophical collegium for the last thousand years or more is against you on that one.

The belief that consciousness arises from physical and not spiritual elements is not inconsistent with this. Of course we may eventually come up with some measurable metric of mind complexity and correlate that with apparent or expressed self-awareness in biological and machine entities to develop a theory of self-awareness in minds that can be scientifically investigated. But that will still never accomplish the definitionally impossible step of ascertaining whether anyone else in the world is conscious or not.

You may say (as I do) that for all intents and purposes we might as well assume that anything in the world who exhibits self-aware behavior is conscious. But this pragmatic approach to dealing with the real world fails to address the Cartesian dilemma of possibly false sense-data and thus fails to prove anything at the deep level in which the question is normally posed.

I agree with what you wrote. But I want to point out that defining “self-aware behavior” is pretty tricky. Not long ago, scientists believed that animals were not self-aware. For example, it was believed that they did not experience pain, even if they acted like they were experiencing pain. Not anymore, but it’s not because we have any convincing new data. It was just a change in attitude. And many scientists still think fruitflies do not feel pain in the same sense that humans do.

In short, it’s an arbitrary threshold, seemingly based on human empathy.

Anyway, as you suggest we would partially resolve the problem if we could find a method to objectively “read” thoughts. Hopefully, that would give us some insight into who is thinking, and who isn’t actually thinking. There are lots of potential pitfalls on the way, of course. And even at the end, there’s no way to conclusively prove that our method always works.

Unfortunately, there is no such method and no guarantee we will ever have one. Until and unless that changes, our understanding of consciousness is going to remain rudimentary.

Sentience is the ability to comprehend and explain abstract concepts. Concepts like ‘luck’, ‘sentience’ (of course), ‘democracy’, ‘sacrosanct’, ‘randomness’ or even ‘abstract’ itself.

The mistake Searle makes is that the CR is a static system, whereas sentience and intelligence are everything but. His main point is that the room appears to understand Chinese even though there is no ‘understanding’ in the room. There indeed isn’t, but there is a copy of understanding in the room: the rule book, which copies the Chinese understanding of whoever created it. So the room indeed has no understanding, but not for the reason Searle claims.

You can easily realize this by transporting the room a hundred years in time (either forward or backward) and testing it again. The room will now make mistakes, or lock up entirely, because there will be words and expressions that didn’t yet exist, or no longer existed, in the original timeframe and thus have no rules attached to them.

Compare this to transporting a person with genuine understanding of Chinese. That person will have some issues, but will still be able to carry on a conversation easily.

What about emotion? Isn’t there a link between sentience and emotion (anger, envy, happiness, regret, guilt)?

Animals can be angry, bored or content. But there is no way you can ever make a dog understand what ‘luck’ is. To an animal things just happen or they don’t. Any concept divorced from a basic learning experience falls outside their realm of comprehension and thus they are not sentient.

How about a dolphin? Or a chimpanzee? They can experience more complex emotions. Assuming this is correct are dolphins higher on the sentience scale than dogs but lower on the sentience scale than humans?

There is no ‘scale’ and, as I just said, the ability to experience emotion is irrelevant. Either a being is sentient or it isn’t. The only murky part about grading this is what constitutes an abstraction. It’s fully possible that dolphins are sentient, possessing several abstract concepts that fit their world perfectly, yet are nearly impossible to comprehend for humans. The big problem is that you cannot measure understanding of abstractions without a mutual language.

Well, not really. Since the room is designed to replicate a Turing machine, it’s not static. Presumably some of the rules in the big book o’ Chinese literacy and thinking are rules like, “write down what was just said and store it for future use”. Dynamic memory is a requirement of the system.
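A quick sketch of what such a rule might amount to (the specific rules and phrasings here are made up, just to show state being carried between exchanges):

```python
# The rule book is allowed to say "write things down and refer back to them",
# which is all "dynamic memory" means here. The two rules below are invented
# placeholders, not anything from Searle's setup.

class Room:
    def __init__(self) -> None:
        self.scratch_paper: list[str] = []     # the system's working memory

    def step(self, message: str) -> str:
        self.scratch_paper.append(message)     # rule 1: record what was just said
        if len(self.scratch_paper) > 1:
            # rule 2: refer back to an earlier note when composing the reply
            return f"你刚才说过「{self.scratch_paper[-2]}」。"
        return "请继续。"

room = Room()
print(room.step("今天下雨了"))   # nothing stored yet
print(room.step("我没带伞"))     # this reply depends on earlier input, not just the current one
```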

Hehe, you guys, I’m proud a’ ya :)

One crucial thing that really flipped the way I think about consciousness was Dennett’s observation that we’re not the experts on ourselves that we think we are. This ties in with the Wittgensteinian point that we’re often led astray by the jerry-built metaphors (for hypothesized inner workings of our minds) that our folk psychology (our inbuilt theory of mind) has so haphazardly constructed over the course of our evolution (W. didn’t know the details we know, but Dennett takes his cue from W.).

Really, we’re in exactly the same position with our own minds as we are with other minds - we impute something to ourselves, the very same thing we impute to others. Our self-model is a hypothesis of what we are, just as our other-model is a hypothesis of what others are. And the irony is, it’s really as invisible in our own case as it is in the case of others (Buddha’s point, no-self, or to be more precise, no substantial existence of the self we unexaminedly think we are, peeping out from behind the eyes).

There are conscious physical processes (ourselves, animals, robots) that can navigate the world, find things evitable or inevitable, etc. Some of them (us) create, as part of their world model they use for getting about, a self-model, and part of that self-model has clusters of ideas about hypothesized inner workings attached to it - metaphors, analogies, etc. (e.g. emotion is sometimes construed in a pseudo-mechanical sense, as a kind of pressure).

Some of our self-model is viable and describes causal processes, albeit sometimes just in primitive shorthand; some of it is only very roughly viable, and perhaps serves more as part of our society-facing side (we try to construct selves that fit in, and that colours how we model what we suppose to be the workings of our psyche). The former part of our folk psychology can be left as it is, the latter has to be discarded, or absorbed by more scientifically accurate descriptions.

Anyway, it’s not this self-model that “is conscious” (it’s just a model, not the substantive thing that we are). Consciousness is a thing, but there’s no special self that “is conscious”, it’s just the physical process that is, properly speaking (functionally, i.e. testably) conscious.

Consciousness is not confined to the physical processes in the skull only, but spread through all its relevant physical processes, and pertains to, threads out through, the world itself. Our presence, by intercepting a portion of the causal chains that things throw off, affords the opportunity for those things to reveal something of themselves that could not come to exist if it weren’t for our presence. The thing in just that aspect wouldn’t exist but for our presence as part of the causal, physical chain.