What is consciousness? What is sentience?

Doesn’t matter. The book and the ruleset within are static. The moment you hit something that isn’t in the rules, the system collapses.

Well… no.
Because you have rules that involve creation of new rules.

When it runs on a terahertz processor, the hypothetical system understands Chinese, the processor does not. When run on the Searle processor, the system understands Chinese (but not in a useful or interactive way) but Searle does not. Our intuition that the system doesn’t understand Chinese in the second case comes from the fact that the Searle-as-processor system is ludicrously impractical.

That’s why you can say “I am voting for Hillary Clinton” in English while your instructions tell you to write “我投票给保守派” in Chinese. Or perhaps the rules tell you to say “我投自由派”. Which one would be consistent with your English reply? If you don’t know Chinese, then you have no idea. All you know is that you will be writing out one sequence of characters or another. It’s nonsense to say that you’ve internalized or in any way understand what “我投票给保守派” means.

Suppose the rules you memorized say that the next step is to write “我需要登记投票”. Is that racist? What, you don’t know? Then how can you claim understanding?

But this is a good example of how the argument rests on intuition, and that intuition is ultimately false.

The system which accomplishes the task of giving answers to literally any question in Chinese cannot simply be an exhaustive set of answers, because that set of answers is infinite. Thus, the rules cannot simply be “for input X, give output Y”.

Instead, the rules would specify some set of complex processes that represent not only the translation part that tells you how to translate answers, but also the reasoning part that figures out what those answers should be. And therein lies the “understanding”.
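
To make that concrete, here’s a toy sketch of the difference (my own illustration in Python, not anything from Searle; the names parse/reason/render and the canned question are invented):

    # A pure lookup table: only works for inputs someone already anticipated.
    CANNED = {
        "你叫什么名字？": "我叫小明。",   # "What is your name?" -> "My name is Xiaoming."
    }

    def lookup_only(question: str) -> str:
        # Fails the moment the question isn't in the table.
        return CANNED.get(question, "???")

    # A rule-driven system: the rules describe HOW to build an answer,
    # so novel inputs are handled by the same finite set of rules.
    def parse(question: str) -> list[str]:
        # Hypothetical: break the question into symbols.
        return list(question)

    def reason(tokens: list[str]) -> list[str]:
        # Stand-in for whatever inference the rulebook actually encodes.
        return tokens[::-1]

    def render(tokens: list[str]) -> str:
        return "".join(tokens)

    def rule_driven(question: str) -> str:
        return render(reason(parse(question)))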

Again, the problem you’re having here is that you’re imagining simplistic solutions which don’t actually solve the problem, and then saying that those solutions don’t pass muster for being conscious or understanding anything. But it’s kind of a variation on a strawman at that point.

Hell, even if you had a magical (I think magical is the correct term in this case) system which literally had an exhaustive set of pre-canned responses to literally any question asked, I think such a system still constitutes understanding. It’s a DIFFERENT type of understanding than you have about many things, but that’s just because it’s impractical for humans to have our understanding represented in an exhaustive way like that due to our limitations.

For SOME things you have a memorized response… generally things that you have internalized completely, and understand on such a deep level as to have virtually instant recall. You see this in a bunch of cases with humans, where experts kind of skip over the reasoning process used by novices, as they’ve fully internalized it.

In the case of the magical infinite set of pre-canned responses, you could say that it totally understands everything. This kind of defies intuition, but that’s because the situation of an infinite set of responses can’t really exist intuitively.

But that doesn’t make your definition a good definition. Because the common definition of consciousness is not a behavior. It is self-awareness, a subjective internal state.

That’s not nothing. But yes, internal states are hard to test in other people. Very hard. That’s why consciousness is a hard problem, usually considered among the very hardest. Thankfully, you can at least test for it easily in yourself. It’s a start.

You say it’s hard to test for… but that’s not really what you’re saying here, right? Based on everything you’re saying here, you actually think it’s IMPOSSIBLE to test for consciousness. That it’s a purely subjective, internal state, with no manifested impact on the outside world.

Do you believe other human beings are conscious, magnet?

The whole point is that the book is static. If a new word is introduced that isn’t in the book there is no possible correct response no matter how many ‘new rules’ are created and thus the room breaks. Unless you say that every rule ever can be inferred from the book based on Chinese text input alone, at which point the book effectively becomes as omniscient as a god.

The dynamic part of the room is to facilitate operation, not understanding.

There isn’t really an argument to be had here. You are simply incorrect.

The point of the Chinese room is to be representative of the capabilities of a Turing machine. That specifically includes the ability to modify memory. It can not only read the symbols on the tape, but can modify what is written as well.

If a new word is introduced that isn’t in the book there is no possible correct response no matter how many ‘new rules’ are created and thus the room breaks.

No, it just does what a human would do. It asks, “what does that word mean?” And then incorporates the new word into its memory structure.

No there isn’t, because you obviously fail to grasp what I’m getting at. The ruleset is fixed. It being a Turing machine doesn’t automatically mean it can add new rules. That would be the same as a program rewriting itself, which far exceeds the scope of the original argument because then it can do far more than just pretend to speak Chinese.

It being a Turing machine doesn’t automatically mean it can add new rules.

It absolutely does.

That would be the same as a program rewriting itself

Yes, or at least operating with a dynamic memory, which is one of the fundamental components of a Turing machine.

If you want you can think of the “static” rules of a Turing machine as being like the firmware of a CPU which can’t easily be altered. But the fact a CPU has unalterable firmware doesn’t constrain its ability to execute arbitrary programs in the least. The infinite tape of a Turing machine allows for unlimited programmability and unlimited capacity for self-modification and self-programming, within the overall constraints of the formal system.

Exactly. The fact that it is a Turing machine implies that it has dynamic memory, otherwise it would simply be a finite state machine.

To be clear, this is the chief difference between a Turing machine and finite state machines, and why the set of problems solvable by Turing machines is a superset of those solvable by finite automata.
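
A concrete illustration of that superset claim (my own sketch in Python, not from the thread): the language a^n b^n can’t be decided by any finite state machine, but it’s trivial once you’re allowed writable memory, even just a counter standing in for the tape:

    def is_anbn(s: str) -> bool:
        """Decide the language { a^n b^n : n >= 0 }.

        No finite state machine can do this (pumping lemma), because it needs
        unbounded memory to compare the two counts. With writable memory --
        here a simple counter standing in for the tape -- it's easy.
        """
        count = 0
        i = 0
        # Count the leading a's.
        while i < len(s) and s[i] == "a":
            count += 1
            i += 1
        # Match each b against the stored count.
        while i < len(s) and s[i] == "b":
            count -= 1
            i += 1
        return i == len(s) and count == 0

    assert is_anbn("aaabbb") and not is_anbn("aabbb") and not is_anbn("abab")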

Searle is specifically talking about simulating a Turing machine, as it is essentially capable of solving any problem. The point of the Chinese room isn’t whether it can provide the answer to any question. That is explicitly defined as a given in his experiment definition.

Searle’s argument is that while the system can provide the correct answer, it doesn’t *understand* it. That’s the point of the thought experiment.

Just to clarify, I meant that in the sense that a calculator takes input X and Y, and outputs the sum, Z. There are still infinite possible outcomes.

Instead, the rules would specify some set of complex processes that represent not only the translation part that tells you how to translate answers, but also the reasoning part that figures out what those answers should be. And therein lies the “understanding”.

Let’s pretend we have a Turing machine that can imitate a Chinese speaker, and break down what it must be doing.

Any Turing machine has only one job: read and write symbols on a tape (think of it as a text file). By definition, all of its instructions can be simplified into nothing more than a finite set of tables.

For a Chinese conversation machine, here is an example:

Table 戈狐步舞威士忌探

If cursor is on 狗, write 猫, move the cursor forward, and then consult Table 共舞与星
If cursor is on 步, write 步, move the cursor backward, and then consult Table 忌探戈与星

Table 步舞威士忌探

If cursor is on 狗, write 忌, move the cursor forward, and then consult Table 共与星
If cursor is on 步 …

Note that these lines are written in Chinese. Only four things are written in English: “compare the current symbol to this…”, “write this…”, “move the cursor forward (or backward)”, and “consult this table next”.

And that’s what every single instruction looks like. In every single Chinese-labeled table. There is no other content in a Turing machine, by definition. And if you think that the names of the tables might provide insight, forget it. They are just random Chinese characters.
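
Just to make that instruction format concrete, here is a toy rendering in Python (the extra tables and the HALT entry are invented so the sketch runs; they are not part of the original example). This is the only kind of step a Turing machine ever performs:

    # Each table maps the symbol under the cursor to:
    #   (symbol to write, cursor move, name of the next table).
    TABLES = {
        "戈狐步舞威士忌探": {
            "狗": ("猫", +1, "共舞与星"),
            "步": ("步", -1, "忌探戈与星"),
        },
        "共舞与星":   {"猫": ("狗", +1, "HALT")},
        "忌探戈与星": {"步": ("忌", +1, "HALT")},
    }

    def run(tape: list[str], table: str = "戈狐步舞威士忌探", cursor: int = 0) -> list[str]:
        while table != "HALT":
            symbol = tape[cursor]                       # compare the current symbol to this...
            write, move, table = TABLES[table][symbol]  # ...consult this table next...
            tape[cursor] = write                        # ...write this...
            cursor += move                              # ...move the cursor forward (or backward).
        return tape

    print(run(list("狗猫")))   # ['猫', '狗']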

Searle argues that if you don’t understand Chinese, stepping through this program won’t change that. Sure, there is a “reasoning” behind how the tables were constructed, and whoever wrote the program must understand Chinese. But to make a Turing machine, the high-level organization that usually makes programs human-intelligible is stripped out. There is nothing left to help you deduce the reasoning. Particularly if the reasoning was originally in Chinese.

It’s like telling someone they can learn Chinese from scratch just by reading a Chinese dictionary. And if you don’t understand the definition because it’s in Chinese, then just look up the words you don’t understand.

The “system” is Searle plus the long stack of tables. But if Searle memorized the long stack of tables, then the system would be just Searle. And therefore, Searle should be able to understand Chinese merely by memorizing that stack of tables.

Memorizing all those tables is not easy. But it’s unnecessary. Because if you can’t understand Chinese from staring at a bunch of Chinese instructions in front of you, how can you possibly understand it simply by memorizing the instructions? That’s magical thinking. It’s like saying that you can’t learn Chinese from reading a Chinese dictionary, but you can learn it by memorizing that same dictionary.

Let’s address the other counterarguments:

“The massive complexity of the program causes the AI to understand Chinese”
A complex Turing machine is nothing more than a program with more tables. But if a single table is incomprehensible, then how can you possibly gain understanding by staring at more tables? You will never encounter any more English words, just the same basic instruction over and over again, only using different unintelligible symbols each time.

“The speed of the processor causes the AI to understand”
Similar to the above. If a single instruction is incomprehensible, how does running through 1000 of them per microsecond lead to understanding?

“The CPU sees things in the code that a human cannot when looking at the exact same code”
This is more magical thinking. How can a CPU gain more understanding from a line of code than a human?

Actually, a human can cheat by taking notes and doing a bit of side-processing. Maybe he could keep track of symbol frequencies in the output, and pray for some insight. But a Turing machine can’t do that, by definition.

Ok, did I miss any other counterarguments? I think there’s just one more:

Ok, now we are getting somewhere. It’s a different type of “understanding”. And that’s perfectly fine.

It’s ok to say the AI “understands” Chinese. But it’s not the same way that a Chinese speaker understands Chinese. Maybe it gives the same responses, but the internal state is NOT the same as that of a native speaker.

It’s a simulation, or model, of understanding Chinese. It’s not a perfect duplication.

And that means, by definition, that it’s not a “strong” AI. Which is Searle’s whole point. A Turing machine can act like it understands Chinese, but deep down it has a much different type of understanding than a native Chinese speaker. That’s all.

Do you believe other human beings are conscious, magnet?

I assume they are. I make a lot of assumptions to get by in life. But I don’t have a mind-reading device, so I clearly don’t know for certain.

Searle argues that if you don’t understand Chinese, stepping through this program won’t change that.

But it doesn’t matter if examining the program allows you to understand Chinese. THE PROGRAM understands Chinese.

I can examine the brain of a Chinese speaker, and not understand Chinese, but that doesn’t mean he doesn’t understand Chinese.

Ok, now we are getting somewhere. It’s a different type of “understanding”. And that’s perfectly fine.

Note that the part I was talking about there was the system which answered everything through a comprehensive set of predefined answers to every possible question that could ever be asked, with a simple lookup instead of any reasoning… which really isn’t even possible, and would require magic. But if you had such a system, it would still understand Chinese, just in a totally different, omniscient way.

This does not preclude a more practical system that just used dynamic memory exactly like a human does, which would then understand Chinese in a way very similar, if not identical, to the way a human does, taking into account the sensory limitations.

I assume they are. I make a lot of assumptions to get by in life. But I don’t have a mind-reading device, so I clearly don’t know for certain.

Why do you assume they are, given that you have nothing to go on but their manifested behavior?

You really don’t get it. I’m not talking about storing/changing of input/variables. I’m talking about changing the fundamental rules the program itself is made of. Not the manipulation of variables, but writing new functions out of whole cloth.

Just like magnet said, a Turing machine is nothing more than something that turns X into Y. If you feed it Xþ where þ is an unknown input, you know what happens? It will do what every computer in existence will do and throw up an input exception. And yes, it’s possible to catch this and return a canned response instead, but that response would have to be generic because the input does not exist in the ruleset and thus there is no rule for it. If you press the system with more questions using the new input it will just keep restating the canned response, breaking the Chinese speaker illusion.

Unless, of course, you want to claim that the ruleset is capable of asking follow-up questions, discovering what the unknown input means, and then correctly integrating this into the existing set. This isn’t possible because the system has no understanding of what the input means. If, however, you do believe this possible, you just disagreed with Searle’s conclusion that “Syntax by itself is neither constitutive of nor sufficient for semantics”. A CR that could do all that couldn’t just learn new Chinese words, it could learn any language made up of any character set given enough time.

No, the program does not. That’s the whole point of the CR argument. It offers only the illusion of understanding and that’s why it will break when presented with unknown inputs.

You really don’t get it.

Heh, as a professionally trained computer scientist with a background in philosophy, who specifically works in the field of AI, I’m pretty certain that I do.

I’m not talking about storing/changing of input/variables. I’m talking about changing the fundamental rules the program itself is made of. Not the manipulation of variables, but writing new functions out of whole cloth.

Ok, I’ll try to explain this to you, but it’s going to require you to step back and acknowledge that maybe, just maybe, you may be wrong here.

All you need in order to change the way a program manipulates variables, and to write new functions, is dynamic memory. This is the key element which differentiates a Turing machine from lower-class automata, like a finite state machine.

A finite state machine operates in the way you describe. Everything is set ahead of time, with essentially everything defined statically in the states and transitions.

But a Turing machine, with the addition of dynamic memory, is able to solve a much larger set of problems. These include problems that do exactly what you describe. This is essentially what the field of machine learning covers. Whole languages like LISP have major aspects of their syntax dedicated to runtime creation of new functions.
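
The LISP point generalizes; here is the same idea sketched in Python (my own toy example, with made-up names): a program whose fixed interpreter never changes, yet which grows brand-new functions at runtime and stores them in its own memory:

    # The "fixed rules" below never change, yet the program grows new
    # behavior at runtime by writing new functions into its own memory.
    rules = {}

    def learn(name: str, definition):
        """Store a newly constructed function under a name we've never seen before."""
        rules[name] = definition

    def respond(name: str, *args):
        return rules[name](*args) if name in rules else "what does that mean?"

    learn("double", lambda x: 2 * x)              # a rule created at runtime
    learn("greet", lambda who: f"hello, {who}")
    print(respond("double", 21))                  # 42
    print(respond("greet", "magnet"))             # hello, magnet
    print(respond("frobnicate"))                  # what does that mean?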

If you are interested in learning more about this, you can just look up automata and check this stuff out. It’s standard computer science stuff that’s covered in any CS program, usually as a freshman or sophomore level course.

Just like magnet said, a Turing machine is nothing more than something that turns X into Y. If you feed it Xþ where þ is an unknown input, you know what happens? It will do what every computer in existence will do and throw up an input exception.

What you are missing here is that the input does not need to be predefined; it just needs to be specified via a predefined grammar. But (and here is the part you may be getting hung up on), that “grammar” doesn’t actually have to be the Chinese grammar. It could be something like “colored pixels on a defined grid”, or essentially anything that could be representative of human sensory input.

In such a case, even if you came up with an entirely new character, it’d still be able to be processed by the system. And then you’d have rules that said, “Check input X against our library of defined characters. If we don’t know what it means yet, call on the meaning inquiry subroutine.” Then that routine would go about the (extremely complex) process of asking questions and establishing meaning, and then just adding new pieces of data into the library of defined terms.
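
A minimal sketch of that rule in Python (all names are invented for illustration; this is not Searle’s actual rulebook, and the inquiry subroutine is reduced to a stub):

    character_library = {"狗": "known-symbol-0001", "猫": "known-symbol-0002"}

    def meaning_inquiry(symbol: str) -> str:
        # Stand-in for the (extremely complex) subroutine that would ask the
        # interlocutor follow-up questions until the new symbol is pinned down.
        # Here it just fabricates a fresh internal label.
        return f"learned-symbol-{len(character_library)}"

    def process(symbol: str) -> str:
        if symbol not in character_library:
            # Rule: unknown character -> run the meaning inquiry subroutine,
            # then file the result away in dynamic memory (the filing cabinet).
            character_library[symbol] = meaning_inquiry(symbol)
        return character_library[symbol]

    print(process("狗"))   # already known
    print(process("龘"))   # brand-new character, handled by the same fixed rules
    print(process("龘"))   # now it is simply remembered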

I mean, this isn’t some kind of magic that takes place. Hell, I’ve written such programs myself… and I’ve done it on a computer. And you know what every single computer you’ve ever used is? That’s right… it’s a Turing machine.

If, however, you do believe this possible, you just disagreed with Searle’s conclusion that “Syntax by itself is neither constitutive of nor sufficient for semantics”.

First, to be clear, I absolutely disagree with Searle’s conclusion. Searle is absolutely wrong.

But he’s not wrong for the reasons you are talking about. You are misunderstanding the point of his experiment.

Searle establishes, as a given, a ruleset which will do exactly as we describe here: it is capable of solving the problem correctly. He is not, in any fashion, suggesting that it cannot be done. And, as you have correctly identified, that solution will inevitably require expansion of the system’s memory to learn new terms and thus new rules about how to apply them. Hell, even before you get to that point, it’s going to need to learn even simpler things to pass the Turing test… for instance, it’d need to be able to answer the question, “What was the last question I asked you, before this one?”, meaning it’d need to keep at least a short-term record of inputs. This is all inherently defined as part of what a Turing machine is.
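
As a trivial illustration of that short-term-record point (my own sketch, with invented names, not part of Searle’s setup), even that one question forces the rulebook to write to memory:

    history: list[str] = []      # the machine's writable memory (its tape)

    def answer(question: str) -> str:
        # Hypothetical special-case rule; a real rulebook would be vastly larger.
        if question == "我上一个问题是什么？":   # "what was my last question?"
            reply = history[-1] if history else "你还没问过问题。"  # "you haven't asked one yet"
        else:
            reply = "好的。"                      # "okay." -- placeholder response
        history.append(question)
        return reply

    print(answer("你好吗？"))              # 好的。
    print(answer("我上一个问题是什么？"))    # 你好吗？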

Searle’s conclusion is that even if you are able to have the system work perfectly, and perfectly mimic the manifested behavior of a human being, it STILL doesn’t actually understand anything. He’s wrong, for the various reasons we’ve already covered, and which can be read about at length from a multitude of sources. But none of it has to do with the program being erroneously written, and being incapable of learning new things. It’s assumed in the setup for the experiment that the program is correctly written. If the program functioned as you describe, and had no capacity for dynamic memory, then it wouldn’t actually be a Turing machine.

In that case, Searle would be saying, “A finite state machine can’t think”, but that’s not really an argument anyway. There are tons of problems which we already know, with mathematical certainty, that finite state machines simply cannot solve. But a Turing machine can, in theory at least, solve any problem if given enough time and resources.

Heh, as a professionally trained computer scientist with a background in philosophy, who specifically works in the field of AI, I’m pretty certain that I do.

I know a lot of professionally trained computer scientists, and half of them are drooling idiots; I daily wonder how they manage to keep their jobs.

I wish you had said this from the beginning. He’s not wrong, just not for the reasons he believes, because those reasons are indeed flawed. And just to be clear (because you keep harping on dynamic memory and rewriting), I’m well aware that all of that is possible. What I disagree with is that this is part of the original argument. That is my issue here. The CR ruleset facilitates carrying a conversation in Chinese. Nothing more. Does that involve usage of dynamic memory? Yes. But what you are proposing is adding a potentially unlimited amount of interpretive rules, thus creating a base that effectively is a true AI already, which most certainly was not Searle’s intent.

What you are missing here is that the input does not need to be predefined; it just needs to be specified via a predefined grammar. But (and here is the part you may be getting hung up on), that “grammar” doesn’t actually have to be the Chinese grammar. It could be something like “colored pixels on a defined grid”, or essentially anything that could be representative of human sensory input.

Again, I’m well aware of how this is supposed to work. And again this exceeds the scope of the CR argument because Chinese characters are the only acceptable input and output. You keep describing a program that is effectively an AI already. And yes a system like that would indeed have no trouble processing any sort of input. It also will never exist because it’s simply too complex for any human to ever create such a program.

And just to make it perfectly clear: I’m in complete agreement that a Turing machine can run a true AI. Just not as a self modifying program. Rather the program needs to simulate a brain and then within that brain the mind is created. That’s why Searle’s syntax statement holds up. The code/ruleset is just hardware, just like a biological brain is the base for the human mind. It’s what runs within that ruleset that will give birth to a true AI.

Searle is wrong. Searle and Penrose believe, with no evidence whatsoever, that human intelligence is not computable and that human minds can determine the truth of statements that are provably unprovable. Human minds are unconstrained by Gödelian incompleteness in their view. Thus they believe no machine (like a Turing machine) that is only capable of computing computable functions can implement human intelligence. This is pure mysticism with no scientific basis, but they and their supporters refuse to accept that.

However despite all that the Chinese Room is an interesting, worthy, and suggestive thought experiment that helps to clarify attitudes about functionalism vs. conventional materialism vs. spiritualism in metaphysics.

Again, I’m well aware of how this is supposed to work. And again this exceeds the scope of the CR argument because Chinese characters are the only acceptable input and output.

Well, no, not really. Indeed, the impractical method by which input and output is handled (i.e. a human) actually makes it easier since you are kind of getting character recognition for free.

All you need to have in the set of finite rules, is a set of rules for how to deal with new characters. Then you just store those new characters in your dynamic memory (in this case, just write them down on new pieces of paper and stick them into the filing cabinet of previously defined characters).

You keep describing a program that is effectively an AI already.

Yes, that is the starting point of Searle’s experiment.

Searle is not debating whether or not such a program can exist. He accepts that it can.

Searle is debating whether such a program would actually be CONSCIOUS. He is arguing that even though it is able to speak and learn Chinese exactly as well as a human, it isn’t actually aware of what it is doing.

And yes a system like that would indeed have no trouble processing any sort of input. It also will never exist because it’s simply too complex for any human to ever create such a program.

Well, certainly creating a strong AI is indeed an incredibly complex problem, which is why we haven’t done it yet. But there is nothing which prohibits such a thing from existing. The complexity of the algorithm is immaterial, as all Turing machines are capable of executing ANY logical algorithm, of arbitrary complexity.

Figuring out exactly what that algorithm is is the hard part.

But to be clear, the parts that you are describing are merely more complex versions of things which we already do in the field of AI. And generally, the problem isn’t whether or not we can do them, but rather whether or not we can do them tractably, in a reasonable amount of time.

I’m in complete agreement that a Turing machine can run a true AI. Just not as a self modifying program. Rather the program needs to simulate a brain and then within that brain the mind is created.

Your brain IS a self modifying program.

“Holding a conversation in Chinese” does require a true AI already. Think of all the possible ways the conversation could go. You could tell the box a fairytale and ask it what it thinks of the main characters, how they relate to each other, what should they have done differently. Read it a translation of War and Peace and all the insights that might come from that. Talk about a new game you are playing, explain new game concepts and see if it understands them.

Searle’s position is that it can do all that, but there’s no understanding going on anywhere.

I think the thing you are missing is that the “write this” step can be writing to its own memory, which includes writing new “tables” and/or modifying existing ones. If you preclude writing to memory, you’d need an infinite stack of tables.
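
Here is a toy sketch of what that looks like (my own Python illustration with invented names, not anything from Searle): the interpreter loop below is fixed, but one of its rules writes a brand-new table into the same memory the rules live in, which is all “self-modification” amounts to here:

    tables = {
        "start": {
            "learn": ("ok", "start", "add-table"),   # this rule writes a new table
            "hello": ("hi", "start", None),
        },
    }

    def step(state: str, symbol: str) -> tuple[str, str]:
        output, next_state, side_effect = tables[state].get(symbol, ("?", state, None))
        if side_effect == "add-table":
            # The "write this" step, aimed at the machine's own rule memory.
            tables["learned"] = {"hello": ("HELLO!", "start", None)}
        return output, next_state

    print(step("start", "hello"))     # ('hi', 'start')
    print(step("start", "learn"))     # ('ok', 'start')  -- and a new table now exists
    print(step("learned", "hello"))   # ('HELLO!', 'start')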

The “system” is Searle plus the long stack of tables. But if Searle memorized the long stack of tables, then the system would be just Searle. And therefore, Searle should be able to understand Chinese merely by memorizing that stack of tables.

First, as above, the table as you describe it is astronomical in size, and it grows exponentially, with an astronomical exponent, in the length of the conversation. If allowed to modify memory, it’s still astronomical.
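
To put rough numbers on “astronomical” (my own back-of-the-envelope figures, not the poster’s): even restricting to about 3,000 common characters and exchanges only 100 characters long, a pure lookup table would need on the order of 3000^100 ≈ 10^348 entries, which dwarfs the roughly 10^80 atoms in the observable universe.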

Second, memorizing a set of operations that produces an outcome doesn’t necessarily imply understanding. I can assure you that I didn’t initially understand how long division worked just because I memorized how to do it. And that’s a completely trivial algorithm. In the scenario as described, Searle’s conscious mind and working memory are still just the processor. Having him memorize the (either constantly changing or infinite) tables doesn’t make a difference any more than moving the tables from books and scraps of paper to a chip in his head would.

But the brain simulation is just more tables and memory in the self-modifying program that contains it. Just like the very popular deep neural nets of today are just multiplying tables of numbers together to come up with an output. The brain simulation would amount to multiplying a bunch of tables together and multiplying those outputs together, and so on.
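
That “multiplying tables of numbers together” description is quite literal; a toy two-layer network (my own sketch, with arbitrary sizes and random weights) is nothing but:

    import numpy as np

    rng = np.random.default_rng(0)

    # Two "tables" of numbers (weight matrices) with arbitrary toy sizes.
    W1 = rng.standard_normal((4, 8))
    W2 = rng.standard_normal((8, 3))

    def forward(x: np.ndarray) -> np.ndarray:
        """A two-layer net: multiply by a table, squash, multiply by another table."""
        hidden = np.tanh(x @ W1)     # table multiplication #1
        return hidden @ W2           # table multiplication #2

    print(forward(rng.standard_normal(4)))   # three output numbers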