This is a thought experiment. Arguing that it’s impractical means you are missing the point.
Well, yes and no. In this case, its impracticality plays a perhaps non-trivial role, since it undermines the system's ability to do what it is supposed to do (pass the Turing test). For the sake of argument, though, this could be ignored.
The real reason the impracticality matters is that Searle's presentation kind of depends upon ignoring it.
When people, correctly, pointed out that the overall system absolutely does understand Chinese, Searle's only response was that this didn't make sense, because the system was "just a bunch of bits of paper." This is where he falls into circular logic: he presumes that "the bits of paper" can't possibly understand anything, but that is essentially the very conclusion he is trying to prove.
The reality is that "the bits of paper" are a freaking mountain of code, as you correctly pointed out. Looking at them that way highlights the circularity of Searle's counterargument.
His whole argument boils down to, "In this room, it's just me and some paper, and I don't understand Chinese!" Well, sure… but in his experiment, he's largely immaterial. It's the complex rulebook that contains the knowledge of Chinese, along with all of the memory structures necessary to produce semantically meaningful answers. Searle never effectively addresses this; he just dismisses those who point it out as being blinded by their ideology.
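To make that division of labor concrete, here is a deliberately tiny, purely hypothetical sketch (in Python, with a toy lookup table standing in for the rulebook). It is nothing like the astronomically large ruleset the thought experiment requires, but it shows where the competence would live: the "operator" is a trivial dispatcher, while everything that looks like knowledge of Chinese sits in the rulebook data.

```python
# Hypothetical toy version of the room: all of the "knowledge of Chinese"
# lives in the rulebook data structure, none of it in the operator.
# A real Chinese Room would need an astronomically larger ruleset plus
# memory and state; this is only an illustration of the division of labor.

RULEBOOK = {
    "你好": "你好！",            # "Hello" -> "Hello!"
    "你懂中文吗？": "懂一点。",   # "Do you understand Chinese?" -> "A little."
}

def operator(symbols: str) -> str:
    """The person in the room: blindly matches symbols against the rules.

    The operator never interprets the symbols; it only follows the rulebook.
    """
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(operator("你好"))  # the operator "answers" without understanding a word
```

The point of the sketch is just that pointing at the operator and saying "it doesn't understand" tells you nothing about the system as a whole.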
And yet he wouldn’t understand a word of what he was writing.
Yes, he would. He would understand it as well as anyone.
This is the flaw in Searle's counterargument… He suggests that you could somehow perfectly internalize that complete ruleset, yet still not actually "understand" it; that it would somehow stay compartmentalized away from the rest of your brain.
Again, this depends upon not really thinking about what the "bits of paper" that comprise that ruleset actually are: a supremely complex set of rules for processing any given statement and for constructing a new answer from a vast memory store of not only grammatical rules but also experiences.
This is, essentially, what would happen to you if you learned Chinese as a human.
The idea that internalizing those rules doesn't constitute understanding Chinese is just another circular argument. It depends upon assuming that those rules don't inherently include understanding, despite the fact that no definition is ever given of what those rules actually are.
For instance, the Churchlands (about as big a name as you find in this field) agree that the Chinese Room does not understand Chinese. Spoiler: they think that this does not necessarily doom other types of AI, and of course the debate does not end there.
Sorry, but I don't think that most folks in the fields of cognitive science and AI actually agree with the Chinese Room. The arguments raised against it are absolutely solid, and they forced Searle to fall back upon handwaving. Maybe the Churchlands wrote some piece a quarter of a century ago saying they agreed with him, but they don't represent a majority opinion here.
Ultimately, you're free to agree with his handwaving, but that's really all it is. His counterarguments were clearly refuted, and he was left with absolutely no meaningful response except what was essentially an ad hominem attack on those who had defeated him.
Again, at this point the argument has been so thoroughly hashed out that our repeating it here is kind of pointless. You or anyone else can just go look at the whole exchange. It's not progressing any further. It's done.
Huh?? My only claim is that consciousness exists in at least one person. That’s it. I assume you agree. If you disagree, then you don’t believe that consciousness exists at all. What’s the point of your argument?
Yes, I agree that consciousness exists in me. I also believe that it exists in you, because you behave as a conscious being.
I don’t make any claims that we can test for its existence. In fact, I explicitly stated that we can’t, at least not now.
But that’s the thing. That’s what makes your definition of consciousness essentially pointless. You’ve defined it as something which cannot be tested for, and has no impact on the environment. It’s nothing.
I assume it’s a physical property and not magic, but of course I don’t know for sure. Neither do you.
But what is that property? If it has no impact on your manifested behavior, then what is it?
If you believe consciousness exists in machines, then the burden of proof is on you. What is your empirical evidence?
My definition of consciousness is based upon their manifested behavior. It essentially IS their manifested behavior.
Let me just stop you here. This is the teleological fallacy.
Not everything that exists serves a purpose or evolved for a purpose.
But with your definition of consciousness, it is literally nothing.
It’s an invisible quality that you cannot test for, which does not impact the physical world in any way. It doesn’t change how you behave, and thus doesn’t play any role in evolutionary adaptation.
What IS consciousness in your definition then? Because it seems like in your attempt to say machines can’t have it, you’re left with a quality that has no physical form at all. It essentially exists outside the universe. That’s why it sounds like you’re describing magic, albeit an exceptionally useless and unimportant type of magic that does nothing.