What is consciousness? What is sentience?

There’s also a profoundly stupid argument by (I think) the renowned Gilbert Ryle that says (I paraphrase) the world must be as I assume it is because otherwise I would go insane and I’m not insane, so there.

Would that then explain why the clinically insane find the world not to their liking?

I like the concept of a spectrum of consciousness and sentience. Much like how there is a spectrum of intelligence across different plants/mammals, and a spectrum of physical attributes among mammals, there is probably a spectrum of sentience.

That is a really good point.

What isn’t?

Well, we know that dolphins, whales, octopuses, apes, and parrots are considerably more intelligent than we once thought. Some animals, like elephants, even recognize themselves in mirrors.

Of course the question is, are they self-aware? Is consciousness self-awareness?

Is there a theory of consciousness that explains why DrDel keeps replying to himself?

Because that technology doesn’t exist

Because that technology doesn’t exist

Seriously curious. Does anyone here believe that consciousness needs a numinous soul?

I had this idea in junior high; then, 20 years later, The Prestige ripped it off. Of course it was only in my head and no one else knew about it, so I’ll be generous when assessing damages in court.

Basically a ‘transporter’ that was actually a ‘copier’: you get an atom-for-atom recreation of the original subject… but it’s only a copy. The only person who could tell, or would even care, that the original had been erased would be that person, who would cease to exist. For all intents and purposes the ‘transporter’ would be just that, and even the people going into the transporter wouldn’t know, because no one would ‘survive’ to tell the tale, aside from our intrepid anti-hero and the enabling Plot. But basically, no: as the philosophers would put it, exact similarity does not amount to numerical identity.

Yes. The answer is that there is something up with Android Chrome and these forums… it only seems to happen when you try to do a quote reply.

But… let’s stay on topic.

There was a fun short story a few years ago about a rich privileged asshole going through what he thinks is a rejuvenation cure. He’s gotten in the habit of abusing his body in every possible way, and so every ten years or so he shows up morbidly overweight and addicted to various substances and then emerges from the treatment young and healthy again. But it’s really a mind-copy in a new clone body, with the previous clone shuffled off (for the sake of the narrative) to a remote planet where he has to get healthy the old-fashioned way if he wants to live.

Transhumanism line of thinking: how do you think an individual’s consciousness, personality, and sentience would be altered after being transferred to clones of themselves for a couple of thousand years? Say they started in 2000 AD and have successfully cloned their meat body over and over again, transferring their consciousness, personality, and sentience to each new body, right up to the present day.

They would have witnessed a lot of history.
They would be exceptionally wise, having learned for years from their own and others’ mistakes.
They may be emotionally detached from the humans around them, because they will “outlive” everyone around them…

Those are personality modifications as a result of transhuman transfers.

But can we analyze consciousness and sentience if we assume transhumanism is possible? How would they be affected? Would this give us better insight into what these concepts are and how they function?

How do you know?

In such a hypothetical scenario, you wouldn’t necessarily have any grasp of what the real world was. You know that your own consciousness exists, by virtue of your experience of thinking about it: Descartes’s “I think, therefore I am” starting point. But beyond that, the proof gets much more nebulous.

For instance, to take a bit of relatively modern fiction, you could be inside some advanced virtual-reality simulator like the Matrix, where the technology to create the simulation exists but is simply not represented in the simulated world.

But that is perhaps somewhat far-fetched. Possible, but not supported by any empirical evidence.

But ultimately the point is that while it may be reasonable to assume that other humans are having the same kind of conscious experience you do, it is not actually something that you know with certainty.

What you can do is say that, ultimately, it doesn’t matter. All that matters is the manifested behavior. If something acts as though it is a conscious being, then it is a conscious being, and deserves to be respected as such.

But how do you know what a conscious being acts like?

Does a puppy act like a conscious being? A lizard? A housefly?

I think that anyone who understands evolutionary theory should be fine with the concept of consciousness and/or sentience in many other animals. The more closely related we are, and therefore the more similar our brains, the easier it is for us to relate to this. The question for many is: how much less complex does the brain have to be before you suddenly lose this quality? Or, perhaps more likely, is there a continuum of awareness? Even within our own species, there are those who seem to think in a different way to others, able to bring together broader concepts, while others seem to have much less capacity for thought (insert easy jokes here about Trump’s supporters). Moreover, our capabilities change as we grow.

Some animals are often compared to human children at certain ages; a smart dog is often said to have the mental capacity of a two-year-old human child, for example. In fact, that’s not a particularly helpful comparison, because adult dogs are far better than we are at certain cognitive tasks (for example, those involving smell and the spatial awareness of odours, and piecing together a social structure from that), but it’s the only metric that makes sense to most people.

Ultimately I think we have to look at the available evidence. To continue the same example, we know from biological and behavioural evidence that a dog perceives both its surroundings and its place in them in a way that we’d be able to relate to. Perhaps not in the same way that we do, but the same building blocks are there. What about a crocodile? Again, there are enough similarities there - although they’re much harder for people to see or relate to - for the same general ability. What about an amoeba? Well, such an organism clearly doesn’t have a brain for a start, although it can react and learn because some very basic ingredients are there for it to do so. Perhaps its level of consciousness would stretch our definition too far, but at some point during evolution the framework for consciousness began to appear. I just refuse to accept that it only got switched on in humans because it’s convenient for some to believe that.

I don’t buy the argument that we don’t know if other humans are conscious or not; our biology is virtually identical between individuals (even more so with identical twins). It seems inescapable that what we experience is essentially inseparable from what others experience, especially as we can communicate the way in which we experience the world and think of ourselves to others. Of course we can’t prove it, but I tend to believe that overwhelming evidence is sufficient for us to move on to more interesting problems. I mean sure, we might all be dreaming this, or living in The Matrix, but such thought experiments are unending.

Philosophical considerations of consciousness are different from evolutionary, biological or neurological considerations. “Experiential privacy” (a concept in metaphysics which refers to the innermost experience of the process of a person’s conscious self-awareness) is protected by the reasoning behind the cogito regardless of what science may have to say on the subject. Even some advanced mind-reading device that apparently elucidates every thought, image, and symbol arising from a person’s brain doesn’t pierce this veil because there is no way to verify for certain that the device actually works.

If you put aside metaphysics, then consciousness is reduced to a physical mechanism of some kind, which is a perfectly legitimate field of scientific investigation. I have no doubt that from a scientific point of view we can determine some amount of consciousness to exist in most higher life-forms. Science however cannot address the ultimate Cartesian question any more than it can address the existence of God, partly because all our sense-data, measurements, observations, etc. may be mistaken or illusory.

Well, that was the basis of Turing’s test. Essentially, take the behavior of a human as the baseline, since we agree that all humans are conscious beings.

But it’s not really a test for intelligence so much as a test for human-level intelligence. I don’t think intelligence or sentience is a Boolean state. Everything is intelligent to some degree. Hell, even an inanimate object like a switch could be thought to have an extremely rudimentary intelligence, in that it is aware of its state.

Consciousness is the unreliable narrator that recounts whatever your associative algorithms have already done.

Intelligence describes behavior. Turing recognized this, and argued that it could be defined as something that imitates human behavior. In fact, his paper was titled “Computing Machinery and Intelligence”.

But consciousness is not a behavior, and it’s not really addressed by Turing. It’s quite possible that a lot of sentient creatures are not particularly intelligent. Likewise, it’s possible that an extremely intelligent creature or machine is still not sentient.

There is a famous thought experiment by John Searle that highlights some of the issues surrounding mind, consciousness, and understanding. I’ll just go ahead and quote the Wiki entry:

Searle’s thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker.

Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient paper, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program’s instructions, and produce Chinese characters as output. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.

Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing a behavior which is then interpreted as demonstrating intelligent conversation. However, Searle would not be able to understand the conversation. (“I don’t speak a word of Chinese,” he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either.

Searle argues that without “understanding” (or “intentionality”), we cannot describe what the machine is doing as “thinking” and since it does not think, it does not have a “mind” in anything like the normal sense of the word.
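The rule-following procedure Searle describes can be sketched in a few lines of Python. Everything here is a hypothetical toy (the rule table, the fallback reply), but it makes the point concrete: the program matches input symbols against a lookup table and emits outputs without any grasp of what they mean.

```python
# A toy sketch of Searle's room: purely syntactic symbol manipulation.
# The rule table is hypothetical; a convincing program would need vastly
# more rules, but the principle is identical.

RULES = {
    "你好": "你好！",          # "hello" -> "hello!"
    "你会说中文吗": "会",      # "do you speak Chinese?" -> "I do"
}

def chinese_room(symbols: str) -> str:
    """Match the input string against the rule book and emit the listed
    output. Nothing here 'understands' Chinese: the function only compares
    character strings, exactly as Searle does by hand in the room."""
    return RULES.get(symbols, "请再说一遍")  # fallback: "please repeat that"

print(chinese_room("你好"))  # rules followed, nothing understood
```

Whether rule-following at any scale could ever amount to understanding is, of course, exactly what the thought experiment disputes.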