What is consciousness? What is sentience?

Any takers on exploring this with me?

Was it Descartes or Kant who thought it was liquid secreted from the pineal gland? Plato had antiquated theories as well.

A series of neural networks is a hand-waving theory that doesn’t explain “the magic” completely.

So…what is consciousness? What makes us sentient?

Our brain is part sensor array… sight, sound, smell

Our brain makes decisions every day

Our brain is partially a memory vault…and we use those memories to help with future decision making

Assuming we all agree with these basic building blocks… how are consciousness and sentience formed?

How do we differ from a plant or an amoeba?

Sentience is a quart of beans and to care for nothing… oh, wait, that’s happiness.

The trap about sentience, and what has bothered sci-fi writers for decades, is that the only one able to tell whether you’re truly sentient is yourself. And since you’re the only one truly capable of knowing, if something isn’t sentient it’s impossible for anyone else to tell. It really is a sort of chicken-and-egg problem. It’s relatively trivial to imagine hypothetical “zombies” that appear to be sentient from the outside but have no ‘internal monologue’ going on; i.e., they pass every external test of sentience, but only because they’ve been ‘programmed’ or ‘evolved’ to mimic sentience perfectly.

I think that’s the real catch - we can come up with any number of descriptions of what sentience should be, but the only person capable of testing those criteria is the subject themselves - and you don’t need a test to know whether you yourself are sentient, so the test is useless. (The practical way around that is to notice that sentience is certainly related to biological evolution and that since we’re all related to one another, and I’m pretty sure I’m sentient (albeit not intelligent, let’s be honest now), everybody else is likely sentient too. It’s harder/impossible for other species though.) Not many people today outside of religions are much interested in exploring the concept of ‘soul’, the idea that there is some kind of intelligence that is not completely continuous with the physical body, but if you were interested in going down that route the medievals seem to have hashed it out pretty completely.

The initial premise you pose is flawed. One can tell if others are sentient. Compare a human to an amoeba … clearly one is sentient and one is not.

The question is… what is sentience? And how does it work?

I think I’m somewhat with Endigm here. I don’t think it is so obvious that you have sentience and the amoeba doesn’t. At least, it seems reasonable to think (if hard to prove) that there might be a continuum of sentience, with humans at one end, amoebas at the other, and lobsters and crows and whales and macaques somewhere in between. It’s a nice idea, but is there any way to prove it for sure?

One approach might be to look for epiphenomena that at least seem like they could correlate with consciousness itself. Evidence of self-awareness. Deliberative processes. What other behaviors spring from consciousness in our experience that we might find in other creatures?

What is asking?

I’m too un-stoned for this discussion right now, but I’d recommend Dan Dennett’s “Consciousness Explained” for an interesting take on the question.

IMO the mechanics of consciousness most likely have to do with our brains working all at once but not in perfect synchrony.

I don’t buy most of the old Jaynes bicameral-mind theory, but clearly your whole brain is firing neurons all the time regardless of wave state (the various “brain waves” are evidence of synchronization). And the fact that the corpus callosum is such a narrow chokepoint/pipeline may well be a factor, though people with that tissue severed are evidently still conscious. I think the fact that the brain is a little out of focus generates the sense of existence, as different overlapping aspects of the mind become indirectly aware of one another. (Note I like Minsky’s “society of mind” notions, but I don’t believe in separate, discrete mind components; I think it’s a mess of fuzzy, vague, overlapping elements or regions.) There is naturally no direct sensory experience of this loss of sync (unless the sync is lost so badly that you become schizophrenic or otherwise organically messed up), but I think the consequence is consciousness.

This doesn’t address the philosophical element, just the mechanism, of course.

But anyway, for computers to be conscious, I expect the feature will have to be designed in to mimic brainlike consciousness rather than arising naturally from the complexity of a software mind, because except by design an advanced software mind won’t be out of sync or have this kind of vague, fuzzy overlap of functionality. But I certainly don’t believe in any ridiculous magical quantum microtubules that allow brains to transcend the limitations of Gödelian incompleteness, a la the absurd theories of Penrose and Hameroff. For me all aspects of mind must necessarily be computable.

As an aside, one current philosopher who confronts consciousness head-on is Dennett. He’s a very argumentative, forceful type, so his writing may occasionally rub you the wrong way, but it’s his specialty anyway. ETA: Bah, Pogue got there first :)

How do you know that another human is sentient, and not merely a complex automaton designed to trick you into thinking it is intelligent?

Of course you can’t ever know this, but you can assume based on your inductive experience of the world that they are enough like you to also be conscious.

When you start doing scientific experiments about consciousness, the picture gets pretty complicated. We have this idea that we are experiencing a constant flow of sensations and that what we are experiencing at any instant has some defined meaning. That if you paused the video in your head at some point in time, there would be some reasonable “still frame” there.

But you start doing experiments and find that what you are “directly” conscious of seems to be subject to a lot of post-hoc rationalization. More like a constantly updated news story on the web about a current event than a continuous video. If your brain keeps revising your direct experiences, at what point do you actually “see” them? What version of the news story is on the screen when it plays in the “theater of the mind”? Dennett’s book has a lot of information about what consciousness isn’t, but not a whole lot about what it is, or why it’s there. I’ve read a few books about consciousness by different authors, and the only thing I’m pretty confident about is that consciousness doesn’t work at all like the picture we seem to come to naturally.

This is super interesting to me to think about.

I think about a theoretical future where we could model the human brain precisely, down to the spin of atoms, and build one exactly like mine and put it in a body. Is that person me? What does my consciousness feel when that happens? To me it seems like my consciousness would feel nothing, and yet here is a “clone” of my brain, who acts just like me, standing next to me. So consciousness must not be about the atoms and neurons - it’s something else. My brain clone would have its own consciousness. Same as if we can ever get to the “singularity” and put people’s brains into computers. It’s my brain structure, but I’m not in a computer. I’m still in my body, right?

I’m sad that I don’t think I’ll live long enough to find out these answers.

For some reason, transhumanists think that if we could copy our brains and upload them to a computer that that copy would still be you-you, not a copy of you. Like, to the point where if I killed meat-you, you would happily consider yourself still alive. I never understood why the hell they think that’s the case.

Computer-you would happily consider itself alive but would have to work its feelings on meat-you out in therapy.

My statement was more to point out that it’s not necessarily given that humans are intelligent.

Also, I’d point out that it’s not actually through inductive reasoning that you arrive at the conclusion that humans are intelligent.

It’s more that you have an assumption that only through intelligence can one perform actions that are as complex as those of a human.

Indeed, it’s essentially defining an intelligent entity to BE one that is capable of human-like behavior. That’s the core basis of Alan Turing’s test.

It is through inductive reasoning, because all your assumptions about the world are based on the consistency of apparently similar things being actually similar. For your entire life you have been using inductive reasoning to bet that rocks will fall down when you drop them and that crows won’t be colored pink and so on. So when you see all these people around you who are evidently similar to yourself and who claim to be conscious like you yourself, the natural supposition based on your experience of the world is that they are in fact conscious, because in almost all previous cases of common-sense reasoning about the world these patterns of similarity have been borne out. Without relying almost exclusively on this kind of reasoning you would be intellectually paralyzed and unable to do almost anything in the world.

I’m with you. I think that if (when) we achieve this, the “copy” would basically be like a clone. There would now be two sentient beings - not one. And when the original you dies, you would really cease to exist. The other you would be plugging along, but the original you would have no knowledge of the computer you. The original you’s consciousness would just enter the darkness of death.

There are senses of “consciousness” that don’t necessarily have any “magic” to them. For example, someone’s drunk and you ask, “How many fingers am I holding up?” That’s all in the public arena, so to speak, with the criteria (for whether consciousness is “there” or not) being publicly accessible and shared.

Trouble comes when we think that consciousness in that sense necessitates having something “inside” that “is conscious”. IOW, we speak normally without any problems about a human being or animal being conscious of the world. It’s only when we start reflecting on self-consciousness that we seem to get into the weird philosophical tangle.

The thing is, the neural networks pretty much do explain, without much remainder (all that’s left is loose ends and detail, really), how, say, an animal or person is able to navigate the world, move its body, etc. The brain pulls the strings, basically; it has the machinery that avoids tigers, that makes the jaw, throat, and tongue work in unison to shape sounds in speech, etc. But somehow that doesn’t seem to be enough, especially in our case: it seems to us that there’s this private phenomenal “show” that only we have, the having of which doesn’t seem to be accounted for by the neural networks.

As an amateur of philosophy who’s been around the houses many times with these sorts of questions, my favourite theory of consciousness is Riccardo Manzotti’s: the Spread Mind hypothesis, or Process Externalism in posh philosophy-speak. The idea is that we’ve been looking at the problem of consciousness through the wrong end of the telescope, so to speak, and that we get a pretty elegant solution if we let go of the idea that consciousness or the mind is something confined to the brain. The brain is necessary, but not sufficient, and the actual external objects are in a very real sense identical with our consciousness of them (IOW, there’s no such thing as “representation” living exclusively in our heads, only presentation that’s a causal, lived-through interaction between us and the physical processes we encounter).

This is neat because it gives a potential physicalist explanation of the insights of the “non-dual” mysticisms of the East (Zen, Dzogchen, Daoism, etc.), which also see subject and object as two poles of one thing, analogous to the poles of a magnet. Now, many Eastern-flavoured non-dual systems think of this one thing as “Consciousness” (with a big ‘C’ as it were), IOW they have a flavour that’s pretty close to Western Idealism. But Process Externalism offers a way of keeping one’s mystical-insight cake and eating it along with plain, ordinary, scientific materialism (albeit a materialism that has shifted from a sense of objects as out there, looking pretty much like they do when we experience them even when we’re not experiencing them, to objects as processes that look exactly like our experience of them when we interact with them).

How do you know, without a definition of sentience?

As pointed out by Miramon, most people start by defining themselves as sentient, and then assuming that anything that sufficiently resembles or behaves like them must also be sentient. But there are several problems with that approach.

First, something that resembles or behaves like you is not truly the same as you in all respects, so you can never be sure that another person is sentient.

Second, you can’t tell whether something that sort of resembles and behaves like you (e.g. a horse) is sufficiently similar to be considered sentient. There is no clear dividing line.

Finally, if you refuse to consider the possibility that something completely different from you is equally sentient (e.g. an amoeba) then “sentient” can basically be defined as “human-like”.

Intelligence is not the same thing as sentience. Intelligence refers to general problem-solving ability. Sentience is harder to define, but generally refers to self-awareness, consciousness, or some other internal state that is distinct from behavior.

Turing attempted to define “intelligent” behavior by a machine, but that doesn’t necessarily have anything to do with sentience or consciousness.

“I think! I think I am. Therefore I am! I think?”

“Of course you are my bright little star,
I’ve miles
And miles
Of files
Pretty files of your forefather’s fruit
and now to suit our
great computer,
You’re magnetic ink.”

“I’m more than that, I know I am, at least, I think I must be.”

“There you go man, keep as cool as you can.
Face piles
And piles
Of trials
With smiles.
It riles them to believe
that you perceive
the web they weave
And keep on thinking free.”