
The mechanism of self-awareness


KipIngram


Maybe it's possible and I just didn't understand - but I thought the actual results were statistical in nature.

 

Yes. But the correlations found do not agree with the premise that local causes exist. That means that if you want to see consciousness as a quantum phenomenon, then consciousness is not just local. Do you want to go that far? (Some people do: by chance I saw an episode of 'Through the Wormhole' yesterday, called 'Is There Life after Death'. Stuart Hameroff defends a theory of that kind. Maybe something for you?)

 

But emergent consciousness theory proposes that something of an altogether different nature somehow arises from theories the foundations of which have nothing to do with that phenomenon. Consciousness is not "the stuff of physics" in any way. So that proposal is much more of a stretch and carries a higher burden of proof in my mind.

 

Hmmm. But we have other examples of emergence. E.g. electrons, protons and neutrons have no colour. However, their composites (atoms and molecules) do. Electrons, protons and neutrons do not evolve, but organisms built from them do. Electrons, protons and neutrons do not plan for the future; but we, built from them, do. Can you give me a reason why that would be impossible, except 'I cannot imagine'?

I believe I have free will because I sense it, but I actually don't believe that's as strong a statement as saying I believe I have awareness because I sense it. You may be right and free will may be an illusion. But awareness is undeniable, because if I didn't have it I couldn't "be aware of feeling it." It's a self-proving phenomenon as far as existence goes.

 

Well, I am not saying free will is an illusion. Only a certain definition of it, namely as uncaused will, makes no sense. And I don't think you really sense that. What you sense is that you do not know where your thoughts, feelings, motivations, etc. arise. But that does not mean that they originate from some non-physical domain. Hofstadter would say: you do not have access to the level where your brain 'calculates': you are not aware of your neurons firing.

What I think you do sense is what I said before: that I am able to act according to my wishes and beliefs, and that makes free will. Creativity is the unexpected popping up of new ideas. But that does not mean that they do not have a causal prelude at levels to which your mind has no access.

If it turns out it is emergent, and we prove that, we've learned something about emergence. If it turns out to be fundamental, and we prove that, we've learned something even more important. So I am not going to let emergence theory have an easy ride of it - I'm going to kick the tires as hard as I can until it either works or doesn't. Meanwhile I'm basing my current opinion (guess) on what I said earlier: it's easier for me to believe that something altogether new exists than to believe something new and completely different can pop out of an existing, almost fully-formed theory. I'll believe the latter if it's shown, but not before.

 

I do not think this will ever be shown. We already know that consciousness only arises in animals with brains of a certain complexity. I also think that consciousness exists in different gradations: it is not that insects are not aware at all, and only primates, or even just humans, have consciousness. But this means that your 'conscious atoms' can only express themselves as conscious when they are organised in certain complex structures. Once we have discovered exactly what kind of structures these are, the need for postulating 'conscious atoms' will drop away, just as the angels pushing celestial bodies dropped away with celestial mechanics. An "I cannot imagine how these celestial bodies are moving without angels pushing them" will not do.

Edited by Eise

Eise: I'm sorry - I don't have time this morning to do the quoting dance, but I think you'll be able to tell what goes with what in my reply.

 

Yes, you're right - the Bell experiments don't say decisively that there's no interaction between the entangled particles; just that it's not local. Actually I lean more toward the interpretation that the measured quantities have no reality before they're measured, but I don't find that at all incompatible with the notion of consciousness as fundamental. I'm not really prepared to take a position on precisely how conscious choice might be "implemented." By that I mean to say that I don't have a theory for how the precise selection of which quantum events a particular consciousness might be able to influence is made. I can propose an experiment, but I'm not sure it's an experiment that would be easy to do in a humane way. First we'd have to find, in the brain of a test subject, the region we thought housed the quantum events in question, and then we'd create conditions that would produce repeatable behavior (say, offer food that the subject had to reach for), and then we'd look for deviations from the usual statistical distribution of quantum outcomes. It might be hard to get to the bottom of the situation, but at some level, if this theory is right, we ought to be able to find a "starting point" where those stats deviated, and that deviation would avalanche into the macroscopic response. One possible problem there, though, is that our instruments would be entering into the subject's ability to physically realize its free will in a highly invasive way - just by trying to watch we might "break things."
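Just to sketch what the statistical side of such a test might look like (my own illustration, with entirely made-up probabilities and counts, not a protocol from this thread), you could compare observed outcome frequencies against the expected Born-rule distribution with a chi-square test:

```python
# Hypothetical sketch: test whether observed quantum outcomes deviate from the
# expected Born-rule statistics. All numbers here are invented for illustration.
from scipy.stats import chisquare

expected_probs = [0.5, 0.5]        # assumed probabilities for a two-outcome event
observed_counts = [5230, 4770]     # made-up counts from repeated trials
total = sum(observed_counts)
expected_counts = [p * total for p in expected_probs]

stat, p_value = chisquare(observed_counts, f_exp=expected_counts)
print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")
# A persistently small p-value, time-locked to the subject's choices, is the kind
# of "deviation from the usual statistics" described above.
```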

 

Color arises directly from electromagnetic effects, so that's perfectly natural. Actually "color" is a human perception, so that's not entirely true. But the behavior that produces light of the frequency we call some color or another is just a direct outcome of the same theory that describes how the individual particles behave.

 

No, I can't give a reason beyond "I can't imagine." But on the other hand, a few hundred years ago people wouldn't have been able to imagine how electrons and protons would give rise to color, whereas now it makes perfect sense to us. That's really the nut of my whole issue - I want to see a better connection before I completely accept emergence as the explanation here. I'm not rejecting it wholesale - I accept it as a contender. But I don't want to toss out the other possibilities without something a bit more solid.

 

By the way, I could ask you the same thing. We've never done quantum experiments in living brains; do you really feel you can reject as entirely impossible the idea that consciousness exists independent of physical reality, and achieves free will in the manner I've suggested? I am absolutely not proposing that consciousness could cause a quantum outcome that was not an eigenvalue of the wave function. I think both of these candidate theories are possibilities. If I were insistent that free will was NOT an illusion - that it was totally real and certain - then I'd feel I had to reject emergence based on that. But as I've noted I'm not nearly as sure of that as I am of the existence of awareness.

 

I really do think our free will discussion here isn't terribly productive. We do have different working definitions of free will, so even using the term is difficult. I am definitely referring to "uncaused input," by which I mean physically uncaused (obviously it would be "consciousness caused") and have suggested a route by which it could enter the physical domain. But I've also proposed a reasonable scenario where my sort of "free will" wouldn't really be having any effect on physical reality - it would only be affecting what aspect of physical reality the awareness was aware of, and that makes it a much more murky concept. So I really don't want to try to put a "stake in the ground" on free will. I think both of these are perfectly reasonable theories:

 

  • The Copenhagen Interpretation is essentially on the right track (i.e., no Many Worlds) and consciousness exercises free will by selecting certain allowed quantum outcomes.
  • The Many Worlds Interpretation is essentially on the right track, and consciousness "perceives" free will by selecting the path awareness takes through the worlds (but has zero physical effect on the multiverse).

The latter of those two proposals adheres to your perspective completely as far as free will goes. No physical effect - zero, zip.

 

I can't quite get my head around how you're choosing which possibilities to accept as possible and which ones not to. Without a working theory, it's just as "magical" for awareness to arise from emergence as it is for it to be fundamental. In the one case you have an absolutely unexplained "effect," and in the other you have an absolutely unknown "entity." We saw that Uranus didn't move right, so we postulated Neptune, and we found it. But what if Neptune had been invisible somehow? Yes, I know that's a stretch, but I'm drawing an analogy here. Let's say we just never could confirm its existence, except for the fact that Uranus moved funny. The equivalent of your position would be "there is no Neptune - we just don't have the right theory yet." Of course we could see Neptune and all was well. But you're taking the position that we are so sure about what does and what doesn't exist that we can deny the existence of consciousness as a fundamental entity without further thought - that our theories MUST be extensible in some fashion to explain anything that would be attributable to "consciousness." On the other hand, I'm saying "Maybe you're right, but maybe there's a Neptune."

 

goldglow: Rational mental processing is vital for survival, but early on in this thread someone noted that "awareness" is not. You could envision a robot designed to function entirely as a human or other organism does. As long as it made the right responses and so forth, it would survive as well as the real organism. Having "awareness" of those things happening, in the sense I mean it (i.e., "feeling it," as opposed to "registering and responding") isn't really necessary; it's an "add-on" of some sort.


 

...... Rational mental processing is vital for survival, but early on in this thread someone noted that "awareness" is not. You could envision a robot designed to function entirely as a human or other organism does. As long as it made the right responses and so forth, it would survive as well as the real organism. Having "awareness" of those things happening, in the sense I mean it (i.e., "feeling it," as opposed to "registering and responding") isn't really necessary; it's an "add-on" of some sort.

Thanks, K. I can't argue with any of that, but "..... not by bread alone....", and all that, has importance too, I think (for human beings anyway), now that we are more than just hunter/gatherers. Don't think I can add any more to this thread. Thanks for putting up with me. I'll watch from the sidelines from now on.


Ok, so after thinking about this for a while, the Neptune example isn't quite as good as I thought it was. In the case of Neptune it was proposed that there was another entity "more or less" like all of the other planets - just in a different place and with a different mass and velocity and so on. Something "new," but not something "different." So the proposal is rather more compelling than in the case of consciousness.

 

I do understand that it's wise in science to resist the urge to introduce new fundamental things to explain observations. If we do that too freely, we wind up not "pushing the theory" as hard as we should try to push it, and might not move forward as quickly. Feynman talks about this in the 1964 Messenger Lectures (video 7). He took a strong position, saying we should always squeeze our existing theories as hard as we can before adopting new fundamental entities. Ok, that's fine - and I agree. But he made it very clear that this always involves guessing, and that when the dust settles you might wind up having to adopt the new thing anyway.

 

I'm ok with that perspective. I absolutely don't think that we should say "Oh, awareness is just different - we're never going to explain it with mainstream theory, so let's not even try." I think we should work these emergence theories as hard as we can; one of them might come through. But I agree with Feynman - it's a guess.

Edited by KipIngram

It's easy to lose perspective with such questions. Think of it this way: if animals weren't aware of their surroundings they couldn't eat or mate. Since being aware is a necessity for being alive at all, it is merely a given. From this point it's easy to confuse awareness with the things of which we are aware.

 

I believe "consciousness" is largely about communication. In the individual, this awareness is the one-way communication we have with our ganglia and nerve centers. Of course these nerve clusters are "conscious" as well, since they need to be to do their job. We simply aren't aware of this consciousness because the medulla screens it from us (our consciousness). But there's a party going on in there in binary, with everything awaiting your next conscious decision. Binary is the language of the individual. This is shown by the fact that when we make a move, the move actually occurs a split second before the decision.

 

Individuals of a species must also communicate with each other and even across species. This is accomplished with species-specific languages that are in harmony with one another. These languages tend to be quite simple and involve numerous vectors. These languages are largely opaque to humans because we have used a different format for communication for 4,000 years now. But species must have language to find mates and to avoid predation or other dangers.

 

Of course this doesn't really "explain" the nature of "consciousness" in terms anyone might understand, but I believe it shows the nature of consciousness and it suggests why most languages can be so simple as to match the nature of the consciousness. It suggests means to study it, and that consciousness is largely a function of complexity. This will leave you wanting, because "how consciousness is a function of complexity" is the nature of your question to begin with. But this complexity simply involves communication within and between organisms. Without this communication there is no consciousness nor need for it. A baby is born and immediately seeks to communicate. It lacks the fine motor skills necessary for most things the parents take for granted. These skills are learned, just as are the finer points of language of any sort.


 

But my point is that EPR experiments show that there is no room for hidden local causes. QM has only randomness on offer, and my will is simply not just randomness.

 

QM is the closest thing we have to describing the fundamental reality of the universe, but we are so far removed from QM, in terms of scale, that the randomness at that scale seems predictable at our scale; a useful analogy would be an accumulator bet: the more bets it contains, the lower the chance of winning.
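As a rough illustration of that scale argument (my own sketch, not from the post), averaging over ever more random microscopic events gives an increasingly predictable macroscopic value:

```python
# Rough sketch: the mean of many independent random events becomes nearly
# deterministic as the number of events grows (law of large numbers).
import random

for n in (10, 10_000, 1_000_000):
    mean = sum(random.random() for _ in range(n)) / n
    print(f"n = {n:>9,}: mean = {mean:.5f}  (expected 0.5)")
```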

 

Quantum tunneling is perfectly possible at the QM scale but imagine the odds of that happening at our scale.

A successful termite colony displays intelligence but doesn't make a choice.


Hi KipIngram,

 

I think I have good grounds to state that QM processes are not relevant for consciousness. The number of QM events involved in a simple signal process in the neurons and between the synapses is too big, and temperature is too high for individual QM states to play a role. See e.g. Max Tegmark,

The importance of quantum decoherence in brain processes:

 

Based on a calculation of neural decoherence rates, we argue that the degrees of freedom of the human brain that relate to cognitive processes should be thought of as a classical rather than quantum system, i.e., that there is nothing fundamentally wrong with the current classical approach to neural network simulations. We find that the decoherence timescales ~10^{-13}-10^{-20} seconds are typically much shorter than the relevant dynamical timescales (~0.001-0.1 seconds), both for regular neuron firing and for kink-like polarization excitations in microtubules. This conclusion disagrees with suggestions by Penrose and others that the brain acts as a quantum computer, and that quantum coherence is related to consciousness in a fundamental way.
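To make the gap in those numbers concrete (a back-of-the-envelope addition of mine, using only the timescales quoted in the abstract):

```python
# Back-of-the-envelope comparison of the timescales quoted above.
decoherence_slow, decoherence_fast = 1e-13, 1e-20   # seconds (from the abstract)
dynamics_fast, dynamics_slow = 1e-3, 1e-1           # seconds (from the abstract)

# Even in the most favourable pairing, coherence is lost about ten orders of
# magnitude faster than the relevant neural dynamics unfold.
print(f"best case ratio:  {dynamics_fast / decoherence_slow:.0e}")   # 1e+10
print(f"worst case ratio: {dynamics_slow / decoherence_fast:.0e}")   # 1e+19
```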

 

 

Of course I cannot be 100% sure that QM does not play a role, but it just does not make sense to me. It shifts the explanation of awareness, creativity and free will to a domain where, according to QM, we cannot observe anything. The wave function itself is not an observable. The only thing we can observe in QM is the single event, not what happens immediately before. There is no way that a 'willed' QM event can be distinguished from the 'default quantum noise'. So how could the brain do that?

 

Also, I think that free will need not be explained by some new kind of process in the brain. As free will for me means 'being able to do what you want', no reference to special mechanisms in the brain is necessary. To distinguish free actions from coerced actions we only have to know what a person wants, and what he in fact does.

 

Same for creativity: the 'classical chaos' in the brain is already large enough that randomness arising from neural mechanisms is more than enough to explain why new ideas can arise. QM randomness would also not be distinguishable from the classical chaos in the brain.

 

So for me it is much too early to throw away the idea that the mind is an emergent phenomenon of the brain. Introducing consciousness as physically fundamental is like introducing God because we do not understand how the universe, and we in it, have arisen.

Edited by Eise

I agree with your last statement - I wouldn't have us throw away the idea. We should work on that path, and see where it leads. If it's right, I think we'll eventually figure it out. I'm just not prepared to accept it as a certainty, without a better understanding of the mechanics involved. I also concur with your misgivings about the other proposal - I suspect that if consciousness is fundamental it likely does "reside" in a place that science won't ever be able to "get at." But if you think about it, that's not entirely hard to understand. Under my definition it is, in fact, fundamentally unpredictable. It's hard to see how the methods of science can study something that has no predictability. That doesn't make it ipso facto wrong, though - it just makes it a situation where science isn't very useful.

 

So you have one proposal (unproven but possible) that science can work on, and another proposal (unproven, but possible) that it can't. How science should respond seems pretty clear to me: work on what it can work on, until such time as it becomes evident (not sure what that would entail) that it's a fruitless path.

 

I'm not particularly bothered by the notion that reality might contain features which are not susceptible to the methods of science. I don't see that there are any guarantees that isn't the case. That in no way makes science useless - clearly it's useful for a heck of a lot.

 

I don't really have any more to say about free will. I think your definition of free will clearly represents something valid and present in reality. My definition, which goes further, may or may not be present in a physical sense. But awareness is something that more or less "proves itself" - the very fact that you think you're aware means that you're at least aware of being aware. If free will is an illusion, then awareness is the thing that's experiencing the illusion. I consider it to be a much more "rigorous" question than free will (as shown by the fact that we couldn't even really agree on what free will is).

 

So we have this pile of neurons, or this pile of transistors, that are behaving in some sort of algorithmic fashion. We can use that model to explain how every behavior arises. Even though we may not be far enough along to do so explicitly in all cases, I don't feel any doubt about the "robot aspects" of all this. But I still maintain that our existing theories don't provide any insight whatsoever into how that arrangement of neurons / transistors can come to possess an explicit awareness of itself in the way I'm talking about. For example, say an organism is pursuing some goal. How does the "optimization process" driving that (something that makes total sense in a robot) become desire / yearning (something that doesn't make sense in a robot)? We can pursue goals unconsciously, in the same manner that a robot would. But in order to yearn for something we must be aware.


.... But in order to yearn for something we must be aware.

The simplest form of self-awareness is to have a camera permanently pointing at yourself that you can see from that position as well as from your actual position. We do this naturally by extrapolating after enough times in front of the mirror or other reflectors while growing up; that mirror, eventually, is actually our memory. From the actual and virtual perspectives that a person has, a sense of 'I' can emerge.... It's very likely a bit more complicated, with more steps involved, but it seems sensible to me. A person with something like Alzheimer's will eventually forget who they are and where they are, so I very much doubt that awareness is a metaphysical thing.

Edited by StringJunky

The simplest form of self-awareness is to have a camera permanently pointing at yourself that you can see from that position as well as from your actual position. We do this naturally by extrapolating after enough times in front of the mirror or other reflectors while growing up; that mirror, eventually, is actually our memory. From the actual and virtual perspectives that a person has, a sense of 'I' can emerge.... It's very likely a bit more complicated, with more steps involved, but it seems sensible to me. A person with something like Alzheimer's will eventually forget who they are and where they are, so I very much doubt that awareness is a metaphysical thing.

 

Haven't read the whole thread, but lesion (and other) studies have shown that certain brain areas are responsible for various aspects of self-perception (including in a spatial and temporal sense). This includes simple things like recognizing parts of your own body (or failure to do so), but also includes out-of-body perceptions, where you do not feel in sync with your physical body. While it is not clear what self-awareness is (it could very well be an illusion that emerges from the various signals the brain gets and the processing it does), there are associated mechanisms that can be traced. Linking the mechanism to outcome and perception is arguably the big challenge.


Under my definition it is, in fact, fundamentally unpredictable. It's hard to see how the methods of science can study something that has no predictability. That doesn't make it ipso facto wrong, though - it just makes it a situation where science isn't very useful.

 

Well, I think QM shows how we can still do science, even if we have lost perfect predictability: it predicts chance distributions. But as long as some event has a cause, we can still do science. Or it has no cause, and then we are left with randomness. I do not see how randomness can explain mental phenomena. I also do not understand why you mention 'unpredictability' here. Should that be an element of awareness? At most it is an element of creativity. But if I consciously avoid colliding with another car, I think I am fully aware, and very predictable. Somehow you intermingle creativity, awareness and free will. Sure, they are somehow interdependent, but they are not the same. I think, as you do, that awareness is the most difficult problem to understand. If we do understand it, I think the explanation of creativity follows directly. The 'problem' of free will I consider as being solved already. It is a pseudo-problem, caused by a wrong understanding of what free will really is.

 

I'm not particularly bothered by the notion that reality might contain features which are not susceptible to the methods of science. I don't see that there are any guarantees that isn't the case. That in no way makes science useless - clearly it's useful for a heck of a lot.

 

If the mind somehow causes our behaviour, then it is accessible to science.

 

I don't really have any more to say about free will. I think your definition of free will clearly represents something valid and present in reality. My definition, which goes further, may or may not be present in a physical sense.

 

But I think your definition just goes too far. If one really observes, in oneself, what free will is, what is empirically given, we do not have that much:

 

On one side, I do not know where my exact motivations come from. In some situations, where my choice is between equal alternatives, it is often impossible to say why I chose the alternative I did. On the other side, I know perfectly well that what will happen next depends on my choice. My decisions matter. And if nobody forces me to do something I normally would never do, it was a free choice.

 

Everything else people associate with free will, like consciousness always preceding actions, or that we 'could have done otherwise' in a categorical sense (which would therefore contradict determinism), is metaphysical humbug that doesn't follow from my honest experience.

 

I consider it to be a much more "rigorous" question than free will (as shown by the fact that we couldn't even really agree on what free will is).

 

I agree. But maybe the problem is not science, but the concepts we are using to think about consciousness. And that is why the problem of consciousness is not just a scientific problem, but also a philosophical problem. Daniel Dennett, who had the courage to give his book the title 'Consciousness Explained', is a philosopher. But one who uses all of cognitive science to show how his theory works. I think you should read the book. Due to the discussions we have here, I am now rereading it, and really, it is worth the time. But the most difficult thing in the book is to get rid of some (cherished?) illusions.

 

So we have this pile of neurons, or this pile of transistors, that are behaving in some sort of algorithmic fashion. We can use that model to explain how every behavior arises. Even though we may not be far enough along to do so explicitly in all cases, I don't feel any doubt about the "robot aspects" of all this. But I still maintain that our existing theories don't provide any insight whatsoever into how that arrangement of neurons / transistors can come to possess an explicit awareness of itself in the way I'm talking about. For example, say an organism is pursuing some goal. How does the "optimization process" driving that (something that makes total sense in a robot) become desire / yearning (something that doesn't make sense in a robot)? We can pursue goals unconsciously, in the same manner that a robot would. But in order to yearn for something we must be aware.

 

I think that if we have explained how every behaviour arises, we will not feel the need for a separate theory about consciousness. I think that when all the 'easy problems' are solved, nobody will still see a 'hard problem'.


 

Haven't read the whole thread, but lesion (and other) studies have shown that certain brain areas are responsible for various aspects of self-perception (including in a spatial and temporal sense). This includes simple things like recognizing parts of your own body (or failure to do so), but also includes out-of-body perceptions, where you do not feel in sync with your physical body. While it is not clear what self-awareness is (it could very well be an illusion that emerges from the various signals the brain gets and the processing it does), there are associated mechanisms that can be traced. Linking the mechanism to outcome and perception is arguably the big challenge.

Right.


That's the way I look at it: the existence of emergence is self-evident, but an analytical explanation for complex phenomena, like those of a brain, is, as yet, beyond reach.

 

I very much doubt that awareness is a metaphysical thing.

agree


But the most difficult thing in the book is to get rid of some (cherished?) illusions.

 

This is the fundamental question of this thread: which of our cherished opinions is true?

 

Mine is that this illusion exists; my evidence is that we are so easily manipulated, in so many ways. Your turn.


I'm just nagged by the notion that "awareness is an illusion" is a catch-22; if we're aware, then it's not an illusion, and if we're not aware, we can't be "aware" of any illusions. The very fact that we think it needs explaining means that something is there. Also, re: the studies that have been done mapping various cause and effect relationships in the brain, I don't doubt any of those. Those are solid experiments that measure "measurable things." But I haven't yet gotten to the point of associating my awareness with any of those things (voltages in the brain are still just voltages, etc. - I haven't been able to convince myself that "awareness" can plausibly arise from those things we can measure). So while the experiments are perfectly sound, I don't know that they relate to what I'm talking about.

 

This is all rather frustrating, because I can't even really find the right words to specify precisely what I'm talking about. I've just assumed that you guys know, because you have the same thing. Awareness, "mental spark," ego, etc. etc. etc. The part of us that feels triumph when we win and frustration when we lose and all that jazz. We don't even know how to quantify that, much less explain its origin.

 

There seems to be fair consensus among at least some of us (us specifically - here in this conversation) that animals such as dogs and cats have awareness. I imagine we'd also have consensus that bacteria don't. So as we navigate the spectrum in between, where does it appear? I'm sure we could narrow the range intuitively, but we don't know how to make a measurement that tells us whether it's there or not, or point to a specific brain structure that it's associated with, much less explain how that brain structure triggers it.

 

When we can do those things, that's when I'll be on board, and I might be on board sooner if it at least looks like we're closing in on it. Stating a "how" - a mechanism - is vital, because it's certainly possible that consciousness could be fundamental and there still be brain regions whose activity correlates with it.


I'm just nagged by the notion that "awareness is an illusion" is a catch-22; if we're aware, then it's not an illusion, and if we're not aware, we can't be "aware" of any illusions. ... we don't know how to make a measurement that tells us whether it's there or not, or point to a specific brain structure that it's associated with, much less explain how that brain structure triggers it.

 

Of course we are conscious. Consciousness in itself is not an illusion. But many aspects of it can be, like unconditional free will, a continuous stream of consciousness running in exactly the same time as 'world time', or the ego.

 

This is all rather frustrating, because I can't even really find the right words to specify precisely what I'm talking about.

 

Did you look at my link on the 'hard problem'? I think we know what you are talking about.

 

There seems to be fair consensus among at least some of us (us specifically - here in this conversation) that animals such as dogs and cats have awareness. I imagine we'd also have consensus that bacteria don't. So as we navigate the spectrum in between, where does it appear?

 

I think there is no sharp threshold between animals that are conscious and those that are not. Evolutionarily, I think consciousness goes hand in hand with the capability to steer behaviour by observing the environment. A bee that informs her colleagues that there is a field of clover 50 meters from the beehive, to the northeast, might have consciousness, even if it is very limited.

 

Let me turn around your 'I cannot imagine' argument: I cannot imagine how an animal can picture its environment, and evaluate possible actions and their outcomes against its own interests, without being conscious. There is a level in the brain at which it performs symbolic manipulations, with representations of itself, its environment and its interests. But we know that symbolic manipulations can exist in rigid, logical, physical hardware. So consciousness might be the necessary consequence of being such a system.


Ok, so tell me more about the ego being an illusion. How exactly do we define that word? It does seem, to me, to capture the "I am" essence that I'm trying to get at when I say "awareness." We're typing a lot of words at each other, but let's see if we can focus in this piece. I could define "illusion" in a way that would work for a computer. For example, one of my kids was really psyched over the "face recognition unlock" on her phone. I showed her how insecure it was by pulling a picture of her up on my phone and using it to unlock her phone. So you could say that her phone was suffering an illusion - it thought it was looking at her face when it in fact was not.
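Just to make that notion of a machine "illusion" concrete (a toy sketch of my own, not how any real phone's face unlock works), imagine a naive matcher that only compares pixel data; a good photograph of the enrolled face fools it:

```python
# Toy sketch of a naive "face unlock": compare a new capture against the enrolled
# image by mean pixel difference. Not how real systems work; it just shows how a
# matcher can "think" it is looking at the owner when it is looking at a photo.
def unlocks(enrolled, capture, threshold=10.0):
    diff = sum(abs(a - b) for a, b in zip(enrolled, capture)) / len(enrolled)
    return diff < threshold

enrolled_face  = [120, 118, 130, 90, 85]   # made-up grayscale features of the owner
photo_of_owner = [121, 117, 131, 92, 84]   # a photo of the owner looks nearly identical
stranger_face  = [200, 40, 60, 180, 30]

print(unlocks(enrolled_face, photo_of_owner))  # True  - the phone's "illusion"
print(unlocks(enrolled_face, stranger_face))   # False
```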

 

Of course we are subject to such illusions too, which I'll refer to as "sensory illusions." It's entirely obvious how those can work in a fully mechanistic way. But what I have trouble with is seeing how we could have an "ego illusion" if we don't have an ego to start with. That's the very point I've been trying to get at - our whole ability to have a "more than data" notion of what's going on in our world. We have a "higher level sense" of our own existence than I can explain via an algorithm. Algorithms just shuffle data around without having any notion of what that data represents. Our ability to have that notion - to "feel" things - is what I'm referring to as awareness or ego.

 

Anyway, back to you. Nothing you've said so far has caused me to decide I'm misguided on this, but I feel that I haven't swayed you either, and I don't think either of us is "just being stubborn"; we're both just failing to bring our points into focus for one another. :-(


Ok, so based on this article:

 

http://www.bbc.com/earth/story/20170215-the-strange-link-between-the-human-mind-and-quantum-physics

 

I'll say that the "awareness/ego things" I'm talking about are "qualia." Whereas I see very well how we could program a computer to analyze sensor inputs and, say, set variables that we've designated to correspond to various qualia (for example, "there's a lot of red in this image"), I don't see how that corresponds to the way we experience qualia. That is what I am looking for a theory for.
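To make the contrast concrete (a deliberately trivial sketch of my own), here is the kind of program I have in mind: it "registers" that there is a lot of red in an image and sets a variable accordingly, but nothing in it corresponds to experiencing redness:

```python
# Trivial sketch: a program that registers "a lot of red" without any plausible
# claim to experiencing a red quale. Pixels are (R, G, B) tuples.
def mostly_red(pixels, threshold=0.5):
    red = sum(1 for r, g, b in pixels if r > 150 and r > g + 50 and r > b + 50)
    return red / len(pixels) > threshold

image = [(200, 30, 40), (180, 20, 20), (90, 90, 90), (210, 10, 15)]
there_is_a_lot_of_red = mostly_red(image)   # the "designated variable" for the quale
print(there_is_a_lot_of_red)                # True - but nothing here "feels" red
```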


Ok, so based on this article:

 

http://www.bbc.com/earth/story/20170215-the-strange-link-between-the-human-mind-and-quantum-physics

 

I'll say that the "awareness/ego things" I'm talking about are "qualia." Whereas I see very well how we could program a computer to analyze sensor inputs and, say, set variables that we've designated to correspond to various qualia (for example, "there's a lot of red in this image"), I don't see how that corresponds to the way we experience qualia. That is what I am looking for a theory for.

It'll be tied in with memory.


Ok, so tell me more about the ego being an illusion. How exactly do we define that word? It does seem, to me, to capture the "I am" essence that I'm trying to get at when I say "awareness."

 

There are different meanings of the word 'ego', of course. What I mean is the feeling that something of 'me' is the same through all the years, from my earliest childhood till now. It is the basis of thinking that this is a 'thing': a mind which is the unchanging subject of all experience and activity, and which, according to many people, survives death. This means it has an existence of its own, independent of the brain. That is the illusion of the ego: there is no such thing. That does not mean that there is no consciousness.

 

"The light is on, but there is nobody at home".

 

Just to mention: this is also the classical Buddhist view: the soul has no independent existence of its own.

 

Our ability to have that notion - to "feel" things - is what I'm referring to as awareness or ego.

 

Well, as I described above, I see these as different concepts. We assign consciousness to the ego, 'that which is conscious', but that is just a (strong) habit. We are aware of the things we see and hear, of our thoughts and feelings, etc. The ego is, so to speak, a narrative of the brain, without any independent existence.

 

Anyway, back to you. Nothing you've said so far has caused me to decide I'm misguided on this, but I feel that I haven't swayed you either, and I don't think either of us is "just being stubborn"; we're both just failing to bring our points into focus for one another. :-(

 

I don't know, you keep saying that. It's ok if my arguments do not convince you, but I hope at least that you understand them.

 

But again: did you read about David Chalmers, with his 'easy problems' and 'the hard problem'? I recently saw him in 'Through the Wormhole' and he argued for a view that seems very similar to yours: that consciousness is somehow a fundamental attribute of existence.

 

Chalmers argues that consciousness is a fundamental property ontologically autonomous of any known (or even possible) physical properties, and that there may be lawlike rules which he terms "psychophysical laws" that determine which physical systems are associated with which types of qualia. He further speculates that all information-bearing systems may be conscious, leading him to entertain the possibility of conscious thermostats and a qualified panpsychism he calls panprotopsychism.

 

I'll say that the "awareness/ego things" I'm talking about are "qualia." Whereas I see very well how we could program a computer to analyze sensor inputs and, say, set variables that we've designated to correspond to various qualia (for example, "there's a lot of red in this image"), I don't see how that corresponds to the way we experience qualia. That is what I am looking for a theory for.

 

Yes, qualia (singular: quale) is the technical term. However, I think you use it wrongly when you say we experience qualia: qualia are the experience. We experience that something is red: that experience is the 'red quale'.

 

However, I think you get in trouble if you want to answer the question: what is the causal role of qualia? Would an objective observer see the difference when somebody is missing qualia? If they play no causal role in the universe, do they exist then?

 

Dennett heavily criticises the concept of qualia in Consciousness Explained; the chapter's title is 'Qualia Disqualified'. Above the text he has the following quote:

 

Thrown into a causal gap, a quale will simply fall through it.

 

This does not mean we do not experience the world around and in us. It means that it can be explained by brain processes.

Edited by Eise

So do you believe computers have qualia? How do qualia emerge from data structures and algorithms? You seem to agree with me that we have these qualia / experiences, but I still see no shred of a hard argument as to how they arise from an algorithmic process.


This is interesting. I'd read before about Penrose and Hameroff's ideas, but hadn't seen anything quite this detailed.

 

http://www.quantumconsciousness.org/sites/default/files/Quantum%20computation%20in%20brain%20microtubules%20-%20Hameroff.pdf

 

I like the general idea, but it always struck me as somewhat far out to be invoking gravity in the context of such a topic. Also, even if Penrose and Hameroff are entirely correct, it still just seems to provide a portal for quantum influences in the brain. I don't really see how a quantum superposition would have awareness any more than I see how a transistor computer would. Allegedly it's still just a superposition of physical states - if each individual state can't host awareness, I don't see how a superposition of them would suddenly be able to do so. So it still seems to call for something "extra."

Edited by KipIngram

So do you believe computers have qualia? How do qualia emerge from data structures and algorithms? You seem to agree with me that we have these qualia / experiences, but I still see no shred of a hard argument as to how they arise from an algorithmic process.

 

We have to be very precise here. As qualia play no causal role, they have no existence on their own. It is a mistake to see them as some additional entities to the process that runs on the brain. So I would say we also have no qualia. It is just another word for being conscious. I think that if an AI program reports inner states in the same way we do, she has consciousness. I see no reason why not (have you seen the movie 'Her'? It is fun, and something to think about. Or Ex Machina?)

 

This is interesting. I'd read before about Penrose and Hameroff's ideas, but hadn't seen anything quite this detailed.

 

I've read Penrose's Shadows of the Mind. I did not find it very convincing. He is misusing Gödel's incompleteness theorem (Hofstadter shows in GEB that this kind of argument is invalid). I even once had a small chat with Penrose, and he very honestly admitted that his theory is far from complete. In my own words: it is very speculative.

... but it always struck me as somewhat far out to be invoking gravity in the context of such a topic.

 

Exactly. I think we learn from physics that combining General Relativity and Quantum Mechanics leads to infinities, but that this problem only plays a role in extreme circumstances, like black holes and the Big Bang. So why we would need a quantum theory of gravity to understand consciousness is totally incomprehensible to me.

Also, even if Penrose and Hameroff are entirely correct, it still just seems to provide a portal for quantum influences in the brain. I don't really see how a quantum superposition would have awareness any more than I see how a transistor computer would. Allegedly it's still just a superposition of physical states - if each individual state can't host awareness, I don't see how a superposition of them would suddenly be able to do so. So it still seems to call for something "extra."

 

I think theories of consciousness based on QM, or on the possibilities left open by QM, can never provide a theory of consciousness, simply because in QM we 'cannot look behind the scenes'. What we observe are quantum events: e.g. the measurement of a particle. There is no way to look behind what we observe (an obvious tautology, but tautologies are by definition true...). We even know that there are no local hidden variables.

Edited by Eise
