
The mechanism of self-awareness


KipIngram


@KipIngram, being aware is essential to living. Life must be aware of its own needs in order to live. Life must be aware of its need to consume, breathe, move away from danger, and so on. Computers need electricity to function, yet my computer never turns its own power button on, because it is not aware; it isn't alive. I think you are adding a level of meaning to being aware which isn't real. Being aware does serve a purpose and does add an evolutionary advantage. The more aware of things life is, the more it can take care of itself.


Yes, excellent point. That's my main reason for thinking that there's something completely fundamental that we just don't have factored into our mainstream theories. And I also don't see how a "pattern" can feel aware, either. I don't really see how arranging the transistors (or nerve cells, viewed as mechanisms only) into some special pattern can suddenly make awareness "arise." You could write that pattern down as pencil marks in a book - the book wouldn't then be aware. Adding a person tasked with using some set of rules to erase marks and re-write them in a modified way wouldn't result in awareness either. That's exactly what a CPU does in a computer - it erases and rewrites storage per a set of rules. Each transistor is still a transistor, and each "mark" is just a voltage. No matter how complex the overall arrangement is - there's just nothing there that can explain this.
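
Just to make the "marks and rules" picture concrete, here's a toy sketch (purely my own illustration - the tape, states, and rule table are made up) of a machine that does nothing but erase and rewrite symbols per a fixed set of rules, which is all the book-plus-clerk, or the CPU, is really doing:

```python
# Toy illustration only: a minimal rule-driven rewriting machine.
# The "tape" could be pencil marks in a book, voltages in RAM, whatever.
# Every step is just: look up a rule, erase the mark, write a new one, move.

RULES = {
    # (state, symbol read): (symbol to write, head movement, next state)
    ("scan", 0): (1, +1, "scan"),   # flip a 0 to 1, move right
    ("scan", 1): (0, +1, "scan"),   # flip a 1 to 0, move right
}

def step(tape, head, state):
    """Apply one rewrite rule and return the updated tape, head, and state."""
    write, move, next_state = RULES[(state, tape[head])]
    tape[head] = write                # erase "the mark" and write a new one
    return tape, head + move, next_state

tape, head, state = [0, 1, 1, 0], 0, "scan"
while head < len(tape):
    tape, head, state = step(tape, head, state)
print(tape)   # [1, 0, 0, 1] - mechanical lookup and rewrite, nothing more
```

However elaborate the rule table gets, each step is still that same mechanical lookup-and-rewrite.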

 

Our theories are missing a piece.


And I also don't see how a "pattern" can feel aware, either.

 

It is not just a pattern, of course. It is a very complicated process, with many levels, where higher levels can have influence on the lower levels. Really, re-read GEB; you would probably get a lot out of it.

 

I don't really see how arranging the transistors (or nerve cells, viewed as mechanisms only) into some special pattern can suddenly make awareness "arise."

 

Why 'suddenly'? Wouldn't consciousness exist in gradations? And why are you looking just at the smallest components, and 'viewing them as mechanisms only'? That seems like the precise recipe for never understanding consciousness.

 

You could write that pattern down as pencil marks in a book - the book wouldn't then be aware. Adding a person tasked with using some set of rules to erase marks and re-write them in a modified way wouldn't result in awareness either. That's exactly what a CPU does in a computer - it erases and rewrites storage per a set of rules. Each transistor is still a transistor, and each "mark" is just a voltage.

 

This is basically the 'Chinese room' argument. See also the arguments against it.

You can read it online, as reprinted in 'The Mind's I', including a critique by Hofstadter.

 

No matter how complex the overall arrangement is - there's just nothing there that can explain this.

 

Our theories are missing a piece.

 

Maybe. Maybe not. Maybe we should just get used to the idea that some complicated processes are conscious processes, independently of how they are realised, in nerve cells or flip-flops.

 


I just forgot to mention it again:

 

Each transistor is still a transistor, and each "mark" is just a voltage.

 

Every nerve cell is still a nerve cell, and each signal just a chemically transported potential difference.

 

You should get rid of this kind of emotional picture if you want to understand consciousness; it is a barrier to seeing the processing character of consciousness.

 

Maybe you should try this to get rid of it: every time you think of AI in terms of 'it is just ...', build the same sentence with brain equivalents. And then do not forget that we are conscious, even if we are 'just ...'.



I think it's amazing that we, and all other life, emerge from these simple signals that act as an ensemble to create complex, thinking, feeling entities of varying abilities.


I suspect this issue will not be resolved. Some people still believe in a flat Earth, and I think some people will always believe only humans are sentient, no matter how accomplished AI becomes.

Yes, we humans are too impressed with ourselves. We think we are uniquely special, so special that it seems impossible for us to be merely part of the natural world. Perhaps that is an artifact of self-awareness? Perhaps an overly developed sense of self is vital, and it creates a center-of-the-universe syndrome.


I'm totally open to all of the things the three of you just said - I just look and see no shred of progress toward an explanation. We may as well be saying that a non-magnetic material really is magnetic - we just haven't been able to show it yet. I'm just waiting for someone to show me the science. Even the AI industry has turned away from this; a few decades ago that industry was all about the notion of sentient computers, but these days they're much more focused on sophisticated algorithms that use probability to distill something useful from huge amounts of real-world data.

 

I absolutely get it, Eise, that every statement of this sort about transistors has a parallel statement about nerve cells - that's pretty much the whole point. There is no hint whatsoever of a really workable theory re: how a computer structure can cause awareness to emerge, and there's no such theory re: how neural structure can either. I would say that applies to any system that operates purely in terms of structures of deterministic mechanisms.

 

It seems to me that the word "emergence" is tossed around in this arena in a way that makes it more or less synonymous with "magic." Meanwhile, Hoffman's notion violates no currently extant physical theory, and he seems to have chosen the right path to follow for me to consider it a "serious" theory (specifically, the agenda is to show that the mathematical structures involved can lead to predictions that match experimental observations across the board). That makes the theory a contender. It may fall flat on its face before it's all said and done, but I'm willing to watch while the attempt is made. If that agenda succeeds, we'll have a theory that is simpler in its fundamental premises than the traditional paradigm. And that's what we're supposed to be all about, right? Simplest theory that fits the facts wins?


Just as a side observation, the reluctance everyone seems to have about the idea that consciousness could simply be a fundamental feature of reality seems quite similar to Einstein's stubborn resistance to quantum theory. Yes - a rigorous program is required; New Age magic evangelizing doesn't cut it. But Hoffman seems to be attempting that sort of rigor. Why does this notion disturb people so much? Compare it to, say, the idea that spacetime is a quantum foam. Everyone seems quite content with that one to say "Well, maybe and maybe not - let's work it out and see if it gives us a simpler, more concise picture of reality." But when the word "consciousness" enters the discussion, everyone seems to literally freak out and mount the most fanatic opposition they possibly can. What's the difference?


Just as a side observation, the reluctance everyone seems to have about the idea that consciousness could simply be a fundamental feature of reality seems quite similar to Einstein's stubborn resistance to quantum theory. Yes - a rigorous program is required; New Age magic evangelizing doesn't cut it. But Hoffman seems to be attempting that sort of rigor. Why does this notion disturb people so much? Compare it to, say, the idea that spacetime is a quantum foam. Everyone seems quite content with that one to say "Well, maybe and maybe not - let's work it out and see if it gives us a simpler, more concise picture of reality." But when the word "consciousness" enters the discussion, everyone seems to literally freak out and mount the most fanatic opposition they possibly can. What's the difference?

It's you freaking out at the thought that we aren't more than just wet machines... you want us to be more than that. Just because we don't know the whole process yet doesn't mean it's the wrong road to go down. It's consistent with and extensible from known science; there are no isolated islands of thought being made in the middle of nowhere.


No, no freaking out here. I just look at transistors and see gadgets that are fully specified by Maxwell's equations and aspects of semiconductor physics - there is no room there for new developments to arise that make us realize transistors are aware. And I made my points about patterns earlier. I think we'll continue to make strides in how to build such patterns that are capable of more and more sophisticated behavior (like better recognition of patterns in images and other sorts of data, and so on). Proper interconnection patterns have proven extremely useful in computing, and I think further progress will be made there. But as I noted earlier, within the computer the bits of those patterns are just voltages, and voltages aren't aware either.

 

I think the scientific method is an amazing and powerful technique. We've done great things with it and will continue to. But I don't think it's ever appropriate to take on such certainty that we have all the answers in our hands. Just before the "difficulties" that led to quantum theory arose, physicists felt certain that classical physics would explain everything. The whole clockwork universe and all. That turned out to be wrong - extremely wrong. Until we actually have a theory of some effect that has stood up to a reasonably thorough battery of testing there's no way to make any confident conclusion about how that effect works.

 

Basically, you're denying the existence of something you can't observe the non-existence of. Not just noting that its existence is unproven, but ruling the possibility out altogether. I don't find that rational at all. We don't know what we don't know.

 

Look, if someone has a breakthrough next year and publishes a thorough and convincing explanation of how awareness and "experiences" (pain, love, hate, joy, etc.) pop nicely out of mainstream physical theory, that will be great. You'll get no further argument from me. But that explanation is entirely missing currently, and to me that means that any explanation that doesn't conflict with our body of experimental data is a valid candidate.

 

Hoffman is a step ahead - he's actually put a theory on the table with math behind it. The process of expanding that (or trying to) to match up with experimental results is just getting started, but at least it's proposed and underway. The emergence guys haven't come up with anything similarly "ready for prime time" yet.


Things seem magic when one does not understand them. If we could time travel, and took radio communicators (2-way radios) back to the dark ages, we might be accused of using magic if anyone saw us use those radios. Thus, emergence seems like magic because we don't understand what is required for sentience or consciousness. However, I expect that will change once we have a working AI that is convinced it is conscious and can convince some of us. That we cannot observe consciousness in another means a conscious AI will have difficulty convincing skeptics (most of us) that it is conscious. On the other hand, it may not be important whether an AI is conscious or not as long as the AI does what we need and want. The reason it may not be important is that we currently have no consensus definition of consciousness. Dictionary.com defines consciousness as "awareness of one's own existence," and defines awareness as "consciousness." My gut says that is the definition, but it isn't testable and may never be testable. I believe we will make AI conscious, and be able to test each invocation to validate that it is conscious. However, comparing AI consciousness with human consciousness may only be possible if we can accurately simulate a brain.


I'm totally open to all of the things the three of you just said - I just look and see no shred of progress toward an explanation.

 

I see progress, but it is slow. In my opinion this can be explained by the complexity of the subject, and by the emotional barrier against the possibility that we might understand ourselves as what we are: wet information-processing machines. Please re-read GEB, and read Consciousness Explained by Daniel Dennett. (This is a few months of work, I know...)

 

I absolutely get it, Eise, that every statement of this sort about transistors has a parallel statement about nerve cells - that's pretty much the whole point. There is no hint whatsoever of a really workable theory re: how a computer structure can cause awareness to emerge, and there's no such theory re: how neural structure can either. I would say that applies to any system that operates purely in terms of structures of deterministic mechanisms.

 

I would say this is just not true. There are some great ideas around. I would be happy to discuss the above-mentioned books with you. I would say that you give up the naturalist agenda too early. Consciousness seems to be the most complicated issue to tackle in science, and therefore a solution seems to be far away. But maybe we just need another perspective. To give up on explaining, and postulate conscious agents as fundamental, is too much the move of an 'old universe creationist'.

 

It seems to me that the word "emergence" is tossed around in this arena in a way that makes it more or less synonymous with "magic."

 

If it is just proposed as that, and nothing more, then you are right. But in my opinion that is just a very high-level abstraction of the idea.

 

Just as a side observation, the reluctance everyone seems to have about the idea that consciousness could simply be a fundamental feature of reality seems quite similar to Einstein's stubborn resistance to quantum theory. Yes - a rigorous program is required; New Age magic evangelizing doesn't cut it. But Hoffman seems to be attempting that sort of rigor.

 

I think there is a long way to go from Hoffman's conscious agents to explaining human consciousness. And it might be (good that we are in the speculations forum) that if a research programme were based on it, in the end the conscious agent would be thrown out as a superfluous assumption of the theory. Newton believed in God as creator of the universe, but his mechanics was a great stepping stone toward discarding God as a superfluous hypothesis.

 

No, no freaking out here. I just look at transistors and see gadgets that are fully specified by Maxwell's equations and aspects of semiconductor physics - there is no room there for new developments to arise that make us realize transistors are aware.

 

You are doing it again...

 

Look, if someone has a breakthrough next year and publishes a thorough and convincing explanation of how awareness and "experiences" (pain, love, hate, joy, etc.) pop nicely out of mainstream physical theory, that will be great.

 

No, no. That will not ever happen. But it is not necessary. It must only be possible to implement a functionality more or less equivalent in complexity to the neural structures in the brain. Does evolution pop nicely out of mainstream physical theory? Or chess programs? Sorry to repeat myself so often: read GEB again. If we understand consciousness one day, it will not be directly derived from physical theory. But conscious entities will be implemented in physical structures.

 

Hoffman is a step ahead - he's actually put a theory on the table with math behind it. The process of expanding that (or trying to) to match up with experimental results is just getting started, but at least it's proposed and underway. The emergence guys haven't come up with anything similarly "ready for prime time" yet.

 

I think Hoffman is much farther away from 'prime time'. In fact, it seems to me, he has given up.


Oh, I don't think that's true. I exchanged emails with him very recently, and he seemed fully enthusiastic. Several papers in the pipeline, and a book due out later this year. He certainly gave the impression of still being engaged with the agenda.

 

Hey, I was reading around online about free will this afternoon, and a thought experiment occurred to me. I'd be very interested in your opinions on it. Let me see if I can do a decent job laying it out here. The idea has only been cooking for a few minutes, so please bear with me.

 

-------

 

In response to claims that free will is manifested through quantum uncertainty, many opponents of the idea note that while quantum uncertainty does represent a theoretical absence of complete determinism in the universe, it is not the case that quantum events are in any way critical to the salient operation of our brain. In that sense, they say, our personal actions are, in fact, completely determined by the laws of physics.

 

So, I propose an experiment. We arrange a quantum experiment. It could be anything, but ideally it is something with two equally probable outcomes. It's agreed in advance that if the outcome of the experiment is "positive," I will sit down in a red chair that's in the room. On the other hand, if the outcome is "negative," I will sit down in a blue chair, also in the room.

 

Now we run the experiment, and I sit down in either the red or blue chair, as prescribed. End of experiment.
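
The protocol itself is trivial to write down. Here's a quick sketch (the function name is mine, and the pseudo-random draw is only a classical stand-in for the actual quantum measurement, which it of course isn't):

```python
import random

def chair_experiment(quantum_outcome=None):
    """Sketch of the thought experiment: a 50/50 outcome decides the chair.

    Pass the real measurement result ("positive" or "negative") if you have
    one; otherwise a classical pseudo-random draw is used as a placeholder,
    which is NOT a genuine quantum event.
    """
    if quantum_outcome is None:
        quantum_outcome = random.choice(["positive", "negative"])
    chair = "red chair" if quantum_outcome == "positive" else "blue chair"
    return quantum_outcome, chair

outcome, chair = chair_experiment()
print(f"outcome: {outcome} -> I sit in the {chair}")
```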

 

-------

Now, I don't think I'll go so far as to claim that the quantum event determined my action directly - clearly we used an instrument to render the quantum event in some macroscopic way, and that determined my action. I think the point of interest here is that we fully accept that the quantum result absolutely could have been either positive or negative - both were entirely real possibilities and it was mere chance that produced a specific outcome. And therefore "me sitting in the red chair" and "me sitting in the blue chair" were also two entirely possible courses of action: neither was ruled out in advance by determinism.
This makes the question of whether there is any such "quantum amplifier" mechanism in our brains pretty important. As far as I know we have neither clearly identified such a mechanism nor decisively ruled out the existence of one. So that's a pretty big open question, and extremely important in terms of the possibility I've been entertaining in this discussion. If consciousness does exist as a fundamental thing, then quantum uncertainty is the ONLY mechanism I see for it to produce any effect in the physical world. And an "amplifier" within our brains would be a firm requirement: no amplifier, no free will.
So, comments?

I guess Many Worlds (which I've never been able to find palatable) would say I did both (red chair and blue chair). But I guess it would also say that I walked out of the room without sitting down, tripped and fell and knocked myself out before getting to either chair, and so on and so on and so on, right?


Oh, I don't think that's true. I exchanged emails with him very recently, and he seemed fully enthusiastic. Several papers in the pipeline, and a book due out later this year. He certainly gave the impression of still being engaged with the agenda.

 

Sorry, that is not what I meant. I meant he gave up on the road of cognitive science explaining consciousness.

 

In response to claims that free will is manifested through quantum uncertainty, many opponents of the idea note that while quantum uncertainty does represent a theoretical absence of complete determinism in the universe, it is not the case that quantum events are in any way critical to the salient operation of our brain.

 

Yes, I am such an opponent. Free will is not free will by 'breaking through the stream of determinism', but by causing actions based on our mental states (intentions, beliefs, ...). So even if QM effects play a role in my brain, that role would only be a disturbing one, possibly breaking the causal chain from my motivation to my action.

 

In that sense, they say, our personal actions are, in fact, completely determined by the laws of physics.

 

Yes. In order to be free, determinism must be true, or at least there must be sufficient determinism.

 

And therefore "me sitting in the red chair" and "me sitting in the blue chair" were also two entirely possible courses of action: neither was ruled out in advance by determinism.

 

But this is not an example of free will at all. Free actions are actions that are not overruled by the actions of others. The best examples of free actions are where I want something (motivation) and can actually do it (action). Bad examples are where I do not care what action will come out, and you gave exactly that kind of example.

This makes the question of whether there is any such "quantum amplifier" mechanism in our brains pretty important. As far as I know we have neither clearly identified such a mechanism nor decisively ruled out the existence of one. So that's a pretty big open question, and extremely important in terms of the possibility I've been entertaining in this discussion. If consciousness does exist as a fundamental thing, then quantum uncertainty is the ONLY mechanism I see for it to produce any effect in the physical world. And an "amplifier" within our brains would be a firm requirement: no amplifier, no free will.

 

A quantum amplifier would mean that my free actions are random actions. That is definitely not what free will is.


Well, that's on the assumption that quantum actions are random. The experimental ensembles that we set up in labs do look like they are, but I don't know that that implies all individual quantum actions are random under all circumstances.

 

You and I have differing notions of free will. What you call "free will" I call "freedom" (being free of coercion by others). What I call free will is actually originating a course of action, with no prior cause, as a completely uninfluenced choice.

 

Now, forgive me if I've misunderstood you, but what I read in your words is a model of free will that in fact is still deterministic. As in, people make the choices they make because that's where the laws of physics lead, based on past history. If that is so, then you invalidate all moral judgment - it still makes sense to "restrain dangerous humans" from doing harm, but it makes no sense to judge them as "evil" or malicious. They had no choice. It equally makes no sense to laud people for their good behavior - they didn't really have a choice. It also thoroughly undermines the whole notion of trying to "better oneself." You're going to be what you're going to be, and that's just that. It sounds an awful lot like the fate argument to me.

 

Am I taking your meaning correctly?


I think it is rather presumptuous of people to talk about quantum effects in consciousness, especially non-physicists.

 

It has to have an effect.

 

It's a case of quantum woo, I suspect.

 

 

But, yes, I think it's too far down the rabbit hole.


 

 

Now, forgive me if I've misunderstood you, but what I read in your words is a model of free will that in fact is still deterministic. As in, people make the choices they make because that's where the laws of physics lead, based on past history. If that is so, then you invalidate all moral judgment - it still makes sense to "restrain dangerous humans" from doing harm, but it makes no sense to judge them as "evil" or malicious. They had no choice. It equally makes no sense to laud people for their good behavior - they didn't really have a choice. It also thoroughly undermines the whole notion of trying to "better oneself." You're going to be what you're going to be, and that's just that. It sounds an awful lot like the fate argument to me.

 

 

There will be things like memories and other neural settings that will have a large part to play in the final "decision".


Well, yes, but those memories were formed based on experiences driven by physical events. I don't really see that that changes the "validity of judgment" thing. I recognize that you still could objectively categorize people as "good" and "bad" simply because of the physical events they triggered, but if they have no "real" choice, at the moment of committing an action, then it's hard for me to really make a moral judgment about them re: that action. It seems like the proper mental attitude to have toward them would be similar to the one we have toward, say, the weather.



There are differences between the weather and a person, including that we can stop a serial killer from killing again; however, we cannot stop the weather (e.g., a tornado) from killing. Well, in both cases we can move to a safe location, but we can incarcerate a killer, not a tornado.


I think the formal phrase for the approach I've outlined above is "consciousness is an emergent property of complexity." What I'm looking for is "How?" How do we take that step from a finite state machine to real "self awareness"?

 

If I may join your discussion, this appears to me to be the focus of your original query. Although I have not read in detail the other replies you've received, I think I have an answer that may differ from the others. We primarily assess consciousness equivalency in other species by the only means and standards we are somewhat capable of measuring and understanding, which are human brain function and output. If another species' brain structure and function are configured similarly to ours and its behavioral expressions are found comparable, then we can be confident that this other species possesses some measure of human-equivalent self-awareness.

 

Taking a machine to a state of human-equivalent self-awareness would require a programming construct of a complexity comparable to that suggested by how the human brain evolved. Human consciousness is an "emergent property of complexity" in human brain evolution. We can't presume that consciousness is anything more than a product of brain function, because no evidence for that "anything more" has been found by science, and all human expressions and attributes can be traced to specific aspects of brain function. The human brain didn't begin as human; it evolved from something basic to something complex and, surprisingly, its contiguous functional configuration provides evidence of that evolution, its stages, and the survival influences likely compelling those major functional stages.

 

I believe we have the programming capability to mimic every major functional development of the human brain. What the programmers lack, in my opinion, is an adequate understanding of those functions and a clear perspective of their functional hierarchy. For example, cortical function is subservient to subcortical inputs, which means that nothing happens in our cortex without subcortical directives; therefore, who we are emerges from brain structures not generally associated with thought. This isn't something programmers would necessarily know, but it could be significant in how they configure brain-equivalent programming.
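
As a very rough toy sketch of the kind of hierarchy I mean (the module names and logic below are entirely invented for illustration; this is not a model of the brain), the "cortical" routine only does anything when a "subcortical" layer issues a directive:

```python
# Toy sketch of a functional hierarchy: the "cortical" layer acts only on
# directives issued by the "subcortical" layer. The module names and rules
# are invented for illustration; this is not a serious brain model.

def subcortical(sensory_input):
    """Low-level layer: turns raw input into a directive (or none)."""
    if sensory_input.get("threat"):
        return {"directive": "avoid", "urgency": "high"}
    if sensory_input.get("hunger"):
        return {"directive": "seek_food", "urgency": "medium"}
    return None   # no directive -> the cortical layer stays idle

def cortical(directive):
    """Higher-level layer: elaborates a plan, but only when directed."""
    if directive is None:
        return "idle"
    return f"plan actions for '{directive['directive']}' ({directive['urgency']} urgency)"

print(cortical(subcortical({"threat": True})))   # driven by the lower layer
print(cortical(subcortical({})))                 # nothing happens without a directive
```

The point is only the ordering: the higher-level function is driven by, and idle without, the lower-level one.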


Yes, I think it's a terribly difficult issue to discuss because of the point you made: the key observation that leads me to believe consciousness involves more than mechanism is my own self-awareness. I can't observe yours - you can't observe mine. Each of us can directly observe only one self-awareness: our own. Sharing those observations is impossible, and the whole thing is more or less rendered "unscientific" right then and there.

 

I absolutely cannot deny the theoretical (and perhaps practical, someday) possibility of a system constructed using the equivalent of today's computer technology (except much more complex) that can 100% pass the Turing test and give a flawless imitation of consciousness. It would be able to communicate in-depth about its "feelings," express its belief that it was self-aware, and so on. But understanding the underlying physics of that system (Maxwell's equations, etc.) as I do, I'd still not be able to believe that it "feels its self awareness" in the manner that I do. And that same argument of course applies to a brain, viewed exclusively in terms of Maxwell's equations, chemistry, and so on, if the proposed model was classical in all of its significant particulars.

 

I've actually regretted bringing this up over the last few days, because this is more or less where every conversation seems to end up: if you can't demonstrate your own self-awareness to others, then you're not self aware. I think I'll choose not to regret it, though, because some people have shared some very interesting links and so forth with me.

 

Thanks very much for the reply, and it's nice to meet you!


Well, that's on the assumption that quantum actions are random. The experimental ensembles that we set up in labs do look like they are, but I don't know that that implies all individual quantum actions are random under all circumstances.

 

QM predicts probabilities. EPR experiments show that no local causes lie underneath the determined probability distribution. And that is exactly what you seem to suggest: that QM events allow room for the will to interfere with nature.

 

You and I have differing notions of free will. What you call "free will" I call "freedom" (being free of coercion by others).

 

Well, we might get into a discussion about terms, but if I compare the different words with their opposites, I have the following picture:

  • free actions vs coerced actions
  • determinism vs randomness
  • freedom vs oppression

Free will for me means being able to act freely (first bullet).

 

What I call free will is actually originating a course of action, with no prior cause, as a completely uninfluenced choice.

 

If my choice were completely uninfluenced, then my actions would have nothing to do with the circumstances I am in, or with who I am: my character, the things I learned in my life, my self-knowledge. That kind of free will is a chimera.

 

Now, forgive me if I've misunderstood you, but what I read in your words is a model of free will that in fact is still deterministic.

...

Am I taking your meaning correctly?

 

Well, I think I have written that pretty clearly. Of course we need determinism to act freely. In the first place, we cannot know any outcome of our actions, i.e. choose certain outcomes, if the results of our actions were random. I need, so to speak, a reliable nature: that in similar circumstances similar outcomes occur. In the second place, the only way I can even be held responsible for my actions is when I determined my actions (I really like the word in this context... somebody who is sure in his actions is said to be very determined).

 

Now you are suggesting that there must be something (mind, soul) that sits in the control room, using the controls, getting information via the senses, but is not subject to causality. But then you have only moved the problem to some sub-entity.

 

As in, people make the choices they make because that's where the laws of physics lead, based on past history. If that is so, then you invalidate all moral judgment - it still makes sense to "restrain dangerous humans" from doing harm, but it makes no sense to judge them as "evil" or malicious. They had no choice. It equally makes no sense to laud people for their good behavior - they didn't really have a choice. It also thoroughly undermines the whole notion of trying to "better oneself." You're going to be what you're going to be, and that's just that. It sounds an awful lot like the fate argument to me.

 

No. Determinism and fatalism are two very different things. Fatalism means that I have no influence: things take their course independently of what I want. In determinism, however, my motives, beliefs, feelings, etc. are causal factors, and moral considerations belong to these factors as well. So there is no contradiction between moral behaviour and determinism at all. And of course you have choices: there is a relevant sense in which you have a choice when you are given a menu in a restaurant, whereas e.g. a prisoner just has to eat what he gets. And if nobody forces you to pick something you do not want, you have a free choice.

 

You just state this argument as if it is obvious. But it is not at all. Nothing would change, except that we might become a little bit less harsh in our verdicts, because we would know there is no such thing as ultimate responsibility, absolute free will, or Evil (with a capital 'E').

 

I've actually regretted bringing this up over the last few days, because this is more or less where every conversation seems to end up: if you can't demonstrate your own self-awareness to others, then you're not self aware.

 

No! Exactly the opposite. Because I am conscious of my behaviour, the choices I make, and the ideas and feelings these are based on, I must conclude that others are self aware too. I can even talk about those ideas and feelings with others; obviously they have them too. In other words: philosophical zombies do not exist. Entities that behave exactly as we do, which includes talking about their inner feelings, thoughts, doubts, etc., but are not conscious, are a pure philosophical fantasy.

 

Therefore I know: certain complex processes can lead to self awareness. If we find the conditions under which such processes arise, we will have understood consciousness. If you think we won't have, then you are applying harder constraints to explanations of consciousness than any other science applies to its explanations.

