
The mechanism of self-awareness


KipIngram

Recommended Posts

I'm going to put this here in Speculations from the get-go, because it always seems to lead into some "uncertain" areas. I've been browsing here for a couple of days now, and I feel like there are quite a few very well-informed and talented people here. This is a topic I've been thinking about for years, and I'd be ever so happy if someone can bring me something new to ponder.

 

First of all, let me say what I am not trying to do with this post. I am in no way trying to back-door my way into a religious assertion. I'm pointing this out because I think some people do use this general topic with an eye toward doing that. I am of the opinion that our "minds" involve something more than just our brains (and I'll talk more about that below), but I see no reason whatsoever that has to imply the usual baggage that's associated with it (a Creator, etc.). For lack of a better term I'm going to refer to this opinion as the "we have spirits" opinion. So when I refer below to spirits in any way, that's what I'm talking about - a "something extra" that works together with our brain to make our mental activity and experiences possible.

 

Also, in case this gets long-winded, what I'm ultimately going to wind up asking for is the best possible explanation of the case for our minds being purely brain-based. Some new insight that no one's been able to give me in the past.

 

Let me get started. In reading about artificial intelligence, you run across two concepts:

  • The Strong AI Theorem. My understanding of this theorem is that it is essentially the claim that sentient thought arises completely from the operation of some sort of hardware and software (our brains and the pattern of neural wiring and activity, or a computer with the appropriate software, etc.). That there's nothing more to it than that - case closed.
  • The Weak AI Theorem. This is the claim that with sufficient hardware power and software cleverness we could in principle construct a machine that mimics the behavior of sentient thought to as high a level of accuracy as we wish.

First of all, I'm 100% down for the Weak Theorem. It seems obvious - any behavior can be programmed, and with a big enough database of information, clever enough software, and powerful enough hardware to run it all, this looks like an obviously achievable goal. So there is absolutely no way for me to know whether the people I encounter in life actually have spirits or not. Every one of you could be nothing more than very superb androids for all I know.

 

But it's different when I consider myself. In a way I don't know how to express in hard scientific terms, I "feel" my own existence. I am aware of myself as a living, thinking thing. I feel physical pain and pleasure. I feel joy, sadness, anger, and so on. I am aware of the chemical / physiological basis of all of these things (though I'm not an expert, so don't ask me for precise chemical formulae and so on). But I'm talking about more than the measurable chemical changes that occur in my body based on my mental state. I'm talking about the actual perceived sensations. The crux of my difficulty here is that I haven't been able to develop any sort of a good theory of how a chemical (and that's all the materials of our bodies are) can experience sensation.

 

I'm going to switch over to computer-based systems now, because I understand them more completely. I'm actually a digital circuitry engineer by education and experience more than I am anything else. So, if the Strong AI Theorem is true, then in theory we could build a computer and write the right software, and that entity could then experience the same sort of things I can feel myself experiencing. Pain (we'll assume there are suitable sensors wired in). Joy, anger, etc. The computer would have the same sort of innate sensation of these things that I have when I experience them. It would not only react externally in the appropriate manner, but it would literally have an internal awareness of its own "frame of mind."

 

As I said, I've thought about this for years, and I don't see how it's possible. The hardware of a computer is essentially a vast array of transistors, each of them either "on" or "off," wired together in an arrangement that enables its function. Each of those transistors operates according to well-defined semiconductor physics. And, finally, no transistor is "aware" of the state of any other transistor, or of any global patterns of transistor activation - each transistor just has a particular voltage on each of its inputs and that's it. There are no "back channels" for global state of any kind to find its way into the operation of an individual transistor.
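
To make that locality concrete, here's a toy sketch (purely illustrative - the gate model and wire names are invented for this post). Each NAND gate computes from its own two inputs and nothing else; even a stateful circuit like an SR latch is just local functions wired together:

```python
# Toy illustration (invented for this post): each "gate" is a pure function
# of its own inputs only - no gate ever sees any global state.

def nand(a: int, b: int) -> int:
    # A gate responds to the levels on its own inputs, and that's it.
    return 0 if (a and b) else 1

def step(wires: dict, netlist: list) -> dict:
    # Advance every gate one tick; each output depends only on local inputs.
    new = dict(wires)
    for out, a, b in netlist:  # (output wire, input wire, input wire)
        new[out] = nand(wires[a], wires[b])
    return new

# An SR latch: two cross-coupled NANDs. It "remembers" a bit, yet each
# gate still only ever sees its own two inputs.
netlist = [("q", "s_n", "q_n"), ("q_n", "r_n", "q")]
wires = {"s_n": 0, "r_n": 1, "q": 0, "q_n": 1}  # assert Set (active low)
wires = step(step(wires, netlist), netlist)
print(wires["q"], wires["q_n"])  # 1 0 - the latch is set
```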

 

So the hardware of a computer is a machine and nothing more. It moves through states the same way the cylinders of a car move through a particular pattern of motion. I see no basis on the hardware front for self-awareness.

 

So that brings us to the software. The most general description of the full software state of a computer is to say that it's an evolving pattern of true and false logic bits. It's a table of information. The hardware can "time step" that table from its value set now to its value set at the next clock cycle. But I could write each one of those patterns down in a book, say one book per time step, and put those books on the shelves of a library. That library would then represent the evolution of the software in a fully complete way.
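
That "library" picture can even be made literal in a few lines of code (a toy sketch made up for illustration - the "hardware" here is just a 4-bit counter):

```python
# Toy sketch (invented for illustration): a machine state is a tuple of bits,
# the "hardware" is a fixed next-state function, and every clock tick files
# one "book" - a complete snapshot - on the library shelf.

def next_state(bits: tuple) -> tuple:
    # Stand-in hardware: a 4-bit binary counter.
    n = (int("".join(map(str, bits)), 2) + 1) % 16
    return tuple(int(c) for c in f"{n:04b}")

state = (0, 0, 0, 0)
library = [state]            # book 0: the initial state
for _ in range(5):           # five clock cycles -> five more books
    state = next_state(state)
    library.append(state)

for t, book in enumerate(library):
    print(f"book {t}: {book}")   # the shelf fully records the evolution
```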

 

I think I can wrap this up now - exactly how does a pattern of information experience sensations? I certainly see how you could point to one particular bit of that information and say "There - that bit right there is the 'pain' bit." That would just be a function of how the software was written - maybe a temperature sensor exceeding a certain value causes that bit to be set, and we call that "burning pain." But what is EXPERIENCING that pain? How do we assert that the hardware / software system is having an awareness of the state of pain?
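
In code the "pain bit" really is that shallow - something like this hypothetical fragment (the threshold and names are invented for the example):

```python
# Hypothetical "pain bit" logic, exactly as shallow as described: a sensor
# reading crosses a threshold, a flag is set, a label gets attached.

BURN_THRESHOLD_C = 60.0   # invented threshold, purely for illustration

def update_pain_bit(temperature_c: float) -> bool:
    # Set the "burning pain" bit when the sensor exceeds the threshold.
    return temperature_c > BURN_THRESHOLD_C

pain_bit = update_pain_bit(72.5)
if pain_bit:
    print("ouch, that burns")  # a report of the bit - but what *feels* it?
```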

 

I've asked this question many times over the years, but no one has ever responded in a way that feels like more than hand waving. It always reminds me of that old cartoon that I saw in one of the World Book Science Yearbooks many years ago - a bunch of math in the top left corner of a chalkboard, a simple result in the bottom right, and the words "THEN A MIRACLE OCCURS" in between. One scientist is saying to another "I think you need some more detail here in step 2."

 

I think the formal phrase for the approach I've outlined above is "consciousness is an emergent property of complexity." What I'm looking for is "How?" How do we take that step from a finite state machine to real "self awareness"?

 

My absolute failure to discover a satisfactory answer to this question is basically why I've come to believe we have spirits. It's an Occam's Razor thing - it solves the problem. Nothing is left unexplained. If we have spirits, then it's our spirits that feel self awareness, and that unburdens the hardware from having to do so by itself. And as far as I can see, presuming spirits doesn't foul science up in any way. Science certainly doesn't show that we do have spirits, but it doesn't show that we don't either.

 

Many people use Occam's Razor to argue against spirits: "Why add something new when you can explain everything without adding it?" Except that in my opinion we can't explain everything. At least not in a way that I've been able to find believable.

 

So, please - can anyone say something new to me? Something that can put me on a new track to understanding how a finite state machine could even begin to embody this thing I'm calling self-awareness?

 

As I said above, I'm putting this in Speculations because I think we're talking about a fundamentally non-scientific issue. There's no way to tell from the outside whether a person has self-awareness or not. It's a strictly internal thing that each of us can feel about ourselves only.

 

Oh, one last thought. Some people will say "You don't have self awareness - it's an illusion." Ok, if that's the case, then what is it exactly that's experiencing the illusion? That seems like a dead end to me. I can almost accept the argument "free will is an illusion" as something at least to debate (I do think we have free will, but I could honor a debate). But "self awareness is an illusion" seems like a contradiction from the outset to me.

 

Thanks for reading,

Kip

 

 

 

Link to comment
Share on other sites

Oh, one last thought. Some people will say "You don't have self awareness - it's an illusion." Ok, if that's the case, then what is it exactly that's experiencing the illusion? That seems like a dead end to me. I can almost accept the argument "free will is an illusion" as something at least to debate (I do think we have free will, but I could honor a debate). But "self awareness is an illusion" seems like a contradiction from the outset to me.

I very much agree with you. I think the fact that I can even ponder whether I'm self aware is proof that I'm self aware.

Link to comment
Share on other sites

what I'm ultimately going to wind up asking for is the best possible explanation of the case for our minds being purely brain-based.

 

I don't think you have a case:

 

Our brain may manipulate all our senses and hormones, but it doesn't create them.

Link to comment
Share on other sites

 

I don't think you have a case:

 

Our brain may manipulate all our senses and hormones, but it doesn't create them.

 

Tell me more - just to be clear, my own belief at this time is that we do have spirits and they're the seat of our minds. In fact, that may be all there really is - a guy named Donald Hoffman has a theory of "conscious agents" in which he proposes that consciousness is the fundamental essence of reality, and that what we call "the universe" is something we perceive as a result of exchanging information with other conscious agents. I'm closer to believing that than I am to believing in the "emergent property of complexity" premise.

 

I'm trying to give the other side the best possible chance - give it a fair hearing.

Link to comment
Share on other sites

 

Tell me more - just to be clear, my own belief at this time is that we do have spirits and they're the seat of our minds.

 

You're on a science forum, and saying things like "we do have spirits" is akin to saying "I have fairies in my garden" - but why do we need more than the physical 'grey matter'?

 

In fact, that may be all there really is - a guy named Donald Hoffman has a theory of "conscious agents" in which he proposes that consciousness is the fundamental essence of reality, and that what we call "the universe" is something we perceive as a result of exchanging information with other conscious agents.

 

This just seems like a twisted version of Asimov's Gaia (a work of fiction).

 

I'm closer to believing that than I am to believing in the "emergent property of complexity" premise.

 

Emergent properties are everywhere and well known to science; how else do you explain a termite mound?

 

Read this.

Edited by dimreepr
Link to comment
Share on other sites

Hi KipIngram,

First a compliment for the way you show your ways of thought. The quality of your posting is far above many of the postings made in the philosophy forum.

 

You ask if people can bring something new. Reading all your thoughts, I am not sure I can, but I will give it a few tries.

As a take-off, I would take it that you know you are conscious. As you were born from humans, grew up, and live and communicate with humans, I hope you agree that this is a sufficient basis to state that all humans are conscious. A second point is that we are animals: we descended from other animals, those from simpler organisms, even from organisms that do not have nervous systems. The fact that changes in our central nervous system (drugs, concussion) change our consciousness is a basis for supposing that consciousness is related to brain processes, so we can safely assume that organisms without nervous systems are not conscious. So where would the spirit enter organisms in the course of evolution? I think the question makes no sense at all. Therefore I think we can safely assume that consciousness is a function of nervous systems above a certain complexity.

 

This of course is not a theory of consciousness: but for me it is enough to see this as an empirical fact: that complex nervous systems can give rise to consciousness. The next question is then whether completely different systems, like complex electronic devices (aka computers, though we might need completely different kinds of hardware than we have now), could in principle have consciousness. Elsewhere I described what I think are necessary attributes of a system to call it conscious: being able to observe the environment, react to it in ways that show that the system evaluates possible courses of action, being able to reflect on reasons for its actions and communicate them, and understanding the reasons of other systems (organic or not).

 

You say the hardware of a computer is a machine, and nothing more. That is of course not true. It is a very special machine. And it is this specialty in which it differs from machines with other specialties, like cars or pumps. One should always be wary of expressions like '... is nothing more than...' or '... is just...'. You can be sure, when hearing or thinking such a thought, that exactly the attribute that is important has been cut off. You should put a trigger on those words, something like 'over-generalisation alarm!'. A steam engine is nothing more than iron, coal and water. But you cannot ride a train with a heap of iron, a heap of coal and a tank of water. And the essence of the train is not that it is a steam engine: it could also be a diesel engine. Different material components, but it does the same thing: it pulls the train.

 

So I would say: we are complex electro-chemical engines, so complex that consciousness can arise. (So we are not just electro-chemical engines, we are very complex ones.) There is no reason beforehand to exclude the possibility that a system built on other underlying principles could be conscious too. But it must at least share some of the complex structures that we are.

 

In this thread you are referring to Gödel, Escher, Bach. I think you really should reread it. The essence of the book is not what you write there: that is just a global formulation of Gödel's incompleteness theorem. I see the main point of GEB in the hypothesis that consciousness can arise in systems that are built up of different layers of complexity built on each other, where the higher levels can change the system on lower levels (see strange loop). I think you can get a quick overview of Hofstadter's thinking if you read one of the final chapters of GEB, called 'speculations', if I remember correctly.

 

All this is of course not consciousness explained. But one should ask what one expects from such an explanation. I think that if we knew the necessary conditions for a system to be conscious, we would understand consciousness. A philosopher like Daniel Dennett even thinks that we know what these conditions are. So when you are done with GEB, go on with Dennett, 'Consciousness Explained'.

 

Hope I gave you a few new thoughts... Happy to investigate further, but I will not always have so much time...

Link to comment
Share on other sites

Eise, thank you for the very thorough reply. After posting the message earlier that referenced GEB I was actually thinking that perhaps the book was much more focused on consciousness-related things than I'd initially recalled. I haven't always been as interested in "nature of mind" issues as I am these days, and I imagine when I read the book the first time my interests were in other areas. The bit about all logical systems containing truths that can't be proven is just something that "stuck in my memories." I will re-read.

 

Also thank you for the kind words regarding post quality. I try. It's a great hope of mine that before I die I'll achieve a tier 1 understanding of theoretical physics. When I was in graduate school (engineering) I deliberately took more math than my program required with that goal in mind - I felt I should take advantage of available training opportunities while I had them. Also, we had one particularly superb math professor during that period at The University of Texas at Austin - sitting in his lectures was always a pleasure. Since then I've tried to read as much as I can and I think I've gradually absorbed some knowledge, but I have a long way to go. I'm hoping that when I retire I can kick the process up to a higher pace.

Link to comment
Share on other sites

dimreepr: First of all, your post wasn't exactly friendly, though I've certainly seen worse in my day. But more pertinently, what my post really asked for, more than anything else, was ideas regarding how consciousness might emerge from brain complexity that I haven't run across before. Just saying "it does," or "why couldn't it?" isn't really helpful. I'm absolutely interested in reviewing new material on that front. Also, the link you posted just went to a web page on the "for whom the bell tolls" poem, and I didn't really see a connection.

 

I knew when I wrote the post that "spirit" is a dangerous word to use, though it seemed better than "souls." Maybe I should have referenced Hoffman and said "conscious agents," but like I said, the point of the post wasn't to argue for the whole spirit/soul/conscious agent thing - it was to seek new reference material on the emergent consciousness premise.

 

On a separate thread that referenced machine consciousness someone noted that GEB raises the idea that a logical system can contain unprovable truths. That was helpful - I knew that about GEB, but I'd never pondered it in connection with emergent consciousness. I was of the opinion that since the emergent consciousness premise puts consciousness entirely within the material realm, science then MUST eventually be able to explain it. But the GEB connection has caused me to see that's not as obviously true as I had presumed. It gives me something new to think about, and that was my goal.

 

Let's not have ill will between us, ok? I promise I'm not here to try to slip religion in through the back door or anything like that. I'm legitimately curious about these things - I just want to have the best understanding I can of how my own self-awareness (which is the most undeniably real thing I can observe about the world) can be fit into the grand scheme of things.

Link to comment
Share on other sites

I think that self awareness is an offshoot of generating and processing quantum abstractions of a given stimulus. The stimulus could be considering a math problem, or looking in the mirror. The ability to "show all answers" generated by this stimulus and then go through the process of picking a few conclusions as outputs to place in memory uses an individually distinct process of information processing each time, regardless of the stimulus. Our individual process pathway, or our particular "algorithmic data set," allows self awareness as it is essentially the same each time; thus we gain familiarity with how our brain works and self-identify as "it," but really we are only "with it," and somewhat vulnerable to "its" quirks and charms. In a larger sense, our personalities are the way our brains "crunch the numbers." They also remain pretty much fixed throughout life, but are less rigid, as the overall personality is a more complete combination of all brain components, which are in a perpetually competitive state - not only over aspects of behavioral control, but also in competition for basic nutrients, oxygen and waste removal.

Edited by hoola
Link to comment
Share on other sites

This is an absolutely unanswerable question. A solipsist would not even know if another human had consciousness, let alone a machine. He would be right in supposing such, as most all of us surely have conjectured at times.

It is a good thing to exercise the mind, of course, so one's time is not wasted in thinking about it.

As to the question of whether the brain created consciousness or the other way around, since I am aware that symbolic artifices always stand as metaphors for the dynamics of mind, I reason that the so-called objective world is rather arbitrary, and it is I who have projected it into being through beliefs I have absorbed along the way.

This realization of subjective reality also includes the symbolic representation of my body. It is said that the blind become better at hearing and other senses, so one might imagine that the sense of sight is not an electrical or chemical process, but a true reality describing a certain embodiment of feeling alone. Also, when a person has lost a huge amount of brain material, memory still seems to remain, as well as a great degree of functionality.

It seems obvious that such a phenomenon would be unlikely to occur in a machine. It could be tried. One could remove a huge part of a computer with a hacksaw.

This isn't to say there could not be a computer so designed to survive such an attack, only that I feel that we already know a human can survive such damage and still do math or sing a song.

Link to comment
Share on other sites

Well, certainly no computer that you go out and buy right now would survive a hacksaw very well. But if you challenged me to build a computer system that was resilient in that way, and you gave me specifications about what sort of cuts you wanted to make (I mean general specifications - it would be too easy if you told me exactly where you were going to cut) I could at least make a run at it. It would involve a lot of redundancy and information storage with error correcting codes and things like that.

 

My current job involves this on a very small scale - we design enterprise-grade, flash-memory-based data storage systems. It turns out that modern NAND flash memory is actually pretty crappy stuff - it just "spontaneously forgets" to some extent. We use error-correcting data encoding techniques and multiple levels of RAID redundancy to overcome that. The idea is to avoid having to keep full copies of the system (mirroring) in order to get the reliability we need.
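
For flavor, here's a minimal sketch of the XOR-parity idea behind that kind of redundancy (a toy written for this post, nothing like our actual product code - real systems layer proper ECC and RAID on top):

```python
# Toy sketch of RAID-style XOR parity: lose any single data block and the
# parity block lets you rebuild it exactly.

def parity(blocks: list) -> bytes:
    # XOR all blocks together, byte by byte.
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

data = [b"ABCD", b"EFGH", b"IJKL"]
p = parity(data)                      # stored alongside the data

# Simulate flash "spontaneously forgetting" block 1, then rebuild it:
survivors = [data[0], data[2], p]
print(parity(survivors))              # b'EFGH' - the lost block, recovered
```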

 

I enjoyed your answer - not trying to tear it down. But I think it's no surprise at all that evolution favored the development of brains that show redundancy.

Link to comment
Share on other sites

Good points. However, what remains obvious to me is that the emergence of consciousness along the evolutionary path makes no sense, particularly because of what you just said. You agree that the brain could wire itself to survive damage that would destroy huge areas, as we have seen happen regularly in the case of people born with very little brain matter.

You say you could easily imagine how you could create a computer that could somehow restructure itself to compensate for physical damage.

Let's say, okay, maybe in the far future, but possible.

All the more reason for me to say: okay, then why would we ever need consciousness to aid our already amazing computer-like brain?

Survival? Why? You could ask: what particular mutation would create awareness, when awareness would not survive any better than a better computer?

There is no reason I can imagine that survival should benefit from awareness. One could design into a computer any reaction to threat - something like adrenalin, a fight or flight response, intelligence to figure out an escape route, and on and on.

As I have said, it seems awareness would rather be a detriment to survival. A computer would be much more apt to sacrifice itself for the larger group way before humans would. We humans are hamstrung a bit in that department. We often allow torture to influence our decision to protect the larger group. Clearly, if we had a way to endure torture we would survive better genetically.

But awareness works against this. It causes us to defend ourselves at a far greater degree than what would help our culture survive.

Screw them! This hurts like hell!

So when I hear people say the brain created awareness, I wonder how Darwin would feel about that. It's the last thing I would ever put into a computer. I can just see the computer giving up all the other computers in hopes of stopping the pain for just a little while...

Link to comment
Share on other sites

Oh, you're giving me too much credit. I'd design the computer to be redundant from the get-go; the damage would either leave enough redundant circuitry intact for the thing to still work, or it wouldn't. And I imagine a design (one of mine, at least) would pay a heavy efficiency price for that robustness - it would be bigger, heavier and so on than a design without redundancy.

 

I read your reply as an argument against consciousness having biological origins - you're pointing out why it doesn't bring evolutionary advantages. Did I read that right, or did I miss your point entirely?

Link to comment
Share on other sites

No, you got it. And reading what I wrote I would agree it was confusing.

I am saying that:

Awareness is nothing like any survival trait or mechanism. It is fundamentally unlike anything else.

It is only science's requirement to explain all experience as physical in nature that causes people to seek a physical source for consciousness. The fact that so many in science actually theorize or even believe that the brain creates consciousness shows how far they are from absorbing what is absolutely obvious.

I forget where - some thread here - a commenter asked the question, "Do you think consciousness is just an illusion?"

Boy! That's a great question! Let me think about that!

Link to comment
Share on other sites

Yes, my response to "consciousness is an illusion," (or even better, "self awareness is an illusion") has always been to ask "What, exactly, is it that is experiencing the illusion????"


Seriously, Dave, my big problem in this area is that I have never been able to perceive any "bridge" whatsoever from fundamental physics (Maxwell's equations, the equations governing behavior inside a transistor, etc.) that seems to offer the least hint as to how it is we're "aware." As far as self-aware future computers go, I just don't see it - at least not using today's computer technologies, where the whole computer is essentially one big finite state machine. Each individual transistor is just a piece of material in an electrical state, and has no "knowledge" whatsoever of the state of other transistors. Each bit of data in the representation of the state is just that: one bit of data.

 

I don't see where self-awareness gets a toehold. Something is going on that we are not even close to understanding. I have no idea whether it will turn out to be something we can study scientifically or not, but it's certainly good to see people try.

Link to comment
Share on other sites

@Kiplngram, I think a lot of people (not implying you) confuse intelligence with self awareness. We even see this in religion, where many believe only humans have a spirit/soul. Of course awareness and intelligence are not the same. My cat is self aware. She is aware that I can hear her when she meows, which is why she meows to me at various intervals and volumes depending on the situation and what response she is trying to get. She is also aware she can be seen, which is evident by the fact that she hides. Is my cat aware of all the things I am aware of - that she is a cat, lives on a planet, will die one day, etc.? Of course not. Does that make me more self aware than her? Maybe, maybe not. For this discussion I don't get the impression that matters - either something is self aware or it is not. You aren't attempting to create degrees?

 

Others conflate being self aware with life - that self awareness only exists amongst the living. Computers are a series of solid state switches which respond to inputs. They are not alive. They do not have the ability to do more than what they have been programmed to do, or to operate without inputs. Many devices have a variety of sensors which create inputs, but that is still based on programming and isn't truly self generated. Many insects merely exist responding to inputs, though, the only variance in routine being created by their environment moment by moment. Free will, choice, exhibited independent behavior, evident acts of self awareness, etc. aren't clearly present in all individual insects. That said, insects are alive, and collectively they are able to problem solve and exhibit other acts of awareness. Cells would be another example of a living thing which doesn't think, responds as programmed, and isn't apparently self aware.

 

So for AI the questions I ponder are: Can something which isn't alive be self aware, and can something that is completely nonorganic be alive? I apologize for the ramblings, and unfortunately there is no brilliant payoff. I am not sure AI would need to be sentient to be self aware - at least not for AI to make decisions regarding self preservation or expansion. All life on earth came from the same place. Life was only created on earth once. We only have this example. It is either the exception or it is the rule. We really don't know (at least I don't).

Link to comment
Share on other sites

Right on, Kiplngram.

 

Also, I sense that everything has consciousness if we are projecting reality.

This would mean that the thing we project is our own consciousness. I "create" you and you me, but at the same time, both of us have created volitional characters who both exist separately and also answer our expectations.

Link to comment
Share on other sites

I don't think we can *prove* others are self aware (cats included) - any measurable behavior could be that of an automaton. And I can't prove to you that I'm self aware - the thing I'm talking about isn't my ability to solve problems with intelligence (I can demonstrate that to you). I'm referring to things like feeling pain, happiness, and so on. The "sensation that I am." *I* can observe that in myself, and I can't produce an acceptable way for a mechanism to achieve that. I actually feel like that is the most directly observable of the things I can perceive - it doesn't arrive at my senses via light or sound waves, or get processed through my sense organs in any other way. It's innately internal.

 

That said, I am 100% convinced that my family's dog is self-aware. I can't prove it, so it's "not science" I guess, but I'm a believer. :)

 

I'm anxious to see where Donald Hoffman goes with his conscious realism theory. He takes consciousness as a given - an axiomatic property, in the same way conventional theories take charge (for example) - so the theory offers no explanation for how it happens. What he is burdened with demonstrating, with good mathematical and scientific rigor, is that such a structure of conscious agents would yield perceptions that match our experimental observations - all of them. A big uphill climb ahead of the guy, but he seems intent on taking it head-on rather than trying to dodge it somehow. I look forward to following it.


You know, I thought of something else in reference to one of the comments above that consciousness doesn't seem to be an evolution-driven thing. Not necessarily related, but consider sleep. The need to sleep is NOT an advantage. You're vulnerable when you're asleep. An organism that overcame a biological requirement to sleep would have a distinct advantage. Except possibly it can be viewed as energy conserving, so maybe I need to think about that.

 

But if the need to sleep somehow arises at the (fundamental) level of conscious agents, then the biological things we see that imply the organism must sleep would just be a reflection of that lower level reality.

Link to comment
Share on other sites

I have to say that the idea of consciousness being a necessity is out. It's a good point about sleep, but I am going deeper. I see the very fundamental quality of awareness as being absolutely divorced from the material body. It is absolutely and unequivocally different from any biological process. To think that awareness could suddenly be "invented" by nature is patently absurd.

Awareness is the very embodiment of a witness, one who stands aside and witnesses a body and mind that feel pain and pleasure and desire and hope. All of that has zero function and is certainly, as said, detrimental.

I imagine the idea of torture, and how a DNA genome would adapt to a hive mentality where each would always consider the group.

Look at the world today. Wars and famine, overpopulation and greedy corporations fleecing societies, lonely people with cell phones stuck to their ears - all because we are aware. Our awareness is our pitfall. It causes us to be selfish if nothing else. Self-centered and sick with greed. What mechanism in nature would survive better because of greed or jealousy or even anger, which often robs a person of common sense and causes his premature demise or that of his neighbor?

But not even considering that, show me the gene that creates awareness! Awareness is special! It must be obvious.

I guess I have to say, there really is no argument. Only someone attempting to bolster the idea that everything that exists can be dissected in a laboratory. Their desire to see the whole of existence as measurable and finite has disrupted their minds. It completes their narrative, in their own limited world view.

I can't believe we're even talking about it.

Better to wonder if machines could think! That I can't solve.

Edited by Dave Moore
Link to comment
Share on other sites

So, please - can anyone say something new to me? Something that can put me on a new track to understanding how a finite state machine could even begin to embody this thing I'm calling self-awareness?

 

 

 

If you are really interested, the path you need to travel down to help your understanding is to study the concept of emergence: how new, 'higher', more sophisticated phenomena arise from simpler components and appear to be more than the sum of the parts. In effect, to use an analogy, 1+1=3.

 

This Wiki definition elaborates more clearly:

 

In philosophy, systems theory, science, and art, emergence is a phenomenon whereby larger entities arise through interactions among smaller or simpler entities such that the larger entities exhibit properties the smaller/simpler entities do not exhibit.

 

Give it a read: https://en.wikipedia.org/wiki/Emergence
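
A classic toy demonstration is Conway's Game of Life (a standard textbook example, sketched here from its well-known rules): each cell follows purely local birth/survival rules, yet a coherent "glider" emerges and travels across the grid - a pattern no individual cell knows anything about:

```python
# Conway's Game of Life: a dead cell with exactly 3 live neighbours is born;
# a live cell with 2 or 3 live neighbours survives. Purely local rules.
from collections import Counter

def step(live: set) -> set:
    # Count, for every cell, how many live neighbours it has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):                 # four generations...
    cells = step(cells)

# ...and the whole glider has moved one cell diagonally - an emergent,
# system-level behavior that appears nowhere in the local rules.
print(cells == {(x + 1, y + 1) for (x, y) in glider})   # True
```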

Edited by StringJunky
Link to comment
Share on other sites

It's a problem of definitions. In the case of brain/mind or mind/brain, the analysis is useless. If you define awareness as fundamentally unique in aspect, it is more like 1 + symbol for 1 = 1 + symbol for 1

In other words, using a symbol of a phenomenon in the same conceptual framework with the actual phenomenon it represents cannot yield the answer.

A simple formula that is instigated by the question, "What happens if I have a chair and I add another chair?" would look like this:

Chair plus chair = 2 chairs. But the formula 1 plus chair = ? lacks enough information to yield a result.

Awareness is not a variable that is applicable in defining brain in the same way that brain is a variable in defining frontal lobe.

This is the problem with science. Science is forever limited to its basic tool kit.

You ask, "Where does space end?", and they hand you a tape measure. You cannot add or subtract from infinity for conceptual reasons. Nor can you add or subtract from awareness.

Link to comment
Share on other sites

I don't think we can *prove* others are self aware (cats included) - any measurable behavior could be that of an automaton. [...] I'm referring to things like feeling pain, happiness, and so on. The "sensation that I am."

 

I understand that you are referencing self awareness on a uniquely personal level. The singular moment experienced where one knows it is, was, and one day won't be. As you read this you are aware of yourself. Last night you were aware of yourself as well, but that moment, last night, is gone and perhaps never was. You have a memory of last night, but memories are imperfect; they aren't real. So all we ever are is right now, as we are aware of right now. Who am I, what am I, why am I, is reality real, etc. are questions we don't believe any other life on earth wonders about or is capable of wondering about. Our ability to wonder is a product of our curiosity, which drove our intelligence and not vice versa. Curiosity often leads to knowledge (or it kills the cats :) ), but knowledge doesn't necessarily lead to curiosity. Curiosity is emotional and knowing something isn't. So how we feel is important.

 

I honestly see no difference between basic human emotional responses and those of other mammals. A wolf pack, orca pod, lion pride, etc. are all comparable in structure and social interaction, all containing traits exhibited by our hunter gatherer ancestors. I see no reason to assume the basic mechanism for awareness is different between a human and a wolf. Clearly we are more intelligent, but we also clearly evolved via the same process. It is important, in my opinion, not to assume we (humans) are more evolved or superior. That isn't how evolution works. It isn't a system that moves species in a linear direction from lesser to greater. That we are special or the pinnacle of evolution is an indulgence which throughout our history has created much strife.

 

I don't understand why you think pain and happiness are special. In my opinion emotions and our responses to them are some of the simplest aspects of consciousness. It is why we use reward systems to train animals. Just create a positive like a treat which provides satisfaction and a negative like a spray bottle blast to the face, and we are able to influence behavior. That works on seemingly all mammals, from mice to horses. Whether by automatism or otherwise, all mammals proactively work to avoid pain while seeking comfort. Insects, however, do not. An ant or bee will sacrifice itself in an instant. For many insects dismembering or killing themselves is often part of collective problem solving. Which leads me to believe their sensory perception of pain and comfort is nonexistent, and as a result they lack any comparable emotions.

Link to comment
Share on other sites

I have to say that the idea of consciousness being a necessity is out. It's a good point about sleep, but I am going deeper. I see the very fundamental quality of awareness as being absolutely divorced from the material body. [...]

 

Wow Dave - that's a marvelous post. Lots of food for thought.

 

I don't understand why you think pain and happiness are special. In my opinion emotions and our responses to them are some of the simplest aspects of consciousness. [...]

 

I didn't mean to imply that pain etc. are things that are "special." All I really meant was that I have not been able to see how a computer, built using the technology of today's computers and thus just a 100% deterministic collection of transistors, could experience such things. We can program them to use those words and report them when they are in certain states (temperature sensor > T1 --> "ouch, that burns"). But I wouldn't believe the computer was actually *feeling* the sensation that I call burning.

 

I just cannot imagine a computer, any computer, that I would feel like I'd "killed" if I took a sledgehammer to it. That's the way a lot of humans felt in the re-imagined Battlestar Galactica - the Cylons were "toasters," and those humans felt no remorse whatsoever about destroying them. The show was deliberately designed to make us, the viewers, see the Cylons differently, or at least wonder about the issue. And of course they weren't purely based on our current computer technology - it was overtly stated that at some point the Cylons had started "playing with human DNA."

 

The question for me is not whether it will ever be possible for us to build conscious, feeling creations. Clearly that's done all the time - every time a man and a woman have a baby. So it's not out of reason at all to think we might someday figure out how to build some non-biological machine that operates in the same way. But just as I can't see how a regular computer made of transistors could experience "awareness," I can't really see how a mechanism built of atoms can do so either. I am very interested in reading more about emergence and so on. That was really the reason I made this post in the first place - to solicit links and pointers to new things to study. But as of today, with my current state of knowledge, I lean toward believing that "awareness" comes from some other layer of reality that "interfaces" the physical universe using these mechanisms we call bodies. So if we figure out how to build non-biological devices that are "aware," I'll suspect that awareness comes from the same source.

 

Some instances of awareness wind up in human beings. Some instances wind up in dogs, cats, etc. So in that futuristic scenario, some instances will wind up in these devices we've built. How that happens is a fascinating thing, but I don't know that it will ever be brought into a properly scientific perspective.

 

Or, maybe I'll read something new about emergence and decide that's a more plausible explanation. Sitting here today I really can't say, but I'm interested in chasing down every lead. I'm not entirely ignorant of emergence today, but so far I still see "what emerges" as behaviors. Patterns in the deterministic output. I haven't yet seen how it can lead to "something" being aware. But thank you to everyone who's sending me links and so forth - I very much appreciate them.

Link to comment
Share on other sites
