Can a computer be "conscious"?



Computer programs called simulators are used in the development of new computers to check that the computer logic (circuits, gates, etc.) performs as desired. The computer can simulate a nonexistent new design to determine whether it is correctly designed. It can therefore also be made to simulate itself. Can the computer then be said to be "self-aware", an element of consciousness?
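For readers unfamiliar with such simulators, here is a minimal sketch of the idea (in Python, with invented gate and signal names; this is not any real EDA tool): a design is described as a netlist of gates and checked against its intended truth table before being built.

```python
# Minimal gate-level simulator sketch (invented example, not a real EDA tool).
# A design is a list of (output, gate, (input_a, input_b)) tuples, evaluated
# in order against a dictionary of signal values.

GATES = {
    "AND": lambda a, b: a & b,
    "OR":  lambda a, b: a | b,
    "XOR": lambda a, b: a ^ b,
}

def simulate(design, inputs):
    """Evaluate a netlist given a dict of 0/1 input signal values."""
    signals = dict(inputs)
    for out, gate, (a, b) in design:
        signals[out] = GATES[gate](signals[a], signals[b])
    return signals

# A half adder: sum = a XOR b, carry = a AND b.
half_adder = [
    ("sum",   "XOR", ("a", "b")),
    ("carry", "AND", ("a", "b")),
]

# Exhaustively verify the design against its truth table before "building" it.
for a in (0, 1):
    for b in (0, 1):
        result = simulate(half_adder, {"a": a, "b": b})
        assert result["sum"] == a ^ b and result["carry"] == a & b
print("half adder verified")
```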


Computer programs called simulators are used in the development of new computers to check that the computer logic (circuits, gates, etc.) performs as desired. The computer can simulate a nonexistent new design to determine whether it is correctly designed. It can therefore also be made to simulate itself. Can the computer then be said to be "self-aware", an element of consciousness?

 

I don't think self-simulation would be the correct element, and also, a computer could only simulate part of itself, not its entirety - it could never simulate itself simulating itself, as it could never hold the data.

 

But humans aren't fully self-aware either. We can "think about what we are thinking about" to a degree, but we have other sections of our brains creating that effect, which we can't be aware of, because then we'd need yet larger unknown sections to create our awareness of those. In other words, I am pretty sure the subconscious is a necessary component of a conscious mind.

 

I suspect a computer program can be aware of itself, but I am not sure what that would require. Consciousness, to me, seems to be not about the data, but about the relationships between elements of data. To be aware of the number "5" you need to be aware of how it is different from at least something else, such as the numbers 1, 2, 3 and 4. Logic gates can flip bits and perform mathematical calculations on those numbers - but it happens in a manner as automatic as my cells processing oxygen, a process I am keenly unaware of.

 

So maybe, if you had not so much a simulation, but a small amount of data being processed, and a larger block of data that somehow reflects how the program is using that data, and the contrasts in that data.... maybe it would be conscious, but I really don't know.


It seems to me that what is lacking is a scientific definition of terms such as "conscious" and "self-aware". What is the difference between an organic mechanism that knows 5 is greater than 4 and less than 6, and a set of logic gates that come to the same conclusion? Only the implementation details. Of course, organic implementations are many orders of magnitude greater in size than current CPUs, but if a CPU existed comparable in size and storage capacity to a human brain, how could we distinguish between them? The "Turing test" probably couldn't.


Computer programs called simulators are used in the development of new computers to check that the computer logic (circuits, gates, etc.) performs as desired. The computer can simulate a nonexistent new design to determine whether it is correctly designed. It can therefore also be made to simulate itself. Can the computer then be said to be "self-aware", an element of consciousness?

 

No. I think consciousness comes about through systems with a function similar to the cerebral cortex. That is: systems which are able to analyze and predict the behavior of their inputs with a time-sensitive component, and are able to do so at various levels of abstraction. These systems are actually able to construct a cohesive inner world based on sensory input. Another important factor is the ability to interact with the outside world, and predict what effect a particular action might have on the future of the world.
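As a toy illustration of that first criterion, analyzing and predicting the behavior of inputs over time, here is a minimal sketch (a first-order Markov predictor of my own devising; nothing remotely like a real cortical model):

```python
from collections import Counter, defaultdict

# Minimal sketch of "analyze and predict the behavior of inputs over time":
# a first-order Markov predictor that learns which symbol tends to follow
# which. Purely illustrative; no resemblance to actual cortical circuitry.

class SequencePredictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)  # prev symbol -> successor counts
        self.prev = None

    def observe(self, symbol):
        if self.prev is not None:
            self.transitions[self.prev][symbol] += 1
        self.prev = symbol

    def predict_next(self):
        # Most likely successor of the last observed symbol, if any history.
        options = self.transitions.get(self.prev)
        return options.most_common(1)[0][0] if options else None

predictor = SequencePredictor()
for symbol in "ababab":
    predictor.observe(symbol)

print(predictor.predict_next())  # 'a': every 'b' so far has been followed by 'a'
```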

 

Your example isn't much different than using a text editor to edit the text editor's own source code. While that's a novel concept, there's nothing particularly interesting going on.


I have seen and browsed enough articles by neuroscientists and psychologists to realise that even the experts do not know what the word 'conscious' really means.

 

In other words, it is a question we cannot answer because we lack a clear definition of the parameter being measured.


Despite the thread title, I think the OP's question is more specific. He asks if a computer simulating itself could be said to be self-aware. My instinctive response to this one is to say "no", but as you say, Lance, it is difficult to pin down a clear meaning of consciousness, even if we pare it down to the bare minimum of self-awareness.

 

I suppose you could look at it backwards: humans are self-aware, so is this because we simulate ourselves? Again, my gut says "no".


No. I think consciousness comes about through systems with a function similar to the cerebral cortex. That is: systems which are able to analyze and predict the behavior of their inputs with a time-sensitive component, and are able to do so at various levels of abstraction. These systems are actually able to construct a cohesive inner world based on sensory input. Another important factor is the ability to interact with the outside world, and predict what effect a particular action might have on the future of the world.

 

Your example isn't much different than using a text editor to edit the text editor's own source code. While that's a novel concept, there's nothing particularly interesting going on.

 

But how about the technique known as "heuristics", where a computer program gathers a history of its environment (inputs, etc.) so as to improve its own performance in the future? Also "neural networks", which have the ability to learn. An interesting (fun) example is the 20 Questions game at http://www.20q.net
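As a toy sketch of the 20 Questions idea (invented candidates and questions; not how 20q.net is actually implemented), each answer uses the accumulated history to narrow the hypothesis space:

```python
# Toy 20-questions-style eliminator (invented data; not how 20q.net works).
# Each yes/no answer eliminates candidates; the "heuristic" is simply using
# the history of answers so far to narrow the hypothesis space.

CANDIDATES = {
    "cat":    {"alive": True,  "bigger_than_breadbox": False},
    "horse":  {"alive": True,  "bigger_than_breadbox": True},
    "pebble": {"alive": False, "bigger_than_breadbox": False},
}

def eliminate(candidates, question, answer):
    """Keep only candidates consistent with the latest answer."""
    return {name: traits for name, traits in candidates.items()
            if traits[question] == answer}

remaining = eliminate(CANDIDATES, "alive", True)
remaining = eliminate(remaining, "bigger_than_breadbox", False)
print(remaining)  # only 'cat' survives both answers
```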


But how about the technique known as "heuristics", where a computer program gathers a history of its environment (inputs, etc.) so as to improve its own performance in the future? Also "neural networks", which have the ability to learn.

 

Neural networks have led to some pretty nifty things in the field of "applied AI"; however, I don't think such systems have much hope of ever being conscious.

 

I think if humans stand any chance of making a computer conscious in the near future, it will come about through rampantly copying the structures of the human brain, such as the cerebral cortex, the thalamus, the loops between the two, and the hippocampus.

 

There's already been great headway in simulating the cerebral cortex and hippocampus. Both Numenta's NuPIC software and the BlueBrain project are working on cortical simulations, and other researchers have built a detailed mathematical model of the hippocampus through close analysis of its structure.


Have you ever heard of the Chinese room thought experiment? http://en.wikipedia.org/wiki/Chinese_Room

 

Yes, and I've read Dennett and Hofstadter extensively deconstruct Searle. I agree with Hofstadter but not Dennett. I guess I'd place myself in the emergent materialist school.

 

The Chinese Room thought experiment tries to be a sort of reductio ad absurdum... it's simply absurd to consider that any part of the room actually "understood" Chinese, that somewhere within the symbolic abstraction consciousness actually arose.

 

I see it as more of an argument from personal incredulity. A huge problem arises from the immense disparity between how fast the Chinese Room could operate and how fast a sequential computer that could simulate the entire brain would have to operate to function in realtime. I don't see the Chinese Room as really being a practical example, as it would take countless generations of "operators" to even attempt to simulate a human brain comprehending a single sentence. This alone makes the example absurd... the difference in timescales is completely incomprehensible.

 

Imagine, rather than the Chinese Room's pathetic excuse for a Turing machine, we replace the example with a man who has travelled back in time with a 10-octillion-core computer chip which runs at 500 exahertz. Could you see that as being conscious more easily than you could the Chinese room? Functionally the two systems are equivalent... it just takes the Chinese room several orders of magnitude longer to compute the same thing. But the multicore computer chip feels more like a human brain, because it's massively parallel, and also fast enough that it should be able to simulate neurons at the atomic level.

 

Other than that, I don't find much substance to Searle's arguments or biological naturalism. What exactly is it that Searle finds "special" about biological systems which can't be simulated by a Turing machine?


I have a much simpler test to determine if a computer is conscious:

 

If a computer, without previously being programmed to do so, can begin pleading with me not to turn it off, can construct elaborate chains of reasoning to justify it, and can threaten to call my mother and make her stop me (if it is really clever), I will call it conscious.

 

That sort of behavior would show to me that it is aware it is a computer and can be turned off, that it has will ("no! Don't do it!"), and that it can think in some sense.

 

Whether or not it is merely simulating it is beside the point.


The computer can simulate a nonexistent new design to determine whether it is correctly designed. It can therefore also be made to simulate itself. Can the computer then be said to be "self-aware", an element of consciousness?

 

I'd say that computers can, in theory, be more aware of themselves than people could ever hope to be. They could know exactly how they function, and be able to grasp how any of their sub-processes work. I don't think simulating itself is part of either this or self-awareness.

 

Self-awareness, as I understand it, is an awareness of one's selfhood, not necessarily an understanding of one's self. E.g., thinking and talking in the first person. Not so much anatomy and neuroscience.


Why is the difference in timescale relevant?

 

Because it's hard to think about a consciousness which exists long enough to perceive a single question and respond, yet acts on a timescale of hundreds of millions or billions of years. Because the actions of the Chinese room seem so mundane and operate so slowly, it's hard to conceive of how it could represent a structure as complex as the human brain, let alone bring about consciousness.

 

For the Chinese room to work, it'd actually have to be a vast warehouse, probably hundreds of square miles, packed with trillions of sheets of paper. It'd have to be manned by untold generations of operators, because a single operator's lifetime doesn't count for much in terms of the overall simulation. And it'd likely take hundreds of millions if not billions of years to answer a single question.

 

And that doesn't take into account at what point the "room" learned Chinese, although I suppose that's a question for another day.


And that doesn't take into account at what point the "room" learned Chinese, although I suppose that's a question for another day.

Isn't the point that the room never actually learned Chinese? It's just able to approximate Chinese via really long processes.

 

Even if you had more efficient processors, which would speed up the timescale, the room is no more conscious of Chinese than, say, Google Translate (or some future, better technology).

 

The point of the Chinese room is to say that the Turing test is an insufficient way to test the consciousness of an AI, because consciousness can be approximated.


Isn't the point that the room never actually learned Chinese?

 

No, the point is to try to debunk functionalism. The room is an incredibly ludicrous and convoluted example of a Turing machine, and the functionalists would argue that consciousness should be able to run on any Turing machine. Searle wants you to look at the Chinese Room and go "there's no way something like this could possibly be conscious".

 

Imagine, if you will, that we scanned the brain of a Chinese man in enough detail that we could completely reconstruct a copy of his brain inside a computer. Now imagine you printed a digital copy of his brain out on paper and used that as part of the "book" the man in the Chinese room uses to process the message.

 

The Chinese Room could perform a complete simulation of the Chinese man's brain. However, the timescales involved just for the simulation of the man's brain to process the question would be completely incomprehensible. Right now one of the world's fastest supercomputers, BlueBrain, is being used to simulate a single neocortical column of a rat. BlueBrain has 8192 multigigahertz POWER processors and it can't even simulate a single neocortical column of a rat in realtime. Humans have nearly a million neocortical columns and each one is 6 times more complex than the ones in rats.

 

To even simulate the entire human neocortex, the BlueBrain computer would run 6 million times slower than it presently does (and keep in mind the neocortex is only part of the brain). If the Chinese Room were even able to do one operation per second, which is pushing it, it'd still be billions of times slower than even one of these CPUs. For a full neocortical simulation, as a rough estimate, the Chinese Room would take approximately 10^20 times longer to function than a normal human brain.
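For what it's worth, here is the back-of-the-envelope arithmetic behind that 10^20 figure, using the rounded numbers above plus an assumed per-CPU speed (my own guess, purely illustrative):

```python
import math

# Back-of-the-envelope check of the ~10^20 estimate above. All values are
# rounded assumptions taken from this post (the per-CPU speed is my guess).

columns = 1e6                # ~1 million neocortical columns in a human
complexity_vs_rat = 6        # each ~6x more complex than a rat's column
neocortex_slowdown = columns * complexity_vs_rat   # ~6e6x vs one rat column

cpu_ops_per_sec = 2e9        # one multi-gigahertz CPU, roughly
n_cpus = 8192                # BlueBrain's POWER processor count
room_ops_per_sec = 1         # the Chinese Room at one operation per second
room_slowdown = cpu_ops_per_sec * n_cpus / room_ops_per_sec   # ~1.6e13x

total = neocortex_slowdown * room_slowdown
print(f"Chinese Room vs. realtime brain: ~10^{math.log10(total):.0f}x slower")
```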

 

Imagine if it took you 10^20 times longer to experience a given amount of time. To people who were operating in "realtime" would you appear to be conscious?

 

It's just able to approximate Chinese via really long processes.

 

Even if you had more efficient processors, which would speed up the timescale, the room is no more conscious of Chinese than, say, Google Translate (or some future, better technology).

 

If that's what you got from the exercise, you're missing the point. The question is: could the Chinese Room potentially be conscious? Obviously Google Translate isn't. A complete brain simulation could be, but it's hard to fathom the Chinese Room executing one. It's at this point that Searle throws his hands up in the air, declares it "ridiculous", and expects you to do the same.


If that's what you got from the exercise, you're missing the point. The question is: could the Chinese Room potentially be conscious? Obviously Google Translate isn't. A complete brain simulation could be, but it's hard to fathom the Chinese Room executing one. It's at this point that Searle throws his hands up in the air, declares it "ridiculous", and expects you to do the same.

 

Obviously the Chinese room wouldn't be able to successfully execute an AI simulator, because the processes would take too long. But, you can stick feathers up your butt or create a perfectly artificial-feathered bird-android but you still haven't really produced a chicken.

 

As a side note, I haven't really studied this stuff, so if I'm misunderstanding Searle's purpose of the Chinese room, perhaps I'm trying to describe something different.


Obviously the Chinese room wouldn't be able to successfully execute an AI simulator, because the processes would take too long.

 

And therein lies the problem. Searle makes no mention of the timescales involved, but instead provides a framework for a Turing machine and a description of the behavior. I don't think we will ever see an AI implementation which can target the Chinese Room and provide you with a Chinese answer to a Chinese question within a single human lifetime. Based on my estimates we're talking about something more on the order of 10^18 years, give or take a few orders of magnitude. However, the thought experiment in effect implies that the Chinese Room could answer a question within a human lifetime, as opposed to a civilization providing operators for the Chinese Room for a period of time much longer than the age of the universe.

 

This is why I personally find the thought experiment ridiculous.

 

But, you can stick feathers up your butt or create a perfectly artificial-feathered bird-android but you still haven't really produced a chicken.

 

Right, so that's back to whether functionalism is the correct philosophical interpretation of how the mind operates. It assumes that consciousness is, in effect, a "function" of the brain, and that by simulating the brain in a computer you will produce consciousness.

 

Not that I'm defending functionalism. I believe "mindstuff"/noumena/qualia are distinct from the physical processes that represent them. I believe there is a symbolic layer of abstraction. In that regard, I would defend emergent materialism first and foremost.

 

As a side note, I haven't really studied this stuff, so if I'm misunderstanding Searle's purpose of the Chinese room, perhaps I'm trying to describe something different.

 

No, you're getting it, and Searle WANTS you to assume that whatever process is going on inside the Chinese Room is more like Google Translate than a complete simulation of the brain. Searle's goal is for you to see the process as a mechanical one, and therefore he wants you to believe it cannot produce consciousness.


And therein lies the problem. Searle makes no mention of the timescales involved, but instead provides a framework for a Turing machine and a description of the behavior. I don't think we will ever see an AI implementation which can target the Chinese Room and provide you with a Chinese answer to a Chinese question within a single human lifetime. Based on my estimates we're talking about something more on the order of 10^18 years, give or take a few orders of magnitude. However, the thought experiment in effect implies that the Chinese Room could answer a question within a human lifetime, as opposed to a civilization providing operators for the Chinese Room for a period of time much longer than the age of the universe.

Ok, but can't you replace the person in the room with a mechanical processor that's 10^18 times faster than that person and still end up with a 'Chinese room' (though not necessarily the same one that Searle has created)?

 

You get the function (presumably) without necessarily the consciousness, because it's still a Chinese room, albeit a much faster one.

 

I suppose that still feeds into your next point:

Right, so that's back to whether functionalism is the correct philosophical interpretation of how the mind operates. It assumes that consciousness is, in effect, a "function" of the brain, and that by simulating the brain in a computer you will produce consciousness.

 

 

I would defend emergent materialism first and foremost.

I'll look into that, thanks.

 

No, you're getting it, and Searle WANTS you to assume that whatever process is going on inside the Chinese Room is more like Google Translate than a complete simulation of the brain. Searle's goal is for you to see the process as a mechanical one, and therefore he wants you to believe it cannot produce consciousness.

The obvious problem is that we can't build a fast enough 'Chinese room' to get to a relevant timescale... but even if we could, would we be able to tell the difference between 'functional' consciousness and "true" consciousness? I'm guessing Turing thinks not but Searle thinks yes.


The obvious problem is that we can't build a fast enough 'Chinese room' to get to a relevant timescale... but even if we could, would we be able to tell the difference between 'functional' consciousness and "true" consciousness? I'm guessing Turing thinks not but Searle thinks yes.

 

Now you're on to the p-zombie problem:

 

http://en.wikipedia.org/wiki/Philosophical_zombie

 

Dennett has an extensive rebuttal to this, but it basically boils down to the "if it quacks like a duck" response. If something's behavior is indistinguishable from that of a human, can it really lack consciousness?


  • 2 weeks later...

Interesting. The question of simulating, or otherwise producing, consciousness ultimately comes down to determinism.

 

If we could define precisely what consciousness is - what we are ready to accept as consciousness - and if it happens to be some deterministic process, then yes: you could program it with if-then-else.

 

 

This is at the same time a question of "free will". If consciousness is deterministic, then theoretically it is possible, but then if we ever create AI it would mean we were destined to do it, which would then feel as if we ourselves are part of some simulation.

 

 

But if consciousness is not deterministic, then we still have a chance of manufacturing it, though not with if-then-else. Rather, it will be some physical system capable of learning, so that it can in effect program itself. Somewhere along this process of learning and evolving, consciousness could emerge, at least in the sense that we could not tell the difference between a real conscious response and that of our AI.

 

This is how "Terminator" and "The Matrix" came to be. With non-deterministic AI there is always the possibility of it going crazy and turning us into batteries.

 

 

It is all about input, response and how we define consciousness.

 

Say a fly has consciousness. If we could know how a fly responds to any given situation, then we could program an artificial fly to do exactly that. This is not consciousness, but if we cannot tell the difference based on the response, then obviously the very definition of the word "consciousness" must involve the mechanics by which the process occurs. This is a circular trick, because if we could define the mechanics of consciousness, we would be able to reproduce it.
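A minimal sketch of what that programmed fly would amount to (hypothetical stimuli and responses, invented purely for illustration): a lookup table from situation to response, with nothing resembling awareness behind it.

```python
# Hypothetical stimulus -> response table for an "artificial fly" (invented
# entries, purely illustrative). It matches the catalogued fly's behavior
# exactly, yet it is nothing but a lookup.

FLY_RESPONSES = {
    "shadow_overhead": "take_off",
    "sugar_detected":  "extend_proboscis",
    "mate_nearby":     "court",
}

def artificial_fly(stimulus):
    # Responds exactly as the catalogued fly would; an uncatalogued input
    # exposes that there is no understanding behind the table.
    return FLY_RESPONSES.get(stimulus, "no_catalogued_response")

print(artificial_fly("shadow_overhead"))  # take_off
print(artificial_fly("rain"))             # no_catalogued_response
```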


  • 3 weeks later...

This thread seems to be quite like the other one...

 

My "thought" is: If you can't tell the difference, does it matter?

 

It matters because of time: if you can't tell now, it doesn't mean someone else won't be able to tell later. If we want to trick someone, then it doesn't matter, as long as we tricked them. Tricking ourselves might be fun, but when building something it is a matter of principle to build it as fast as possible, as good as possible, and able to remain in a valid state as long as possible.

