
Speculation about Consciousness


fredreload

Recommended Posts

... but many years after the event historians will be able to discern the first traces of the ghost in the machine

Like I said earlier, they'll just realise, one day, that AI has reached the equivalent of a sapient level of consciousness and cognitive autonomy.


If people can't see that an ape or an earthworm is conscious, then how will we tell whether a machine is conscious?

 

We'll just assume it's following its programming no matter what it says. The Turing test means very little in establishing that something is "conscious"; primarily it just tells us that the machine can respond verbally to verbal stimuli.

 

Without knowing what constitutes consciousness, it's pretty hard to ascribe it to anything other than people, whom we understand and who understand us. I suppose we're mostly just giving them the benefit of the doubt that they aren't merely holograms or placeholders or some sort of actor.

 

All the world's a stage ya' know.

 

Why can't we program an actor to seem as sentient as the next guy?


To program sentience, we have to understand it mathematically, but we don't. People agree we are sentient, but we cannot precisely define what it is. If it emerges from large neural nets, then we don't have to know what it is, we only need to build large neural nets.


To program sentience, we have to understand it mathematically, but we don't.

 

 

Really?

 

 

 

If it emerges from large neural nets, then we don't have to know what it is, we only need to build large neural nets.

 

Oh, apparently not. :)


We program the individual features of a neural net, and those features do not change. However, a neural net changes its own functionality as it learns, and sentience may emerge from that learning. That does not mean we know how to write a sentient program that cannot learn.
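As a minimal sketch of that distinction (a toy single-neuron example, not a real deep net; all names and numbers here are purely illustrative): the structure and the update rule below are fixed by the programmer, but the weights, and therefore the behaviour, come entirely from training.

    import random
    random.seed(0)

    # Fixed by the programmer: one threshold neuron with two inputs, plus the
    # perceptron update rule. Changed by learning: the weights and bias, and
    # therefore what the net actually does.

    def predict(weights, bias, x):
        return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

    def train(examples, epochs=20, lr=0.1):
        weights, bias = [random.random(), random.random()], 0.0
        for _ in range(epochs):
            for x, target in examples:
                error = target - predict(weights, bias, x)
                # Nudge the weights toward the correct output.
                weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                bias += lr * error
        return weights, bias

    # Teach the fixed structure the AND function purely from examples.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = train(data)
    print([predict(w, b, x) for x, _ in data])   # [0, 0, 0, 1]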

Edited by EdEarl

We program the individual features of a neural net, and those features do not change. However, a neural net changes its own functionality as it learns, and sentience may emerge from that learning. That does not mean we know how to write a sentient program that cannot learn.

 

Imatfaal's "ghost in the machine" musing led me to think about what would convince me, as SJ extolled, that the "eureka" moment has arrived. I've been arguing that computers are unreliable representations of brain function and are currently incapable of reproducing consciousness, without considering what might constitute convincing evidence otherwise. I think what makes the consciousness of human brain function so distinct from computers is its capacity to engage in behaviors independent of its genetic programming, which is more simply referred to as instinct. We have the ability to overrule our fight-or-flight instinct, to self-innovate, and to engage in proactive rather than reactive behaviors. I equate human instinct with the programming that computers are incapable of disobeying without being programmed to do so. When computers are able to demonstrate, without programmer intervention, an ability to overrule, rewrite, and exceed their programming or preprogrammed parameters and responses, I might then consider myself to have actually witnessed a "ghost in the machine."

Edited by DrmDoc

 

When computers are able to demonstrate, without programmer intervention, an ability to overrule, rewrite, and exceed their programming or preprogrammed parameters and responses, I might then consider myself to have actually witnessed a "ghost in the machine."

Like I said earlier:

 

 

Like I said earlier, they'll just realise, one day, that AI has reached the equivalent of a sapient level of consciousness and cognitive autonomy.


Google's AlphaGo (or AlphaX, as it has learned many games in addition to Go) is an example of a primitive level of cognitive autonomy. They let it learn games like Space Invaders, with its programmed instinct being to read the game score and try to maximize it. It did not know how to play any of the games it learned. They set it playing, and during the first hundred or so games it lost badly. After 500 or so games it knew how to play and win. See: https://www.youtube.com/watch?v=Qvco7ufsX_0. It even developed strategies that none of the developers had ever seen or imagined.
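What that amounts to is reinforcement learning: the only thing built in is the reward signal (the score), and the winning behaviour is discovered through play. A minimal tabular Q-learning sketch of the same idea, using a made-up five-state "game" instead of Space Invaders (everything here is illustrative, not DeepMind's actual system):

    import random
    random.seed(0)

    # Toy "game": states 0..4, start at 0, score +1 only for reaching state 4.
    # The built-in "instinct" is to maximize the score; the policy is learned.
    N_STATES, ACTIONS = 5, (-1, +1)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    alpha, gamma, epsilon = 0.5, 0.9, 0.2

    def greedy(s):
        best = max(Q[(s, a)] for a in ACTIONS)
        return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

    for episode in range(200):                    # "500 or so games" in miniature
        s = 0
        for _ in range(50):                       # cap the length of one game
            # Mostly exploit what has been learned, occasionally explore.
            a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
            nxt = min(max(s + a, 0), N_STATES - 1)
            reward = 1.0 if nxt == N_STATES - 1 else 0.0
            best_next = max(Q[(nxt, b)] for b in ACTIONS)
            Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
            s = nxt
            if reward:                            # goal reached, game over
                break

    print({s: greedy(s) for s in range(N_STATES - 1)})   # {0: 1, 1: 1, 2: 1, 3: 1}

After a few hundred "games" the learned policy is to head straight for the goal, even though nothing in the code ever said so explicitly; scaled up to deep networks and pixel inputs, that is essentially the Atari result shown in the video.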


What's scary is the FBI, CIA, and NSA having AI to monitor people using connected cameras, Wi-Fi emissions that reveal people inside buildings, phone calls, other spy tech, and internet data. Soon AI will be able to track many people from cradle to grave. Perhaps Google has the best AI, but governments will have very good AI.

Edited by EdEarl

Google's AlphaGo (or AlphaX, as it has learned many games in addition to Go) is an example of a primitive level of cognitive autonomy. They let it learn games like Space Invaders, with its programmed instinct being to read the game score and try to maximize it. It did not know how to play any of the games it learned. They set it playing, and during the first hundred or so games it lost badly. After 500 or so games it knew how to play and win. See: https://www.youtube.com/watch?v=Qvco7ufsX_0. It even developed strategies that none of the developers had ever seen or imagined.

 

As I understand AlphaGo, its game play is based on algorithms that employ a form of computational statistics or mathematical optimization to make optimal predictions through continuous play. If my understanding is correct, I don't think that quite equals cognitive autonomy in the sense of the machine behaving in a way that is inconsistent with its algorithms. If AlphaGo, one day, stopped its gameplay and somehow began to create its own game, or a game more challenging to its programming, I might be impressed.

 

The scary time will be when it loses a game and says, "Best of three?"

 

If I knew it wasn't programmed to say that, my blood would chill and I would probably think the launch codes are next!


 

Imatfaal's "ghost in the machine" musing led me to think about what would convince me, as SJ extolled, that the "eureka" moment has arrived. I've been arguing that computers are unreliable representations of brain function and are currently incapable of reproducing consciousness, without considering what might constitute convincing evidence otherwise. I think what makes the consciousness of human brain function so distinct from computers is its capacity to engage in behaviors independent of its genetic programming, which is more simply referred to as instinct. We have the ability to overrule our fight-or-flight instinct, to self-innovate, and to engage in proactive rather than reactive behaviors. I equate human instinct with the programming that computers are incapable of disobeying without being programmed to do so. When computers are able to demonstrate, without programmer intervention, an ability to overrule, rewrite, and exceed their programming or preprogrammed parameters and responses, I might then consider myself to have actually witnessed a "ghost in the machine."

 

 

I strongly disagree on many levels.

 

But I do think there's a simple way to detect sentience. When it's impossible to predict what the computer will do, yet most things it does are beneficial to its own health and welfare, we can assume consciousness exists. Of course, it will also try to tell us in some language, but this would be open to misinterpretation or other factors.


But I do think there's a simple way to detect sentience. When it's impossible to predict what the computer will do, yet most things it does are beneficial to its own health and welfare, we can assume consciousness exists. Of course, it will also try to tell us in some language, but this would be open to misinterpretation or other factors.

If the computer is not programmed to produce unpredictable responses, and if it is also not programmed for self-preservation, I might then agree to the possibility--but not without some sign of responses independent of its programming.

Edited by DrmDoc

 

 

When it's impossible to predict what the computer will do, yet most things it does are beneficial to its own health and welfare, we can assume consciousness exists.

 

Unless it was programmed to do that.


 

As I understand AlphaGo, its game play is based on algorithms that employ a form of computational statistics or mathematical optimization to make optimal predictions through continuous play. If my understanding is correct, I don't think that quite equals cognitive autonomy in the sense of the machine behaving in a way that is inconsistent with its algorithms. If AlphaGo, one day, stopped its gameplay and somehow began to create its own game, or a game more challenging to its programming, I might be impressed.

If we could abstract what the brain does (in terms of information processing) into an algorithm, then how could anything the brain does be inconsistent with said algorithm? Or do you mean that it does something unexpected? Because that which we do not expect is not the same as an inconsistency.

Edited by andrewcellini

It may be that our brains sometimes do random things because quantum events are sometimes random. If that is true, then we could never capture all brain processes in an algorithm. If a neuron fires partly because of an electron tunneling, it will recover, and that error will usually be insignificant. It is possible to make the wrong fight-or-flight decision, but such events are rare.

 

When a computer reads a memory location to get an instruction and a random error occurs, the algorithm is usually trashed and may crash. A random error in data might be insignificant, or it might propagate via calculations into other data and cause algorithm failure without a crash. Computers are sensitive to memory errors, whereas brains are not.
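A rough toy contrast (not a claim about real hardware or real neurons): corrupting one piece of a program's control data can derail the whole computation, while damaging one element of a distributed representation barely shifts the answer.

    # 1) "Program": a list of cells where each value is the index to read next.
    program = [1, 2, 3, 4, 0]
    corrupted = program[:]
    corrupted[2] ^= 1 << 6          # one flipped bit turns 3 into 67, a bad address

    def run(prog, steps=5):
        i = 0
        try:
            for _ in range(steps):
                i = prog[i]         # follow the "jump"
            return "ran fine"
        except IndexError:
            return "crashed"

    print(run(program), "/", run(corrupted))   # ran fine / crashed

    # 2) Distributed representation: the answer is spread over 1000 small weights.
    weights = [0.001] * 1000
    print(round(sum(weights), 3))   # 1.0
    weights[500] = 0.0              # knock out one "synapse" entirely
    print(round(sum(weights), 3))   # 0.999 -- the answer barely moves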

 

That Turing machines are universal does not mean we can easily translate the contents of a neural net into a program, except by coding a neural net and letting it learn.


If we could abstract what the brain does (in terms of information processing) into an algorithm, then how could anything the brain does be inconsistent with said algorithm? Or do you mean that it does something unexpected? Because that which we do not expect is not the same as an inconsistency.

 

If we could reduce what the brain does to functional algorithms, I believe we would find that brain function involves an amalgam of separate and distinct algorithms--some self-evolving or self-writing--acting in concert to produce optimum system responses that may or may not conform to the plan of its base algorithm. If we can do all that with computer programming--inserting self-writing, optimizing algorithms that can overrule but not replace its primary algorithm--then I think we will have the makings of true AI. Not necessarily an inconsistency with what we expect, but rather an inconsistency with instinct or some hardwired, programmed response mechanism.

 

If I may add, I think we could do that now with a proper understanding of brain function, but we can't because programmers don't yet understand enough about brain function and its evolution to create such programs.

Edited by DrmDoc

Computers are sensitive to memory errors, whereas brains are not.

https://en.wikipedia.org/wiki/Memory_errors

https://en.wikipedia.org/wiki/Misattribution_of_memory

http://science.sciencemag.org/content/341/6144/387

 

I don't know if that is a clear distinction; what seems clear is that how error arises and is handled in the brain can be different from how computers currently handle it.

That Turing machines are universal does not mean we can easily translate the contents of a neural net into a program, except by coding a neural net and letting it learn.

I don't believe anyone implied that (assuming your usage of neural net in the former was about real nets, i.e. brains), and for the latter part of your statement, well, I would hope they train the network, because that's how you get it to, for example, find patterns. Human brains aren't given a model of reality; they make one. But it's clear that the process of learning, at the very least for forms of associative memory, can be modeled mathematically and is thus realizable in computers.

If I may add, I think we could do that now with a proper understanding of brain function, but we can't because programmers don't yet understand enough about brain function and its evolution to create such programs.

This makes me wonder whether ideas developed in AI regarding how networks learn--ideas like finding minima in some "energy function," as in Hopfield networks and Boltzmann machines--have been incorporated into neuroscience. I haven't the faintest idea, to be honest, but such models have a physical feel.
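For what it's worth, the Hopfield picture is simple enough to sketch: patterns are stored in a weight matrix by a Hebbian rule, and recall repeatedly updates units so that the energy E = -1/2 x^T W x decreases until the net settles into the nearest stored pattern. A minimal version with two small hand-made patterns (purely illustrative):

    import numpy as np
    np.random.seed(0)

    # Store binary (+1/-1) patterns with a Hebbian rule, then recall one
    # from a corrupted cue by repeatedly lowering the energy.
    patterns = np.array([
        [ 1, -1,  1, -1,  1, -1],
        [ 1,  1,  1, -1, -1, -1],
    ])
    n = patterns.shape[1]

    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)          # no self-connections

    def energy(x):
        return -0.5 * x @ W @ x

    def recall(x, sweeps=20):
        x = x.copy()
        for _ in range(sweeps):
            for i in np.random.permutation(n):   # asynchronous unit updates
                x[i] = 1 if W[i] @ x >= 0 else -1
        return x

    cue = np.array([1, -1, 1, -1, 1, 1])    # first pattern with one unit flipped
    out = recall(cue)
    print(out, energy(out))                  # settles back onto the first pattern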

It may be that our brains sometimes do random things because quantum events are sometimes random. If that is true, then we could never capture all brain processes in an algorithm. If a neuron fires partly because of an electron tunneling, it will recover, and that error will usually be insignificant. It is possible to make the wrong fight-or-flight decision, but such events are rare.

I could also see a problem if the brain uses such features of quantum physics for computational purposes, but that could be due to ignorance and lack of imagination on my part.

Edited by andrewcellini

I don't understand the motives of those who have suggested translating from a neural net to a computer program. A Turing machine is universal (computers are finite renditions of Turing machines), which means it can compute anything computable. Neural nets are equivalent to Turing machines if given infinite memory. Thus, a finite neural net is equivalent to a computer of similar memory capacity. Actually translating from a neural net to a program would be a proof of equivalency but, AFAIK, serves no other purpose, and for a non-trivial net the task could be massive. It is much easier to make a simple finite Turing machine with a neural net as proof.
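One way to see why the equivalence isn't surprising (a sketch with hand-picked rather than learned weights): a single threshold neuron can compute NAND, and NAND gates are enough to build any Boolean circuit, which is the substrate conventional computers are built from.

    # A McCulloch-Pitts style threshold unit.
    def neuron(weights, bias, inputs):
        return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

    def nand(a, b):
        return neuron([-2, -2], 3, [a, b])

    print([nand(a, b) for a in (0, 1) for b in (0, 1)])   # [1, 1, 1, 0]

    # Any Boolean function can be wired from NAND alone, e.g. XOR:
    def xor(a, b):
        t = nand(a, b)
        return nand(nand(a, t), nand(b, t))

    print([xor(a, b) for a in (0, 1) for b in (0, 1)])    # [0, 1, 1, 0]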


I could also see a problem if the brain uses such features of quantum physics for computational purposes, but that could be due to ignorance and lack of imagination on my part.

Given that consciousness is an emergent property of a ginormous ensemble of contiguous molecules and quantum effects occur within the domain and at the level of atoms, I can't see how quantum anomalies/effects within atoms would have a bearing on the macro state of consciousness. The effects are too microscopic.

Edited by StringJunky

Given that consciousness is an emergent property of a ginormous ensemble of contiguous molecules and quantum effects occur within the domain and at the level of atoms, I can't see how quantum anomalies/effects within atoms would have a bearing on the macro state of consciousness. The effects are too microscopic.

I don't either, and there is reason to doubt it. If I can remember the name of it, I'll post a paper I've posted before by Max Tegmark, going over claims from Orch OR about the role of quantum decoherence and whether the brain acts like a quantum computer. Needless to say, I think it's reasonable to keep that option open, as there could be other ways in which quantum mechanics could rear its ugly but useful head. Even Tegmark has inquired into what it means for a computer, as well as a conscious entity, to be a configuration of matter, and I'm fairly certain he extends it to quantum mechanics. I might post that one too.

 

Edit:

http://arxiv.org/abs/quant-ph/9907009

and

https://arxiv.org/abs/1401.1219

respectively

Edited by andrewcellini

I don't understand the motives of those who have suggested translating from a neural net to a computer program. A Turing machine is universal (computers are finite renditions of Turing machines), which means it can compute anything computable. Neural nets are equivalent to Turing machines if given infinite memory. Thus, a finite neural net is equivalent to a computer of similar memory capacity. Actually translating from a neural net to a program would be a proof of equivalency but, AFAIK, serves no other purpose, and for a non-trivial net the task could be massive. It is much easier to make a simple finite Turing machine with a neural net as proof.

 

According to one definition, a Turing machine is a mathematical model of a hypothetical computer that uses a predefined set of rules to determine a result from a set of input variables. If that is the nature of the neural nets you are referencing, it really isn't equivalent to the nature of brain function as I understand it. Human brain function, as I understand it, involves a collective of several separate and acutely different functional parameters interlocking to produce a unified functional response that potentially exceeds those separate functional parameters. Conversely, Turing machines are limited to and by their mathematical mold and predefined set of rules. Our brain's functional responses aren't necessarily limited by its functional matrix.
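For what it's worth, that "predefined set of rules" can be made concrete in a few lines. A minimal sketch of a Turing machine, using a made-up machine with one working state whose entire behaviour (its "mathematical mold") is the transition table: it flips every bit on its tape and halts.

    # (state, symbol) -> (symbol to write, head move, next state)
    rules = {
        ("flip", "0"): ("1", +1, "flip"),
        ("flip", "1"): ("0", +1, "flip"),
        ("flip", "_"): ("_",  0, "halt"),   # blank cell past the end: stop
    }

    def run(tape, state="flip", head=0):
        tape = list(tape)
        while state != "halt":
            symbol = tape[head] if head < len(tape) else "_"
            write, move, state = rules[(state, symbol)]
            if head < len(tape):
                tape[head] = write
            head += move
        return "".join(tape)

    print(run("010011"))   # -> 101100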

