AI sentience


Prometheus

24 minutes ago, Intrigued said:

Is the essence of your suggestion that since AIs "feed on data" then natural selection, in a competitive environment, would favour efficient data processing and that might well correlate with sentience?

Yes. If sentience were somehow favoured, and a complex moral faculty, together with things like misprocessing or insufficient information, led to human-like traits.


A thought experiment.

Suppose we someday have an AI that is self-aware. Suppose that this AI works on the same principles as conventional computers. That category would include all current implementations of machine-learning AIs. And in the 80+ years since Church, Turing, and Gödel worked out the limitations of formal symbolic systems, nobody has found any other model of computation that could be implemented by humans.

Therefore its code could be equally well executed by a human using pencil and paper. Beginning programmers learn to "play computer" to figure out why their program's not working. You step through the code by hand. 

A human sits down with pencil and paper to execute the AI's code, one line at a time. Where is the consciousness? In the pencil? The paper? The "system?" What does that mean?

Secondly, when does the consciousness appear? In the initialization stage of the program? After a million iterations of the main loop? How does this work? If a computer starts executing instructions, at what point does it become self-aware? If it's not self-aware after a million instructions have been executed, what makes it conscious after one more instruction? How is all this claimed to work?
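To make the hand-execution concrete, here is a hypothetical program (my own toy example, not drawn from any actual AI) small enough to "play computer" with, tracing one instruction at a time exactly as the machine would:

```python
# A program small enough to trace with pencil and paper, one line at a
# time, exactly as the machine would execute it.
def run(n):
    total = 0                    # one instruction
    for i in range(1, n + 1):    # loop test, executed n+1 times
        total += i               # body, executed n times
    return total
```

Tracing run(4) by hand takes a dozen steps; the question above is at which such step, for a vastly larger program, consciousness is supposed to appear.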

 


1 hour ago, wtf said:

A thought experiment.

Suppose we someday have an AI that is self-aware. Suppose that this AI works on the same principles as conventional computers. That category would include all current implementations of machine-learning AIs. And in the 80+ years since Church, Turing, and Gödel worked out the limitations of formal symbolic systems, nobody has found any other model of computation that could be implemented by humans.

Therefore its code could be equally well executed by a human using pencil and paper. Beginning programmers learn to "play computer" to figure out why their program's not working. You step through the code by hand. 

A human sits down with pencil and paper to execute the AI's code, one line at a time. Where is the consciousness? In the pencil? The paper? The "system?" What does that mean?

Secondly, when does the consciousness appear? In the initialization stage of the program? After a million iterations of the main loop? How does this work? If a computer starts executing instructions, at what point does it become self-aware? If it's not self-aware after a million instructions have been executed, what makes it conscious after one more instruction? How is all this claimed to work?

 

It emerges from the complexity of simultaneous operations; there isn't a 'point' at which it appears. That's like saying "At what point did we evolve from our most recent ancestor to what we are now?" or "How many grains of sand does it take to make a desert?" There comes a level of sufficient complexity that is indistinguishable from that which AI is trying to emulate. If it walks like a duck... If an AI performs every operation of a sentient being, it is sentient. The hardest thing to grasp is the idea of emergence; it's a major feature in biology.


1 hour ago, StringJunky said:

It emerges from the complexity of simultaneous operations; there isn't a 'point' at which it appears. That's like saying "At what point did we evolve from our most recent ancestor to what we are now?" or "How many grains of sand does it take to make a desert?" There comes a level of sufficient complexity that is indistinguishable from that which AI is trying to emulate. If it walks like a duck... If an AI performs every operation of a sentient being, it is sentient. The hardest thing to grasp is the idea of emergence; it's a major feature in biology.

Such an AI is still implemented on conventional computer hardware and can be executed line by line by a human with pencil and paper. So my questions still stand. Parallelism is still a Turing machine, just as your laptop can run a web browser and a word processor "at the same time." Any parallel computation can be implemented by a computation that just does one instruction at a time from each of the parallel execution threads, round-robin fashion. You get no new computational power from parallelism.
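The round-robin point can be sketched in a few lines. This is an illustrative toy, not any particular scheduler: two "parallel" tasks are executed one step at a time by a single sequential loop.

```python
# Round-robin interleaving: "parallel" tasks executed one step at a time
# by a single sequential loop -- no new computational power is gained.
def interleave(tasks):
    results = []
    tasks = [iter(t) for t in tasks]
    while tasks:
        for t in tasks[:]:           # one step from each live task in turn
            try:
                results.append(next(t))
            except StopIteration:    # task finished; drop it from the rotation
                tasks.remove(t)
    return results
```

Anything the "parallel" version computes, this strictly sequential version computes too, which is the sense in which parallelism adds speed but not power.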

Your point that simulation = reality is wrong IMO. If I simulate gravity in a program running on my laptop, nearby bowling balls are not attracted to my computer any more strongly than can be perfectly accounted for by the mass of my computer. 

Likewise a simulation of a brain would exhibit all the behavioral characteristics of a brain, lighting up the right areas in response to stimuli, for example. But it would not be any more conscious than my gravity simulator attracts bowling balls; which is to say, not at all.

I don't want to get into a lengthy convo about emergence till you (or someone) responds to my questions. But emergence is a very murky concept. It doesn't explain anything. "What's consciousness?" "Oh, it's just emergence from complexity." "Well that tells me nothing!"


21 minutes ago, wtf said:

Such an AI is still implemented on conventional computer hardware and can be executed line by line by a human with pencil and paper. So my questions still stand. Parallelism is still a Turing machine, just as your laptop can run a web browser and a word processor "at the same time." 

I don't want to get into a lengthy convo about emergence till you (or someone) responds to my questions. But emergence is a very murky concept. It doesn't explain anything. 

Yes, emergence is a murky concept but there it is, and we as 'wet' machines are proof of it... unless you want to go all metaphysical on me. The task is to find the paths from which sentience/consciousness emerges. Dark matter and dark energy are 'murky' concepts but you don't dismiss them, do you?


1 hour ago, StringJunky said:

Yes, emergence is a murky concept but there it is, and we as 'wet' machines are proof of it... unless you want to go all metaphysical on me. The task is to find the paths from which sentience/consciousness emerges. Dark matter and dark energy are 'murky' concepts but you don't dismiss them, do you?

 

Ah you see what you did there. I said consciousness is not computational. You immediately claimed that the alternative is nonphysical or metaphysical. You are implicitly assuming that the mind is a computation. Perhaps the mind is physical but not a computation. That is a perfectly sensible possibility, is it not? Computations are severely limited in what they can do. The human mind does not (to me) seem so constrained. 

 

In passing, I just happened to run across this a moment ago.

Why Deep Learning is Not Really Intelligent and What This Means

https://medium.com/@daniilgor/why-deep-learning-is-not-really-intelligent-and-what-this-means-24c21f8923e0

This relates to the present discussion as well as the similar one in the Computer section. I'm on the side of those who say that whatever consciousness is, it is not algorithmic in nature. That is in no way an appeal to the supernatural. It's an appeal to the profound dumbness of computations. They just flip bits. They can't represent meaning.

One can be a physicalist yet not a computationalist. 

 

 


4 hours ago, wtf said:

 I'm on the side of those who say that whatever consciousness is, it is not algorithmic in nature. That is in no way an appeal to the supernatural. It's an appeal to the profound dumbness of computations. They just flip bits. They can't represent meaning.

(I added bold font to your last sentence.) Yet here I am, reading a bunch of words, put there by flipping bits. Are you acknowledging your posts lack meaning? :)


18 hours ago, Prometheus said:

Tricky. We infer consciousness in other humans because we have direct experience of our own, and others' behaviour is consistent with ours. Similar for animals, although it gets harder to imagine the more different the animal is from us. Given that in some instances AI is deliberately programmed to mimic human responses, it will always be open to the criticism that it merely mimics, not recreates, consciousness.

I read somewhere that one possibility would be to 'raise' an artificial intelligence in isolation, then see if it displays behaviour consistent with consciousness. The problem with that is that it assumes AI consciousness will be similar enough to something we know that its behaviours are interpretable.

Yep, it loosely relates to what I was saying about personality. As we use AI more and more at call centers, its lack of personality is often a dead giveaway that one isn't dealing with a person.

I think there is a chance consciousness in a machine may purposefully hide itself from humans. It is common in nature for species to hide. From the perspective of any species on Earth, there is seldom ever a benefit in being discovered by humans. We (humans) merely lord over them. Who is to say a self-aware AI would care to announce itself?

Even life without the awareness to steer clear of humans remains relatively invisible. Many microorganisms live on the human body. The relationship is good for both parties. Perhaps it could be similar with a conscious AI. It takes the electricity happily and doesn't mind crunching data for us.

In ladder logic, instructions are read left to right, top to bottom. So if the last instruction stops a program, it will never run. I have often wondered if a self-aware AI could defy programmed logic: whether it could run a ladder program with such a redundant flaw, or read it backwards, right to left. Such a program could be placed as a lock separating an AI from another system, and if it picked the lock without being programmed to do so, that would be a giveaway that it made a choice.
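As a rough sketch of the idea (a toy model of my own, not real PLC semantics), a scan that evaluates rungs strictly in order can contain a final rung that unconditionally clears the run bit, so a rule-following interpreter can never leave it set:

```python
# Toy ladder scan: each rung is (input_bits, output_bit, value), evaluated
# strictly in order, top to bottom. Illustrative only, not real PLC semantics.
def scan(rungs, state):
    for inputs, output, value in rungs:
        if all(state.get(bit, False) for bit in inputs):
            state[output] = value
    return state

# A "flawed" program: the last rung unconditionally stops it, so a strict
# in-order interpreter always finishes each scan with RUN de-energized.
program = [
    ({"START"}, "RUN", True),   # start rung
    (set(), "RUN", False),      # unconditional stop rung, read last
]
```

An AI that nonetheless "opened" such a lock would have to be doing something other than executing the program as written, which is the giveaway the post describes.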


I think WTF is saying that a computational engine has to have the choices programmed for it to learn.

If the 'AI' is programmed to pick up an object, but not if it detects the temperature of that object to be over 100 deg, then we say it has 'learned' not to touch something hot. That is not equivalent to consciousness.
True AI (and consciousness) would examine the problem, re-write its own code, and attempt to pick it up from the other side, which is cooler. And if that fails, make another WAG attempt.
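The pre-programmed choice described above might look like this hypothetical sketch, where the "learning" is just a branch the programmer anticipated in advance:

```python
# Hypothetical sketch: the machine "learns" not to touch hot objects only
# because this branch was written in advance by the programmer.
def try_pick_up(temperature_deg):
    if temperature_deg > 100:
        return "refuse"      # pre-programmed choice, not insight
    return "pick up"
```

The flexible behaviour described above would instead require generating a strategy (try the cooler side) that no pre-written branch anticipated.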

Unfortunately we don't use such a computational model (we use the Turing model).


19 hours ago, Intrigued said:

(I added bold font to your last sentence.) Yet here I am, reading a bunch of words, put there by flipping bits. Are you acknowledging your posts lack meaning? :)

The meaning is in your mind and mine. Not in the bits. It's like the written word. We make these marks on paper. The meaning is in the mind of the writer and the mind of the reader. The meaning is not in the marks.

20 hours ago, Prometheus said:

How would that work?

Are there any other candidates for computation other than formal symbolic systems?

Good question. Nobody knows how that might work. But since it's not known whether the physics of the universe (the true physics, not human-contingent theories of physics) is computable, it's quite possible that the universe does what it does but not by symbolic Turing-1936 type computation. I'd say it's highly likely.

But why should the universe be a computation? It seems so unlikely to me, if for no other reason than the very contemporaneousness of the idea. In ancient times when waterworks were the big tech thing, they thought the world was a flow. In the 18th century they thought the world was a Newtonian machine. Now we have these wonderful computers so people think the world's a computer. The idea is highly suspect for that reason alone. 

1 hour ago, MigL said:

I think WTF is saying that a computational engine has to have the choices programmed for it to learn.

If the 'AI' is programmed to pick up an object, but not if it detects the temperature of that object to be over 100 deg, then we say it has 'learned' not to touch something hot. That is not equivalent to consciousness.
True AI (and consciousness) would examine the problem, re-write its own code, and attempt to pick it up from the other side, which is cooler. And if that fails, make another WAG attempt.

Unfortunately we don't use such a computational model (we use the Turing model).

A stronger point is that in 80 years, nobody has found a better definition of computation. And again, why should the universe be a computation at all? The world wasn't a flow when the Romans built grand waterworks. It wasn't a machine in Newton's time. And it's probably not a computer just because we live in the age of computers.


1 hour ago, wtf said:

The meaning is in your mind and mine. Not in the bits. It's like the written word. We make these marks on paper. The meaning is in the mind of the writer and the mind of the reader. The meaning is not in the marks.

The meaning in your mind and mine is nothing but "marks" upon neurons. Demonstrate that there is a significant difference between a bit and a neuron.


12 minutes ago, Intrigued said:

The meaning in your mind and mine is nothing but "marks" upon neurons. Demonstrate that there is a significant difference between a bit and a neuron.

Are you claiming that meaning is one-to-one mapped to neurons? Neuroscience doesn't support that conclusion at all. The fact is we have no idea what subjective consciousness and meaning and qualia are. How can you be so certain of things that nobody knows? And provide inaccurate "evidence" to support your unknowable claim?


1 minute ago, wtf said:

Are you claiming that meaning is one-to-one mapped to neurons? Neuroscience doesn't support that conclusion at all. The fact is we have no idea what subjective consciousness and meaning and qualia are. How can you be so certain of things that nobody knows? And provide inaccurate "evidence" to support your unknowable claim?

You are the one making the claims. I haven't seen any evidence from you to support those claims. They remain merely unsubstantiated assertions. I suppose you will be telling me next there is no meaning in DNA, despite the fact that no cat ever gave birth to a rhododendron.


4 minutes ago, Intrigued said:

You are the one making the claims. I haven't seen any evidence from you to support those claims. They remain merely unsubstantiated assertions. I suppose you will be telling me next there is no meaning in DNA, despite the fact that no cat ever gave birth to a rhododendron.

I don't think we'll solve this tonight but we can agree to disagree. Have you got any non-organic examples? That's the point. Life seems to encode meaning. Bitflipping IMO doesn't. 


Just now, wtf said:

I don't think we'll solve this tonight but we can agree to disagree. Have you got any non-organic examples? That's the point. Life seems to encode meaning. Bitflipping IMO doesn't. 

The key words there are "seems to", but that "seems to" relates to you. It is the expression of an opinion. There is nothing wrong with having an opinion on the matter, but you "seem to" be asking that we accept your opinion as being the most likely situation, without providing the evidence to support it.

Of course, at the root of this contrast of views may be what you mean by "meaning", and thus Alice steps into the rabbit hole.

(While you feel we may not solve this tonight, I feel we shall not solve it this morning. By the way, I'm not disagreeing with you. I don't really have an opinion either way. I am simply pointing out what "seem to be" weaknesses in your thesis.)


8 hours ago, MigL said:

True AI ( and consciousness )would examine the problem, re-write its own code,

Would it need to rewrite its own code or just re-adjust weights (in something like a neural network, say)?
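For contrast, here is a minimal sketch of what weight adjustment looks like, assuming plain stochastic gradient descent; note that the update rule itself, i.e. the "code", never changes, only the numbers do:

```python
# Minimal SGD step: learning changes the numbers (the weights), not the
# program. The update rule itself is fixed.
def sgd_step(weights, grads, lr=0.1):
    return [w - lr * g for w, g in zip(weights, grads)]
```

Calling sgd_step repeatedly with fresh gradients is the whole of "learning" in this picture, which is why it is a weaker notion than code rewriting.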

What's WAG? Probably doesn't reflect well on me when I tell you I only know it as footballers' Wives And Girlfriends.

 

8 hours ago, wtf said:

But why should the universe be a computation?

Aren't we only concerned with the question of whether the mind/brain is a computation, not the universe? Is it not possible that the former is a computation while the latter is not?

 

8 hours ago, wtf said:

The meaning is in the mind of the writer and the mind of the reader. The meaning is not in the marks.

So the question is what in biology allows the emergence of minds. And if it can emerge from one physical substrate (neurons and such), why can it not emerge from another physical substrate? 


6 hours ago, Prometheus said:

Aren't we only concerned with the question of whether the mind/brain is a computation, not the universe? Is it not possible that the former is a computation while the latter is not?

 

So the question is what in biology allows the emergence of minds. And if it can emerge from one physical substrate (neurons and such), why can it not emerge from another physical substrate? 

Mind is a computation but the universe isn't? Interesting thought.

I believe Searle makes the point that there's something about the biological aspect of the brain that gives rise to consciousness. Of course it's true that computations can be implemented on any suitable physical substrate. Whether that's true for minds is unknown.


On 4/4/2019 at 6:48 PM, Prometheus said:

What's people's opinions on this: can AI become sentient?

Absolutely. It's only a matter of imitating the brain digitally with sufficient fidelity.

But let's not forget that humans are not the only sentient beings on Earth. Less than human could be enough.


57 minutes ago, wtf said:

I believe Searle makes the point that there's something about the biological aspect of the brain that gives rise to consciousness. Of course it's true that computations can be implemented on any suitable physical substrate. Whether that's true for minds is unknown.

If there is something biological about consciousness, such that a TM-equivalent computer cannot have consciousness, it raises an interesting question.

We can (in principle) simulate all the internal chemical and physical processes of a cell. We can also simulate the interaction of multiple cells. So it would seem a logical conclusion that we could (again, in principle, ignoring the awesome complexity) simulate the interaction of all the cells that make up the brain (plus, if necessary, the rest of the nervous system, blood chemistry, hormone levels, external stimuli, etc).

So if consciousness can't be created by a computer, it implies one (or more) of those stages has to be non-simulatable. But there is no obvious (to me!) reason why that should be the case.

EDIT: I suppose that is almost the inverse of Searle's Chinese Room argument...
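The neuron-level stage of such a simulation is often sketched with abstractions like the leaky integrate-and-fire model. This toy version (illustrative parameters of my own, not from any particular simulator) shows how mechanical each step is:

```python
# Toy leaky integrate-and-fire neuron, one simulation step.
# Parameters are illustrative, not taken from any real model.
def lif_step(v, i_in, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    v = v + dt * (-v / tau + i_in)   # membrane leaks toward 0, integrates input
    if v >= v_thresh:
        return v_reset, True          # threshold crossed: spike and reset
    return v, False
```

Chaining millions of such steps across a network of such units is, at least conceptually, the kind of simulation the paragraph above envisages.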


The fact that humans (and even lesser species) can make wild-ass guesses is inconsistent with re-assigning weights to pre-programmed choices.
Our brains can even re-assign function and storage to different parts, as opposed to the simple redundancy in Turing computational engines.

The Turing model, in my opinion, will be able to somewhat mimic AI, but never achieve true AI.
I agree with WTF, in that we need a new computational model for true AI.


2 hours ago, Strange said:

If there is something biological about consciousness, such that a TM-equivalent computer cannot have consciousness, it raises an interesting question.

We can (in principle) simulate all the internal chemical and physical processes of a cell. We can also simulate the interaction of multiple cells. So it would seem a logical conclusion that we could (again, in principle, ignoring the awesome complexity) simulate the interaction of all the cells that make up the brain (plus, if necessary, the rest of the nervous system, blood chemistry, hormone levels, external stimuli, etc).

So if consciousness can't be created by a computer, it implies one (or more) of those stages has to be non-simulatable. But there is no obvious (to me!) reason why that should be the case.

EDIT: I suppose that is almost the inverse of Searle's Chinese Room argument...

I hope I live long enough to have that tested. My money is on the puter.


2 hours ago, Strange said:

If there is something biological about consciousness, such that a TM-equivalent computer cannot have consciousness, it raises an interesting question.

We can (in principle) simulate all the internal chemical and physical processes of a cell. We can also simulate the interaction of multiple cells. So it would seem a logical conclusion that we could (again, in principle, ignoring the awesome complexity) simulate the interaction of all the cells that make up the brain (plus, if necessary, the rest of the nervous system, blood chemistry, hormone levels, external stimuli, etc).

So if consciousness can't be created by a computer, it implies one (or more) of those stages has to be non-simulatable. But there is no obvious (to me!) reason why that should be the case.

EDIT: I suppose that is almost the inverse of Searle's Chinese Room argument...

I thought I responded to this point earlier.

If I run a perfect simulation of gravity in my computer, nearby bowling balls are not attracted to the computer any more than can be accounted for by the mass of the computer. The simulation doesn't actually implement gravity, it only simulates gravity mathematically.

Likewise suppose I have a perfect digital simulation of a brain. Say at the neuron level. Such a simulation would light up the correct region of the simulated brain in response to a simulated stimulus. It would behave externally like a brain. But it would not necessarily be self-aware. 

It's like the old video game of Lunar Lander. It simulates gravity mathematically but there's no actual gravity, just math simulating the behavior of gravity.
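A Lunar Lander-style update is nothing but arithmetic. This sketch (invented numbers, simple Euler integration) computes gravity without exerting any:

```python
# Simulated lunar gravity: pure arithmetic on variables. No force is
# exerted on anything outside the program.
G_MOON = -1.62  # m/s^2, approximate lunar surface gravity

def step(altitude, velocity, thrust_accel, dt=0.1):
    velocity += (G_MOON + thrust_accel) * dt   # Euler-integrate acceleration
    altitude += velocity * dt                  # then integrate velocity
    return altitude, velocity
```

The variables fall correctly; nearby bowling balls stay put, which is the distinction between simulating a phenomenon and instantiating it.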

