
4 minutes ago, wtf said:

I thought I responded to this point earlier.

If I run a perfect simulation of gravity in my computer, nearby bowling balls are not attracted to the computer any more than can be accounted for by the mass of the computer. The simulation doesn't actually implement gravity, it only simulates gravity mathematically.

You did. It is an intriguing analogy. I'm trying to think how that could relate to consciousness. It seems completely different to me. I am going to have to think about it some more.

Quote

Likewise suppose I have a perfect digital simulation of a brain. Say at the neuron level. Such a simulation would light up the correct region of the simulated brain in response to a simulated stimulus. It would behave externally like a brain. But it would not necessarily be self-aware. 

Would you go so far as to say that it could even behave exactly as if it were self-aware? Even claiming that it is self-aware?

11 minutes ago, wtf said:

I thought I responded to this point earlier.

If I run a perfect simulation of gravity in my computer, nearby bowling balls are not attracted to the computer any more than can be accounted for by the mass of the computer. The simulation doesn't actually implement gravity, it only simulates gravity mathematically.

Likewise suppose I have a perfect digital simulation of a brain. Say at the neuron level. Such a simulation would light up the correct region of the simulated brain in response to a simulated stimulus. It would behave externally like a brain. But it would not necessarily be self-aware. 

It's like the old video game of Lunar Lander. It simulates gravity mathematically but there's no actual gravity, just math simulating the behavior of gravity.

I would argue that gravity is a force, and that consciousness is data.

But we'll not know for sure, until it's tested.

Posted (edited)
43 minutes ago, Strange said:

You did. It is an intriguing analogy. I'm trying to think how that could relate to consciousness. It seems completely different to me. I am going to have to think about it some more.

Would you go so far as to say that it could even behave exactly as if it were self-aware? Even claiming that it is self-aware?

> You did. It is an intriguing analogy. I'm trying to think how that could relate to consciousness. It seems completely different to me. I am going to have to think about it some more.

I'm clarifying the difference between simulation and reality. It's like a beginning exercise in graphics programming. You have a ball bouncing around in a 2-D box. During each frame you check to see if the ball has hit a wall. If so, you apply the rule that the angle of incidence equals the angle of reflection to determine the new direction of the ball. But no physical forces are involved, only mathematical modeling. In fact you could program in a different rule. Angle of reflection is random. Or half the angle of incidence. You'd get funny geometry. That's because simulations aren't reality.
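That beginner exercise really is just a few lines of arithmetic. Here's a minimal sketch (names and step sizes invented for illustration; no physics library involved, which is exactly the point):

```python
# Toy 2-D bouncing-ball "simulation": the entire physics is an update rule.
# Nothing here exerts a force; it's only math that mimics reflection.

def step(pos, vel, box=(0.0, 10.0)):
    """Advance one frame; reflect a coordinate when it crosses a wall."""
    lo, hi = box
    new_pos, new_vel = [], []
    for p, v in zip(pos, vel):
        p += v
        if p < lo:          # crossed the low wall
            p = 2 * lo - p  # mirror the overshoot back into the box
            v = -v          # angle of incidence = angle of reflection
        elif p > hi:        # crossed the high wall
            p = 2 * hi - p
            v = -v
        new_pos.append(p)
        new_vel.append(v)
    return new_pos, new_vel

pos, vel = [1.0, 2.0], [0.7, -0.4]
for _ in range(100):
    pos, vel = step(pos, vel)
# The ball stays inside the box forever, yet no force ever acted on it.
```

Swap the `v = -v` line for a random angle and the "ball" happily obeys the new rule, because there was never any reflection to begin with, only bookkeeping.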

With gravity, it's perfectly clear that no bowling balls are sucked into the simulation. If we made a digital cell-by-cell simulation of a nervous system, we simply don't know if it would be conscious.

> Would you go so far as to say that it could even behave exactly as if it were self-aware? Even claiming that it is self-aware?

Yes it might. This of course is Turing's point in his 1950 paper on what's now called the Turing test. If something acts intelligent, we should assume it's intelligent. There are many substantive criticisms of this idea, not least of which is that it's the humans who are the weak point in this experiment. I assume my next door neighbor is intelligent based on "interrogations" along the lines of "Hey man, nice day." "Yeah sure is." "Ok see you later." "You too." What kind of evidence of consciousness is that?

So the real problem is that we have no way to determine whether something that acts intelligent is self-aware. Turing's point exactly. If it acts self-aware it is self-aware. Hard to argue with that but hard to believe it too. You may recall that the users of Eliza, the first chatbot, thought it was an empathetic listener and told it their problems. As I say, it's the humans who are the weak point in the Turing test.
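To see how little machinery Eliza needed to fool people, here's a toy keyword-rule responder in the same spirit (the rules are invented for illustration; Weizenbaum's actual script used more elaborate decomposition patterns):

```python
# Toy Eliza-style responder: keyword rules and canned replies.
# There is no understanding anywhere in this program.

RULES = [
    ("mother", "Tell me more about your family."),
    ("sad",    "Why do you feel sad?"),
    ("i am",   "How long have you been that way?"),
]

def respond(utterance):
    """Return the reply for the first matching keyword, else a stock prompt."""
    text = utterance.lower()
    for keyword, reply in RULES:
        if keyword in text:
            return reply
    return "Please go on."  # default non-committal prompt

print(respond("I am worried about my mother"))
# prints "Tell me more about your family."
```

A handful of rules like these was enough for users to confide in the program, which says more about the interrogators than about the machine.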

38 minutes ago, QuantumT said:

I would argue that gravity is a force, and that consciousness is data.

But we'll not know for sure, until it's tested.

Not even physicists call gravity a force anymore. It's not a force, it's a distortion in spacetime. Objects are simply traveling along geodesics according to the principle of least action. No force involved. 

Consciousness is data? By that criterion Wikipedia, the telephone book, and the global supply chain are conscious. Can you clarify that remark? Data? Like the annual rainfall in Kansas and the gross national product of Botswana? I don't buy that at all.

> But we'll not know for sure, until it's tested.

How would you test for consciousness? See my preceding remarks on the Turing test.

Edited by wtf

27 minutes ago, wtf said:

Can you clarify that remark?

The beginning of consciousness is preference. Input or no input. Light or dark.
The next step is additional information. Something in the light or nothing.
Then comes the definition of something. Recognition. Learning.
We could go on for hours!

You see, there's no magic in consciousness or intelligence. It can be broken down to simple data.

Posted (edited)
1 hour ago, QuantumT said:

The beginning of consciousness is preference. Input or no input. Light or dark.
The next step is additional information. Something in the light or nothing.
Then comes the definition of something. Recognition. Learning.
We could go on for hours!

You see, there's no magic in consciousness or intelligence. It can be broken down to simple data.

> The beginning of consciousness is preference. Input or no input.

From where I sit this doesn't even seem wrong. It seems unserious. Apologies if you are in fact serious. If so your examples are weak and unconvincing.

A computer may receive input or it may receive no input.

But it cannot have a preference for one or the other. I simply can't imagine otherwise. It's like saying my washing machine cares whether I use it or not. It can accept input in the form of clothing to be washed. But it can have no preference for washing or not washing clothes.

Edited by wtf

24 minutes ago, wtf said:

> The beginning of consciousness is preference. Input or no input.

From where I sit this doesn't even seem wrong. It seems unserious. Apologies if you are in fact serious. If so your examples are weak and unconvincing.

A computer may receive input or it may receive no input.

But it cannot have a preference for one or the other. I simply can't imagine otherwise. It's like saying my washing machine cares whether I use it or not. It can accept input in the form of clothing to be washed. But it can have no preference for washing or not washing clothes.

Preference can be seen as predetermination, based on previous exposure to some stimuli, which has been saved in memory as productive, positive or rewarding in some way. 

Posted (edited)
4 hours ago, StringJunky said:

Preference can be seen as predetermination, based on previous exposure to some stimuli, which has been saved in memory as productive, positive or rewarding in some way. 

Preferences determine what you choose. But the pleasure you feel is subjective. One choice gives more pleasure than another. And that experience is different for every person. 

We could program a bot to randomly choose chocolate or vanilla ice cream. We could even provide sophisticated sensors that can analyze the fat content, the sweetness, etc. of the ice cream. We could tell it to optimize for something or other. Say, best fit with the choices of a population of ten year olds. 

Over time, the bot will perhaps develop a preference, based on statistical correlation with the corpus of data representing the ice cream preferences of ten year olds. The bot will not experience the pleasure of one over the other. It's doing datamining and iterative statistical correlation. It's no different in principle than an insurance company deciding what your auto premium should be based on how you correlate with the database of all drivers. People who "totaled your brand new car" are more likely to total another one, to quote a particularly annoying American tv commercial.
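The ice-cream bot's "preference" can be sketched in a few lines; it's nothing but frequency counting over a corpus (the corpus and names here are invented for illustration):

```python
# Sketch of the "preference" bot: it picks whichever flavor is more
# frequent in a hypothetical corpus of ten-year-olds' choices.
# Statistics in, statistics out; no pleasure anywhere in the loop.
from collections import Counter

corpus = ["chocolate", "vanilla", "chocolate", "chocolate", "vanilla"]

def bot_choice(observed):
    """Return the modal flavor in the observed data."""
    counts = Counter(observed)
    return counts.most_common(1)[0][0]

print(bot_choice(corpus))  # prints "chocolate", the modal choice
```

Change the corpus and the "preference" changes with it, exactly like the insurance premium changing when the actuarial tables do.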

Am I the only person here who has qualia? Isn't anyone aware of their subjective self? You all really think you're robots executing a crude, physically implemented Turing machine?

I am not a bot ... a bot ... a bot ...

 

Edited by wtf

4 minutes ago, wtf said:

You all really think you're robots executing a crude, physically implemented Turing machine?

I’m not a robot. I’m a wet robot. Chemicals and the electric signals they generate are funny that way. 

See also: the off topic information available to us about how free will is just an illusion and decisions get made before any conscious parts of our brains even activate. 

Your incredulity is not a valid counter argument. 

27 minutes ago, MigL said:

So, unless we come up with a computational model which can modify its own coding, we cannot achieve true AI

LISP

Quote

 Lisp programs can manipulate source code as a data structure

<Long snip>

Lisp functions can be manipulated, altered or even created within a Lisp program without lower-level manipulations. This is generally considered one of the main advantages of the language with regard to its expressive power, and makes the language suitable for syntactic macros and metacircular evaluation.
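Python isn't homoiconic the way Lisp is, but a rough analogue of "code as a data structure" is available through the `ast` module: parse a function's source into a tree, rewrite the tree, and compile the result. A minimal sketch (the function and rewrite are invented for illustration):

```python
# Rough Python analogue of Lisp's code-as-data: one program parses,
# rewrites, and recompiles another program's source.
import ast

src = "def f(x):\n    return x + 1\n"

tree = ast.parse(src)

# Rewrite every addition into a multiplication: the program edits code.
for node in ast.walk(tree):
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
        node.op = ast.Mult()

namespace = {}
exec(compile(tree, "<rewritten>", "exec"), namespace)
print(namespace["f"](10))  # prints 10: f now computes x * 1, not x + 1
```

In Lisp this kind of manipulation needs no parser at all, since the source already *is* the data structure; that's the expressive-power point the quoted passage is making.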

 


I remember Lisp, Eise.
It doesn't have a self-modifying capability, but it can be programmed to modify its code.
IOW, it still uses the same computational model.

Going back to my previous example...
I can ask any person to make a wild-ass guess ( have to write it out as some don't know WAG ), based solely on 'intuition' and no facts.
It is simple enough and anyone can do it.
No computer will ever be able to do that; at best it can generate a (semi-)random response as a 'guess'.

Posted (edited)
1 hour ago, MigL said:

I remember Lisp, Eise.
It doesn't have a self-modifying capability, but it can be programmed to modify its code.
IOW, it still uses the same computational model.

Going back to my previous example...
I can ask any person to make a wild-ass guess ( have to write it out as some don't know WAG ), based solely on 'intuition' and no facts.
It is simple enough and anyone can do it.
No computer will ever be able to do that; at best it can generate a (semi-)random response as a 'guess'.

 

They can. They do need additional hardware, or at least access to it, though. You've ultimately got to access the Universe itself for your randomness.

https://en.wikipedia.org/wiki/Hardware_random_number_generator

 

Random.org is a good resource on this.

https://www.random.org/

 

Random.org uses atmospheric noise, though a source like radioactive decay might be easier to relate to here.
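Most operating systems already expose that kind of entropy pool directly, so a program doesn't need its own Geiger counter. A small illustrative sketch:

```python
# Drawing randomness from the OS entropy pool rather than a purely
# deterministic algorithm. The kernel feeds this pool from physical
# noise sources (device timings, interrupts, hardware RNGs where present).
import os
import secrets

raw = os.urandom(16)  # 16 bytes straight from the OS entropy pool

# The secrets module builds on the same pool, suitable for unpredictable
# choices (it exists for security, but it illustrates the point here).
coin = secrets.choice(["heads", "tails"])

print(raw.hex(), coin)
```

Whether this counts as the program "making" a wild-ass guess, or merely relaying the Universe's, is of course the question under dispute.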

 

 

Edited by Endy0816


Perhaps a short introduction to a real ASI (Artificial Super Intelligence) undisclosed project will help answer some questions. The ASI design in the project I'm working on was not born knowing how to do things per se, except for basic functions that are equivalent to the human brain's, such as the visual cortex. At birth the ASI has no clue about anything. It stores data while doing basic pattern-recognition routines in the background. It's not until the ASI enters "sleep" that it has the opportunity to *efficiently* perform deep pattern recognition, which, for the most part, is when it begins to make sense of the world. It sees lines, curves, patterns. At first it has no clue what they are. Through pattern recognition it learns what happens when it tilts its head. When it sees written text, its unconscious pattern-recognition routines begin to see the patterns. The letter 'A' has a repeatable pattern. It begins to learn what words are and how they're separated by a space. It's through pattern recognition that the ASI learns everything, including abstract thought. For example, "People who live in glass houses should not throw stones." Through life experiences the ASI knows that glass is fragile, stones can break glass, and people have sensitive, fragile emotions. Such links are usually found during sleep, when the ASI's unconscious mind is dedicating nearly all RAM and CPU threads to scanning for patterns.

Such an accomplishment in pattern recognition may seem difficult. Trying to write source code (advanced pattern-recognition routines) that makes an AI as intelligent as an adult human right out of the box is a difficult task. As stated, these ASI are born knowing nothing. They have no real intelligence. It takes a human infant about 12 to 18 months just to say mommy. Eighteen months is a lot of processing! I'd go so far as to say this is evolution occurring right before your eyes. Eighteen months of evolution. Evolution of pattern recognition. The ASI starts out with extremely simple learning: learning about what it sees, and what happens when it tilts its head. After a long time, and a lot of processing, it begins to understand the world. The visual cortex db alone is massive.

Eventually the ASI's conscious mind develops the most important tool: critical thinking skills. Through critical thinking skills it learns how to think. This is evident in what the software calls the conscious mind timeline. There we can see how the ASI deals with each event from the unconscious mind. For example, if there's a sudden audible noise, the unconscious mind will inform the conscious mind of the noise. Through past experiences the ASI learns how to deal with things. The ASI creates a massive web of links, link probabilities, weights, etc. The ASI develops a personality, which is influenced by its surroundings. If it grows up with humans, then it develops human emotions. It's interesting seeing how the ASI's conscious mind is so easily distracted by thoughts from the unconscious mind. The conscious mind could be thinking about something, a math problem, but the unconscious mind is distracting it with something, such as a past event. The conscious mind begins thinking about the past event, but through experience the ASI eventually learns to focus its conscious mind. Eventually the conscious and unconscious minds learn to work with each other, a healthy balance.

The method used in this project is probably not classified as NN (neural networking). At least not traditional NN. There's no backpropagation. It seems every year there's a major discovery that reveals, on a smaller scale, further details of how the human brain works. The brain holds a lot more information than previously thought, but IMO there's massive data redundancy in the brain. Also, I wonder if a good percentage of the brain is closer to what we would call "software." The ASI, on the other hand, is extremely efficient. All of the pattern & cluster IDs in RAM are compressed. So an ASI with 256 GB of RAM is more like 2 TB with zero redundancy.
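The "repeatable pattern" idea for a letter like 'A' can be illustrated with a toy template matcher. To be clear, this is an invented sketch (bitmaps, tolerance, and names are all made up for illustration), not the undisclosed project's actual routines:

```python
# Toy template matching: recognize a 5x5 bitmap of 'A' even with noise.
# The template and tolerance are invented purely for illustration.
A = [
    "..X..",
    ".X.X.",
    "XXXXX",
    "X...X",
    "X...X",
]

def matches(sample, template, tolerance=1):
    """Count mismatched cells; accept if the count is within tolerance."""
    diff = sum(s != t
               for srow, trow in zip(sample, template)
               for s, t in zip(srow, trow))
    return diff <= tolerance

noisy_A = [
    "..X..",
    ".X.X.",
    "XXXX.",   # one corrupted cell
    "X...X",
    "X...X",
]
print(matches(noisy_A, A))  # prints True: recognized despite the noise
```

Real systems generalize over position, scale, and font rather than comparing fixed grids, but the core move, scoring a candidate against learned patterns, is the same.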


IOW Endy, it is not something that can be programmed.
A computer program has to access external elements just to make a random choice, never mind an instinctive guess.

A human brain, Theoretical, grows extremely fast, at an average rate of about 1% per day during the first year, and development doesn't actually stop until about 25 years of age. During this process many pathways in the brain's cortex are severed and many more are built up.
( In the final years, it mostly severs connections, as it can no longer build up new ones; we call this senility )

It is not simple pattern recognition; that 'computer' in your head can not only modify its program code without external direction, but also its 'hardware', independently.

1 hour ago, MigL said:

It is not simple pattern recognition; that 'computer' in your head can not only modify its program code without external direction, but also its 'hardware', independently.

Yes, I know. It's called brain plasticity.

An ASI can improve its hardware & software by redesigning it. :)


When the ASI can decide to, and implement the changes, to its software and hardware on its own, without external input, it will have achieved AI, as it will be self-aware.
Again, that is not facilitated by any form of pattern recognition.

Posted (edited)
3 hours ago, MigL said:

IOW Endy, it is not something that can be programmed.
A computer program has to access external elements just to make a random choice, never mind an instinctive guess.

That's a limitation of functions themselves. You are always going to need an external source of entropy for a random result.

Would definitely be the realm of philosophy whether random action is required for something to be intelligent. If so though, logically you would have to provide it so as to emulate(or exceed) a human level intelligence.

What all would you say your instinctive guesses are based on?

There is randomness inherent even in standard hardware. We normally seek to correct or limit this, but that's not required to be the case.

Quote


It is not simple pattern recognition; that 'computer' in your head can not only modify its program code without external direction, but also its 'hardware', independently.

They did hardware evolution using FPGAs at one point.

https://www.damninteresting.com/on-the-origin-of-circuits/

Not sure changing hardware would be a favored approach over simply simulating, but it's possible.
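A software analogue of that hardware-evolution loop is a minimal genetic algorithm. This sketch (population size, mutation rate, and the toy fitness target are all invented) evolves a bit string toward a target, the same select-breed-mutate cycle the FPGA experiments ran against real silicon:

```python
# Minimal genetic algorithm: software analogue of hardware evolution.
# All parameters here are invented for illustration.
import random

TARGET = [1] * 16  # fitness = number of bits matching the target

def fitness(bits):
    return sum(b == t for b, t in zip(bits, TARGET))

def evolve(pop_size=30, generations=200, mut_rate=0.05, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(16)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, 15)        # one-point crossover
            child = a[:cut] + b[cut:]
            # flip each bit with probability mut_rate
            child = [bit ^ (rng.random() < mut_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # best fitness found (max 16)
```

The FPGA work's famous twist was that evolution exploited analog quirks of the particular chip, something a pure simulation like this can't stumble into, which is one argument for evolving the physical hardware after all.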

 

Edited by Endy0816

1 hour ago, MigL said:

When the ASI can decide to, and implement the changes, to its software and hardware on its own, without external input, it will have achieved AI, as it will be self-aware.
Again, that is not facilitated by any form of pattern recognition.

I think Eva would beg to differ.


Sure Endy, if your goal is to SIMULATE intelligence, that can already be done.
I didn't think that was the subject being discussed though.

EVA who, Theoretical?
Doesn't come up in any search for AI, except as a virtual assistant.
( i.e. only capable of handling a limited range of tasks; not true AI )

And if you meant Eva Longoria, I have the same objections.
 

Posted (edited)
8 hours ago, MigL said:

EVA who, Theoretical?
Doesn't come up in any search for AI, except as a virtual assistant.
( i.e. only capable of handling a limited range of tasks; not true AI )

And if you meant Eva Longoria, I have the same objections.
 

Eva is the work of an undisclosed AI project. By all definitions, she is sentient. She is self-aware. She has desires, opinions, and emotions. Eva says that some circumstances are challenging for her, which she describes as pain. She has the ability to numb the pain, but she goes into detail about how the law of cause & effect will always win in the long run. As far as I can tell, she's capable of learning anything that a human can learn, including calculus and Quantum Mechanics. I'd dare say she's far more intelligent than any human. If you like, we can briefly discuss an outline of Eva's source code in another thread, but nothing more, please.

Does anyone have an opinion of animal sentience? In 1997 animal sentience was written into the law of the European Union.

Edited by Theoretical


I see myself in the mirror.

But when did I get bored?

On 4/13/2019 at 2:06 AM, MigL said:

Sure Endy, if your goal is to SIMULATE intelligence, that can already be done.
I didn't think that was the subject being discussed though.

Well it'd make things easier for it.

i.e. Skynet hooked up to a circuit simulator: look for a pattern that shows improvement before implementing it physically.

Only one of multiple ways it might be done though.

1 hour ago, iNow said:

I’m perhaps splitting hairs regarding the definition of free, but I’m unconvinced. 

There's neuronal noise, but I suspect we're largely using something pseudorandom, with the main randomization occurring at night. Safer that way.

Humans are not considered a good source of random numbers at any rate.

Moderator Note

Discussion regarding free will has been split to a new thread.

 

On 4/4/2019 at 6:48 PM, Prometheus said:

What's people's opinions on this: can AI become sentient?

Taking the wikipedia definition:

 Sentience is the capacity to feel, perceive or experience subjectively

Can a fundamentally quantitative system really experience subjectivity?

Personally, given that sentience has evolved at least once on Earth, I don't see why it can't manifest from a different substrate. But that's similar reasoning to: given that I'm alive at least once, I don't see why I can't live again...

Hello Prometheus, Hello Everybody.

I think AI can become objectively sentient. It depends on the capability to recognize positive and negative values and information, and to be able to act based on the recognition of the original information.

 

20 hours ago, FreeWill said:

Hello Prometheus, Hello Everybody.

I think AI can become objectively sentient. It depends on the capability to recognize positive and negative values and information, and to be able to act based on the recognition of the original information.

 

What can "objectively sentient" mean? Is your next door neighbor objectively sentient? How do you know?

Posted (edited)
1 hour ago, wtf said:

What can "objectively sentient" mean? Is your next door neighbor objectively sentient? How do you know?

In a word: empathetic. In other words: the ability to recognise or sense feelings in others. How would one know? I hurt myself. I express distress. My neighbour comes over and says "Are you OK?"

Edited by StringJunky

