
Evidence for Strong AI


Bernd.Brincken


Machine learning employs three primary approaches: unsupervised techniques, in which structure in data sets (visual, audio or text) is sought without any additional information; supervised learning, in which a label is associated with each training instance (e.g. a cat picture is labelled 'cat'); and reinforcement learning, in which a score is used to optimise an algorithm (often used in gaming, and in anything that can apply some metric to the state the algorithm sees at any instant).
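To make those three paradigms concrete, here is a minimal sketch of the first two, assuming scikit-learn and NumPy are installed and using synthetic data invented purely for illustration (reinforcement learning is sketched after the next paragraph):

```python
import numpy as np
from sklearn.cluster import KMeans                    # unsupervised
from sklearn.linear_model import LogisticRegression   # supervised

rng = np.random.default_rng(0)
X = rng.random((100, 2))                 # 100 unlabeled 2-D instances
y = (X[:, 0] > 0.5).astype(int)          # synthetic labels ('cat' / 'not cat')

# Unsupervised: seek structure in the data with no labels at all.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)

# Supervised: learn the mapping from each instance to its label.
model = LogisticRegression().fit(X, y)
print(model.predict(X[:5]), clusters[:5])
```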

A robot may then learn to walk by the experience of continually falling down via reinforcement learning. No words are needed, only a sense of balance.
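A minimal sketch of that idea - tabular Q-learning on a toy, invented balance task, not any real robot - in which the only feedback is a reward for staying upright:

```python
import random

# Toy environment: state = discretised tilt angle; reaching either edge
# means "fallen over". No words are involved - only the reward signal.
N_STATES = 11
ACTIONS = (-1, +1)                       # nudge left / nudge right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.95, 0.1       # learning rate, discount, exploration

for episode in range(500):
    s = N_STATES // 2                    # start roughly upright
    for step in range(200):              # cap episode length
        if random.random() < eps:        # explore
            a = random.choice(ACTIONS)
        else:                            # exploit current estimate
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(N_STATES - 1, max(0, s + a + random.choice((-1, 0, 1))))
        fallen = s2 in (0, N_STATES - 1)
        r = 0.0 if fallen else 1.0       # one point per step spent upright
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2
        if fallen:
            break                        # fell down - try again next episode
```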


30 minutes ago, Bernd.Brincken said:

For training you need some kind of medium or language to transfer knowledge from trainer to trainee.
As described under 'interpersonal communication' (WP), many signals are not expressed in words, and may not even be expressible in language.
Experience does not need this.
A (human / AI) 'intelligent' system can interpret such a signal from its own experience of the same situation.

Training can use exactly the same techniques as "experience".

Training and experience are the same thing.


2 hours ago, Strange said:

Training can use exactly the same techniques as "experience".

Training and experience are the same thing.

Mixing the terms in this way does not make things any easier for the AI cause 😉

There are non-language interpersonal signals in human relations which rely on similar experiences:
joy, grief, fear, success, embarrassment, mobbing (bullying), humor, desperation, optimism, etc.
How do you imagine an AI gathering (or being trained on) these experiences, in order to be able to understand humans?

2 hours ago, Prometheus said:

Machine learning employs .. unsupervised techniques .. supervised learning .. and reinforcement learning ..

A robot may then learn to walk by the experience of continually falling down via reinforcement learning. No words are needed, only a sense of balance.

Yes, this works.
But what is the relation to human-human (or human-AI) interpersonal communication?
Walking either works or it does not. As long as I fall, I have to try again, sensibly with modified movements.
In interpersonal communication - and further, in society - there is no direct, binary success feedback of this kind.


1 hour ago, Bernd.Brincken said:

There are non-language interpersonal signals in human relations which rely on similar experiences:
joy, grief, fear, success, embarrassment, mobbing (bullying), humor, desperation, optimism, etc.
How do you imagine an AI gathering (or being trained on) these experiences, in order to be able to understand humans?

Some of these may be instinctive in humans. (Although I don't think that is certain.) In which case, you build the same knowledge into the AI.

Others are learned by experience. And an AI could do the same.

1 hour ago, Bernd.Brincken said:

Yes, this works.
But what is the relation to human-human (or human-AI) interpersonal communication?
Walking either works or it does not. As long as I fall, I have to try again, sensibly with modified movements.
In interpersonal communication - and further, in society - there is no direct, binary success feedback of this kind.

Walking is not a binary thing, either. The AI robot's first attempt, after falling over once, might be to proceed by falling, dragging itself forward with its arms, standing up and then falling over again. It might progress from that to crawling. Then eventually staggering. Then walking in the most efficient way.

Similar stages of progression could take place in communication skills.
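The non-binary feedback is easy to make concrete. A hypothetical reward function - names and weights invented for illustration - scores any gait on a continuum, so dragging, crawling and walking all receive usable, graded feedback:

```python
# Hypothetical sketch: reward = distance covered minus an energy penalty.
# The score is continuous, so there is no binary "can walk / cannot walk".
def locomotion_reward(trajectory):
    """trajectory: list of (x_position, energy_used) samples, one per step."""
    distance = trajectory[-1][0] - trajectory[0][0]
    energy = sum(e for _, e in trajectory)
    return distance - 0.01 * energy   # efficiency term favours upright gaits

dragging = [(0.0, 5.0), (0.5, 5.0), (1.0, 5.0)]   # little ground, high effort
walking = [(0.0, 1.0), (2.0, 1.0), (4.0, 1.0)]    # more ground, less effort
print(locomotion_reward(dragging), locomotion_reward(walking))  # 0.85 vs 3.97
```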

All of your counter arguments seem to consist of: "this thing [which is well known and demonstrated in practical systems] could never happen".

I think you should go and study some examples of machine learning (for communication, interaction, mechanical skills, etc) before dismissing it as implausible. An argument from ignorance or incredulity is never convincing. As I said in my first post.


On 2/10/2020 at 2:02 PM, dimreepr said:

Strong AI (AGI) isn't trying to-be-human.

That depends on what data are delivered to the AI, and how. If an AI has to understand human words, it must be able to see and hear. E.g. "elephant" is, for an algorithm, just a word - a sequence of characters. The algorithm can read the Wikipedia page about elephants, read all the books about the animal, and still not be able to truly understand it. It will have all the textual information about the animal, like a chatbot repeating the same sentences over and over again - all the information except images, sounds, etc. A human without data from the other senses is unable to imagine. While a human is being taught, words are correlated with images, sounds, touch, smell, etc. Together they form the full information about the subject. Try to explain colors and sounds to somebody who has been unable to see or hear since birth. That is rather like an AI without eyes and ears. What does distance mean to somebody who cannot touch, see or hear? Is something far or near? Without eyes and ears it is impossible to explain. Would "the website or server is far away, therefore it takes a long time to load" explain "distance" to an algorithm?

If a programmer implements an AI using techniques that simulate the human brain, then the AI will identify itself as human..

If I ask you "who are you?", you will answer that you're human. Are you sure? How can you know that? Maybe you are just an AI?


13 hours ago, Sensei said:

That depends on what data are delivered to the AI, and how. If an AI has to understand human words, it must be able to see and hear. E.g. "elephant" is, for an algorithm, just a word - a sequence of characters. The algorithm can read the Wikipedia page about elephants, read all the books about the animal, and still not be able to truly understand it. It will have all the textual information about the animal, like a chatbot repeating the same sentences over and over again - all the information except images, sounds, etc. A human without data from the other senses is unable to imagine. While a human is being taught, words are correlated with images, sounds, touch, smell, etc. Together they form the full information about the subject. Try to explain colors and sounds to somebody who has been unable to see or hear since birth. That is rather like an AI without eyes and ears. What does distance mean to somebody who cannot touch, see or hear? Is something far or near? Without eyes and ears it is impossible to explain. Would "the website or server is far away, therefore it takes a long time to load" explain "distance" to an algorithm?

Not a perfect correlation, but an AI might gain a sense of distance in terms of its own processing delays and ping times (roughly via t = d/c).

It could match that against a database to gain a sense of the larger world.
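A minimal sketch of that idea, using only the Python standard library; the TCP handshake to a stand-in host approximates a ping, and the speed of light turns the round-trip time into an upper bound on distance:

```python
import socket
import time

C = 299_792_458  # speed of light, m/s; real links are slower and indirect

def rtt_seconds(host: str, port: int = 443) -> float:
    """Approximate round-trip time via the TCP handshake."""
    t0 = time.perf_counter()
    socket.create_connection((host, port), timeout=3).close()
    return time.perf_counter() - t0

# One-way distance is at most c * RTT / 2 (processing delay only shrinks it).
rtt = rtt_seconds("example.com")   # stand-in host for illustration
print(f"RTT {rtt * 1000:.1f} ms -> at most {C * rtt / 2 / 1000:.0f} km away")
```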


On 2/17/2020 at 9:08 PM, Strange said:

Some of these may be instinctive in humans. (Although I don't think that is certain.) In which case, you build the same knowledge into the AI.
...
I think you should go and study some examples of machine learning (for communication, interaction, mechanical skills, etc) before dismissing it as implausible. An argument from ignorance or incredulity is never convincing. As I said in my first post.

Instinct, ok. How do creatures learn instinctive behaviour? Or how is it transferred to them?
As long as the specific path of transfer is not understood, the idea of instincts in an AI is pure speculation - pseudoscience.

BTW, I did study machine learning, and I gave lectures on it, the first in 1989; I talked to Marvin Minsky about it in Linz in 1990 (Ars Electronica).
Little has changed since then. Neural networks do work, no doubt, and this alone is still astonishing to most 'classical' programmers.
But no system in the field of AI has reached even the level of a cockroach yet.
Not in 1990, not in 2000, not in 2010, etc.
So, regarding good advice, maybe you should read my book "Künstliche Dummheit" (Artificial Stupidity).

On 2/17/2020 at 9:08 PM, Strange said:

Walking is not a binary thing, either. The AI robot's first attempt, after falling over once, might be to proceed by falling, dragging itself forward ...

If it is not binary - i.e. one can clearly say "I can walk" vs. "I cannot yet walk" - how does the AI know whether it should continue dragging?


On 2/18/2020 at 3:43 AM, Sensei said:

If an AI has to understand human words, it must be able to see and hear. .. A human without data from the other senses is unable to imagine. While a human is being taught, words are correlated with images, sounds, touch, smell, etc. Together they form the full information about the subject.

Exactly.
This wisdom has been formulated in the 'semiotic triangle' concept:
https://en.wikipedia.org/wiki/The_Meaning_of_Meaning

2 minutes ago, dimreepr said:

Evolution, at a guess...

Ok, so the AI species would also have to undergo their own evolution in order to gain instincts.
Then again, it is hardly probable that they would attain (/acquire) the same instincts as humans - they could still not understand humans in the way that humans understand humans.


4 minutes ago, Bernd.Brincken said:

Ok, so the AI species would also have to undergo their own evolution in order to gain instincts.
Then again, it is hardly probable that they would attain (/acquire) the same instincts as humans - they could still not understand humans in the way that humans understand humans.

Are you sure you understand... humans???


8 minutes ago, Bernd.Brincken said:

Then again, it is hardly probable that they would attain (/acquire) the same instincts as humans - they could still not understand humans in the way that humans understand humans.

Humans aren't born able to recognise faces, but learn to. We are born with the instinct to track faces, though, greatly helping the learning process. It's not too hard to imagine an AI able to track faces - I imagine your smartphone can already do it - and thereafter learn to distinguish individual faces.

I'm sure dogs have some understanding of humans and that it's quite unlike our own understanding (rooted in smell for instance). I can well imagine AGI not understanding humans the way humans do, but having some understanding, unless the emulation pathway is successful.
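Face tracking is indeed commodity technology. A minimal sketch with OpenCV's bundled Haar cascade (assuming opencv-python is installed; photo.jpg is a hypothetical input) - this is detection only; distinguishing individuals would need a further learned model, mirroring the instinct-then-learning split described above:

```python
import cv2  # assumes opencv-python is installed

# Pre-trained Haar cascade shipped with OpenCV: detects ("tracks") faces,
# without recognising whose face each one is.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("photo.jpg")               # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:                  # draw a box around each face
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces.jpg", img)
```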


6 minutes ago, Prometheus said:

Humans aren't born able to recognise faces, but learn to. We are born with the instinct to track faces, though, greatly helping the learning process. It's not too hard to imagine an AI able to track faces - I imagine your smartphone can already do it - and thereafter learn to distinguish individual faces.

I'm sure dogs have some understanding of humans and that it's quite unlike our own understanding (rooted in smell for instance). I can well imagine AGI not understanding humans the way humans do, but having some understanding, unless the emulation pathway is successful.

I was reading recently that Chinese street surveillance cameras can pick out people even when they are wearing surgical masks. I'm guessing by reading the IR output.


1 hour ago, Bernd.Brincken said:

Instinct, ok. How do creatures learn instinctive behaviour? Or how is it transferred to them?

They don't learn it. That is what "instinctive" means. Although very few things are purely instinctive (in higher animals). 

1 hour ago, Bernd.Brincken said:

As long as the specific path of transfer is not understood, the idea of instincts in an AI is pure speculation - pseudoscience.

If you wanted to build certain behaviours into an AI (by suitable programming), I'm not sure why that would be "pseudoscience".

1 hour ago, Bernd.Brincken said:

But no system in the field of AI has reached even the level of a cockroach yet.

I heard an expert in AI saying that we probably hadn't reached the level of a frog yet. 

1 hour ago, Bernd.Brincken said:

If it is not binary - i.e. one can clearly say "I can walk" vs. "I cannot yet walk" - how does the AI know whether it should continue dragging?

Presumably it will have been given the goal of finding the most efficient mode of locomotion. Or perhaps of imitating human locomotion.

 

1 hour ago, Bernd.Brincken said:

Ok, so the AI species would also have to undergo their own evolution in order to gain instincts.

Quite possibly. Genetic algorithms are commonly used to develop optimal engineering solutions.

But we can also short-circuit the need for evolution by doing "intelligent design". 
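The genetic-algorithm idea mentioned above is easy to sketch. A toy example - everything in it invented for illustration, not a production system - that evolves a bit-string toward a target "behaviour" instead of hand-coding it:

```python
import random

# Toy genetic algorithm: selection, crossover and mutation on bit-strings.
TARGET = [1] * 20

def fitness(genome):
    """How many positions already match the target behaviour."""
    return sum(g == t for g, t in zip(genome, TARGET))

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                                      # target behaviour reached
    parents = population[:10]                      # selection
    children = []
    while len(children) < 40:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(TARGET))     # single-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:                  # occasional mutation
            i = random.randrange(len(child))
            child[i] = 1 - child[i]
        children.append(child)
    population = parents + children
print(generation, max(map(fitness, population)))
```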

1 hour ago, Bernd.Brincken said:

Then again, it is hardly probable that they would attain (/acquire) the same instincts as humans

Who says they need to be the same?

Who says they need to be acquired? 

1 hour ago, Bernd.Brincken said:

they could still not understand humans in the way that humans understand humans.

Baseless assertion. 

And who says that they need to, anyway?

 


Sorry for the delay, I was ordered to do empirical social research at the Cologne carnival.

On 2/20/2020 at 4:43 PM, dimreepr said:

Are you sure you understand... humans???

Just a tiny bit of them, and I would expect AGI to understand only that tiny bit as well.
Understanding not necessarily in the sense of logic or scientific analysis, but also including the aforementioned (probably) intuitive things like interpersonal communication.

 

On 2/20/2020 at 5:46 PM, Strange said:

If you wanted to build certain behaviours into an AI (by suitable programming), I'm not sure why that would be "pseudoscience".

If you want to build 'certain behaviour' into an AI, you would have to understand this behaviour - instinct.
So (how) do we know the logic, content or message of instinct?
To me, 'instinct' sounds like a term for 'some behaviour whose sources we cannot (yet) explain'.
But I'm willing to learn.

On 2/20/2020 at 5:46 PM, Strange said:

But we can also short-circuit the need for evolution by doing "intelligent design". 

Oh, I have to look that up.
"Intelligent design (ID) is a pseudoscientific argument for the existence of God" (WP)
And this is a science forum, right?

On 2/20/2020 at 5:46 PM, Strange said:

who says that they need to [understand humans in the way humans do], anyway?

Me. If they cannot understand humans in the way we do, they cannot understand human behaviour, interactions, wishes, markets, politics, culture.
In that case, why would we want to attribute any 'intelligence' to these systems?


2 hours ago, Bernd.Brincken said:

If you want to build 'certain behaviour' into an AI, you would have to understand this behaviour - instinct.

You would just need to understand the behaviour you wanted to build in. We do that all the time with current computers. 

2 hours ago, Bernd.Brincken said:

So (how) do we know the logic, content or message of instinct?

We observe behaviour.

2 hours ago, Bernd.Brincken said:

To me, 'instinct' sounds like a term for 'some behaviour whose sources we cannot (yet) explain'.

The "source" is that it is hardwired. In animals, that means it is encoded in the genes.

You seem to have fallen back on the "it is mysterious, therefore impossible" argument. Why not just use the word "soul" and have done with it?

3 hours ago, Bernd.Brincken said:

Oh, I have to look that up.
"Intelligent design (ID) is a pseudoscientific argument for the existence of God" (WP)
And this is a science forum, right?

It was a joke. Because of your quasi-religious argument.

We could try and evolve AIs to do what we want. But we don't need to rely on millions of years of evolution because we can design a system with the characteristics we want.

3 hours ago, Bernd.Brincken said:

Me. If they cannot understand humans in the way we do, they cannot understand human behaviour, interactions, wishes, markets, politics, culture.
In that case, why would we want to attribute any 'intelligence' to these systems?

We can understand non-human behaviour.

We can also attribute intelligence to non-human behaviours.

So these arguments seem pretty shallow.

Do you have anything to say other than "humans are special therefore AI is impossible"?


16 hours ago, Strange said:

You would just need to understand the behaviour you wanted to build in. We do that all the time with current computers. 

An AI does not feel anything. Emotions are just fakes - facial expressions presented to a human being. A smiling-face robot is as happy as a sad-face robot.

In true living beings, emotions are connected with the release of neurotransmitters, hormones and other chemicals in the brain.

E.g. a human, after watching a comedy movie or eating a piece of chocolate, really feels better. An AI plugged into electricity (the equivalent of food) does not feel better. It does not feel anything.


7 minutes ago, Sensei said:

In true living beings, emotions are connected with the release of neurotransmitters, hormones and other chemicals in the brain.

Are you stating that emotions can only be felt as the result of neurotransmitters, hormones and other chemicals in the brain?

It's a bit of a black swan situation, isn't it? All we've seen are white swans so far. The problem is, when it comes to the mental states of other beings, we are colour blind.


38 minutes ago, Sensei said:

An AI does not feel anything. Emotions are just fakes - facial expressions presented to a human being. A smiling-face robot is as happy as a sad-face robot.

That is true for systems that are called "AI" today. How do you know it will always be true?

39 minutes ago, Sensei said:

In true living beings, emotions are connected with the release of neurotransmitters, hormones and other chemicals in the brain.

Do they have to be? What if we encounter aliens who do not have neurotransmitters, hormones and other chemicals like us? Would you argue that they couldn't have "real emotions"?

Sensei said:

E.g. a human, after watching a comedy movie or eating a piece of chocolate, really feels better. An AI plugged into electricity (the equivalent of food) does not feel better. It does not feel anything.

This is just yet another version of the empty "humans are special so AI is impossible" argument. It is purely based on belief.

I am quite sceptical, myself, about the possibility of true AI (for exactly these "faith based" reasons). But I have never heard any good, rational arguments why it is not possible. On the other hand, I have heard lots of very convincing arguments why it might be. So I know my scepticism is pretty baseless.

33 minutes ago, Prometheus said:

Are you stating that emotions can only be felt as the result of neurotransmitters, hormones and other chemicals in the brain?

And, if that were true (and I see no reason why it should be) then we would just add those components to an attempt to build an AI.

But this seems to be confusing the implementation with the resulting behaviour. More than one way to skin a cat, as they say.


22 hours ago, Strange said:

Do you have anything to say other than "humans are special therefore AI is impossible"?

I did not say that, neither in these words nor in others.
And by the way, I do not support it either - see my proposal to choose an animal species as a benchmark for AI progress.
In principle, an AI species could learn everything that humans did, but they would need a similar amount of time; so, purely in economic terms, it is not probable.
It seems like you want to hear this simplification because it is easy to argue against.

About instincts: the theory that they are genetically based is not supported by genetics - AFAIK, no instinct has been identified in any genome to date. Always willing to learn.

Basically, let us come back to the title of this thread - Evidence for Strong AI
No evidence has been presented so far - right?


7 hours ago, Prometheus said:

It's a bit of a black swan situation, isn't it? All we've seen are white swans so far. The problem is, when it comes to the mental states of other beings, we are colour blind.

Only if you're equipped just with eyes and ears, etc. - i.e. in typical human-human, human-animal or human-cyborg/AI interactions.

But if you are equipped with, e.g., an MRI scanner, or can take a blood sample, you can detect whether somebody is happy or sad at that particular moment of examination - on a monitor screen, or on a graph showing the amount of chemicals flowing through somebody's blood. With an AI, you cannot measure current and voltage to check how happy or sad a particular AI is.

 


4 minutes ago, Sensei said:

But if you are equipped with, e.g., an MRI scanner, or can take a blood sample, you can detect whether somebody is happy or sad at that particular moment of examination - on a monitor screen, or on a graph showing the amount of chemicals flowing through somebody's blood. With an AI, you cannot measure current and voltage to check how happy or sad a particular AI is.

We only know that certain biomarkers correspond to certain mental states because people were asked how they were feeling at the time the measurements were taken: you still needed to trust that someone was truthfully reporting their mental state at some point. We still don't directly experience someone else's mentation.

Anyway, all this is just dancing around the question of why the substrate matters. Humans have mental states because of biology. Why does that preclude an AI having mental states?

 


1 hour ago, Bernd.Brincken said:

In principle, an AI species could learn everything that humans did, but they would need a similar amount of time

Only if based on the same underlying technology. Other implementation methods could be faster (or slower). Or the learning process could be guided to be more efficient.

1 hour ago, Bernd.Brincken said:

About instincts: the theory that they are genetically based is not supported by genetics - AFAIK, no instinct has been identified in any genome to date.

Please provide a citation to support the claim that instincts are not transmitted genetically.

It sounds like you are looking for "the gene for web-building" (in spiders) or "the gene for language learning" (in humans). That is obviously not going to happen. 

1 hour ago, Bernd.Brincken said:

Basically, let us come back to the title of this thread - Evidence for Strong AI
No evidence has been presented so far - right?

Of course not.

