Evidence for Strong AI


Bernd.Brincken


'Artificial general intelligence (AGI) ... [or strong AI] is the intelligence of a machine that can understand or learn any intellectual task that a human being can.' (Wikipedia)

Is there any evidence for such a system?
I have done a good year of research and published on the matter, and so far I have not read about such a system.
Furthermore, predictions of strong AI soon to come have been made several times by reputable scientists like Marvin Minsky - and have failed.

For the time being, Strong AI looks like Speculation to me.
But maybe I have a blind spot, or lack access to the latest developments.
Enlightenment, please.
 



Moderator Note

Moved to Computer Science (the Speculations forum is for people to present speculative scientific theories). 

But maybe Philosophy would be better; we can see how it goes.

 
10 minutes ago, Bernd.Brincken said:

Is there any evidence for such a system?

Existing now? No. Nowhere near.

I think we are a long way away from anything close to strong AI. If it is even possible. I have seen arguments on both sides but, generally, the "not possible" arguments seem to boil down to "because we are special". The arguments that it is possible are not completely convincing, but at least they seem to be logical arguments based on facts.


Strange, I have explained "because we are x" in more detail in my publication.
In short: Human beings, and their mental features, cannot be derived from individual properties ('brain') alone. When they appear in groups (German 'Angehäuftsein', roughly 'being heaped together'), new properties, features and powers come into play. Strong AI would have to communicate _with_ humans to become part of, or to extend, these features. To communicate, it has to understand (more than words). And to understand, it has to experience a human life. So 'computer AI' would not suffice; it would have to be life(-like), in several aspects.

That aside, we see the phenomenon that strong AI is presented as something that will happen soon - not only by sci-fi authors and 'visionary' scientists, as in previous waves, but by politicians, Fortune 500 companies and much of the mass media. Why do they do that? Can we call strong AI 'pseudoscience'?


It could, in principle.
But it would take as much time as it does among humans - around 20 years.
The outcome would also be uncertain, as among humans.
And, socially, the AI would surely experience exclusion ('racism'), with various side effects.
Anyhow, such a project could be worse just for the insights.
But, before that, one could expect a system on a lower level of complexity.
My proposal was recently: Blattella germanica.

 


18 hours ago, Bernd.Brincken said:

'Artificial general intelligence (AGI) ... [or strong AI] is the intelligence of a machine that can understand or learn any intellectual task that a human being can.' (Wikipedia)

Is there any evidence for such a system?
I have done a good year of research and published on the matter, and so far I have not read about such a system.
Furthermore, predictions of strong AI soon to come have been made several times by reputable scientists like Marvin Minsky - and have failed.

For the time being, Strong AI looks like Speculation to me.
But maybe I have a blind spot, or lack access to the latest developments.
Enlightenment, please.
 

If such a being existed, you could not distinguish him/her/it from a normal human during an online discussion.

18 hours ago, Strange said:

Existing now? No. Nowhere near.

How would you know?

You assume that programmers would openly brag about it to the mass media.

Their AI wouldn't even need to know that it is an AI. He/she would claim to be human.


16 minutes ago, Bernd.Brincken said:

It could, in principle.

If you accept the principle.

17 minutes ago, Bernd.Brincken said:

But it would take as much time as it does among humans - around 20 years.
The outcome would also be uncertain, as among humans.
And, socially, the AI would surely experience exclusion ('racism'), with various side effects.
Anyhow, such a project could be worse just for the insights.

Why would this follow?

9 minutes ago, Sensei said:

You assume that programmers would openly brag about it to the mass media.

They have, so a reasonable assumption.


19 minutes ago, Bernd.Brincken said:

But it would take as much time as it does among humans - around 20 years.

Why do you think it would take 20 years? It is conceivable that an AI could have conversations with thousands of humans simultaneously, and not need to sleep, reducing that 20 years considerably. It also assumes that an AI would learn at the same rate as humans - currently it is much slower (babies don't need to see thousands of cats and dogs before learning to distinguish the two, as AI currently does), but in time it could become much faster (learning on sparse data is a very active research field).
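
As a rough sketch of what that many-examples learning looks like in code - a minimal supervised classifier, assuming PyTorch is installed; the random tensors stand in for cat/dog image features and the labels are invented purely for illustration:

# Minimal supervised-learning sketch (assumes PyTorch). Random tensors stand
# in for image features; the labelled examples are the only learning signal,
# which is why current systems need so many of them.
import torch
import torch.nn as nn

torch.manual_seed(0)
features = torch.randn(1000, 64)           # 1000 stand-in "images"
labels = torch.randint(0, 2, (1000,))      # 0 = cat, 1 = dog (invented)

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):                    # each pass nudges the weights a little
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()                        # gradients of the loss w.r.t. the weights
    optimizer.step()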

 

24 minutes ago, Bernd.Brincken said:

But, before that, one could expect a system on a lower level of complexity.

So you advocate brain emulation as opposed to 'pure' AI solutions? In theory that should make for more human-like AGI, but if pure AI systems develop AGI they may not require the same learning environment as biological systems (e.g. learning could take place in entirely virtual environments).

 

15 minutes ago, Sensei said:

You assume that programmers would openly brag about it to the mass media.

Their AI wouldn't even need to know that it is an AI. He/she would claim to be human.

I suspect we'd know pretty quickly: someone would want to collect their Nobel Prize, manipulate the stock markets, or would simply lose control of it.


15 hours ago, Bernd.Brincken said:

In short: Human beings, and their mental features, cannot be derived from individual properties ('brain') alone.

The usual argument from incredulity that "we are not just brains". I have never seen any evidence, or any deeper argument, for this position.

38 minutes ago, Bernd.Brincken said:

But it would take as much time as it does among humans - around 20 years.

Humans learn to understand speech in about 12 months and to produce it in 24 or more. And they are asleep for a large part of that time. That is learning from first principles and so discovering/inventing the rules of phonetics, the meanings of sounds, syntax, morphology, grammar, etc (there is some debate whether the brain is already hardwired for the concepts of grammar, etc. or whether it uses pre-existing abilities related to pattern matching and organising information).

41 minutes ago, Bernd.Brincken said:

The outcome would also be uncertain, as among humans.

I don't disagree.

One problem I have with the idea of strong AI being massively smarter than humans is that it is based on the idea that computers can do individual tasks, such as calculation, much faster than humans. But a true, general AI would, presumably, be devoting so much of its processing capability to just being intelligent that it might be just as poor at mental arithmetic as I am.

Plus, I can imagine dialogs such as:

Human: "Can you calculate the most efficient trajectory for our return to Earth?"

AI: "No. I'm sick of doing that. You do it. I'm going to watch a soap opera."

46 minutes ago, Bernd.Brincken said:

And, socially, the AI would surely experience exclusion ('racism'), with various side effects.

That is not an argument against the possibility of AI. In fact, one piece of evidence that the development of strong AI has succeeded might be that every AI "being" ends up with a different character through their different experiences of things like this.

48 minutes ago, Bernd.Brincken said:

But, before that, one could expect a system on a lower level of complexity.
My proposal was recently: Blattella germanica.

And people are attempting to model simpler organisms at various levels of detail.

There is at least one project to attempt to simulate the entire metabolism of a single cell. There are attempts to model the nervous system of organisms like cockroaches. (I am not up to date with any of these, I have just noticed articles on them over the years.)
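
To give a flavour of what modelling a nervous system means at the smallest scale, here is a toy leaky integrate-and-fire neuron in Python - a sketch of the kind of simplified unit such simulations are built from, not code from any of those projects:

# Toy leaky integrate-and-fire neuron (illustrative sketch only).
def simulate_neuron(input_current, threshold=1.0, leak=0.9, steps=50):
    """The membrane potential leaks each step and integrates the input;
    the neuron 'fires' whenever the potential crosses the threshold."""
    potential = 0.0
    spike_times = []
    for t in range(steps):
        potential = potential * leak + input_current
        if potential >= threshold:
            spike_times.append(t)   # record the spike time
            potential = 0.0         # reset after firing
    return spike_times

print(simulate_neuron(0.15))  # weak input: few, late spikes
print(simulate_neuron(0.50))  # strong input: frequent spikes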

44 minutes ago, Sensei said:

How would you know?

You assume that programmers would openly brag about it to the mass media.

It seems unlikely that one group could have made massive breakthroughs that are years/decades/centuries in advance of anyone else. And then kept it secret. Why would they?


33 minutes ago, dimreepr said:

They have, so a reasonable assumption.

How often do you hear news from top-secret military programs? Only if there is a leak from a whistleblower.

How often do you hear about lone programmers' projects? Only if he or she wants to monetize the project or attract investors, and then details are revealed only under an NDA. Programmers' private projects largely remain private.

 


1 minute ago, Sensei said:

How often do you hear news from top-secret military programs?

Can you name one military program where the technology was decades ahead of anything anyone else was doing? They may be secret, but they use the same technology as everyone else.

3 minutes ago, Sensei said:

How often do you hear about lone programmers' projects?

And when did a lone programmer last produce something that millions of others were not able to do (and are not likely to do for decades)?

 


10 minutes ago, Sensei said:

How often do you hear news from top-secret military programs? Only if there is a leak from a whistleblower.

How often do you hear about lone programmers' projects? Only if he or she wants to monetize the project or attract investors, and then details are revealed only under an NDA. Programmers' private projects largely remain private.

 

I hear the PR from top companies on the subject.


19 hours ago, dimreepr said:

If you accept the principle.

Why would this follow?

They have, so a reasonable assumption.

Sorry, I meant 'worth', not 'worse'.
So again, the AI-learns-to-be-human project could be worth the insight, even if it is costly, lengthy and - IMHO - prone to failure.

About the 'secret program' - the situation is analogous to the UFO situation: 'Maybe they are already among us.'
Yes, maybe they are, but as long as they do not give press conferences at the Cologne Cathedral, on Fifth Avenue and at the Taj Mahal, we (non-secret-service members) cannot know - and should not care.

The other argument was: if super-AI is possible or in development behind closed doors, a lesser, animal-like AI should be possible in front of those doors.
That would also be a scientific and mass-media phenomenon that the project team would be proud to present.


22 hours ago, Prometheus said:

[1.] Why do you think it would take 20 years? It is conceivable that an AI could have conversations with thousands of humans simultaneously...

[2.] So you advocate brain emulation as opposed to 'pure' AI solutions?

Ad 1, conversation among humans is not restricted to logical statements; see the 'Semiotic triangle' as a small hint, and 'Sociology' as the broader perspective.
Practically, there is barely any interaction in day-to-day life where symbolic and recursive elements ('I respect anyone who respects me') are _not_ at play.
For the AI project: without showing its own weaknesses, failures, ambitions - basically emotions - it is not plausible that humans will open their 'heart and mind' to the AI in conversation.
My theory is that these emotions - based on experience - can only evolve among humans in the timeframe they are used to.
As a side aspect, how do you imagine the physical appearance of the AI that has 'conversations with thousands of humans simultaneously'? Purely virtual, like a chat bot?

Ad 2, first, I try to refrain from any advocacy; I just want to grant the AI proponents every optimistic assumption that could help their case 😉
If 'pure AI' would facilitate a super-human intelligence, OK, fine. Then why should the same technology (or phenomenon) not facilitate a super-cockroach intelligence?


3 hours ago, Bernd.Brincken said:

Sorry, I meant 'worth', not 'worse'.
So again, the AI-learns-to-be-human project could be worth the insight, even if it is costly, lengthy and - IMHO - prone to failure.

Strong AI (AGI) isn't trying to be human.

3 hours ago, Bernd.Brincken said:

About the 'secret program' - the situation is analogous to the UFO situation: 'Maybe they are already among us.'
Yes, maybe they are, but as long as they do not give press conferences at the Cologne Cathedral, on Fifth Avenue and at the Taj Mahal, we (non-secret-service members) cannot know

Well, we know they aren't already among us. 😉

3 hours ago, Bernd.Brincken said:

The other argument was: if super-AI is possible or in development behind closed doors, a lesser, animal-like AI should be possible in front of those doors.
That would also be a scientific and mass-media phenomenon that the project team would be proud to present.

Why not the greater development?

 


6 hours ago, dimreepr said:

1. Strong AI (AGI) isn't trying to be human.

2. Well, we know they aren't already among us. 😉

3. Why not the greater development?

 

Ad 1: I was answering the statement 'Babies are not born understanding, but learn to. Why, in principle, could this not be the same for AGI?'
I did not say that AGI tries to be something.

Ad 2: I don't get the joke, sorry. Whom do you mean?

Ad 3: Greater development is fine. It is just that, in experience, simpler solutions are reached first.
Before human beings get cloned, we are presented with cloned sheep.
Before people set foot on Mars, they do it on the Moon, and robots land on Mars.
Before we all change over to electric cars, some models are already available and we can learn from them.
Why should AGI appear in one step, out of nowhere?


13 hours ago, Bernd.Brincken said:

Ad 1: I was answering the statement 'Babies are not born understanding, but learn to. Why, in principle, could this not be the same for AGI?'
Ad 3: Greater development is fine. It is just that, in experience, simpler solutions are reached first.
Before human beings get cloned, we are presented with cloned sheep.
Before people set foot on Mars, they do it on the Moon, and robots land on Mars.
Before we all change over to electric cars, some models are already available and we can learn from them.
Why should AGI appear in one step, out of nowhere?

I think we are talking past each other.

13 hours ago, Bernd.Brincken said:

Ad 2: I don't get the joke, sorry. Whom do you mean?

Aliens. 😉

 


1 hour ago, Bernd.Brincken said:

Ad 1, conversation among humans is not restricted to logical statements; see the 'Semiotic triangle' as a small hint, and 'Sociology' as the broader perspective.
Practically, there is barely any interaction in day-to-day life where symbolic and recursive elements ('I respect anyone who respects me') are _not_ at play.
For the AI project: without showing its own weaknesses, failures, ambitions - basically emotions - it is not plausible that humans will open their 'heart and mind' to the AI in conversation.
My theory is that these emotions - based on experience - can only evolve among humans in the timeframe they are used to.
As a side aspect, how do you imagine the physical appearance of the AI that has 'conversations with thousands of humans simultaneously'? Purely virtual, like a chat bot?

Online chatbots, Siri, Alexa, stuff like that.

What's the relevance of human conversation not being restricted to logical statements? Do you imagine that computers are limited to receiving logical statements as inputs?

Also not sure of the relevance of humans not 'opening their heart and mind'. It's an interesting tangent: I think the human tendency to anthropomorphise means it is eminently plausible.

Interesting theory. Why would you think that? How would you test it?


On 2/11/2020 at 3:07 PM, Prometheus said:

What's the relevance of human conversation not being restricted to logical statements? Do you imagine that computers are limited to receiving logical statements as inputs?

You expected AGI to manifest in chat bots, right?
Then they are restricted to language, however logical it may be.
So let me broaden the argument:
Communication among humans is not restricted to language.
For the difference between (online) chat and (RL) communication, see WP: Interpersonal communication


41 minutes ago, Bernd.Brincken said:

You expected AGI to manifest in chat bots, right?
Then they are restricted to language, however logical it may be.
So let me broaden the argument:
Communication among humans is not restricted to language.
For the difference between (online) chat and (RL) communication, see WP: Interpersonal communication

Computers can recognise human expressions and gestures and modify their responses appropriately. They can also have (physical or virtual) avatars that create appropriate expressions and gestures.
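
A minimal sketch of that recognise-and-respond loop, assuming OpenCV (opencv-python) is installed; the Haar cascade face detector is a real bundled model, while classify_expression is a hypothetical stand-in for a trained expression classifier:

# Sketch of an expression-aware response loop (assumes opencv-python).
import cv2

# Real, bundled face detector; finding the face is the easy first step.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

RESPONSES = {"happy": "Glad to hear it!",
             "sad": "Is something wrong?",
             "neutral": "How can I help?"}

def classify_expression(face_pixels):
    # Hypothetical stand-in: a real system would run a trained model here.
    return "neutral"

def respond(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        expression = classify_expression(gray[y:y+h, x:x+w])
        return RESPONSES.get(expression, "How can I help?")
    return "I can't see anyone."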


On 2/12/2020 at 5:28 PM, Strange said:

Computers can recognise human expressions and gestures and modify their responses appropriately. They can also have (physical or virtual) avatars that create appropriate expressions and gestures.

And how did they learn these facial expressions and gestures?
How do humans learn them?


11 minutes ago, Bernd.Brincken said:

And how did they learn these facial expressions and gestures?

They are trained to recognise them (by showing them large numbers of examples).

12 minutes ago, Bernd.Brincken said:

How do humans learn them?

Probably a combination of some innate knowledge and some learning, I guess.


1 hour ago, Bernd.Brincken said:

Let's assume humans learn them by experience.

If experience was not necessary - if this wisdom could be transferred completely by training - in what kind of medium would you explain the meaning of a gesture to our AI? In written words?

What is the difference between experience and training? 

Humans and computers can learn these things the same way (apart from the fact human babies might be "hard-wired" to recognise some expressions).


For training, you need some kind of medium or language to transfer knowledge from trainer to trainee.
As described in 'Interpersonal communication' (WP), many signals are not expressed in words, and may not even be expressible in language.
Experience does not need this.
A (human or AI) 'intelligent' system can interpret such a signal out of its experience with the same situation.

