Evidence for Strong AI



Strange, if an AI species seeks to understand human behaviour, it (likely) cannot read the signals of human dendrites directly, or drink their brain fluid, to gather this understanding. It (likely) has to interact with humans in their interaction patterns. How long do you typically know a person before you trust them and talk about your feelings, fears, dreams etc. - days? Or more like months or years?
So, IMHO, this process cannot be significantly accelerated by technology.

Instincts not transmitted genetically - hermeneutics tells me that it is impossible to prove that something does not exist.
But this looks like the dominant attitude among biologists. For example:
"Accordingly [to Hailman], instincts are not preprogrammed, hardwired, or genetically determined; rather, they emerge each generation through a complex cascade of physical and biological influences"

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5182125/



39 minutes ago, Bernd.Brincken said:

Instincts not transmitted genetically

How, then, are they transmitted from one generation to another?

40 minutes ago, Bernd.Brincken said:

"Accordingly [to Hailman], instincts are not preprogrammed, hardwired, or genetically determined; rather, they emerge each generation through a complex cascade of physical and biological influences"

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5182125/

Yep. That sounds like a good summary.

Very few things that are transmitted by genes are absolutely determined. A few traits and a small number of diseases. But anything more complex requires both genetics and the appropriate environment and experiences (I have alluded to this before, but only briefly because it is pretty much off topic).

You could, of course, design a system that learns its instinctive behaviour in the same way (if that was useful). 

44 minutes ago, Bernd.Brincken said:

Strange, if an AI species seeks to understand human behaviour, it (likely) cannot read the signals of human dendrites directly, or drink their brain fluid, to gather this understanding. It (likely) has to interact with humans in their interaction patterns. How long do you typically know a person before you trust them and talk about your feelings, fears, dreams etc. - days? Or more like months or years?
So, IMHO, this process cannot be significantly accelerated by technology.

Who knows. But I fail to see the relevance. You seem to be just throwing around random things that you think are difficult or time-consuming. I can't see that most of these have any particular relevance to AI. And even if they do, they are just arguments from incredulity.

 


We were discussing the probability of strong AI, and my main argument was:

If an AI species seeks to understand human behaviour, it has to interact with humans in their interaction patterns.

This is important for anybody interested in the topic of this thread. Note it.
If the topic does not interest you, you will surely find better entertainment elsewhere.

 

On 2/29/2020 at 8:47 PM, Strange said:

You could design a system that learns its instinctive behaviour in the same way

Yes, but then you have to build the whole environment around it that made humans learn their instincts.
Reminds me of the '42' chapter in Douglas Adams' The Hitchhiker's Guide to the Galaxy.

36 minutes ago, Bernd.Brincken said:

If an AI species seeks to understand human behaviour, it has to interact with humans in their interaction patterns.

This is important for anybody interested in the topic of this thread. Note it.
If the topic does not interest you, you will surely find better entertainment elsewhere.

But how to understand it?

Which use of the term "species" is employed here? There are many possible uses; I know one use of the term in combinatorial enumeration. There is a common use of the term in biology - is that the intended one? With one basic trait being the ability to produce fertile offspring?

Would it be important in some context to understand the statement "If a human researcher seeks to understand the behaviour of ants, they have to interact with ants in their interaction patterns"?

21 hours ago, taeto said:

 The term "species" .. there is a common use of the term in biology, is that the intended one?

Would it be important in some context to understand the statement If a human researcher seeks to understand the behaviour of ants, they have to interact with ants in their interaction patterns

Yes, I meant 'species' in biological terms, like Blattella germanica; see one of my former posts.

About the AI species vs. the human researcher - there are some slight differences:

  • The AI species still has to learn everything about its environment, all natural phenomena, its own survival 
    - and finally about human beings (to make the step from AI to AGI).
  • The human researcher's species went through these processes already, over thousands of years - or millions, if you count its ancestor species.

So, no, the human researcher need not interact with ants to understand them well enough to be able to, for example, program (important parts of) their behaviour in software.
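That last point can be made concrete with a toy sketch. This is a purely hypothetical hand-coded "instinct" - a pheromone-following rule on a grid - not any actual model from myrmecology; all names and rules here are my own assumptions:

```python
import random

# Minimal sketch of a hand-coded "instinct": an ant on a grid moves toward
# the strongest neighbouring pheromone trace, or wanders randomly if there
# is none. Illustrative only - not a real model of ant behaviour.

def step(pos, pheromone, rng=random):
    x, y = pos
    neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    scented = [n for n in neighbours if pheromone.get(n, 0) > 0]
    if scented:
        # "Instinct" rule: follow the strongest trail.
        return max(scented, key=lambda n: pheromone[n])
    return rng.choice(neighbours)  # otherwise: random walk

trail = {(1, 0): 0.2, (0, 1): 0.9}  # toy pheromone field
print(step((0, 0), trail))          # -> (0, 1), the strongest trail
```

The point being that a behaviour rule like this can be written down directly, without the programmer ever "interacting with ants in their interaction patterns".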

20 hours ago, Bernd.Brincken said:

Yes, I meant 'species' in biologic terms, like blatella germanica, see one of my former posts.

Except that such a species is assumed to be created by humans, as opposed to by evolution?

Now if an AI is created which reproduces itself, then it must have been constructed so that it has this property, is that not true? By a human, you think? Or by another AI, which was itself constructed by a human?


taeto, AFAIK the idea is that AI (-life?) evolves amid a soup of resources, materials, energy and patterns around it, which may have been supplied by humans.
Creation has an aspect of intention - which need not be the case here.

Given this scenario, I cannot see why a 'lower' AI lifeform would not manifest before a 'higher' one.
So one of these 'lower' - not-yet-strong - AIs would be the first evolved AI species that we humans see.
Where is it?

8 hours ago, Bernd.Brincken said:

taeto, AFAIK the idea is that AI (-life?) evolves amid a soup of resources, materials, energy and patterns around it, which may have been supplied by humans.
Creation has an aspect of intention - which need not be the case here.

Given this scenario, I cannot see why a 'lower' AI lifeform would not manifest before a 'higher' one.
So one of these 'lower' - not-yet-strong - AIs would be the first evolved AI species that we humans see.
Where is it?

Look at alife simulations.

Not necessary for AI to evolve, though. Evolution is slow and resource-intensive. We'll likely jump over that stage instead.
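For a flavour of what such alife simulations do, here is a minimal, purely illustrative mutation-plus-selection loop - a toy fitness function of my own invention, not code from any real alife package:

```python
import random

# Toy evolutionary loop: a "genome" is a list of numbers, fitness is how
# close its sum lands to a target. Mutation + selection, nothing more -
# a sketch of the idea, not a model of any real alife system.

TARGET = 10.0

def fitness(genome):
    return -abs(sum(genome) - TARGET)  # higher is better

def mutate(genome, rng):
    child = list(genome)
    child[rng.randrange(len(child))] += rng.uniform(-1, 1)
    return child

def evolve(generations=500, pop_size=20, seed=0):
    rng = random.Random(seed)
    population = [[0.0] * 5 for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]  # selection keeps the best half
        population = survivors + [mutate(rng.choice(survivors), rng)
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(sum(best))  # close to TARGET after many generations
```

Even for this trivial target, progress comes only from blind variation and selection over many generations - a hint of why evolving an AI this way would be slow and resource-intensive.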


The scenario of AI evolution was a concession that the 'creators' need not understand and design an AI in order to ... let it happen.

If we humans 'jump over that stage' in order to achieve the aim in a less 'slow and resource-intensive' way, we'd have to understand the conditions, techniques and paths.
And then we encounter precisely the obstacles that were discussed before in this thread.

On 3/5/2020 at 6:02 PM, Bernd.Brincken said:

taeto, AFAIK the idea is that AI (-life?) evolves amid a soup of resources, materials, energy and patterns around it, which may have been supplied by humans.
Creation has an aspect of intention - which need not be the case here.

Given this scenario, I cannot see why a 'lower' AI lifeform would not manifest before a 'higher' one.
So one of these 'lower' - not-yet-strong - AIs would be the first evolved AI species that we humans see.
Where is it?

It seems an interesting perspective. The problem is the huge gap between the original Church-Post-Turing (hypo)thesis about AI and what you describe. They suggested that whatever computational problem can be solved can, in particular, be solved by mechanical means. That does not necessarily include the problem of "coming alive" or "how to think". We know a lot about what we can do with computers, and yet we have no answer to CPT, and it appears unlikely that it is scientifically possible to discover or describe one. Maybe we suspect that "being alive" or "thinking" represent solutions to computational challenges. But that seems purely philosophical and far outside the scope of science.


It is also not in the scope of this thread 😉 - which is about Strong AI, or Artificial General Intelligence (AGI).
Wikipedia describes it, following other sources, as "the hypothetical intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can."
This is much more than "computational problems", because humans solve many problems that are not computational, especially those involving interactions between humans.
But if AGI might (only) be achieved by "coming alive", that's a separate discussion.
Just "evolving out of resources" is not necessarily life; it just means that a system was not created intentionally.

BTW, I added the "hypothetical" in the WP article, plus two sources, in the course of this discussion.


It seems that "intelligence" is a key notion, used to name certain concepts and also used in the definitions of those same concepts. We will only be equipped to recognize an AI/AGI when we have learned the exact meaning of "intelligence". We may hope that once an AGI has formed, we will be able to get into its head.

17 minutes ago, taeto said:

It seems that "intelligence" is a key notion. Used for the name of certain concepts, and also used in the definition of those same concepts. We will only be equipped to recognize an AI/AGI when we have learned the exact meaning of "intelligence".

There is no exact meaning, it's nebulous; much like the recognition of intelligence in other species...

4 minutes ago, dimreepr said:

There is no exact meaning, it's nebulous; much like the recognition of intelligence in other species...

Darn. You cut away my following sentence which explained how we will figure it out.


Of all entities, you in particular should appreciate the fact that it is only when we face a full-fledged AI and crack open its skull to see what is inside that we will truly know the meaning of "intelligence". Wouldn't work with you, apparently.


Why would you expect any more insight if you "get into its head"?
Is current medicine not able to get into people's heads? And what have they learned?

And IMHO, an exact definition of 'intelligence' is not necessary. Look at the AGI description:
"... understand or learn any intellectual task that a human being can."
This can be verified or falsified, with a good chance of consensus, without a discourse about this term.

18 hours ago, Bernd.Brincken said:

And IMHO, an exact definition of 'intelligence' is not necessary.

Now that depends on your definition of AGI...

18 hours ago, Bernd.Brincken said:

Look at the AGI description:
"... understand or learn any intellectual task that a human being can."

Is it, though?

 

Quote

Requirements

Main article: Cognitive science

Various criteria for intelligence have been proposed (most famously the Turing test) but to date, there is no definition that satisfies everyone.[12] However, there is wide agreement among artificial intelligence researchers that intelligence is required to do the following:[13]

No disagreement, or 'no dichotomy', as my friend Petr would say.

You can describe intelligence in this way, but you will find other, much tighter definitions, for example as the basis of intelligence tests.

But this is a question completely separate from the title of this thread, and IMHO fruitless to follow.
