What computers can't do for you


Genady


17 hours ago, Genady said:

It is not only "creativity", not even mostly about it. My doubts are about other human abilities, such as:

I started to focus on creativity because of words like imaginative, intuition, and experience in that passage. I think this is part of the problem: I know what you mean (or at least I think I do; shall we call it the human spark?), but when it comes to defining it precisely enough that it can be measured in a lab, it's virtually (ha) impossible. Which is why I think the goalposts will be continually moved as AI pushes boundaries.

 

14 hours ago, studiot said:

By 'prohibit' do you mean absolutely, or just prevent some creativity?

I don't know of any bar to creativity per se, but observe that creativity is often driven by other factors than preset goals and can arise spontaneously as when a doctor diagnoses a previously unknown disease or condition.

Not sure; prohibit seems too austere, so shall we say limit? If we think of creativity as a multi-faceted thing, then it's possible that different AIs could be more creative than humans in some ways and less creative in others.

In terms of preset goals, we have mesa-optimisers, in which the objective function itself is optimised. This has some AI researchers worried, because they believe it means an agent could develop its own goals, distinct from what a human originally intended.

There's also instrumental convergence (or maybe the above is also an example of this), in which 'soft' goals are learnt as a way of optimising a 'hard' goal: things like self-preservation are likely to emerge as soft goals because, regardless of what an agent is trying to do, existing helps a great deal.
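To make the mesa-optimisation worry concrete, here is a minimal toy sketch in Python (my own illustration with made-up objectives, not taken from the safety literature): an outer loop tunes the parameter of a proxy objective against the designer's base objective, while the inner "mesa" optimiser only ever optimises the proxy.

# A toy sketch of two-level ("mesa") optimisation, assuming nothing beyond numpy.
# The outer loop tunes the parameter `c` of a proxy objective; the inner loop
# (the "mesa-optimiser") then optimises that proxy, not the base objective itself.

import numpy as np

def base_objective(x):
    # What the designer actually wants maximised.
    return -(x - 3.0) ** 2

def proxy_objective(x, c):
    # What the inner optimiser actually maximises: a learned stand-in.
    return -(x - c) ** 2

def inner_optimise(c):
    # The mesa-optimiser: finds the x that maximises the *proxy*.
    # For this quadratic proxy the argmax is simply c.
    return c

rng = np.random.default_rng(0)
c = 0.0                      # initial proxy parameter
for step in range(200):      # outer loop: hill-climb c on the base objective
    candidate = c + rng.normal(scale=0.5)
    if base_objective(inner_optimise(candidate)) > base_objective(inner_optimise(c)):
        c = candidate

print(f"proxy parameter c ~ {c:.2f}")           # drifts toward 3.0
print(f"agent's chosen x  = {inner_optimise(c):.2f}")

The point of the toy: as long as training keeps c near 3, the two objectives agree, but the agent's actual goal is the proxy. If the base objective later changes and c does not, the agent keeps pursuing its own goal, which is exactly the gap the researchers worry about.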


I have a vague "Hypothesis 1" regarding human intelligence's advantage over AI:

AI discovers patterns in input data, while we discover patterns in our own thinking. IOW, the brain discovers patterns in its own activities. Patterns in the input data are a small subset of the latter.


1 hour ago, Genady said:

I have a vague "Hypothesis 1" regarding human intelligence's advantage over AI:

AI discovers patterns in input data, while we discover patterns in our own thinking. IOW, the brain discovers patterns in its own activities. Patterns in the input data are a small subset of the latter.

I've heard it expressed as AI interpolates, humans extrapolate.

There's a well-known paper in the field, On the Measure of Intelligence, that explores this idea quite thoroughly and stresses that intelligence should be measured by performance on a broad class of problems that agents have not seen before.
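A minimal numpy sketch of that interpolate/extrapolate distinction (an illustration of the slogan, not anything from the paper): fit a flexible model on one interval, then test it inside and outside that interval.

# Fit a model on one range, then test inside (interpolation) and
# outside it (extrapolation). The error inside stays small; outside it blows up.

import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 2 * np.pi, 200)
y_train = np.sin(x_train)

coeffs = np.polyfit(x_train, y_train, deg=9)   # stand-in for a flexible model

x_interp = np.linspace(0, 2 * np.pi, 100)          # inside the training range
x_extrap = np.linspace(2 * np.pi, 4 * np.pi, 100)  # outside it

err_in = np.abs(np.polyval(coeffs, x_interp) - np.sin(x_interp)).mean()
err_out = np.abs(np.polyval(coeffs, x_extrap) - np.sin(x_extrap)).mean()
print(f"mean error interpolating: {err_in:.4f}")   # small
print(f"mean error extrapolating: {err_out:.1f}")  # huge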


18 minutes ago, Prometheus said:

I've heard it expressed as AI interpolates, humans extrapolate.

There's a well-known paper in the field, On the Measure of Intelligence, that explores this idea quite thoroughly and stresses that intelligence should be measured by performance on a broad class of problems that agents have not seen before.

Thanks a lot. Downloaded. I'll study it. Maybe interpolate-extrapolate expresses what I mean by external-internal patterns -- we'll see.


There are many stories in fiction where the hero has to ask a question of a super-intelligent / super-logical being, often the guardian of some gateway.

Our heroine baffles the guardian by asking a version of one of the many known (logical) paradoxes.

The English language is also rich in statements that contain an inherent paradox, although everybody knows what is actually meant.

Here is one such statement:

"My jigsaw has a missing piece"

Yet by definition a 'missing piece' is one that the jigsaw does not 'have'.

 

How would a computer / AI resolve this?


5 minutes ago, studiot said:

There are many stories in fiction where the hero has to ask a question of a super-intelligent / super-logical being, often the guardian of some gateway.

Our heroine baffles the guardian by asking a version of one of the many known (logical) paradoxes.

The English language is also rich in statements that contain an inherent paradox, although everybody knows what is actually meant.

Here is one such statement:

"My jigsaw has a missing piece"

Yet by definition a 'missing piece' is one that the jigsaw does not 'have'.

 

How would a computer / AI resolve this?

I guess that to resolve this, a system developer would first feed the computer 10 million examples of such statements and various human responses to them; the computer would develop a pattern of how to respond and would apply it the next time it stumbles upon something like that.


28 minutes ago, Genady said:

I guess that to resolve this, a system developer would first feed the computer 10 million examples of such statements and various human responses to them; the computer would develop a pattern of how to respond and would apply it the next time it stumbles upon something like that.

But it needs to develop its response to be correct!

Humans don't seem bothered by this difficulty.


1 hour ago, studiot said:

But it needs to develop its response to be correct!

Humans don't seem bothered by this difficulty.

Right. Its response would be correct in, let's say, 85% of cases, "correct" meaning similar to how some humans would respond.

Yes, humans do it differently, but that is not what AI is about. It is only about the outcome.

(disclaimer: I'm playing devil's advocate here)


3 hours ago, studiot said:

But it needs to develop its response to be correct!

Humans don't seem bothered by this difficulty.

But what is correct? Especially in the context of language models.

Fill in the blank here:

Red Bull gives you _____.

If you said wings, as GPT-3 did, you'd be wrong according to a recent benchmark paper, which argues that outputs that are not factually correct are simply wrong.

But this ignores a large function of language. "Diabetes" might be more factually correct, but "wings" would also be appropriate, because it's funny. This is particularly pertinent if we want AI to be creative.

 

21 minutes ago, Genady said:

I'm trying to narrow the original passage down to one item: AI can't come up with new models; it can only optimize models created by humans.

I'm not sure that's true, given the universal approximation theorem for neural networks, which, if I understand it correctly, states that a neural network can (at least theoretically) approximate any model.
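As a rough numerical illustration of the theorem (a sketch only, with a made-up target function and width; the theorem itself concerns one-hidden-layer networks of arbitrary width): random tanh features plus a least-squares readout approximating a continuous target on an interval.

# One hidden layer of random tanh features plus a least-squares readout,
# approximating an arbitrary continuous target on an interval.

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 400).reshape(-1, 1)
target = np.abs(x) * np.sin(3 * x)            # any continuous target will do

width = 200                                   # hidden units
W = rng.normal(size=(1, width)) * 3           # random input weights
b = rng.normal(size=width) * 3
H = np.tanh(x @ W + b)                        # hidden-layer activations

readout, *_ = np.linalg.lstsq(H, target, rcond=None)
approx = H @ readout
print(f"max |error| with {width} units: {np.abs(approx - target).max():.4f}")

Widening the layer drives the error down, which is the theorem's guarantee, but only on the spaces chosen in advance: the interval [-3, 3] in, real numbers out.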


36 minutes ago, Prometheus said:

I'm not sure that's true, given the universal approximation theorem for neural networks, which, if I understand it correctly, states that a neural network can (at least theoretically) approximate any model.

As I understand it, a DNN can approximate any function in a given model, i.e. given input and output spaces. What those spaces are is up to a human.


I had a funny case recently. I wanted to recall the name of the musicians who made an interesting song of great value to me. The process of pulling the information from long-term memory resembled pulling a long hair from the bath 😁 nothing to do with "search" on a computer. And then I thought that the "rise of the machines" and a robo-world with human slaves will never occur, because machines are too... plain, compared to the human brain. Not to mention how glitchy the software and how vulnerable the hardware is. In short, computers cannot draw a circle or have any idea of infinity. They can make a polygon with millions of tiny corners invisible to the human eye and deal with huge numbers, but... they really have their limit.


26 minutes ago, Cognizant said:

I had a funny case recently. I wanted to recall the name of the musicians who made an interesting song of great value to me. The process of pulling the information from long-term memory resembled pulling a long hair from the bath 😁 nothing to do with "search" on a computer. And then I thought that the "rise of the machines" and a robo-world with human slaves will never occur, because machines are too... plain, compared to the human brain. Not to mention how glitchy the software and how vulnerable the hardware is. In short, computers cannot draw a circle or have any idea of infinity. They can make a polygon with millions of tiny corners invisible to the human eye and deal with huge numbers, but... they really have their limit.

Perhaps. But can we test whether a computer has or doesn't have an idea of infinity?


Regarding the universal approximation theorem, here is an example of a model: astrology. What will a DNN do if the training data is people's astrological data as input and their life affairs as output? It can approximate any function in this model, but any such function will be garbage anyway.


17 hours ago, Genady said:

As I understand it, DNN can approximate any function in a given model, i.e. given input and output spaces. What are these spaces, is up to human.

But it's more universal than that - those spaces need not be defined by a human. They could be defined by a selection process, similar to how human ingenuity was shaped by natural selection. Already we have self-supervised models, where the agent labels the data itself. Whether that is a desirable thing is another question.
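A minimal sketch of the self-supervised idea (illustrative, not any particular system): the "labels" are just future values of the data itself, so no human annotates anything.

# Self-supervision in miniature: a linear model learns to predict the next
# sample of a series from the previous two. The labels come from the data.

import numpy as np

rng = np.random.default_rng(2)
t = np.arange(1000)
series = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)

# Build (input, label) pairs from the raw data alone.
X = np.stack([series[:-2], series[1:-1]], axis=1)  # two past samples
y = series[2:]                                     # the "label": the next sample

w, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ w
print(f"mean |error| predicting the next sample: {np.abs(pred - y).mean():.4f}")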

13 hours ago, Genady said:

What will a DNN do if the training data is people's astrological data as input and their life affairs as output? It can approximate any function in this model, but any such function will be garbage anyway.

Then it would have made the same mistake humans did for thousands of years.


3 minutes ago, Prometheus said:

But it's more universal than that - those spaces need not be defined by a human. They could be defined by a selection process, similar to how human ingenuity was shaped by natural selection. Already we have self-supervised models, where the agent labels the data itself. Whether that is a desirable thing is another question.

I don't know what "labeling data" is, or how it relates to coming up with a new model.

5 minutes ago, Prometheus said:

Then it would have made the same mistake humans did for thousands of years.

But will it come up with a new model?


On 4/15/2022 at 11:00 AM, Genady said:

I don't know what "labeling data" is, or how it relates to coming up with a new model.

I thought that's what you meant when you mentioned the output space. Labelling data is basically a way for humans to tell a model what something is, i.e. labelling a cat picture as "cat".
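For concreteness, a toy sketch of what that labelling buys a supervised model; the features are made-up stand-ins for pictures, and the label array is the part the human supplies.

# A nearest-centroid classifier over labelled points. The `labels` array is
# the human's contribution; everything else the model works out from it.

import numpy as np

rng = np.random.default_rng(3)
cats = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))   # stand-ins for cat pictures
dogs = rng.normal(loc=[2, 2], scale=0.5, size=(50, 2))   # stand-ins for dog pictures
features = np.vstack([cats, dogs])
labels = np.array(["cat"] * 50 + ["dog"] * 50)           # the human's work

centroids = {name: features[labels == name].mean(axis=0) for name in ("cat", "dog")}

def classify(x):
    # Assign the label of the nearest class centroid.
    return min(centroids, key=lambda name: np.linalg.norm(x - centroids[name]))

print(classify(np.array([0.1, -0.2])))   # cat
print(classify(np.array([1.9, 2.3])))    # dog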

 

On 4/15/2022 at 11:00 AM, Genady said:

But will it come up with a new model?

If we are saying that neural networks can approximate any model, then all we need is a way for the network to search that model space. Having it human-directed is one (and perhaps the preferable) way, but you could have some kind of search and selection process: perturb the model every now and then, compare results to reality, and prefer models that minimise this distance.
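A sketch of that perturb/compare/select loop (the model family, a cubic, and the stand-in "reality" are illustrative): random search over model space, keeping perturbations that bring predictions closer to reality.

# Random search over a model space: perturb the model, compare to reality,
# keep the perturbation if it reduces the distance.

import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(-1, 1, 100)
reality = np.exp(x)                      # the process we are trying to model

params = np.zeros(4)                     # coefficients of a cubic model

def distance(p):
    return np.abs(np.polyval(p, x) - reality).mean()

for step in range(5000):
    perturbed = params + rng.normal(scale=0.05, size=4)
    if distance(perturbed) < distance(params):   # prefer models closer to reality
        params = perturbed

print(f"final mean distance to reality: {distance(params):.4f}")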

Some networks already do something like this, having a 'curiosity' or 'exploration' parameter that controls how often the network should try something new. One network that was trained to play Duke Nukem ended up addicted to an in-game TV - the TV was giving a constant stream of new data and its curiosity parameter was jacked up to max.
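The knob itself is easy to illustrate. Here is a sketch of an exploration parameter in the spirit of that curiosity setting, using a toy epsilon-greedy bandit (the payouts are made up): with the parameter at maximum, the agent never settles on what it already knows to be good, much like the TV-watching agent.

# Epsilon-greedy exploration: with probability epsilon, try something new;
# otherwise exploit the best-looking option so far.

import numpy as np

rng = np.random.default_rng(5)
true_payouts = np.array([0.2, 0.5, 0.8])   # three arms, one clearly best

def run(epsilon, steps=5000):
    estimates, counts = np.zeros(3), np.zeros(3)
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:          # explore: try something new
            arm = rng.integers(3)
        else:                               # exploit: best current estimate
            arm = int(np.argmax(estimates))
        reward = float(rng.random() < true_payouts[arm])
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps

print(f"epsilon=0.1 average reward: {run(0.1):.3f}")   # mostly exploits the 0.8 arm
print(f"epsilon=1.0 average reward: {run(1.0):.3f}")   # pure exploration, stuck near 0.5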


A DNN can approximate any function in a model, but the model is given to it.

What I mean by coming up with a new model, in the astrology example for instance, is: would a DNN come up with the idea of considering, instead of astrological data, parameters like a person's education, social background, family psychological profile, how much they travel, what they do, how big their town is, whether it is rural or industrial, conservative or liberal, etc.?


Some further rambling thoughts comparing the development of AI/computers and humans.

  1. From earliest times humans developed the concept and implementation of teamwork. That is how they would have taken down a woolly mammoth, for instance.
    Obviously that required lots of humans.
    On the other hand, computers are not (as far as I know) developed in an environment of lots of cooperating computers/AIs.

  2. Humans learn and grow up under the guidance of older humans, thus using one way of passing on skills and knowledge.
    This might even have led to the concept of a guiding being, and religion.
    Computers don't follow this route, so could a computer develop such a concept?

  3. Humans, along with other living beings, have the advantage of being able to participate in the evolution of a species by reproduction, passing on genes.
    Again, as far as I know, this is unavailable to computers.

On 4/17/2022 at 7:23 AM, studiot said:

Some further rambling thoughts comparing the development of AI/computers and humans.

  1. From earliest times humans developed the concept and implementation of teamwork. That is how they would have taken down a woolly mammoth, for instance.
    Obviously that required lots of humans.
    On the other hand, computers are not (as far as I know) developed in an environment of lots of cooperating computers/AIs.

  2. Humans learn and grow up under the guidance of older humans, thus using one way of passing on skills and knowledge.
    This might even have led to the concept of a guiding being, and religion.
    Computers don't follow this route, so could a computer develop such a concept?

  3. Humans, along with other living beings, have the advantage of being able to participate in the evolution of a species by reproduction, passing on genes.
    Again, as far as I know, this is unavailable to computers.

(continuing my devil's advocate mission...)

1. The web, clouds, distributed computing, etc. are the environments of lots of cooperating computers, aren't they?

2. Hmm, I can't think of a good enough computer analogy of this...

3. Computers can self-replicate, at least in principle.
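Software self-replication, at its simplest, is a quine: a program whose output is its own source code. A classic Python example (a sketch of the principle, not a claim about robots):

# A quine: running this program prints exactly its own source.
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))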

(BTW, it took me some time to figure out what is paradoxical in your jigsaw example, but I did. I think AI could deal with this kind of language vagueness, given enough examples.)

(I gave the statement "My jigsaw has a missing piece" to Google Translate, and it translated it correctly, without any inherent paradoxes, into both Russian and Hebrew.)


 

I like the point John Searle, the philosopher of mind (of "Chinese Room" fame), has made: if we develop an AGI that replicates all the functions of a human brain, we have simply demonstrated the point that a traditional computer cannot really achieve consciousness or general intelligence. To make an authentic brain, we have to replicate much of what is in a natural brain, i.e. make something that is really much more than a computer. Our trans-computer DNN system would likely need both digital and analog elements, which is what the human brain has. It would likely need, per developmental psychologists, a period of growth that would be much like a childhood. It would probably need to be embodied in some way, so that it can interact with the world, develop responses and feelings regarding the world and its inhabitants, and form models of how a physical world follows certain rules and patterns. And it would need some basic drives: curiosity, self-preservation, social needs, etc.

 

An older paper, but it addresses the digital/analog issue:

https://news.yale.edu/2006/04/12/brain-communicates-analog-and-digital-modes-simultaneously


On 4/17/2022 at 12:21 PM, Genady said:

A DNN can approximate any function in a model, but the model is given to it.

What I mean by coming up with a new model, in the astrology example for instance, is: would a DNN come up with the idea of considering, instead of astrological data, parameters like a person's education, social background, family psychological profile, how much they travel, what they do, how big their town is, whether it is rural or industrial, conservative or liberal, etc.?

Not sure I follow. We could still regard DNA as searching a solution space on geological timescales, optimising for replicability. What gives DNA its model? Surely it is shaped only by natural selection.

DNN is a bit too general a term: a CNN couldn't do this, but a reinforcement learning agent could. All it needs is to be motivated, in some sense, to explore the input space, and to have some metric of its impact on the solution space.

 

On 4/17/2022 at 12:23 PM, studiot said:

Some further rambling thoughts comparing the development of AI/computers and humans.

  1. From earliest times humans developed the concept and implementation of teamwork. That is how they would have taken down a woolly mammoth, for instance.
    Obviously that required lots of humans.
    On the other hand, computers are not (as far as I know) developed in an environment of lots of cooperating computers/AIs.

  2. Humans learn and grow up under the guidance of older humans, thus using one way of passing on skills and knowledge.
    This might even have led to the concept of a guiding being, and religion.
    Computers don't follow this route, so could a computer develop such a concept?

  3. Humans, along with other living beings, have the advantage of being able to participate in the evolution of a species by reproduction, passing on genes.
    Again, as far as I know, this is unavailable to computers.

1.) We have Generative Adversarial Networks, where two or more networks compete with each other to improve some process, usually classification. There are also agents whose learning is almost entirely with other AIs: MuZero was trained to play chess and other games by playing the existing best chess AI.

2.) There are a few ways AI currently learns; one way is supervised learning, which requires a human to label some data - pointing to a picture and saying "cat" (maybe 3000 times, but it's the same idea).

3.) There are genetic algorithms, but I don't think that's what you mean. I don't see what, in principle, will stop us designing a robot able to build, perhaps imperfect, replicas of itself. Once it can, it is subject to evolution. Whether that's a desirable feature is another question.
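For reference, a minimal genetic-algorithm sketch (the classic OneMax toy problem; population size, crossover and mutation rates are all made up): a population of bitstrings evolves toward maximal fitness by selection, crossover and mutation.

# A tiny genetic algorithm: evolve bitstrings toward all-ones (OneMax).

import numpy as np

rng = np.random.default_rng(6)
pop = rng.integers(0, 2, size=(30, 40))        # 30 genomes, 40 bits each

def fitness(p):
    return p.sum(axis=1)                        # count of ones

for generation in range(100):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-15:]]          # select the fitter half
    cut = rng.integers(1, 40, size=15)          # one-point crossover
    kids = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 15][c:]])
                     for i, c in enumerate(cut)])
    flips = rng.random(kids.shape) < 0.01       # mutation
    kids = np.where(flips, 1 - kids, kids)
    pop = np.vstack([parents, kids])

print(f"best fitness after evolving: {fitness(pop).max()} / 40")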

 

15 minutes ago, TheVat said:

 if we develop an AGI that replicates all the functions of a human brain...

But here we're concerned with what a computer can and can't do; it doesn't necessarily need to replicate the functions of a human brain, just the behaviours.

I'm interested: many people here seem to believe that there is nothing immaterial about the brain, no ghost in the machine, but still maintain that something other than a biological entity cannot be conscious. That seems to imply that substrate matters, that only biological substrates can manifest consciousness. If I'm interpreting that correctly, what is unique to the arrangement of atoms in the human that prevents consciousness manifesting in other, inorganic, arrangements?


Some further thoughts.

On replication.

Sure, a robot (note: not just a computer but a lot of other designed parts) could, if the materials were available, eventually build another robot.
We know that robotic arms can perform specific tasks more accurately than human arms.

But maroon a Robotinson Crusoe on another planet.
Would it even be able to think of mining silicon to repair its crash-damaged brain?

 

On teamwork.

Sure, we have linked a couple of computers together to enhance their combined power.
Even many hundreds of thousands, in the genome project for instance.
But these were all outside-directed activities.
Evidence of a bunch of robots/computers that could think of and implement this for themselves is a long way off, in my opinion.

On rational thinking.
This is my example proposed in the computation thread.

When will an AI/computer appear that could have the following train of thought by itself?

In North America the main mountain ranges run north-south, and there are wide open spaces between them, also running north-south.
In Europe, the main mountain ranges run east-west.

Both of these northern continents have been subject to many ice ages and partly covered by ice sheets.
During such times the southern edges of these ice sheets advanced and retreated periodically, as the climate was not constant.

In modern times it is known that the variation of natural or indigenous species, both flora and fauna, is much greater in North America than it is in Europe.

I would like to learn of an AI that could winkle out these facts and then produce a hypothesis as to their connection.

 

 

