What computers can't do for you


Genady

In a recent textbook, Physical Models of Living Systems: Probability, Simulation, Dynamics by Philip Nelson of the University of Pennsylvania (November 2021), there is a short section in the introduction for students, which I have attached below.

My questions for discussion are: Do you agree with these limitations of computers? Are they temporary or fundamental?

[attached image: the section from the textbook's introduction]


It's certainly true of current computers - or, more generally, of current AI.

It'll be interesting to see how deep learning architectures progress. We have reinforcement learning agents that can learn to play one game, then do quite well on another they have never played - the more similar the games, the better they do - and current work tries to make the generalisations they draw as broad as possible so they can tackle more disparate tasks. That could be interpreted as learning from past experience.

The section indicates, though, that they are primarily interested in agents in a real lab setting - that might be a little further off, but it doesn't seem to be a different kind of task from playing games.


9 minutes ago, Prometheus said:

The section indicates, though, that they are primarily interested in agents in a real lab setting - that might be a little further off, but it doesn't seem to be a different kind of task from playing games.

I rather think that a task in a real life setting is a different kind of task to playing games.


5 minutes ago, Genady said:

I rather think that a task in a real life setting is a different kind of task to playing games.

But in what way? The pertinent feature, surely, is that both the virtual space and the real lab present an agent with an objective and obstacles. We may in time also want agents that can formulate their own objectives within some constraints. In terms of creating an agent, there is no difference between the real and the virtual world except the complexity of the former compared to the latter. This is what I meant by them being of the same kind.

For instance, Tesla's self-driving agents are trained in large part in virtual worlds. That is particularly helpful for edge cases - cows in the middle of the road and other bonkers stuff that happens so rarely in the real world that the agent struggles to learn from real examples, yet often enough that it needs to learn to deal with it.


1 minute ago, Prometheus said:

But in what way? The pertinent feature, surely, is that both the virtual space and the real lab present an agent with an objective and obstacles. We may in time also want agents that can formulate their own objectives within some constraints. In terms of creating an agent, there is no difference between the real and the virtual world except the complexity of the former compared to the latter. This is what I meant by them being of the same kind.

For instance, Tesla's self-driving agents are trained in large part in virtual worlds. That is particularly helpful for edge cases - cows in the middle of the road and other bonkers stuff that happens so rarely in the real world that the agent struggles to learn from real examples, yet often enough that it needs to learn to deal with it.

Perhaps I need to clarify: I don't mean that every task in a real-life setting is a different kind of task from playing games. I rather think that there is always some task in a real-life setting which is different. Then we'll build a computer to cover that kind. And then there will be another, and so on.


Great topic that deserves more attention, though I'm glad it got the one it did.

I don't know if my comment will be useful, but I'm thinking of chess as an interesting testing ground.

Computers have far exceeded the capabilities of human minds at chess. Computers play only on the grounds of pure combinatorics. Grandmasters, on the contrary, although they have powerful combinatoric minds by human standards, must at some point - given the complexity of the game - base a significant part of their reasoning on strategic, conceptual principles rather than pure if-then sequences. Strategic principles can deal with wide classes of combinatoric landscapes, with the result of improving your chances of winning only on average. It is not entirely impossible that computers get so good at calculating outcomes that they make our pattern-based reasoning obsolete.

It is entirely possible that if we insist on computers being conceptual, we'll force them to play on somebody else's turf.

On the game-vs-task discussion: I see no difference at all. But maybe I haven't thought about it hard enough.
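For concreteness, here is a minimal sketch of what "pure combinatorics, if-then sequences" looks like in code: minimax with alpha-beta pruning over a hand-made toy game tree. This is an illustration only - real chess engines add heuristic evaluation and much else on top of this core exhaustive search.

```python
# Toy illustration of the "pure combinatorics" approach: minimax with
# alpha-beta pruning. Leaves hold a numeric score from the maximizing
# player's point of view; inner nodes are lists of child positions.

def minimax(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):   # leaf: return its score
        return node
    best = float("-inf") if maximizing else float("inf")
    for child in node:
        score = minimax(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if beta <= alpha:   # prune: this branch cannot change the result
            break
    return best

# A tiny 2-ply tree: each inner list is a position, numbers are leaf scores.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))  # 3
```

The if-then character is plain: every branch is enumerated and compared, and the pruning itself is just another deterministic rule.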

 


35 minutes ago, joigus said:

Great topic that deserves more attention, though I'm glad it got the one it did.

I don't know if my comment will be useful, but I'm thinking of chess as an interesting testing ground.

Computers have far exceeded the capabilities of human minds at chess. Computers play only on the grounds of pure combinatorics. Grandmasters, on the contrary, although they have powerful combinatoric minds by human standards, must at some point - given the complexity of the game - base a significant part of their reasoning on strategic, conceptual principles rather than pure if-then sequences. Strategic principles can deal with wide classes of combinatoric landscapes, with the result of improving your chances of winning only on average. It is not entirely impossible that computers get so good at calculating outcomes that they make our pattern-based reasoning obsolete.

It is entirely possible that if we insist on computers being conceptual, we'll force them to play on somebody else's turf.

On the game vs task discussion; I see no difference at all. But maybe I haven't thought about it hard enough.

 

I'm not sure deep neural networks work by pure combinatorics, if-then sequences. I'd rather compare them to developing strategic principles during the learning stage and then applying them. Not "conceptual" - that means something different to me - but strategic.


53 minutes ago, joigus said:

Computers play only on the grounds of pure combinatorics. Grand Masters, on the contrary, although they have powerful combinatoric minds by human standards, at some point through the complexity of the game, they must base a significant part of their reasoning on strategic, conceptual principles rather than pure if-then sequences.

I think Deep Blue used some kind of combinatorial tree search when it beat Kasparov back in the 90s, but that's not quite true of AlphaGo, which beat Sedol at Go. Apparently Go has about 10^172 possible positions - far too many for a full search. Instead, certain branches are selected by a neural network - analogous to how a human might work on just a few likely-looking branches in their head before making a move. This blog explains it pretty well.
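A rough sketch of that idea: a policy function scores the legal moves and only the most promising few are searched further, taming the combinatorial explosion. Everything here - the toy "game" (add numbers to a running total), the noisy `policy` scorer - is invented for illustration; AlphaGo's actual method is Monte Carlo tree search guided by trained policy and value networks.

```python
import random

random.seed(0)

# Toy game: a state is a number, a move adds its value, and the aim is
# to reach the highest total in a fixed number of plies.
def legal_moves(state):
    return [1, 2, 3, 4, 5]

def apply_move(state, move):
    return state + move

def evaluate(state):
    return state

def policy(state, moves):
    # Stand-in for a trained policy network: noisy priors over moves.
    raw = {m: m + random.random() for m in moves}
    total = sum(raw.values())
    return {m: s / total for m, s in raw.items()}

def selective_search(state, depth, top_k=2):
    if depth == 0:
        return evaluate(state)
    priors = policy(state, legal_moves(state))
    # Expand only the top_k moves the policy favours, not all of them.
    best_moves = sorted(priors, key=priors.get, reverse=True)[:top_k]
    return max(selective_search(apply_move(state, m), depth - 1, top_k)
               for m in best_moves)

print(selective_search(0, depth=3))  # 15: three plies of the best move, 5
```

With a branching factor of 5 and depth 3, a full search would visit 125 leaves; pruning to the top 2 moves per ply visits only 8 - the same trick, scaled up enormously, is what makes Go searchable at all.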


28 minutes ago, Prometheus said:

I think Deep Blue used some kind of combinatorial tree search when it beat Kasparov back in the 90s, but that's not quite true of AlphaGo, which beat Sedol at Go. Apparently Go has about 10^172 possible positions - far too many for a full search. Instead, certain branches are selected by a neural network - analogous to how a human might work on just a few likely-looking branches in their head before making a move. This blog explains it pretty well.

Yes, a DNN's probabilistic classification function is reminiscent of strategic principles.

Here is another impressive application, not game-based: "a new AI system that can create realistic images and art from a description in natural language."


4 hours ago, Prometheus said:

Instead, certain branches are selected by a neural network - analogous to how a human might work on just a few likely-looking branches in their head before making a move. This blog explains it pretty well.

To train the DNN they "use[d] the RL policy network to play more than 30 million games." How many games does a human master play or study in their training?


9 hours ago, joigus said:


On the game vs task discussion; I see no difference at all. But maybe I haven't thought about it hard enough.

 

I guess it's one thing to trust a computer to handle the situation in a game, another entirely to leave business to one when sh*t gets real B/


9 hours ago, Genady said:

Yes, a DNN's probabilistic classification function is reminiscent of strategic principles.

In what sense is AlphaGo probabilistic? It outputs a probability distribution, but, setting aside how the weights were initialised, given identical inputs (however unlikely that is) it will give identical outputs.
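The point can be made concrete with a tiny hand-rolled "network" (weights and sizes made up for illustration): the output is a probability distribution, yet the mapping from input to output is perfectly deterministic once the weights are fixed.

```python
import math

# A fixed-weight network outputs a probability distribution, but the
# mapping itself is deterministic: same weights + same input = same output.

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def forward(x, weights):
    # One linear layer followed by softmax.
    logits = [sum(wi * xi for wi, xi in zip(row, x)) for row in weights]
    return softmax(logits)

W = [[0.5, -1.0], [1.5, 0.2], [-0.3, 0.8]]  # fixed ("trained") weights
x = [1.0, 2.0]

out1 = forward(x, W)
out2 = forward(x, W)
print(out1 == out2)  # True: identical inputs give identical outputs
print(sum(out1))     # ~1.0: the output is a probability distribution
```

So "probabilistic" here describes the shape of the output, not the behaviour of the function.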

5 hours ago, Genady said:

To train the DNN they "use[d] the RL policy network to play more than 30 million games." How many games does a human master play or study in their training?

One-shot learning architectures are being deployed, so that metric is falling fast, at least for image and text classification. But is it relevant to your OP? As long as an agent can make 'high-level insights', does it matter that its learning regime is not like that of humans?


Yes, good topic @Genady. +1

 

Humans are adaptable, and one human can (and does, as a matter of course) learn many things, not just focus on one.

When Tesla's autopilot can ride the big wave into the beach on its surfboard,
Take the trolleybus to the car park,
Drive its car to the airport,
Fly a light aircraft to the ski resort in the Rockies,
Negotiate a major ski run,
Go to the bar and down a nightcap,
Before finally plugging itself in for an overnight recharge,

I will begin to believe that computers are beginning to catch up.

:)


2 hours ago, studiot said:

Yes, good topic @Genady. +1

 

Humans are adaptable, and one human can (and does, as a matter of course) learn many things, not just focus on one.

When Tesla's autopilot can ride the big wave into the beach on its surfboard,
Take the trolleybus to the car park,
Drive its car to the airport,
Fly a light aircraft to the ski resort in the Rockies,
Negotiate a major ski run,
Go to the bar and down a nightcap,
Before finally plugging itself in for an overnight recharge,

I will begin to believe that computers are beginning to catch up.

:)

I agree. The subtlety is that we can make it do just that, whatever that is, e.g. your scenario above. And it will be doing just that forever.

The "just that" includes a possibility to modify "that" in some ways. Then it will be modifying "that" in those same ways forever...


3 hours ago, Prometheus said:

given identical inputs (although unlikely) it will give identical outputs.

You're right. And this is a limitation.

15 hours ago, joigus said:

On the game vs task discussion; I see no difference at all.

I think one big difference is this: in a game there exists a prescription for how to generate all possible moves; there is no such prescription in the real world.
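To illustrate what such a prescription looks like, here is a complete legal-move generator for the simplest possible case, tic-tac-toe - a few lines exhaust the entire move set, which is exactly what no real-world task offers. (The example is my own, not from the thread.)

```python
# In a game, the rules are a complete prescription for enumerating every
# legal move. For tic-tac-toe, a move is any empty cell, so the whole
# move set falls out of one list comprehension.

def legal_moves(board):
    # board is a 9-element list: "X", "O", or "" for an empty cell.
    return [i for i, cell in enumerate(board) if cell == ""]

board = ["X", "",  "O",
         "",  "X", "",
         "O", "",  ""]
print(legal_moves(board))  # [1, 3, 5, 7, 8]
```

Chess and Go need more code, but the principle is identical: the rules define the move generator exhaustively. Nothing plays that role for "run an experiment in a real lab."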


5 minutes ago, Prometheus said:

You want a self-driving car that will do different things given the same inputs?

No. But the topic is what computers can't do for us. And sometimes a different output is good.


Well, although they aren't the norm, there are probabilistic frameworks for deep learning if that is desirable.

The topic, as I understand it, is what computers can't, and will never be able to, do. If we assume there isn't anything supernatural in our wetware, surely it is only a matter of time before computers can at least recreate our creativity?
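One simple example of making behaviour stochastic even with fixed weights: sample an action from the network's output distribution rather than always taking the most probable one. The action names and probabilities below are made up; Bayesian and dropout-based frameworks go further and put distributions over the weights themselves.

```python
import random

# A network's output distribution over actions (values invented here).
probs = {"brake": 0.7, "steer": 0.2, "accelerate": 0.1}

def act_greedy(p):
    # Deterministic policy: always pick the most probable action.
    return max(p, key=p.get)

def act_sample(p):
    # Stochastic policy: sample an action in proportion to its probability.
    return random.choices(list(p), weights=p.values())[0]

print(act_greedy(probs))                          # always "brake"
print({act_sample(probs) for _ in range(1000)})   # typically all three actions
```

The same fixed distribution supports both behaviours; whether identical inputs should give identical outputs is a design choice, not a law of the architecture.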


13 minutes ago, Prometheus said:

Well, although they aren't the norm, there are probabilistic frameworks for deep learning if that is desirable.

The topic, as I understand it, is what computers can't, and will never be able to, do. If we assume there isn't anything supernatural in our wetware, surely it is only a matter of time before computers can at least recreate our creativity?

I agree that there is no reason something artificial can't recreate our creativity. The question is: are computers as we know them capable of that, or will we need different underlying principles?


3 hours ago, Genady said:

I agree. The subtlety is that we can make it do just that, whatever that is, e.g. your scenario above. And it will be doing just that forever.

The "just that" includes a possibility to modify "that" in some ways. Then it will be modifying "that" in those same ways forever...

I hope you realise that I mean one computer being able to do all those things a human can.

I have never met one.

 


3 minutes ago, studiot said:

I hope you realise that I mean one computer being able to do all those things a human can.

I have never met one.

 

I think, in principle a robot could do any one of these things, and then they can be combined in one robot. In principle.


5 minutes ago, Genady said:

I think, in principle a robot could do any one of these things, and then they can be combined in one robot. In principle.

You have been watching too much Terminator.

:)


25 minutes ago, studiot said:

You have been watching too much Terminator.

What, in principle, do you believe will prohibit creative AI?

2 hours ago, Genady said:

The question is: are computers as we know them capable of that, or will we need different underlying principles?

So far, larger neural networks trained on more data haven't hit a plateau in performance, which leads some to believe that sufficiently large networks will achieve human-level ability. That seems consistent with the universal approximation theorem - assuming creativity is ultimately a computation. My personal guess, based on absolutely nothing, is that gradient-based methods won't achieve it, and that some kind of evolutionary update will be required.

I suspect we will continue to redefine creativity to mean whatever humans can do that AI can't. Some people would argue that AlphaGo is creative - apparently Go players describe the moves it has produced as creative.
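The contrast between gradient-based and evolutionary updates can be sketched in a few lines. This is a deliberately tiny hill-climbing evolution strategy on a made-up two-parameter objective - a toy of my own, not a method from the literature: the parameters improve through mutation and selection, with no derivatives anywhere.

```python
import random

random.seed(42)

# A gradient-free "evolutionary update" in miniature: mutate a parameter
# vector, keep the mutant if it scores better. Unlike gradient descent,
# the fitness function never needs to be differentiated.

def fitness(params):
    # Toy objective: get the parameters close to (3, -1).
    return -((params[0] - 3) ** 2 + (params[1] + 1) ** 2)

params = [0.0, 0.0]
for _ in range(2000):
    mutant = [p + random.gauss(0, 0.1) for p in params]  # mutation
    if fitness(mutant) > fitness(params):                # selection
        params = mutant

print([round(p, 1) for p in params])  # close to [3.0, -1.0]
```

Scaled up, this family of methods (evolution strategies, neuroevolution) has been used to train network weights directly, which is why "some kind of evolutionary update" is not a fanciful alternative to gradients.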


39 minutes ago, Prometheus said:

What, in principle, do you believe will prohibit creative AI?

So far, larger neural networks trained on more data haven't hit a plateau in performance, which leads some to believe that sufficiently large networks will achieve human-level ability. That seems consistent with the universal approximation theorem - assuming creativity is ultimately a computation. My personal guess, based on absolutely nothing, is that gradient-based methods won't achieve it, and that some kind of evolutionary update will be required.

I suspect we will continue to redefine creativity to mean whatever humans can do that AI can't. Some people would argue that AlphaGo is creative - apparently Go players describe the moves it has produced as creative.

I appreciate what you say. Unfortunately, I cannot verbalize my "gut feeling" about it. Something like the difference between countable and uncountable infinities...

It is not only about "creativity", not even mostly about it. My doubts are about other human abilities, such as:

[attached image]


3 hours ago, Prometheus said:

What, in principle, do you believe will prohibit creative AI?

By 'prohibit' do you mean absolutely, or just prevent some creativity?

I don't know of any bar to creativity per se, but I observe that creativity is often driven by factors other than preset goals and can arise spontaneously, as when a doctor diagnoses a previously unknown disease or condition.

Or, sticking with medical examples

Patient, "  I have sore tendons"

Doctor, "You have tendonitis"

Would any AI ever be cheeky enough to 'invent' such a condition?

 

Or how about this question for an AI:

 

"Where do I start filling to create a particular embankment on sloping ground of unknown, variable terrain?"

