What computers can't do for you

1 hour ago, Prometheus said:

This survey of researchers in the field gives a 50% chance of human-level intelligence in ~40 years. It's probably the most robust estimate we're going to get.

Pure guesswork, no better than the "we will have fusion within 20 years" guess of the 1950s.

Surely we are talking about now?

 

A further point I forgot to mention in my last post.

The big difference between my proposed human hypothesis and any known AI is that once one human has stated it, pretty soon many humans will understand it and be using it, discussing it and developing it.
As far as I know, a human would have to reprogram this famous chess AI if you wanted it to 'diagnose like a doctor' or answer symbolic maths questions like Wolfram or MathCad or ...

 


2 minutes ago, studiot said:

Pure guesswork, no better than the "we will have fusion within 20 years" guess of the 1950s.

Surely we are talking about now?

If you're going to ask someone to guess when fusion is going to be a reality, you'd still give more credence to engineers' and physicists' guesses than to some random people on the internet, wouldn't you?


9 minutes ago, Prometheus said:

If you're going to ask someone to guess when fusion is going to be a reality, you'd still give more credence to engineers' and physicists' guesses than to some random people on the internet, wouldn't you?

Why?

Like the engineers and physicists of the 1890s 'guessed' the age of the Earth?

Or like the engineers and physicists guessed that fusion was 20 years away in the 1950s, 1970s, 1990s, and again in the 2020s?

 


25 minutes ago, studiot said:

As far as I know, a human would have to reprogram this famous chess AI if you wanted it to 'diagnose like a doctor' or answer symbolic maths questions like Wolfram or MathCad or ...

 

Unless all these programs are already installed on the same computer.


9 hours ago, Prometheus said:

I'm interested: many people here seem to believe that there is nothing immaterial to the brain, no ghost in the machine, but still maintain that something other than a biological entity cannot be conscious. That seems to imply that substrate matters, that only biological substrates can manifest consciousness. If I'm interpreting that correctly, what is unique about the arrangement of atoms in a human that prevents consciousness manifesting in other, inorganic, arrangements?

I don't think that a substrate matters in principle, although it might matter for implementation. I think intelligence can be artificial. But I think that we are nowhere near it, and that current AI with its current machine learning engine does not bring us any closer to it.


10 hours ago, Genady said:

I don't think that a substrate matters in principle, although it might matter for implementation. I think intelligence can be artificial. But I think that we are nowhere near it, and that current AI with its current machine learning engine does not bring us any closer to it.

So a matter of complexity? Fair enough. Thanks for answering so clearly - I ask this question a lot, not just here, and rarely get such a clear answer.

 

10 hours ago, Genady said:

But I think that we are nowhere near it, and that current AI with its current machine learning engine does not bring us any closer to it.

Not any closer?

There are some in the community who believe that current DNNs will be enough - it's just a matter of having a large enough network and a suitable training regime. Yann LeCun is probably the most famous; he's the guy who invented CNNs.

Then there are many who believe that symbolic representations need to be engineered directly into AI systems. Gary Marcus is probably the biggest advocate for this.

Here's a 2-hour debate between them:

 

There are a number of neuroscientists using AI as a model of the brain. There are some interesting papers arguing that what some networks are doing is at least correlated with certain visual centres of the brain. This interview with a neuroscientist details some of that research - around 30 minutes in, although the whole interview might be of interest to you:

 

An interesting decision by Tesla was to use vision-only inputs, as opposed to competitors who use multi-modal inputs and combine visual with lidar and other data. Tesla did this because their series of networks was getting confused as the data streams sometimes gave apparently contradictory inputs - analogous to humans getting dizzy when the inner ear tells them one thing about motion and the eyes another.

Things like that make me believe that current architectures are capturing some facets of whatever is going on in the brain, even if they're still missing a lot, so I think they do bring us closer.


20 hours ago, Prometheus said:

 

But here we're concerned with what a computer can and can't do; it doesn't necessarily need to replicate the functions of a human brain, just the behaviours.

I'm interested: many people here seem to believe that there is nothing immaterial to the brain, no ghost in the machine, but still maintain that something other than a biological entity cannot be conscious. That seems to imply that substrate matters, that only biological substrates can manifest consciousness. If I'm interpreting that correctly, what is unique about the arrangement of atoms in a human that prevents consciousness manifesting in other, inorganic, arrangements?

I should clarify. I am not a biochauvinist, nor do I have any problems with an inorganic substrate. What I was driving at (with lack of finesse, like a Tesla struck by an EMP perhaps) was that to do some things that humans do and might someday want an AI to do, like listen sympathetically, an AI would need to implement consciousness. Simply manifesting the behavior of "sympathetic listener," like one of David Chalmers' philosophical zombies (aka "p-zed"), would not be enough. An intelligent person, seeking comfort, would rightly point out, "It's a simulation, it doesn't really feel anything about my situation, so I need a real person!" (or a more advanced AI). Unlike John Searle (early Searle, anyway), I believe it's possible in principle that a DNN (perhaps interwoven with an analog system, CNNs, etc.) could wake up and be conscious, have qualia, and thus achieve certain sorts of thought that really don't happen without self-awareness and "raw feels."

 

So my earlier comments were concerned with what a computer can do, insofar as some behaviors only bloom where there is someone there. It certainly doesn't have to replicate being human (with all its attendant flaws and blind spots), but there is some process in our neural networks, deep in the wetware, that must also happen in a silicon substrate in order for an AI to do the activity I've suggested.


33 minutes ago, TheVat said:

So my earlier comments were concerned with what a computer can do, insofar as some behaviors only bloom where there is someone there. It certainly doesn't have to replicate being human (with all its attendant flaws and blind spots), but there is some process in our neural networks, deep in the wetware, that must also happen in a silicon substrate in order for an AI to do the activity I've suggested.

So, in order for a computer to think, it has to be alive?

What level of mimicry is life?


On 4/14/2022 at 10:03 PM, Genady said:

Perhaps. But can we test if a computer has or doesn't have an idea of infinity?

Mmm... I think that when a PC is frozen (for example, a lagging program takes all its memory and prints an endless "1 0 1 0 ..." on the screen), it could be called infinity, perhaps... but this infinity would be limited by electricity. So it would be an infinity dependent on nature/humans/paying the bills :D

As far as I know, a computer has no "infinity" number inside the BIOS or in higher-level programs. Only very big numbers, only formulas with "n", but no... well, no idea of infinity, it seems to me.

I think it's impossible to "explain" the meaning of infinity to a pile of iron hardware; it's something unexplainable even for us humans. We simply catch the meaning by intuition, and hardware doesn't have a "sixth sense".


2 hours ago, Cognizant said:

Mmm... I think that when a PC is frozen (for example, a lagging program takes all its memory and prints an endless "1 0 1 0 ..." on the screen), it could be called infinity, perhaps... but this infinity would be limited by electricity. So it would be an infinity dependent on nature/humans/paying the bills :D

As far as I know, a computer has no "infinity" number inside the BIOS or in higher-level programs. Only very big numbers, only formulas with "n", but no... well, no idea of infinity, it seems to me.

I think it's impossible to "explain" the meaning of infinity to a pile of iron hardware; it's something unexplainable even for us humans. We simply catch the meaning by intuition, and hardware doesn't have a "sixth sense".

We cannot explain to other humans the meaning of finite numbers either. How do you explain the meaning of "two"?


I've thought of a test of whether an AI system understands human speech: give it a short story and ask questions that require an interpretation of the story rather than finding an answer in it. For example*, consider this human conversation:

Carol: Are you coming to the party tonight? 
Lara: I’ve got an exam tomorrow.

On the face of it, Lara's statement is not an answer to Carol's question. Lara doesn't say Yes or No. Yet Carol will interpret the statement as meaning "No" or "Probably not." Carol can work out that "exam tomorrow" involves "study tonight," and "study tonight" precludes "party tonight." Thus, Lara's response is not just a statement about tomorrow's activities; it contains an answer, and the reasoning behind it, concerning tonight's activities.

To see if an AI system understands it, ask for example: Is Lara's reply an answer to Carol's question? Is Lara going to the party tonight, Yes or No? etc.

I haven't seen this kind of test in natural language processing systems. If anyone knows of something similar, please let me know.

*This example is from Yule, George. The Study of Language, 2020.
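
For anyone who wants to try it, here is a minimal sketch of how the test could be posed to a current NLP system, framing the question as entailment with the Hugging Face zero-shot pipeline. The model choice and the framing are illustrative assumptions on my part, not an established benchmark:

from transformers import pipeline

# A pretrained natural-language-inference model repurposed for
# zero-shot classification (an assumed, illustrative model choice).
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

dialogue = ("Carol: Are you coming to the party tonight? "
            "Lara: I've got an exam tomorrow.")

# The test question posed as two competing interpretations.
labels = ["Lara is going to the party tonight",
          "Lara is not going to the party tonight"]

result = classifier(dialogue, candidate_labels=labels)
print(result["labels"][0])  # the interpretation the model scores highest

A system that understands the exchange should prefer the second label; whether current models reliably do is exactly the question the test is meant to probe.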


 


4 hours ago, Genady said:

 

I haven't seen this kind of test in natural language processing systems. If anyone knows of something similar, please let me know.

 

There's a test called the Winograd schema, named after Terry Winograd, an early AI researcher.

https://en.wikipedia.org/wiki/Winograd_schema_challenge

Winograd's original example was:

The city councilmen refused the demonstrators a permit because they [feared/advocated] violence.

A human knows that "they" refers to the councilmen if the verb is feared, and to the demonstrators if the verb is advocated. An AI presumably would have a hard time with that, because you need to know what the words mean.

Winograd worked in the pioneering days of AI; I have no idea whether the current generation of machine learning algorithms can figure these kinds of things out. The following page, though, refers to a Winograd schema challenge held in 2016, and there's a link on that page to a report on Winograd schemas issued in 2020, so evidently these are still considered an interesting challenge for AIs. The page has a lot of other relevant links.

https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html
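
For concreteness, here is one way a schema like the one above could be represented and scored. The resolve_pronoun function is a hypothetical stand-in for whatever coreference system is under test, not a real library call:

# Winograd's councilmen example as a machine-readable pair.
SCHEMA = {
    "template": ("The city councilmen refused the demonstrators a permit "
                 "because they {verb} violence."),
    "pronoun": "they",
    "candidates": ["the city councilmen", "the demonstrators"],
    "answers": {"feared": "the city councilmen",
                "advocated": "the demonstrators"},
}

def score(resolve_pronoun):
    """Fraction of verb variants the system under test resolves correctly."""
    correct = 0
    for verb, expected in SCHEMA["answers"].items():
        sentence = SCHEMA["template"].format(verb=verb)
        guess = resolve_pronoun(sentence, SCHEMA["pronoun"],
                                SCHEMA["candidates"])
        correct += (guess == expected)
    return correct / len(SCHEMA["answers"])

# A trivial baseline that always picks the first candidate scores 50%.
print(score(lambda sentence, pronoun, candidates: candidates[0]))  # 0.5

The design point is that the two variants differ by a single word, so surface statistics are little help; resolving the pronoun requires knowing what the words mean.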

 

  

11 hours ago, Cognizant said:


As far as I know, a computer has no "infinity" number inside the BIOS or in higher-level programs. Only very big numbers, only formulas with "n", but no... well, no idea of infinity, it seems to me.
 

Computers have no ideas of anything; they only consist of switches that can each be in one of two states, 1 or 0. The meaning is supplied by the humans who program and interpret the bit patterns.

Just as we can program a computer to do finite arithmetic, we can program a computer to do infinite arithmetic. For example, given two cardinal numbers [math]\aleph_\alpha[/math] and [math]\aleph_\beta[/math], we have

[math]\aleph_\alpha + \aleph_\beta = \aleph_\alpha \cdot \aleph_\beta = \max(\aleph_\alpha, \aleph_\beta)[/math]

In other words cardinal addition and multiplication are ridiculously simple; the sum and product of two cardinals is just the larger of the two. And cardinal subtraction and division aren't defined. 
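
As a toy sketch of the point (my illustration, not a standard library): if we represent the aleph with index n by just that integer, both operations reduce to taking the larger index.

from dataclasses import dataclass

@dataclass(frozen=True)
class Aleph:
    index: int  # Aleph(0) is aleph-null, Aleph(1) is aleph-one, ...

    def __add__(self, other):
        # The cardinal sum of two alephs is just the larger one.
        return Aleph(max(self.index, other.index))

    def __mul__(self, other):
        # Same rule for the cardinal product.
        return Aleph(max(self.index, other.index))

print(Aleph(0) + Aleph(3))  # Aleph(index=3)
print(Aleph(2) * Aleph(1))  # Aleph(index=2)

All the machine is doing here is comparing two integers.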

Computers can also represent many of the countable ordinals, namely all the ordinals below the Church-Kleene ordinal, which is the first non-computable ordinal. 

The computer wouldn't have any idea that it's computing with transfinite numbers, any more than it knows that it's adding 1 + 1. It's just flipping bits according to instructions. 

Computers don't have ideas. That's critical to understand in all discussions of machine intelligence. Computers flip bits. Even the latest deep neural net is just flipping bits. You can see that by remembering that the AI programs run on conventional hardware. Their code is ultimately reducible to bit flipping; and, as Turing pointed out, to making marks with a read/write head on a long paper tape. You could in principle execute AI code using pencil and paper. 
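
To make that concrete, here is a toy tape machine in the spirit of Turing's picture - a pedagogical sketch, not anyone's actual AI runtime:

# One pass of a trivial Turing-style machine: read a cell, flip the
# bit, move the read/write head right. Any program's execution
# ultimately reduces to mechanical steps like these.
tape = [0, 1, 1, 0]
rules = {0: 1, 1: 0}  # what to write for each symbol read

head = 0
while head < len(tape):
    tape[head] = rules[tape[head]]
    head += 1

print(tape)  # [1, 0, 0, 1]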

That doesn't mean a computer can't be conscious or self-aware. Some people (not me) believe that the human brain operates by bit flipping. I think that's wrong, but some people do believe it. 

But whatever it is that AIs do, they do not do anything different from what regular vanilla computer programs do. They're organized cleverly for data mining, but they are not doing anything computationally new.

 


59 minutes ago, wtf said:

There's a test called the Winograd schema, named after Terry Winograd, an early AI researcher.


Many thanks. +1

