
Can Artificial Intelligence Ever Match Humans?


jimmydasaint


I don't know how, because in computational theory, it's stated that some problems which are solvable by humans are impossible for computers. How can a computer have human intelligence if it's mathematically impossible for certain things to be made into algorithms?
The brain IS a computer. It's just not made of transistors and whatnot.

 

Have you taken a computer science course? Not trying to come off as better than anyone or anything - I mean, I haven't taken a course in AI yet, so I don't know how close AI is. But to make a computer "feel" would first require defining what a feeling is. The second step would be to classify all instances of that feeling, and then improve on it.

 

Have you ever taken a biology course? Oh look at that, your problem doesn't go away when talking about the brain. Who would have thought?


The brain IS a computer.

That is a hypothesis (maybe a theory), but it is not a fact. That the brain has some aspects of a computer is a fact; that the brain IS a computer, on the other hand, is a very strong statement. It is a widely held but by no means universal view. John Searle definitely holds an opposing view. For example, see http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.comp.html


John Searle definitely holds an opposing view. For example, see http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.comp.html

 

Like Penrose, Searle peddles his own brand of neutral monism (he calls it "biological naturalism"), which makes extensive use of false analogies to make his arguments, such as the Chinese Room. What part of the room did the thinking? The gestalt.


  • 3 weeks later...

Seriously, I do not think a machine could outdo the human body and mind. Yes, computers can compute things at speeds a human could only dream of, computers can have massive amounts of memory, and some machines (most machines, in fact) can lift more and move faster. But once all of these things are integrated, and a computer is faced with the input of ALL of the senses at once, how would it not freeze up? A computer can't gather "food" (energy) by itself, it can't make irrational decisions, and even if it could, it could not be contained within the size of the human body and brain; it would not be mobile. So, in my opinion, no artificial intelligence could ever match a human.


a computer is faced with the input of ALL of the senses at once, how would it not freeze up?

 

Your brain has the same issue, and it filters input a LOT, both long before signals even reach the brain and again once they do. In fact, the inability of the brain to properly filter and discard large amounts of information leads to *major* brain problems that reduce you to a nonfunctional vegetable.

 

A computer can't gather "food" (energy) by itself,

 

Given a robot body, it could, by visiting power outlets. And before you object to those, remember that we do the same thing - we don't make food, we just steal what plants and animals have gathered by eating them.

 

it can't make irrational decisions

 

Why is this a criterion for intelligence? And what's to say a machine intelligence couldn't have the occasional glitch? We have glitches because we're not produced by design, but rather by an imperfect process of evolution.

 

even if it could, it could not be contained within the size of the human body and brain; it would not be mobile.

 

It couldn't *now*, but can you seriously look at one of the 1960s room-sized computers and then at an iPhone, and tell me that technology won't advance to that level?

 

Mokele


Seriously, I do not think a machine could outdo the human body and mind.

 

Then you are so sadly mistaken, because they have surpassed us in many areas already, such as playing chess or driving. Also, they learn much faster than we can in general.

 

They far surpass us in body, and they can certainly do math better than we can. Who's to say that they won't one day understand things and think much better than we can, too?


they have surpassed us in many areas already, such as playing chess or driving.

 

The 'driving' one is a bit of a stretch - they can handle some aspects, and the DARPA Urban Challenge has shown they can drive, but as of yet, no program has been able to handle what a good human driver can. It'll happen, I have no doubt, but it's not there just yet.

 

Mokele


The 'driving' one is a bit of a stretch - they can handle some aspects, and the DARPA Urban Challenge has shown they can drive, but as of yet, no program has been able to handle what a good human driver can. It'll happen, I have no doubt, but it's not there just yet.

 

Mokele

 

Alright, I'll concede that one, it is a bit of a stretch.


Then you are so sadly mistaken, because they have surpassed us in many areas already, such as playing chess or driving.

While computers have surpassed humans in playing chess, they have not done so by means of anything one could describe as "intelligence". They have surpassed us by dint of pure dumb brute force.

 

I am a holdout that we humans are something more than Turing machines.


While computers have surpassed humans in playing chess, they have not done so by means of anything one could describe as "intelligence". They have surpassed us by dint of pure dumb brute force.

 

In a way, that is "superior". They are smart enough that they can simply use brute force to solve a problem better than we can. I'd say intelligence is just a software problem for computers, though I also don't doubt that we've sacrificed a lot of our own brain power for our learning abilities. For example, we spend 8 hours sleeping, which is at least partly for learning and remembering.

 

I am a holdout that we humans are something more than Turing machines.

 

Unlike computers, we are not designed to be a Turing machine. But I don't doubt that a powerful enough Turing machine could simulate a person -- there's nothing magical about us as far as I can tell.


In a way, that is "superior".

Superior in playing chess, yes. Superior, in general, no.

 

They are smart enough that they can simply use brute force to solve a problem better than we can.

That isn't intelligence. It's just dumb brute force. A chess playing computer program is no more aware of what it is doing than is a Jacquard Loom.

 

I'd say intelligence is just a software problem for computers,

I strongly disagree. I'm not alone in this regard. Penrose, Searle, Gödel, and a host of others are of the same opinion. The no-free-lunch theorems get in the way of a non-self-aware AI achieving strong AI.

 

Unlike computers, we are not designed to be a Turing machine. But I don't doubt that a powerful enough Turing machine could simulate a person -- there's nothing magical about us as far as I can tell.

Magic is not required to say that our minds are more than just Turing machines.


That isn't intelligence. It's just dumb brute force. A chess playing computer program is no more aware of what it is doing than is a Jacquard Loom.

 

Is that different from what we do? What do you do when playing chess (especially online, where there are no social cues to use)? I'm an awful chess player, but I at least *try* to think through options, like "Ok, if I attack with my rook, that'll leave my queen open to attack, so I can't do that. If I use my queen, I can take his knight, but then he'll probably use his bishop to check me." etc.

 

I'm not sure what we do when playing chess is really any different, except in the volume of computations and the possibility of irrational moves due to being pissed the other player took your queen.

 

I strongly disagree. I'm not alone in this regard. Penrose, Searle, Gödel, and a host of others are of the same opinion. The no-free-lunch theorems get in the way of a non-self-aware AI achieving strong AI.

 

I don't suppose you could explain a bit more? I found that link pretty incomprehensible from the get-go.

 

Part of the reason it seems odd is that, well, we are simply circuitry. If we could map every neuron in the brain, invent some electronic device or segment of code to simulate a neuron, and put them all together in the known map, why wouldn't that create a human-like intelligence?
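To make the "segment of code to simulate a neuron" idea concrete, here is a minimal sketch of a single leaky integrate-and-fire unit in Python; the threshold and leak values are illustrative assumptions, not measured biology, and the point is only that one unit fits in a dozen lines. The hard part would be wiring billions of them according to a measured map.

```python
# A toy "segment of code to simulate a neuron": a leaky integrate-and-fire
# unit. The constants here are illustrative assumptions, not measured values.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0        # membrane potential
        self.threshold = threshold  # firing threshold
        self.leak = leak            # fraction of potential retained each step

    def step(self, weighted_inputs):
        """Integrate incoming signals; fire (return 1) if the threshold is crossed."""
        self.potential = self.potential * self.leak + sum(weighted_inputs)
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return 1
        return 0

# A single unit receiving two weighted inputs each time step.
neuron = LIFNeuron()
for t in range(5):
    print(t, neuron.step([0.3, 0.2]))
```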

 

Mokele


I'm not sure what we do when playing chess is really any different, except in the volume of computations and the possibility of irrational moves due to being pissed the other player took your queen.

 

Since I do play chess competitively, I will tell you all how professionals typically think about chess:

 

The first thing is that calculation comes last when deciding on a move. They typically look for patterns, or recall previous positions that they have studied, first, so that they can narrow their options. Then they look for any tactics (e.g. mates in two, win-a-queen combos, etc.). And then they calculate the moves that will lead them to the best position possible. "Brute force" methods are only necessary when the position is just too messy, or when your options are extremely limited (e.g. one false move means you lose).

 

The way computers do it is actually not at all different from the way we do. Early supercomputers did rely primarily on brute force, like Deep Blue (calculating up to 200 million moves per second, I think...), but that's not the case any more. Most computer programs have access to endgame, middlegame, and opening databases, and they are able to calculate a few million moves per second.

 

But that's also the same way we humans do it, only we can't remember nearly as many positions, or calculate as quickly, as computers do. Most of the time, grandmasters know what move to play because they've seen a position like it before, or have studied it in detail. As such, they already have an idea of which moves tend to be better than others in a given situation. The databases in our brains are quite limited compared to computers', though.
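As an aside, the "calculate future moves" part both of us are describing is essentially fixed-depth lookahead, i.e. minimax search. A minimal sketch follows; legal_moves, apply_move, and evaluate are hypothetical placeholders for a real board representation, not any particular engine's API.

```python
# Minimal fixed-depth lookahead (minimax) -- the "brute force" half of a chess
# engine. legal_moves, apply_move, and evaluate are hypothetical placeholders
# standing in for a real game representation.

def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # static evaluation at the search horizon
    scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      legal_moves, apply_move, evaluate) for m in moves)
    return max(scores) if maximizing else min(scores)
```

Real engines prune this tree (alpha-beta) and bolt on the opening and endgame databases mentioned above; the pattern recall that grandmasters rely on is the part that is much harder to write down.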

 

 

 

Part of the reason it seems odd is that, well, we are simply circuitry. If we could map every neuron in the brain, invent some electronic device or segment of code to simulate a neuron, and put them all together in the known map, why wouldn't that create a human-like intelligence?

 

Mokele

 

I guess the question really is: do we even want a robot that has human-like intelligence? Experience shows that humans aren't necessarily the sharpest sticks in the woods; they are quite irrational, and their ability to use logic and reason rests largely on their emotional state. A machine, on the other hand, won't have that kind of weakness, other than the occasional glitch or bug. It seems that some of us here just don't realize that intelligence doesn't have to be humanlike in order to register as such; that's why the Turing test fails: it can only test for humanity, not intelligence.

 

Superior in playing chess, yes. Superior, in general, no.

 

What do you mean by "in general", though? Machines kick our ass in quite a few tasks.

 

That isn't intelligence. It's just dumb brute force. A chess playing computer program is no more aware of what it is doing than is a Jacquard Loom.

 

Read my note above about this.

 

I strongly disagree. I'm not alone in this regard. Penrose, Searle, Gödel, and a host of others are of the same opinion. The no-free-lunch theorems get in the way of a non-self-aware AI achieving strong AI.

 

Why though? NFL only refers to the limits of search algorithms, not whether or not artificial intelligence is possible.

 

Besides which, it is possible to get sentience from non-sentient processes; our DNA molecules do this all the time. And it is certainly possible for intelligence and consciousness to evolve from non-intelligent and unconscious life forms and processes; evolution on Earth is the proof of concept.

 

So, it stands to reason that if a bag of electro-chemical stuff can produce sentience, why not silicon chips, given proper wiring and programming?

 

Magic is not required to say that our minds are more than just Turing machines.

 

A Turing machine is a hypothetical device: given an infinitely long piece of tape and a set of instructions, it can carry out virtually any computation or task, given enough time. In theory you could, given a sufficiently long piece of tape, write out a set of instructions that simulates human intelligence.
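To make that less abstract, a Turing machine is simple enough to sketch directly. Below is a toy simulator in Python; the dictionary stands in for the unbounded tape, and the bit-flipping example machine is purely illustrative.

```python
# A tiny Turing machine simulator. The machine is a table:
# (state, symbol) -> (new_symbol, move, new_state). The tape is a dict,
# which stands in for the "infinitely long" tape.

def run(program, tape, state="start", halt="halt", steps=1000):
    pos = 0
    tape = dict(enumerate(tape))
    for _ in range(steps):
        if state == halt:
            break
        symbol = tape.get(pos, "_")  # "_" is the blank symbol
        new_symbol, move, state = program[(state, symbol)]
        tape[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example machine (purely illustrative): flip bits until a blank is reached.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run(flipper, "1011"))  # -> "0100_"
```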


Is that different from what we do?

The difference between me (a patzer) and an expert is the expert's ability to see patterns without computation. They have an amazing ability to capture the gestalt of a chess game in a glance. That is why experts can play dozens of games simultaneously. An article: http://gestalttheory.net/archive/arncomp.html.

 

I'm an awful chess player, but I at least *try* to think through options, like "Ok, if I attack with my rook, that'll leave my queen open to attack, so I can't do that. If I use my queen, I can take his knight, but then he'll probably use his bishop to check me." etc.

That is why you (and I) are "awful chess players". Your chess thinking is too much like that of a dumb computer chess program. You (and I) are further hindered by not being able to think that way very well. How many moves ahead can you see? Computer chess programs now have nine or ten (maybe higher?) move lookahead. Lookahead is brute force.

 

I don't suppose you could explain a bit more? I found that link pretty incomprehensible from the get-go.

That link is a bibliography of a slew of articles on the no-free-lunch theorems. In essence, there is no "best" machine learning algorithm. Given any two machine learning algorithms, there will always be some cases where algorithm A outperforms algorithm B and other cases where algorithm B outperforms algorithm A -- including the case where algorithm B is just randomly pulling junk out of a hat.
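As a toy illustration of that flavour (not the theorems themselves), here are two invented objective functions on which two simple search strategies trade places, so neither is "best" on both.

```python
import random

# Toy illustration of the no-free-lunch flavour: neither search strategy
# dominates on every objective function. Both objectives below are invented
# purely for this example.

def smooth(x):
    # Single peak at x = 73; a local climber handles this landscape well.
    return -abs(x - 73)

def deceptive(x):
    # The slope points toward x = 99, but the true optimum sits alone at x = 5.
    return 100 if x == 5 else x

def hill_climb(f, start=50, lo=0, hi=99):
    x = start
    while True:
        better = [n for n in (x - 1, x + 1) if lo <= n <= hi and f(n) > f(x)]
        if not better:
            return x, f(x)
        x = max(better, key=f)

def random_search(f, tries=60, lo=0, hi=99, seed=0):
    rng = random.Random(seed)
    best = max((rng.randint(lo, hi) for _ in range(tries)), key=f)
    return best, f(best)

# hill_climb finds the smooth peak exactly, but on the deceptive objective it
# climbs to x = 99 and misses the isolated optimum at x = 5; blind sampling
# has a fair chance of stumbling onto that optimum, and only a chance of
# landing near the smooth peak.
print("smooth:   ", hill_climb(smooth), random_search(smooth))
print("deceptive:", hill_climb(deceptive), random_search(deceptive))
```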

 

One way out of the morass is self-awareness. We are able to pick and choose strategies -- which implies we have at least a second-order logic.

 

Part of the reason it seems odd is that, well, we are simply circuitry. If we could map every neuron in the brain, invent some electronic device or segment of code to simulate a neuron, and put them all together in the known map, why wouldn't that create a human-like intelligence?

First off, the brain is more than neurons. There are glial cells after all that we *know* are associated with learning, and maybe even other agents that we do not yet have nailed down (particularly long-term memory). Secondly, there are many researchers in the field of artificial neural nets who claim that neural nets are more than Turing machines.


That isn't intelligence. It's just dumb brute force. A chess playing computer program is no more aware of what it is doing than is a Jacquard Loom.

 

And for that matter, neither are we. Everything we do well, we do subconsciously. And whenever we do something well, doing it consciously reduces performance. Our consciousness is rather clumsy, but necessary for setting goals and training the subconscious in useful things.

 

Being a good chess player, I have to agree with Reaper on how chess is played. It really is very similar to how computers do it, only we have greater strength in recognizing similar situations, while computers are better at recognizing exactly matching situations and at calculating future moves. This is also why humans still beat computers at Go, since it involves far more pattern recognition and is horrible to try to calculate ahead in. But I don't see how it would be impossible for a software upgrade to change that.

 

I strongly disagree. I'm not alone in this regard. Penrose, Searle, Gödel, and a host of others are of the same opinion. The no-free-lunch theorems get in the way of a non-self-aware AI achieving strong AI.

 

Oh, I think that a self-aware AI is the way to go -- I can't see any other way to have a proper AI. Of course, that would be dangerous, but no doubt someone will do it anyways, eventually.

 

Magic is not required to say that our minds are more than just Turing machines.

 

Depends. Are you saying that our mind cannot be simulated by a Turing machine? If we can simulate the physics of the atoms in the brain, shouldn't that be enough to simulate the mind made up of them? Of course, it would be far more efficient to simulate only the neurons. I'm not sure a real computer could handle simulating every atom in a brain, but a Turing machine could.

 

First off, the brain is more than neurons. There are glial cells after all that we *know* are associated with learning, and maybe even other agents that we do not yet have nailed down (particularly long-term memory). Secondly, there are many researchers in the field of artificial neural nets who claim that neural nets are more than Turing machines.

 

True. There is chemistry -- hormones, nitric oxide, regulation of glucose/oxygen, etc. I've even heard that memory may be stored genetically (not as DNA, but as patterns of deactivated genes in neurons), but I'm not sure if it is true. These would make it difficult to calculate a brain, but not impossible.


How can we speak of artificial intelligence, when modern psychologists cannot even agree on a definition of human intelligence? First, provide an operational definition of human intelligence; then, one can begin to speak of artificial intelligence.
But we know more or less what we mean by intelligence. It's just like that 'planet' case. There is no clear definition of what a planet is (AFAIK), but there are plenty of them that we have detected in our galaxy.

But we know more or less what we mean by intelligence.

 

Psychologists have a vague idea of what intelligence is, but they often get bogged down in the actual specifics as to what intelligence really means.

 

It's just like that 'planet' case. There is no clear definition of what a planet is (AFAIK), but there are plenty of them that we have detected in our galaxy.

 

Even our definition of planet is constantly changing. Did you know that Pluto was recently demoted from its previous planetary status? It is now a "dwarf planet".


We know our definition of intelligence is crap, but it's "good enough" to engage in a conversation about an artificial variety.

 

While we can't define a specific boundary between intelligent and non-intelligent, we can definitely separate those at the extremes. So while a not-very-intelligent program might get passed off as non-intelligent, or a not-very-non-intelligent program might get passed off as intelligent, a very intelligent program will be classed as intelligent.


How can we speak of artificial intelligence, when modern psychologists cannot even agree on a definition of human intelligence?

 

If you come from a neurophysiological perspective, the topic at hand is the result of brain function. That's a concrete thing, unlike psychology.

 

If you're operating under another definition, you'll have to articulate it clearly, as Minsky did in The Society of Mind. However, I don't think there's a lot of hope for that approach.


This is all my point of view, and is not supported by any scientific evidence that I know of. Also, I didn't read through the entire thread, so if I'm repeating what someone else has already said, I apologize.

 

I think it's really a question of how many things it can do, not how fast. Yes, a computer is fast, but it is extremely limited. You have to have a special type of computer for every single task. The computer we all have at home can certainly solve any equation much faster than the human brain, but I would like to see how that same computer would tackle mowing the lawn. I think the human brain isn't as fast as computers because it has to do an incredible number of things at the same time. IMO, if you could somehow make the human brain concentrate completely on one single problem, e.g. solving an equation, it would do that task much faster than a computer. But again, this is just me thinking aloud.

 

Cheers,

Gabe


This is all my point of view, and is not supported by any scientific evidence that I know of. Also, I didn't read through the entire thread, so if I'm repeating what someone else has already said, I apologize.

 

I think it's really a question of how many things it can do, not how fast. Yes, a computer is fast, but it is extremely limited. You have to have a special type of computer for every single task. The computer we all have at home can certainly solve any equation much faster than the human brain, but I would like to see how that same computer would tackle mowing the lawn. I think the human brain isn't as fast as computers because it has to do an incredible number of things at the same time. IMO, if you could somehow make the human brain concentrate completely on one single problem, e.g. solving an equation, it would do that task much faster than a computer. But again, this is just me thinking aloud.

 

Cheers,

Gabe

 

I'd like to see your brain mow the lawn.

