
Can Artificial Intelligence Ever Match Humans?


jimmydasaint


Just musing about the boundaries of Artificial Intelligence, I wonder if AI programming can ever take a machine to the point that it would be able to appreciate the sounds of a waterfall, or a gently murmuring stream or be able to appreciate the works of Wagner. Or to be happy when the UK win gold medals in the Olympics. In short, can our emotions ever be felt and appreciated by a computer, and could it then make mistakes based upon emotion?


Do you guys honestly mean that a computer can 'feel' an emotion? I can pretend to be happy when my wife has spent my pay cheque on rubbish, but it is a pretend emotion - not one that is actually felt. A simulation is not the same as a genuine appreciation involving various areas of the brain.


Do you guys honestly mean that a computer can 'feel' an emotion?

 

Yes. Why couldn't it?

 

I can pretend to be happy when my wife has spent my pay cheque on rubbish, but it is a pretend emotion - not one that is actually felt. A simulation is not the same as a genuine appreciation involving various areas of the brain.

 

Why not? What makes the watery lump of fat in your head "special" in a way silicon chips can't be?


If you are simulating the entire brain, then it does involve various areas of the brain. That is the point.

 

If simulating it at a cellular level isn't good enough for you, then with enough computing power we could simulate an entire human body down to the subatomic level, with a similarly detailed 'world' for it to interact with. It would display emotions, and its behaviour would be indistinguishable from a human's, as the exact same rules govern it.


OK. My mass of brain tissue is nothing special and I know that. So we are saying that it is a problem of complexity similar to the neural networks in a human brain. I was trying to establish if humans can do something that the computers cannot (at present). For example, I seem to recall that Penrose could solve a tiling problem where a surface could be tiled with a small number of shapes without the pattern repeating - I think this is called non-periodic tiling. Apparently no algorithm could be applied to this tiling problem. Does anyone know of an algorithm which could solve this problem, or come close to it, at present?

 

This link is not a primary source:

 

Penrose is very much the mathematician. Not only does he mathematically model black holes, he solves extremely difficult math puzzles in his spare time. In the 1960s it was mathematically proven that you could tile a surface without having the pattern ever repeat. They called it non-periodic tiling, and the race was on to figure out who could find the smallest number of tile shapes that could be used for non-periodic tiling. The number started out at over 20,000 tile shapes, which was quickly reduced to 104. In 1974, Penrose had reduced it to six tile shapes. Shortly after that, he showed that non-periodic tiling was possible with just two tile shapes.

 

Penrose maintains that his solution to non-periodic tiling could not have been found via an algorithmic process. Ergo, his brain is not an algorithmic computer. He formalized this by claiming strict algorithmic artificial intelligence (Strong AI) was impossible.

 

http://dfcord.blogspot.com/
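
 

For what it's worth, once the two tile shapes are known, *generating* a Penrose tiling is entirely algorithmic, via the well-known "deflation" substitution rules. Here is a minimal sketch in Python using the standard Robinson-triangle subdivision (the constants and rules follow the usual two-tile construction; Penrose's claim concerns discovering the tiles, not drawing the pattern):

```python
import cmath

PHI = (1 + 5 ** 0.5) / 2  # the golden ratio governs the edge lengths

def subdivide(triangles):
    """One 'deflation' step over a list of Robinson half-tiles.
    Each tile is (colour, a, b, c) with complex vertices;
    colour 0 = thin half-rhombus, colour 1 = thick half-rhombus."""
    result = []
    for colour, a, b, c in triangles:
        if colour == 0:
            # A thin tile splits into one thin and one thick tile.
            p = a + (b - a) / PHI
            result += [(0, c, p, b), (1, p, c, a)]
        else:
            # A thick tile splits into two thick tiles and one thin tile.
            q = b + (a - b) / PHI
            r = b + (c - b) / PHI
            result += [(1, r, c, a), (1, q, r, b), (0, r, q, a)]
    return result

# Seed: a wheel of ten thin tiles around the origin.
tiles = []
for i in range(10):
    b = cmath.rect(1, (2 * i - 1) * cmath.pi / 10)
    c = cmath.rect(1, (2 * i + 1) * cmath.pi / 10)
    if i % 2 == 0:
        b, c = c, b  # mirror every second tile so the edges match up
    tiles.append((0, 0j, b, c))

for _ in range(5):  # each deflation refines the non-periodic pattern
    tiles = subdivide(tiles)
print(len(tiles), "tiles after 5 deflation steps")
```

The pattern this produces never repeats, yet every step is mechanical - so the interesting question is whether *finding* the two shapes required something non-algorithmic, not whether drawing the tiling does.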


The idea is, we humans sit on our *brains* and fail to use them as smartly as the *machines* (extras) we make to help us *live*. Unfortunately, we sometimes tend to think machines can perform better than humans, but fail to understand that we *trained* these computers to do what they do.

 

This brings us back to the fundamental question: what is life? Scientists have always disagreed as to the real meaning of life, but we all agree on its basic fundamental units. The idea is simple: as I study molecular patterns and genetics, the definition of life I arrive at is a bunch of materials coming together, developing, and learning [learning habits which we now consider our daily lives]. Our brain has a tremendous capacity to store what we learn throughout the course of our lives.

 

I think we are at a point in science where topics like this should not be considered science fiction. Nevertheless, I am looking to specialize in such research when I graduate!

 

Here is some stuff to inspire you:

 

http://www.telegraph.co.uk/news/2552973/Rats-brain-used-to-power-robot.html [this has been exaggerated in the blogosphere]

 

[this guy could be considered stupid, but he's got a powerful idea, people]

 

Here is some stuff to inspire you:

 

http://www.telegraph.co.uk/news/2552973/Rats-brain-used-to-power-robot.html [this has been exaggerated in the blogosphere]

 

[this guy could be considered stupid, but he's got a powerful idea, people]

 

I like the sources you gave me. In my youth I read some references to Whitehead and Bergson and the source of what we could call memory, which seemed to have a spooky, unknown origin. This animat, where movement is controlled by rat neuron tissue, seems to demystify the origin of memory and is worthy of another thread.


Do you guys honestly mean that a computer can 'feel' an emotion? I can pretend to be happy when my wife has spent my pay cheque on rubbish, but it is a pretend emotion - not one that is actually felt. A simulation is not the same as a genuine appreciation involving various areas of the brain.

 

This brings about the metaphysical question of "what is emotion?" I've heard scientists refer to emotion as nothing more than a reaction to stimuli. In other words, we laugh at a funny joke as a means of processing the data. We cry at the funeral of a loved one because that's how we process that sort of data.

 

Now I'm not entirely sure that's how I see emotions, but you gotta admit it's an interesting perspective.

 

EDIT: I suppose "means" isn't the right word. I meant to say that we emote externally what is being processed internally.


I was trying to establish if humans can do something that the computers cannot (at present). For example, I seem to recall that Penrose could solve...

 

Penrose focuses on consciousness as behaving algorithmically, then attempts to formulate mathematical proofs where the behavior of a person is given formal mathematical properties.

 

To put it in Kantian terms, when arguing against materialists Penrose is unable to separate the phenomena from the noumena. To Penrose, there's nothing emergent about consciousness at all, and it's inseparable from whatever mathematical activities go on to facilitate it. To frame it in the modern verbiage of the philosophy of mind, Penrose likens materialism to reductive eliminativism, that is, that mind is matter and the two are inseparable.

 

Functionalists would argue that mind is a symbolic (noumenological) system which is independent of its underlying substrate. It doesn't matter if that substrate is a wet lump of fat or a silicon chip with electrons whizzing through it. So long as it pushes symbols around in the right way the system is conscious.

 

All that said, conscious entities aren't formal logic systems and aren't constrained in the way formal logic systems are. Your brain can quite happily perform any number of logical fallacies, generate conclusions from nowhere, create conclusions which don't follow from their premises, and make untold numbers of mistakes. This is where Penrose's arguments tend to fail. He ignores that consciousness, as an emergent, symbolic system, isn't bound by the formal logic that whatever system is driving it is bound to.

 

I suppose Penrose would fancy himself a monist, but he's not. He's a dualist in sheep's clothing.


Penrose focuses on consciousness as behaving algorithmically, then attempts to formulate mathematical proofs where the behavior of a person is given formal mathematical properties.

 

To put it in Kantian terms, when arguing against materialists Penrose is unable to separate the phenomena from the noumena. To Penrose, there's nothing emergent about consciousness at all, and it's inseparable from whatever mathematical activities go on to facilitate it. To frame it in the modern verbiage of the philosophy of mind, Penrose likens materialism to reductive eliminativism, that is, that mind is matter and the two are inseparable.

 

I would agree with Penrose here. He seems to favour dualist arguments. However, the materialist notion can be analogous to the epiphenomenological model, which adds to Husserl's ideas of meaning coming from a person's consciousness acting on perceptions. In other words, consciousness is like a foam which collects on an extremely active neurophysiological pond firing neurons left, right and centre. You can pick and choose if you like the materialistic notion of consciousness, but you would also have to answer the objections raised by Penrose about the limitations of a machine (the Turing halting problem and Godel's incompleteness theorem).

 

http://en.wikipedia.org/wiki/Halting_problem
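
 

For anyone who hasn't met the halting problem, the diagonal argument at its heart fits in a few lines. A minimal sketch - note that the halts() function below is purely hypothetical, since Turing's result is precisely that no correct version of it can exist:

```python
def halts(program, arg):
    """Hypothetical oracle: should return True exactly when
    program(arg) would eventually halt. This stub just guesses,
    because no correct implementation is possible."""
    return True

def paradox(program):
    """Built to do the opposite of whatever the oracle predicts
    about running `program` on its own source."""
    if halts(program, program):
        while True:   # oracle said "halts", so loop forever
            pass
    return            # oracle said "loops", so halt at once

# Whatever halts(paradox, paradox) answers, paradox(paradox) does the
# opposite, so no total, correct halts() can ever be written.
print(halts(paradox, paradox))
```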

 

Functionalists would argue that mind is a symbolic (noumenological) system which is independent of its underlying substrate. It doesn't matter if that substrate is a wet lump of fat or a silicon chip with electrons whizzing through it. So long as it pushes symbols around in the right way the system is conscious.

 

So, in short, you are saying that inputs (or qualia) to the brain are not immediately turned into outputs but are transformed by an intermediary function into a range of outputs. So this is another form of materialism. The functions can then work in feedback loops to the brain. Would I be right in asserting these opinions? OK, if that is the case, what is the distinction made between conscious and physical states that govern perceptions? For example, when I look at my children asleep, I do not think 'my neonates are in a state of dormant and quiescent slumber'; I think 'aw, they're so cute when they sleep'. The physical and conscious states express a difference.

 

As a consequence of this dichotomy, there is a problem with expressing machine consciousness as analogous to human consciousness.

 

I suppose Penrose would fancy himself a monist, but he's not. He's a dualist in sheep's clothing.

 

I think Penrose is a dualist in dualist's clothing, to be honest. I have read a little bit about the Orch OR model, and it seems to leave the source of the Objective Reduction to 'spooky', otherworldly sources.

 

[sheepish] I tried reading Kant as a primary source and gave up after reading part of his Prolegomena because I could not understand his terminology and reference points like someone in his own time could [/sheepish]


Do you guys honestly mean that a computer can 'feel' an emotion? I can pretend to be happy when my wife has spent my pay cheque on rubbish, but it is a pretend emotion - not one that is actually felt. A simulation is not the same as a genuine appreciation involving various areas of the brain.

 

Enter the p-zombie! The philosophical zombie is a thought experiment. What if a being existed that was physically and qualitatively identical to a human, but did not have conscious experience or emotion? For instance, if you poked it with something sharp it would recoil and say "ow" but not really feel any pain. Is this possible? I don't think so. Even though the zombie is just that, the fact that it recoils and says ow constitutes it feeling pain. A robot can exhibit emotion in this same way.


Enter the p-zombie! The philosophical zombie is a thought experiment. What if a being existed that was physically and qualitatively identical to a human, but did not have conscious experience or emotion? For instance, if you poked it with something sharp it would recoil and say "ow" but not really feel any pain. Is this possible? I don't think so. Even though the zombie is just that, the fact that it recoils and says ow constitutes it feeling pain. A robot can exhibit emotion in this same way.

 

Well, do we really feel pain, or is it simply our brains processing damage detection/hazard avoidance?


You can pick and choose if you like the materialistic notion of consciousness but you would also have to answer the objections raised by Penrose about the limitations of a machine (the Turing halting problem and Godel's incompleteness theorem).

 

Penrose attempts to apply Godel's Incompleteness Theorem to the problem of consciousness in a completely unfounded manner. As I'm not a mathematician, I'll defer that argument to Solomon Feferman:

 

http://math.stanford.edu/~feferman/papers/penrose.pdf

 

Furthermore, I think Penrose is really missing the big picture in regard to Godel's Incompleteness Theorem, a picture spelled out quite well by Hofstadter.

 

Godel's Incompleteness Theorem is fundamentally rooted in self-referentiality. It requires that a statement in the logical notation of Principia Mathematica (PM) be encoded as a number (a so-called Godel number). Furthermore, it requires that the Godel number be self-reflexive, i.e. there exists a Godel number which codifies a statement in PM which refers to its own Godel number representing the same statement in PM (ad infinitum).

 

Godel never finds such a number; he merely proves it exists. However, his proof relies on a self-referential system. Godel numbers, while being numbers, refer to statements in PM, and by codifying a statement into a number, a statement in PM can refer to itself.
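
 

To make the encoding concrete, here's a toy sketch of Godel numbering. The symbol table is an invented miniature, not Godel's actual scheme, but the mechanism - primes raised to symbol codes - is the real one:

```python
def primes(n):
    """First n primes by trial division (fine for short formulas)."""
    found, candidate = [], 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

# A made-up code for a tiny fragment of PM-style notation.
SYMBOL_CODE = {'0': 1, 's': 3, '=': 5, '+': 7, '(': 9, ')': 11}

def godel_number(formula):
    """Encode a formula as 2^c1 * 3^c2 * 5^c3 * ... where ci is the
    code of the i-th symbol. Unique factorisation makes it reversible."""
    n = 1
    for p, symbol in zip(primes(len(formula)), formula):
        n *= p ** SYMBOL_CODE[symbol]
    return n

print(godel_number('s0=s0'))  # one integer stands for the whole statement
```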

 

Penrose's proofs completely omit the self-referential aspect of Godel's Incompleteness Theorem, and focus on consciousness as a self-contained, non-self-referential system. This is foolish. Above all else, consciousness is self-referential. And again, I'll defer to Hofstadter to make that argument.

 

 

Again, our brains aren't formal logic systems! If you're looking for a mathematical system to compare our brains to, they're Bayesian classifiers (Jeff Hawkins argues this point about the neocortical column extensively in his book On Intelligence, from a neurophysiological point of view). Our brains reduce problems to sets of probabilities. We don't prove anything in our own heads. Instead, we use the power of Bayesian inference to come to conclusions. This means we can "solve" problems that formal logic systems can't, because our solutions are probabilistic, not formal proofs.
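
 

To illustrate the contrast, here is what a single step of that probabilistic "reasoning" looks like as a Bayesian update (the numbers are invented purely for illustration):

```python
def posterior(prior, likelihood, evidence_rate):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_rate

# Hypothesis: that rustling sound is a predator.
p_predator = 0.01                         # prior belief
p_rustle_if_predator = 0.90               # P(evidence | hypothesis)
p_rustle = 0.90 * 0.01 + 0.20 * 0.99      # total P(evidence), predator or not

belief = posterior(p_predator, p_rustle_if_predator, p_rustle)
print(f"P(predator | rustle) = {belief:.3f}")  # ~0.043: no proof, just odds
```

No theorem gets proved anywhere in that computation; the system just shifts its odds and acts on them.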

 

So, in short, you are saying that inputs (or qualia) to the brain

 

You're misusing "qualia". Qualia are perceived qualities. They're much more closely synonymous with noumena. The sensory inputs are nociceptive: they merely represent data coming from our senses and are not directly perceived. Perception occurs at a much higher level, namely after the sense data has been processed both by the various sense areas of the brain and the lower levels of the cerebral cortex.

 

are then not immediately turned into outputs but are transformed by an intermediary function into a range of outputs. So this is another form of materialism. The functions can then work in feedback loops to the brain. Would I be right in asserting these opinions?

 

I'm most certainly a materialist, although in some regards you could think of me as a monist. I fall into a school known as "emergent materialism" or "epiphenomenalism". I don't believe consciousness and brain activity are synonymous, but I do believe there's a direct mapping between brain activity and the contents of consciousness.

 

That said I do believe our reactions are derived through a combination of sense data and memory.

 

OK, if that is the case, what is the distinction made between conscious and physical states that govern perceptions?

 

There's a symbolic abstraction between the two. The physical states involve electrical and chemical signals moving between systems, and changes in physical structure. Consciousness occurs at a level of symbols being exchanged between systems, which can fundamentally be mapped to the underlying structures.

 

For example, when I look at my children asleep, I do not think 'my neonates are in a state of dormant and quiescent slumber'; I think 'aw, they're so cute when they sleep'. The physical and conscious states express a difference.

 

You could say the same about a microprocessor. It has no notion of electrons entering a complex structure of silicon and germanium substrate which forms an ALU. It understands that 1 + 1 = 2.

 

As a consequence of this dichotomy, there is a problem with expressing machine consciousness as analogous to human consciousness.

 

No? Again, why does the substrate matter, so long as the symbolic systems it implements remain the same?

 

I think Penrose is a dualist in dualist's clothing, to be honest.

 

Penrose claims to be a monist. However, he is certainly quite unfamiliar with Kant...

 

[sheepish] I tried reading Kant as a primary source and gave up after reading part of his Prolegomena because I could not understand his terminology and reference points like someone in his own time could [/sheepish]

 

I wouldn't recommend reading Kant directly. You might find this more palatable:

 

http://books.google.com/books?id=fLKXJitd7FsC&pg=PA161&lpg=PA161


  • 1 month later...

The question is not whether it ever will, but whether we ever want it to. If we want a computer with emotions, which seems pretty pointless to me, we could build it; it's not that hard. (What I mean by "not that hard" is that the meta-process for doing this is simple - not the actual process.)

 

Because, as you say, emotions do cloud your judgment; it's a fact.

 

The idea is, we humans sit on our *brains* and fail to use them as smartly as the *machines* (extras) we make to help us *live*. Unfortunately, we sometimes tend to think machines can perform better than humans, but fail to understand that we *trained* these computers to do what they do.

We are as much machines created by machines as computers are.

DNA creates us (as a way for the DNA to move around, etc. - the theory of selfish DNA), and then in turn we create computers with AI.


Yes, we're also machines, although chemical machines. If we were to genetically engineer people, you could say they are artificial. Would genetically engineered people count as artificial intelligence?

 

Why not?

I mean, they are artificial in a sense, are they not?


I don't know, guys - that kinda seems like going into an artist's workshop and drawing a few trees, then labeling them as our own. We definitely know how to manipulate what's already there - I don't know if we have the processing power to make a truly robust AI yet, though. Think of the calculations your brain does; it's insanity!


  • 2 months later...
Just musing about the boundaries of Artificial Intelligence, I wonder if AI programming can ever take a machine to the point that it would be able to appreciate the sounds of a waterfall, or a gently murmuring stream or be able to appreciate the works of Wagner. Or to be happy when the UK win gold medals in the Olympics. In short, can our emotions ever be felt and appreciated by a computer, and could it then make mistakes based upon emotion?

 

I don't know how, because computational theory states that some problems which are solvable by humans are impossible for computers. How can a computer have human intelligence if it's mathematically impossible for certain things to be made into algorithms?

 

Yes. Why couldn't it?

Have you taken a computer science course? Not trying to come off as better than anyone or anything - I mean, I haven't taken a course in AI yet, so I don't know how close AI is. But to make a computer "feel" would first require defining what a feeling is. The second step is to classify all instances of a feeling, and then improve on it.

Have you taken a computer science course?

 

Yes. And not to be a blowhard, but most likely my computer science background far exceeds your own.

 

Not trying to come off as better than anyone or anything - I mean, I haven't taken a course in AI yet, so I don't know how close AI is. But to make a computer "feel" would first require defining what a feeling is.

 

A feeling is a specific type of noumenon, to use Kantian phraseology.

 

However, that statement makes no more sense than the statement "to make a person 'feel' would first require defining what a feeling is". Humans feel regardless of hard definitions.

 

In regard to consciousness we have a working reference implementation: the human brain. All that's required is porting it to a different substrate. Brains are made out of matter, but there's no reason we can't build a working model of a brain in a computer. The brain is just a physical system, and we've had great success at building models of other physical systems inside computers.
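
 

To give a flavour of what such a model looks like at the very bottom, here's a minimal sketch of a leaky integrate-and-fire neuron, the standard first approximation in computational neuroscience (the constants are typical textbook values, not measurements of any real cell):

```python
def lif_spikes(current, steps=1000, dt=1.0, tau=20.0,
               v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Leaky integrate-and-fire: tau * dv/dt = (v_rest - v) + R * I.
    Integrates for `steps` ms and returns the spike times."""
    v, spikes = v_rest, []
    for step in range(steps):
        v += dt * ((v_rest - v) + r_m * current) / tau  # Euler step
        if v >= v_thresh:            # threshold crossed: fire...
            spikes.append(step * dt)
            v = v_reset              # ...and reset the membrane
    return spikes

# A stronger input current drives a higher firing rate, just as in
# real neurons: it's that input/output behaviour the model captures.
print(len(lif_spikes(2.0)), "spikes/s at 2.0 nA")
print(len(lif_spikes(3.0)), "spikes/s at 3.0 nA")
```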


I would just like to bring up reptiles. Now, a reptile has reptile biology as far as human scientific stuff goes, right? Why does the same material, in terms of cells or whatnot, get suggested to be not conscious, when our brains themselves are made of cells also? I know the question seems moderately redundant, but I think it applies to trying to qualify what is conscious. I personally view life, regardless of scale, as dynamic in a conscious sense in some way, even with a microbe. I merely apply this because it's alive; I don't really know if a microbe of some kind experiences any sort of reality outside of whatever it physically is/does, though.

 

I can't help but think this simple aspect gets lost in all the high-tech terms. I don't care to bring up bad news, but brain injuries often result in immediate changes to that individual's consciousness in a variety of ways. I also think any AI that's made with a computer would have to be kept within a certain operational integrity in regard to component structure, or such damage would have a direct impact on the AI itself. I think to deny this means AI could be created out of stones being stacked together, if this is all philosophical, that is.


I would just like to bring up reptiles. Now, a reptile has reptile biology as far as human scientific stuff goes, right? Why does the same material, in terms of cells or whatnot, get suggested to be not conscious, when our brains themselves are made of cells also?

 

Because reptiles don't have a neocortex or a structure which performs a similar function.


I think it's worth pointing out that if such examples as this:

I was trying to establish if humans can do something that the computers cannot (at present). For example, I seem to recall that Penrose could solve a tiling problem where a surface could be tiled with a small number of shapes without the pattern repeating -

are to be used, then the answer has to be "Perhaps", because if a machine were exactly modeled on a human brain, and not all humans can perform this example task (I couldn't), then it stands to reason that not all simulations could do it either.

