
Consciousness and Robots


Randolpin


 

Do robots have consciousness?

 

A robot's actions are based on the program stored on the chip it contains and on its sensors.

 

For example, suppose we have a robot designed to shoot a ball whenever it sees one. When its vision sensors detect a ball, it shoots, because that is what is programmed into its microchip. The essence of this reasoning is that robots have no consciousness. A robot is merely a "stimulus and response" machine, depending only on the program in its microchip. It has no freedom of its own; it just follows that program. It cannot do anything other than shoot a ball. In other words, it has no free will. Nor is it aware of itself, because it has no consciousness.

 

Now I want to go deeper. Consciousness is a separate entity. Why?

 

It does not depend on the program of any particular microchip. How?

 

Robots will never have consciousness, because a robot depends entirely on the program in its microchip. Robots are only pattern-seeking machines. They are "mindless of themselves".

 

Here is another example to clarify all of these points.

 

Again, take the example of a robot designed to play basketball. The robot is programmed either to shoot or to dribble when it sees a ball.

Now, what leads the robot to either shoot or dribble, given that it has no consciousness or free will with which to choose between the two actions? The only way for the robot to arrive at a specific action is for engineers to program the microchip to select an action at random, because creating consciousness is impossible.
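A minimal Python sketch of what I mean (the ball-detection input is a hypothetical placeholder): the robot's entire "decision" is just a random pick between two hard-coded actions.

    import random

    def robot_step(sees_ball):
        # One stimulus-response cycle of the hypothetical basketball robot.
        if not sees_ball:
            return None  # no stimulus, no response
        # No consciousness or free will here: the "choice" between the two
        # programmed actions is delegated to a random number generator.
        return random.choice(["shoot", "dribble"])

    print(robot_step(sees_ball=True))   # "shoot" or "dribble", chosen at random
    print(robot_step(sees_ball=False))  # None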


This argument appears to be based on what robots can do today.

 

There seems to be no reason why robots in the future should not have consciousness. At least, I have never seen a convincing argument against it. But I have seen a few plausible explanations of why it might be possible.

 

 

 

Now I want to go deeper. Consciousness is a separate entity. Why?

 

There is no evidence of that. As far as we know, it is a side-effect of the complex processing entity that is the animal brain.

 

It could equally well arise from a sufficiently complex electronic computing device.


The "computers can only do what they are told" argument is hopelessly naive. We already have computers that can come up solutions or designs that the programmers never envisaged.


I agree with Strange; contemporary robots are not capable of consciousness, but future robots could be. I think the problem contemporary programmers have in developing truly conscious robots is their inadequate understanding of how the human brain produces consciousness. Human consciousness involves a confluence of separate brain areas engaged in distinct processes that together constitute cognition and cognitive output. I believe that when programmers learn how to duplicate the intricacies of human brain function accurately, they will succeed. As a start, programmers will need a precise understanding of what mind truly is. Consciousness is a product of our brain function's matrix, and that matrix is what we refer to as mind. What constitutes a mind in living organisms is both simple and complex, but not beyond our ability to program with the proper understanding.


I agree with Strange; contemporary robots are not capable of consciousness, but future robots could be. I think the problem contemporary programmers have in developing truly conscious robots is their inadequate understanding of how the human brain produces consciousness. Human consciousness involves a confluence of separate brain areas engaged in distinct processes that together constitute cognition and cognitive output. I believe that when programmers learn how to duplicate the intricacies of human brain function accurately, they will succeed. As a start, programmers will need a precise understanding of what mind truly is. Consciousness is a product of our brain function's matrix, and that matrix is what we refer to as mind. What constitutes a mind in living organisms is both simple and complex, but not beyond our ability to program with the proper understanding.

To be honest, I think we'll have the first AI capable of indistinguishably replicating consciousness before we really understand how consciousness works on the kind of deep level that you are talking about.

 

Presupposing, of course, that both achievements are physically possible.


To be honest, I think we'll have the first AI capable of indistinguishably replicating consciousness before we really understand how consciousness works on the kind of deep level that you are talking about.

 

Presupposing, of course, that both achievements are physically possible.

 

Perhaps, but what I've seen thus far is mimicry without that true spark that says to me, "I am awake and aware!"

Edited by DrmDoc

 

Perhaps, but what I've seen thus far is mimicry without that true spark that says to me, "I am awake and aware!"

Of course not. If you had, we'd already have such an AI. I said that I think we're likely to see that kind of AI before we have a true understanding of how consciousness works, not that it had already been created.


Rather than obsessing over trying to create or model a mind, if you just focus on getting AI to perform the tasks that humans can do, its mind will emerge as its complexity and abilities increase. After the fact, scientists can then work out, by elimination, which attributes are essential to giving rise to a mind.


Of course not. If you had, we'd already have such an AI. I said that I think we're likely to see that kind of AI before we have a true understanding of how consciousness works, not that it had already been created.

 

I understood. Although it's a possibility, what I'm suggesting is that I believe it's very unlikely without that understanding. Without a proper understanding, how would developers know their machines are doing anything more than mimicking consciousness?


 

I understood. Although it's a possibility, what I'm suggesting is that I believe it's very unlikely without that understanding. Without a proper understanding, how would developers know their machines are doing anything more than mimicking consciousness?

It needs to happen empirically before we can start to figure out what constitutes a mind. If mind is an emergent property, I don't think anyone can work out what's needed to bring about that emergence.


There seems to be no reason why robots in the future should not have consciousness. At least, I have never seen a convincing argument against it. But I have seen a few plausible explanations of why it might be possible.

I'd like to present a meta-argument to the contrary.

 

When water was the big technology back in ancient Greece and Rome, the mind was described as a flowing phenomenon. The Greek word pneuma, for spirit or soul, has the same root as pneumatic, according to one article I read.

 

After Newton, everyone got interested in mechanical devices and people thought the mind and the universe were mechanical in nature.

 

Now we're in the age of computers and everyone is all, "Oh of course the mind is a computer. The universe too. Why we'll soon upload ourselves to heaven I mean the computer." Funny how upload theory sounds just like Christian theology.

 

So the meta-argument is that we always think the mind is whatever the hot technology of the day is. When the next big thing comes along we'll think it explains the mind and the universe as well. History shows that.

 

By that argument it's highly unlikely that the mind is a computer as computers are currently understood.

Edited by wtf

So the meta-argument is that we always think the mind is whatever the hot technology of the day is. When the next big thing comes along we'll think it explains the mind and the universe as well. History shows that.

 

By that argument it's highly unlikely that the mind is a computer as computers are currently understood.

 

 

I certainly agree with, and had noted before, the point about historical metaphors. Similar analogies, based on contemporary technology, have been used to describe the body (a furnace, clockwork, etc.) and the universe (a machine, clockwork, a computer...).

 

I hadn't thought of applying it to the concept of AI. It is a very good argument. Although, being an argument about analogies, ultimately not very compelling! :)

 

We have a theoretical understanding of the things that can be computed by any computing machine. And based on that, there is no reason to think the brain can do anything more than other computing machines (of any sort).

 

But, yes, that theoretical model may be found to be wrong. Which would be pretty exciting. Discovering that could prove that machine AI is impossible or show the way to implement it.


I'd like to present a meta-argument to the contrary.

 

When water was the big technology back in ancient Greece and Rome, the mind was described as a flowing phenomenon. The Greek word pneuma, for spirit or soul, has the same root as pneumatic, according to one article I read.

 

After Newton, everyone got interested in mechanical devices and people thought the mind and the universe were mechanical in nature.

 

Now we're in the age of computers and everyone is all, "Oh of course the mind is a computer. The universe too. Why we'll soon upload ourselves to heaven I mean the computer." Funny how upload theory sounds just like Christian theology.

 

So the meta-argument is that we always think the mind is whatever the hot technology of the day is. When the next big thing comes along we'll think it explains the mind and the universe as well. History shows that.

 

By that argument it's highly unlikely that the mind is a computer as computers are currently understood.

Mind is data, and the technologies of the past are just less sophisticated versions of how it processes that data. Computers reflect how we think. We are basically making them in our own image; we can't do anything else. All technologies are insights into the workings of the mind.

 

When you see a beehive and its occupants, you are looking at a blueprint of the workings, or "minds", of bees.

Edited by StringJunky

Mind is data, and the technologies of the past are just less sophisticated versions of how it processes that data. Computers reflect how we think. We are basically making them in our own image; we can't do anything else. All technologies are insights into the workings of the mind.

 

When you see a beehive and its occupants, you are looking at a blueprint of the workings, or "minds", of bees.

 

I disagree, slightly. From my understanding of how our brain likely evolved to produce this quality, mind isn't data. Mind is the functional matrix into which we input data. Mind is separate from consciousness and data in that mind comprises the functional programming that produces consciousness from data. I agree my perspective may be an oversimplification, but it is based on a path of evolution that suggests mind is evidenced, generally, by a capacity to retain and integrate data--or, in terms of brain function, a capacity to retain and integrate sensory input.

Edited by DrmDoc

 

I disagree, slightly. From my understanding of how our brain likely evolved to produce this quality, mind isn't data. Mind is the functional matrix into which we input data. Mind is separate from consciousness and data in that mind comprises the functional programming that produces consciousness from data. I agree my perspective may be an oversimplification, but it is based on a path of evolution that suggests mind is evidenced, generally, by a capacity to retain and integrate data--or, in terms of brain function, a capacity to retain and integrate sensory input.

Like the operating system vs. programs? But the OS is still data, which makes up the functional matrix. What I see is hierarchies of information.

Edited by StringJunky

Like the operating system vs. programs? But the OS is still data, which makes up the functional matrix. What I see is hierarchies of information.

 

I'm sure most of us agree that there's a slight distinction between operating systems and regular data. Data is primarily that stream of input acted upon by the system, while the OS tells the system what to do with that input stream. It's the distinction between an innate quality (e.g., fight or flight instincts) and that quality added to a system (e.g., visual, tactile, auditory sensory input). Consciousness in AI will require more nuanced distinctions in programming than what I've seen.

Edited by DrmDoc

 

I understood. Although it's a possibility, what I'm suggesting is that I believe it's very unlikely without that understanding. Without a proper understanding, how would developers know their machines are doing anything more than mimicking consciousness?

How do we know other people are doing anything more than mimicking consciousness?


How do we know other people are doing anything more than mimicking consciousness?

 

If we accept the human brain as a consciousness-producing structure, then anyone or anything possessing its prerequisite functional configuration, or some equivalently structured programming, should be viewed as producing human-equivalent consciousness--in my opinion.


 

We think, therefore we are.

 

We can't say the same for other things.

If it walks like a duck...

 

If we accept the human brain as a consciousness-producing structure, then anyone or anything possessing its prerequisite functional configuration, or some equivalently structured programming, should be viewed as producing human-equivalent consciousness--in my opinion.

Yes. The substrate doesn't have to be the same.


It seems that AI is getting much closer than I thought to producing human-level consciousness. This SciShow video discusses how AIs have defeated several professional poker players, which is a much different and more difficult feat for AIs than chess. As the video host explains, chess is a perfect-information game, where all aspects of the game are visible to the players. Poker is an imperfect-information game, where opponents' hands and draw cards are not known and strategies aren't as evident as in chess. These AIs are programmed to learn from experience, which, in my opinion, is one prerequisite indication of consciousness.


It seems that AI is getting much closer than I thought to producing human-level consciousness. This SciShow video discusses how AIs have defeated several professional poker players, which is a much different and more difficult feat for AIs than chess. As the video host explains, chess is a perfect-information game, where all aspects of the game are visible to the players. Poker is an imperfect-information game, where opponents' hands and draw cards are not known and strategies aren't as evident as in chess. These AIs are programmed to learn from experience, which, in my opinion, is one prerequisite indication of consciousness.

 

Close but no cigar.

Poker and chess have fundamental differences; poker is essentially a guessing game based on previous play, chess has no such limitation.

Sorry, I only read the last sentence in your post.

 

Still no cigar as yet.


These AIs are programmed to learn from experience, which, in my opinion, is one prerequisite indication of consciousness.

Oh I must disagree with this point.

 

I can write a program -- frankly this would not be unsuitable as a beginning programming exercise after the basic syntax and concepts of programming are learned.

 

The program reads in 10 years of daily temperature data from, say, New York City. The program then does a statistical analysis (since this is a beginner exercise, we will supply the needed statistical routines) and then emits the following prediction: next year, the average temperature in July will be higher than the average temperature in January.
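Sketched in Python, with a hypothetical file name and format, the whole exercise might look like this:

    import csv
    from statistics import mean

    july, january = [], []
    # Hypothetical input file: one "YYYY-MM-DD,temperature" row per day.
    with open("nyc_daily_temps.csv") as f:
        for date, temp in csv.reader(f):
            month = date.split("-")[1]
            if month == "07":
                july.append(float(temp))
            elif month == "01":
                january.append(float(temp))

    # The entire "prediction" is one deterministic comparison of two means.
    if mean(july) > mean(january):
        print("Prediction: next year, July will be warmer on average than January.")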

 

Has the program "learned"? Well, if you say so, but I say no. It has only applied a simple deterministic statistical test. And more importantly, the program does not know what a temperature is, or what July is, or where or what New York City is. It's just flipping bits in deterministic accord with an algorithm provided by a human being. The computer does not know the meaning of what it's doing.

 

Now if you learn a little about machine learning, you will find that the students are buried in statistical analysis and linear algebra. That's all it is. Every bit flip is 100% determined by algorithms. And when a human learns, they are not multiplying matrices in the cells of their brain.
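To make that concrete, here is a single toy "learner" (ordinary linear regression fitted by gradient descent) in Python with NumPy; every update the model makes is nothing but deterministic matrix arithmetic:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))        # 100 samples, 3 features
    y = X @ np.array([2.0, -1.0, 0.5])   # targets generated by a known rule

    w = np.zeros(3)                      # the model's entire "knowledge": 3 numbers
    for _ in range(1000):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of squared error: pure linear algebra
        w -= 0.1 * grad                    # deterministic update rule

    print(w)  # converges to roughly [2.0, -1.0, 0.5]; no understanding involved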

 

This has nothing to do with learning as the term is commonly understood. Only by naming the subject "machine learning" can proponents of strong AI fool the public into thinking that machines learn.

 

It's programming 101 to read in data, categorize and statistically analyze it, and output a prediction. Ok to be fair, programming 102. A week's work for a student, a few hours for a professional. A nothingburger of a program.

 

You might argue that people are doing the same. You have no evidence for that though.

Edited by wtf

 

Close but no cigar.

Poker and chess have fundamental differences; poker is essentially a guessing game based on previous play, chess has no such limitation.

Sorry, I only read the last sentence in your post.

 

Still no cigar as yet.

 

Perhaps I didn't explain the distinction between an imperfect-information game (poker) and a perfect-information game (chess), but the video host's description convinces me that poker is a more difficult game for AI than chess. In chess, moves are based on parameters visible to all participants, while in poker, decisions are made based on parameters not visible to all participants. Moves based on unknown parameters aren't as easy as those based on known parameters.
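To put the distinction in code: a chess engine can evaluate the actual position in front of it, but a poker program has to average over everything it cannot see. A toy Python sketch (an invented one-card game, standing in for real poker):

    import random

    # Toy imperfect-information game: each player holds one hidden card from
    # 1-10, and the higher card wins. I know my card but not my opponent's.
    deck = list(range(1, 11))
    my_card = 8
    unseen = [c for c in deck if c != my_card]

    # No single "position" to evaluate: estimate the win probability by
    # sampling over every card the opponent might be holding.
    trials = 100_000
    wins = sum(my_card > random.choice(unseen) for _ in range(trials))
    print(f"Estimated win probability: {wins / trials:.2f}")  # ~0.78 (8 beats 7 of 9 unseen cards)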

 

 

Oh I must disagree with this point...It's programming 101 to read in data, categorize and statistically analyze it, and output a prediction. Ok to be fair, programming 102. A week's work for a student, a few hours for a professional. A nothingburger of a program.

 

You might argue that people are doing the same. You have no evidence for that though.

 

Indeed, I would argue that learning for humans is exactly the same. Experience for humans is the accumulation of data, and making decisions based on that experience is the predictive output of our mental analysis of those accumulated experiences. Although learning is an indicative aspect of consciousness, I agree that a capacity to learn is not by itself indicative of consciousness as a whole; consciousness involves much more than a capacity to learn.

Edited by DrmDoc

 

Perhaps I didn't explain the distinction between an imperfect-information game (poker) and a perfect-information game (chess), but the video host's description convinces me that poker is a more difficult game for AI than chess. In chess, moves are based on parameters visible to all participants, while in poker, decisions are made based on parameters not visible to all participants. Moves based on unknown parameters aren't as easy as those based on known parameters.

 

That's an interesting point, I'll get back to you.


  • 3 weeks later...

There are three possibilities here: 1) there is the material universe only, and consciousness arises from brain complexity; 2) there is the material universe, but in addition to that, consciousness exists as a fundamental, unexplainable "given"; and 3) there is consciousness only, and the material world we perceive is just that (a perception).

 

Possibility 1

 

I'm a digital electronics person by profession, with a "heavy for an engineer" math and science education, and I haven't yet gotten comfortable with this position. People have lots to say about it, but in the end it just comes down to transistors that are on or off, and logical 1s and 0s in the software's data representation - I've yet to figure out how a data pattern can have the self-awareness that I have (and presume you have as well).

 

Possibility 2

 

Possibility 2 could be correct, but it loses the Occam's Razor contest to possibilities 1 and 3. For this reason, I really haven't spent much time on this one, and probably won't unless I give up on #1 and #3.

 

Possibility 3

 

Possibility 3 (see Donald Hoffman's "conscious agents") doesn't have to explain consciousness, since it takes it as a given, but before it can claim to be fully developed it has to be able to provide a reasonable program for how the perceived material world would match what we've actually observed.

 

This possibility may be just as problematic as possibility 1 - it could just be that I don't have the background to appreciate the problems. However, it seems to me that if we endow conscious agents with the power to control the outcome of some subset of quantum events, it's feasible that a mechanism for free will could result. In laboratory experiments related to quantum theory, the normal practice is to collect results from an ensemble of equivalently prepared samples, and we expect the proper probability distribution to emerge. But the quantum events that a conscious agent would control to implement free will would each occur only once. Any outcome, corresponding to any of the superposed possibilities, is valid, does not violate quantum theory, and therefore is "fair game." The choice mechanism would be internal to the conscious agent and explaining it would likely be outside the realm of science - it's part of the "given" of conscious agent theory. Basically the action of consciousness would be "behind the wall" of quantum uncertainty - beyond the reach of experimental investigation.
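A quick Python illustration of that ensemble-versus-single-event point, using an ordinary pseudo-random coin as a stand-in for a 50/50 quantum measurement:

    import random

    # An ensemble of equivalently prepared "measurements": the expected
    # 50/50 distribution emerges across many trials.
    ups = sum(random.random() < 0.5 for _ in range(100_000))
    print(f"Fraction 'up' across the ensemble: {ups / 100_000:.3f}")  # ~0.500

    # A one-shot event: either outcome is equally consistent with the theory,
    # so no experiment on this single trial can reveal what "chose" it.
    print("One-shot outcome:", "up" if random.random() < 0.5 else "down")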

 

Summary

 

This feels a little like "stacking the deck" in favor of possibility 3, but that's just how the cards fall. Possibility 1 claims to live entirely within the realm of science, and therefore science must produce (eventually) a full explanation for the entire process. The whole shebang is on "our side" of the uncertainty "wall." Possibility 3 very nicely puts the unexplained stuff on the far side, out of reach. But sometimes the simplest explanation is the best one.

 

Given my current state of knowledge I find myself most comfortable with possibility 3. I asked a question here earlier today in hopes of getting "new input" (new to me) about how possibility 1 might work. I also need to look more deeply into Hoffman's conscious agents materials. This is all fascinating stuff, and in my opinion represents one of the biggest "unknowns" we still face.
