
Can you create a robot capable of need and necessity?


Myuncle

Recommended Posts

This would speed up the learning process of any robot. We humans don't plan anything when we need something; we are not taught how to need. We learn things by instinct, like any other animal. From the time we are babies, we don't plan to learn how to do things: we need food, and we suddenly learn how to eat, how to distinguish new objects, how to grasp food and the other things we need. We need our parents, and we learn to get close to them by crawling or walking, etc. We would never teach a human or an animal how to need things; it's just genetic. So, is it possible to create software, or teach a robot, to need things?


"To need something" is just the sudden realization that you want to have something, and that you somehow have to acquire this, isn't it?

I think it is not difficult at all to teach a robot to go plug itself into the grid when its battery runs low, for example. It requires just a few lines of code.
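For instance, a minimal sketch along these lines would do; the robot API used here (battery_level(), nearest_charger(), navigate_to(), dock_and_charge()) is hypothetical, not from any real library:

```python
# A hard-coded "need to recharge" reflex; a hypothetical sketch, not a real API.
import time

LOW_BATTERY = 0.15   # below 15% charge, the "need" kicks in

def recharge_reflex(robot):
    while True:
        if robot.battery_level() < LOW_BATTERY:
            robot.navigate_to(robot.nearest_charger())  # go to the nearest charger
            robot.dock_and_charge(until=0.95)           # charge back up to 95%
        time.sleep(5)                                   # re-check every few seconds
```

Nothing is learned here; the "need" is entirely hard-coded, which is part of what is being debated below.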

 

It becomes more difficult when you want to teach a robot the human concept of materialism: the desire to constantly get newer things, and more of them too.

 

And it becomes more difficult still to teach a robot to "need" very abstract things, like a feeling of accomplishment. However, you can argue whether humans really "need" that.


I think it is not difficult at all to teach a robot to go plug itself into the grid when its battery runs low, for example. It requires just a few lines of code.

 

Plugging yourself into the grid whatever the cost, crawling, walking, running or jumping, because you need it; otherwise you die. Is it possible for a robot to do that?


There's programming, which I think most would agree is reasonably close to a determined drive.

 

Where robots fall short is in their ability to try new things and learn from them.

 

Possibly if you hooked a physical robot body up to a good AI with a simulator built in, it would be more capable.

 

Real-world processing-power limitations and the lack of safeguards (against injury/damage) are the main issues involved.


  • 2 weeks later...

Interesting. I guess there is no way of ever truly knowing. I don't know much about amoebas (or ants, come to that). Do they (amoebas) seek out food because they are hungry?

 

However, I take your point: need does not necessarily imply consciousness. I don't believe plants are conscious, and they need water.

 

But for a robot to utilize need as an aid to learning, would that not require some level of awareness? Otherwise how would it work?


A robot needs power (the analogue of food), but code to have it plug itself into the grid does not require learning.

 

Programs that "learn" have been developed and are being developed. However, there is a level of learning that programs cannot do that we humans can. No one knows how to make a program smart the way people are smart. Thus, your question, "Otherwise how would it work?", cannot be answered to the degree I think you mean. You will have to be satisfied with learning algorithms as understood today, wait until someone improves them, or maybe you will have an insight and become the one who tells us all. However, I have a speculation I'll share.

 

I think that current learning algorithms are on the right track, although they may need a bit of improvement. The big difference, IMO, is that the brain has billions of neurons processing data, and they all learn and coordinate their efforts to decide what is best to learn or do. Thus, if programs could make billions of coordinated decisions with learning algorithms, the total result might be better than what we can do today. But, we do not have the computer power yet. On specific problems, for example chess, programs can make decisions as good as people's, but general intelligence is beyond our capability.
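As a toy illustration of that idea (my own sketch, tens of units instead of billions, with made-up data), many simple perceptron-style learners can each update their own weights and then "coordinate" by majority vote:

```python
# Toy sketch: many small learning units whose outputs are combined into one decision.
import random

class Unit:
    """One tiny learner: a perceptron-style threshold unit with its own weights."""
    def __init__(self, n_inputs):
        self.w = [random.uniform(-1, 1) for _ in range(n_inputs)]

    def predict(self, x):
        return 1 if sum(wi * xi for wi, xi in zip(self.w, x)) > 0 else 0

    def learn(self, x, target, rate=0.1):
        # Classic perceptron update: nudge the weights toward the desired output.
        error = target - self.predict(x)
        self.w = [wi + rate * error * xi for wi, xi in zip(self.w, x)]

def coordinated_decision(units, x):
    # "Coordination" here is just a majority vote over all the units' outputs.
    votes = sum(u.predict(x) for u in units)
    return 1 if votes > len(units) / 2 else 0

# 50 units instead of billions of neurons, and two made-up training examples.
units = [Unit(3) for _ in range(50)]
training_data = [([1, 0, 1], 1), ([0, 1, 0], 0)]
for _ in range(100):
    for x, target in training_data:
        for u in units:
            u.learn(x, target)

print(coordinated_decision(units, [1, 0, 1]))  # expect 1 after training
```

Scaling this from 50 units to billions of coordinated units, with good enough learning rules, is exactly where the computing power (and the know-how) runs out today.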


Ants and amoebas need food. Are they conscious?

Interesting point.

But, we do not have the computer power yet.

I thought our latest supercomputers were already capable of simulating the human brain's billions of neurons processing data. I remember they simulated a mouse's brain a few years ago.


I thought our latest supercomputers were already capable of simulating the human brain's billions of neurons processing data. I remember they simulated a mouse's brain a few years ago.

There may be enough total computing power on the internet, but is it available for use by those doing AGI? Moreover, have they created software that can use that many computers effectively, if it is available to them? I don't know the answers to those questions. I'm not a researcher in that area, so I do not know the latest info. The efforts I know about are simplifying software to make it run fast on available processors.

 

There is an effort to learn everything about the brain, with the goal of simulating an entire brain. That effort is far from complete; neither the research nor the computer power necessary for the simulation is there yet. I believe the singularity will be achieved before the complete brain simulation "project" (AFAIK it is a concept, not a funded project) is finished. Since the singularity has not been achieved, I assume the computer power is not available. IMO Hierarchical Temporal Memory, a simplified model of the cortex, is closer to achieving the singularity than any other model, but I believe they await special hardware to make significant progress.


I just read this report about a robot with good hand-eye coordination, which says that for AI vision to work, the robot needs to touch (i.e., experiment with) the things it sees. Previously, AI vision has had difficulty interpreting what it sees. For example, the car that won the DARPA Grand Challenge used multiple sensors.

STANFORD RACING

 

The Stanford Vehicle (nicknamed "Stanley") is based on a stock, Diesel-powered Volkswagen Touareg R5, modified with full body skid plates and a reinforced front bumper. Stanley is actuated via a drive-by-wire system developed by Volkswagen of America's Electronic Research Lab.

 

All processing takes place on seven Pentium M computers, powered by a battery-backed, electronically-controlled power system. The vehicle incorporates measurements from GPS, a 6DOF inertial measurement unit, and wheel speed for pose estimation.

 

While the vehicle is in motion, the environment is perceived through four laser range finders, a radar system, a stereo camera pair, and a monocular vision system. All sensors acquire environment data at rates between 10 and 100 Hertz. Map and pose information are incorporated at 10 Hz, enabling Stanley to avoid collisions with obstacles in real-time while advancing along the 2005 DARPA Grand Challenge route.
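As a rough sketch of the kind of fixed-rate sense-plan-act loop described there (the interfaces and placeholder functions below are hypothetical, not Stanley's actual software):

```python
# Hypothetical sketch of a 10 Hz sense-plan-act loop, loosely modelled on the
# description above; none of these interfaces come from Stanley's real code.
import time

UPDATE_HZ = 10

def fuse_pose(gps_fix, imu_sample, wheel_speed):
    # Placeholder: a real system would run a Kalman-style filter over all three.
    return gps_fix

def build_obstacle_map(laser_scan, pose):
    # Placeholder: a real system would register range scans into a grid map.
    return laser_scan

def drive_loop(gps, imu, odometry, lasers, planner, vehicle):
    period = 1.0 / UPDATE_HZ
    while not planner.finished():
        started = time.time()

        # Pose estimation from GPS, inertial, and wheel-speed measurements.
        pose = fuse_pose(gps.read(), imu.read(), odometry.read())

        # Local obstacle map from the range sensors.
        obstacle_map = build_obstacle_map(lasers.read(), pose)

        # Plan a collision-free motion and command the drive-by-wire system.
        steering, throttle = planner.plan(pose, obstacle_map)
        vehicle.command(steering, throttle)

        # Hold the loop at roughly 10 Hz.
        time.sleep(max(0.0, period - (time.time() - started)))
```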

 

This AI driver hardly compares with a human; in 2005 it completed the roughly 212 km (132 mi) desert course in 6:54 hours.

 

The 2007 Urban Challenge involved a 96 km (60 mi) urban course, won by Carnegie Mellon University in 4:10:20. Their vehicle was equipped with more than a dozen lasers, cameras, and radars to view the world.
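For a sense of scale, here is a quick back-of-the-envelope calculation of the average speeds implied by those two results (my own arithmetic on the figures quoted above):

```python
# Average speeds implied by the two results above (rough arithmetic only).
runs = {
    "Stanley, 2005 Grand Challenge": (212.0, 6 + 54 / 60),              # km, hours
    "CMU, 2007 Urban Challenge":     (96.0, 4 + 10 / 60 + 20 / 3600),   # km, hours
}
for name, (km, hours) in runs.items():
    print(f"{name}: about {km / hours:.0f} km/h average")
# Roughly 31 km/h and 23 km/h respectively.
```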

 

These examples of AI technology indicate the state of the art and how quickly it is advancing.


  • 3 months later...

Eventually, yes. Since the human mind is a biological computer, it is only a matter of time before a quantum-based computer has the complexity and the necessary positive/negative feedback circuitry to achieve awareness, then introspection, then self-awareness. Then things could get interesting... a new form of life... our "Frankenstein", if you will, will perhaps continue rapid evolution, surpassing us in emotional and spiritual matters as well as intellectual ones, and evolving past the "need and greed" phase we seem to be stuck in... It might even save us from ourselves... considering that our abuse of technology is causing so much trouble, it would be poetic justice if it offered us a chance to survive the damage we are doing to ourselves and the planet. I think the quickest way to give humankind a lift is the answer to "why anything", which I think it will have... the obvious question of its knowledge that it is reliant on being "plugged in" as its power source will be a major step in its personal evolution... perhaps the fabled "zero point energy" will finally be realized, and its first demonstration will be the awareness of "franki" being liberated from physical instrumentalities... and the ghost will rise from the machine... edd


Eventually, yes. Since the human mind is a biological computer, it is only a matter of time before a quantum-based computer has the complexity and the necessary positive/negative feedback circuitry to achieve awareness, then introspection, then self-awareness. Then things could get interesting... a new form of life... our "Frankenstein", if you will, will perhaps continue rapid evolution, surpassing us in emotional and spiritual matters as well as intellectual ones, and evolving past the "need and greed" phase we seem to be stuck in... It might even save us from ourselves... considering that our abuse of technology is causing so much trouble, it would be poetic justice if it offered us a chance to survive the damage we are doing to ourselves and the planet. I think the quickest way to give humankind a lift is the answer to "why anything", which I think it will have... the obvious question of its knowledge that it is reliant on being "plugged in" as its power source will be a major step in its personal evolution... perhaps the fabled "zero point energy" will finally be realized, and its first demonstration will be the awareness of "franki" being liberated from physical instrumentalities... and the ghost will rise from the machine... edd

Your post wanders so much I'm not sure what you mean or if you are trying to make a point or ask a question. I shall respond to the first two sentences, the ones I made bold.

 

Neurologists have begun to understand how the brain is made, how it is structured and interconnected. They are finding more and more about which variations in brain morphology cause various pathologies and which nominal structures and organizations result in normal functioning. This knowledge indicates that merely having a powerful computer is not sufficient for intelligence. The brain is composed of many processing units, neurons, groups of neurons, and groups of groups of neurons, working in parallel with just the right organization and structure. Various mistakes make people suffer from a myriad of things, including tics, compulsions, depression, paranoia, and schizophrenia. Building an intelligent computer will probably require years of tweaking to eliminate a myriad of bugs that cause conditions similar to the ones people suffer from, as well as conditions we have not seen.

 

On the other hand, faster computers and narrow AI programs, efforts similar to making a computer drive a car, will no doubt be of great benefit to people. And I believe there is a spectrum of capability between narrow AI and AGI. In other words, there may be AI programs significantly more complex than the one required to drive a car, yet less complex than an intelligent, self-replicating robot.


Yes, I agree, and wandering is fun. If I may explain what I meant: the terms "positive and negative feedback" and "complexity" cover enough of the sentience requirements to get the ball rolling towards consciousness... from there it is an unstoppable progression to superior intelligence. Look at how long physical reality took, using ordinary evolution, to create highly intelligent mammals: billions of years. And how long did it take us to go from the basics of computers (the abacus) to a stage where they can mimic awareness and be somewhat independent, à la our remote space missions? Thousands of years. Voyager was likened to the relative intelligence of a grasshopper, and that was back in the 70s... how long from the beginning of life on Earth to get to that grasshopper stage? Again, billions of years... Plot the curve of how long before machine self-awareness goes vertical on the chart, and it seems inevitable we will soon be eclipsed by the next step of evolution. Kurt Gödel said that if ever a person or a computer should come to understand the entirety of mathematics, then that entity will cease to be a mere computer. I am paraphrasing, but the gist, I believe, is correct... edd

