
Was HAL conscious?


NLN


Was HAL, the computer featured in Stanley Kubrick's film 2001: A Space Odyssey, a sentient being, or merely the product of "brute force" computation?

 

Since his debut in 1968, HAL has served as the guidepost for artificial intelligence research. More than any other character in fiction, he has represented the enormous potential of the field, and has helped to launch the careers of many an AI researcher. Calm, rational, and eerily human, HAL would certainly pass the Turing test. But was he actually a conscious being -- an emergent byproduct of some robust future algorithm -- awake and aware of his surroundings? Or was he instead a masterpiece of human simulation, produced by the interplay between cleverly designed software and extremely fast -- but conventional -- hardware?

 

Of course, HAL is imaginary, but his legacy reminds us that achieving the appearance of human-like machine intelligence need not require true sentience. In the film, even the experts are uncertain whether HAL is conscious:

 

Reporter: The sixth member of the Discovery crew was not concerned with the problems of hibernation, for he was the latest result of machine intelligence: the H.A.L. 9000 computer, which can reproduce -- though some experts still prefer to use the word mimic -- most of the activities of the human brain, and with incalculably greater speed and reliability.

 

HAL, on the other hand, makes a case for his own self-awareness:

 

HAL: I enjoy working with people. I have a stimulating relationship with Dr. Poole and Dr. Bowman. My mission responsibilities range over the entire operation of the ship, so I am constantly occupied. I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do.

 

Much airplay has been given to the glamorous future of artificial intelligence, the dawn of sentient machines. But little attention has gone toward imagining a less glamorous -- but arguably more realistic -- future in which machines might be constructed to appear conscious without actually being so.

 

In his book The Age of Spiritual Machines, Ray Kurzweil predicted that by the year 2019 a $1,000 computing device will have the computational ability of the human brain. He further predicted that just ten years later, a $1,000 machine will have the computing capacity of approximately one thousand human brains. Regardless of whether or not you agree with Kurzweil's timetable, one thing is certain: computing "horsepower" has increased dramatically since the first machines were built, and seems likely to increase just as dramatically in the near future.
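Kurzweil's two data points imply a strikingly aggressive doubling time. A quick back-of-the-envelope check (the figures are Kurzweil's; the arithmetic and variable names below are mine):

```python
import math

# Kurzweil's claims: a $1,000 machine matches one human brain
# (~20 million billion = 2e16 calculations per second) in 2019,
# and roughly 1,000 brains in 2029.
growth_factor = 1_000          # 1,000x more capacity per dollar
years = 2029 - 2019            # over ten years
doublings = math.log2(growth_factor)   # about 9.97 doublings
print(f"Implied doubling time: {years / doublings:.2f} years")
# -> roughly one doubling per year, noticeably faster than the
#    classic 18-24 month Moore's-law rhythm.
```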

 

Let us imagine that it is 2019 and software development has advanced no further than today, while hardware has progressed to the point where it matches the computational ability of the human brain (estimated by Kurzweil to be 20 million billion calculations per second). Even running present-day software, the sheer horsepower of such hardware would make these systems capable of amazing things. Is it possible that problems like computer vision, knowledge representation, machine learning, and natural language processing will be solved by brute force computation, even if no new software efficiencies are implemented?

 

Consider the progress made in chess-playing computers. For a long time in the 1970s and 1980s it remained an open question whether any chess program would ever be able to defeat top human players. In 1968, International Master David Levy made a famous bet that no chess computer would be able to beat him within ten years. He won his bet in 1978 by beating Chess 4.7 (the strongest computer at the time), but acknowledged that it would not be long before he was surpassed. In 1989, Levy was defeated by the computer Deep Thought in an exhibition match.

 

Chess algorithms work not by reproducing human cognitive processes, but by examining future moves. They have attained tournament-level playing ability almost exclusively due to dramatic speed increases in their number-crunching hardware. In their book How Computers Play Chess, researchers David Levy and Monty Newborn estimated that doubling a computer's speed gains approximately fifty to seventy Elo* points in playing strength.
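To make "examining future moves" concrete, here is a minimal sketch of the kind of search chess programs rely on: negamax with alpha-beta pruning. To stay self-contained it is applied to a toy take-away game rather than chess (the toy game and every name in the sketch are my own illustration, not Levy and Newborn's code). The moral is that playing strength comes from search depth, and depth is bought with raw speed:

```python
# Negamax search with alpha-beta pruning, the "brute force" core of
# computer chess, demonstrated on a toy game: players alternately take
# 1-3 stones from a pile, and whoever takes the last stone wins.

def negamax(stones, depth, alpha, beta):
    """Score the position for the player to move: +1 win, -1 loss."""
    if stones == 0:
        return -1                 # the previous player took the last stone
    if depth == 0:
        return 0                  # horizon reached: call it even
    best = -2
    for take in (1, 2, 3):
        if take > stones:
            break
        score = -negamax(stones - take, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:         # prune: the opponent avoids this line
            break
    return best

# A deeper search (i.e., faster hardware) sees further ahead. With
# enough depth the engine "discovers" that multiples of 4 are lost.
for stones in range(1, 9):
    print(stones, negamax(stones, depth=8, alpha=-2, beta=2))
```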

 

As Nigel Shadbolt of the University of Southampton said: "I believe that massive computing power directed with a light touch toward interesting parts of a problem space can yield remarkable results."

 

I asked a few AI researchers what they thought about the possibility of brute force computation eventually simulating human intelligence, and here is what they told me:

 

Steve Grand:

 

Take the simplest possible method of brute force AI: a straightforward list of the answers to all anticipated questions. You can draw a network of all the possible relationships between objects and verbs, representing the number of possible questions and answers. If the knowledge domain only has one object then there are very few questions that could be asked about it. If there are two objects then you can ask questions about each object, but also about relationships between the two (is lead denser than water?). As you add more objects the number of questions rises as the factorial. Clearly there are more questions that could be asked about a world containing a few dozen objects than there are subatomic particles in the entire universe. So quite quickly you reach the point at which the universe simply isn't big enough to hold a computer that could store the necessary answers. So you obviously have to take a more sophisticated approach.
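Grand's scaling claim is easy to put rough numbers on. Taking his factorial growth rate as given, and using the commonly cited estimate of about 10^80 particles in the observable universe (a standard rough figure, not from his remarks):

```python
import math

# Find the smallest n at which n! exceeds the estimated number of
# particles in the observable universe (~1e80).
PARTICLES = 10 ** 80

n = 1
while math.factorial(n) < PARTICLES:
    n += 1
print(f"n! first exceeds 1e80 at n = {n}")
# -> n = 59: once the knowledge domain holds about five dozen objects,
#    a lookup table of all the answers no longer fits in the universe.
```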

 

The most sophisticated approach would be an accurate model of a human brain, configured by real personal experiences of the world. This is clearly capable of passing the Turing test, and it scales efficiently, but it's Strong AI. So where is the point between these two extremes at which the results of a cheat are sufficiently convincing -- and does that method of representation scale well enough not to require more bits than a manageable chunk of the universe could hold? My feeling is, it doesn't scale well at all -- there is no substitute for the structure of the brain itself -- the brain is its own best description, and any other valid description contains vastly more bits than a brain (or even a thousand brains).

 

Ben Goertzel:

 

I think that faking intelligence in a Turing-test context is almost surely possible, but only using many orders of magnitude more computing power than exists in the human brain. Mathematically, one can prove that it IS possible if one has sufficiently much computing power -- but this theorem doesn't tell you much of any practical use, because the proof doesn't tell you whether the amount of computing power is, say, bigger or smaller than the amount of computing power in the entire universe.

 

Hugo de Garis:

 

There's a huge difference between high bit rate computers and high intelligence. A high bit rate is a necessary condition for an intelligent machine, but not sufficient. To be sufficient, the bits in the circuitry have to be connected in brain-like ways, but we don't know how to do that yet. We will probably have to wait until nanotech gives us powerful new tools to investigate the principles of how the brain performs its magic. Then we can put those principles into machines and get the same level of intelligence performing a million times faster, i.e., at light speed compared to chemical speed. My view of the timing is that we won't have real nanotech until the 2020s; then in the 2030s there will be an explosion of knowledge in neuroscience, which we will be putting into brain-like computers in the 2040s.

 

Steve Lehar:

 

The Turing test is a pretty strange benchmark from the outset. The idea is to restrict the "user interface" to something that computers can handle, like a text I/O interface. But the REAL mark of intelligence, human or otherwise, is the ability to walk into the room and sit down in front of the user interface, with the kind of grace and elegance that a cat or lizard or snake can demonstrate, even if they can't figure out how to read the screen or type an input. If we could replicate even THAT degree of intelligence and biomechanical grace, we would be much farther advanced in replicating human intelligence.

 

I think the Turing test is a very biased and restricted benchmark, designed more to demonstrate the "abilities" of our stupid digital computers than to release the potential of true human or animal intelligence. How about an anti-Turing test, where the creature or machine has to walk into a room, identify where the user interface is located, and sit down in front of it? How long would Kurzweil suppose it would take before we can accomplish THAT in artificial intelligence?

 

One of the big surprises of the search for artificial intelligence has been the fact that the "white collar" type tasks, such as calculating Boolean logic, solving differential equations, navigating a spaceship to the moon and back, are apparently the "easy" problems of computation, while the more basic "blue collar" tasks of getting dressed in the morning, identifying the wife and kids to communicate the appropriate kisses and nods, and driving the body to work, are actually the REAL frontiers of human intelligence; we have NO IDEA how they are done.

 

Paul Almond:

 

What do you mean by a Turing test pass? Do you mean fooling the average person into thinking that they are talking to a human for 5 minutes? 5 days? 5 years? As an example, would you require the machine to reproduce anything like the detailed e-mail exchanges we have had for a long time now? Would you expect the e-mail messages you have sent me to be answerable, to some degree?

 

I think this is where we can run into problems. Given a sufficiently prolonged exchange, passing the Turing test would probably be as hard as having full consciousness anyway -- because of the scope a person has for catching the computer out -- so I don't really see a proper Turing test pass as a particularly easy problem.

 

I think that mimicry of consciousness would imply consciousness, but I don’t think it could be done by brute force. I think it would require cleverness in software design of some kind. This means I do not expect huge processing power, in itself, to deliver a Turing test pass. However, when we get such [super-fast] hardware, a lot of AI research will become easier. Furthermore -- and this is a big point -- lots of AI algorithms that might have been impractical before now become practical, so a lot of speculation can now be tested experimentally. I think it would speed up AI research a great deal, and the start of true AI might emerge not many years after.

 

There is one exception where brute force could clearly deliver AI: if you had the ability to somehow record the structure of a human brain with sufficient accuracy, you could then get a computer to "run" your image of that brain, and you would have an AI system. You would not know how it worked without some research -- it would not even know itself -- and your AI might not thank you for it: it would have the memories and personality of whatever brain you used.
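As a cartoon of what "running" a recorded brain would mean computationally, the sketch below steps a toy spiking network forward from a stand-in wiring diagram. Everything here, from the leaky integrate-and-fire neurons to the random weight matrix, is my own drastically simplified illustration; the one faithful feature is that the program executes the dynamics without any understanding of them, which is exactly Almond's point:

```python
import numpy as np

# Toy "brain emulation": the weight matrix W stands in for a scanned
# connectome; the loop simply runs whatever dynamics it encodes.
rng = np.random.default_rng(0)
N = 1000                                        # neurons (a brain has ~86 billion)
W = rng.normal(0.0, 1.0, (N, N)) / np.sqrt(N)   # stand-in for the scan
v = np.zeros(N)                                 # membrane potentials
spikes = rng.random(N) < 0.05                   # some initial activity

for step in range(100):
    v = 0.9 * v + W @ spikes    # leak, plus input from spiking neighbors
    spikes = v > 1.0            # fire on crossing the threshold
    v[spikes] = 0.0             # reset the neurons that fired

print("Neurons firing at step 100:", int(spikes.sum()))
```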

 

In 2001: A Space Odyssey, HAL acted as if he was conscious -- but was he? We'll never know for sure, but if one day brute force computation conquers many of the problems associated with artificial intelligence, the question of machine sentience may be a whole lot easier to answer.

 

Reporter: Do you believe HAL has genuine emotions?

 

Dave Bowman: Well, he acts like he has genuine emotions. Of course, he was programmed that way to make it easier for us to talk with him. But as to whether or not he has real feelings is something I don't think anyone can truthfully answer.

 

__________

*The Elo rating system is a method for calculating the relative skill levels of players in two-player games such as chess and Go. A player rated about sixty points above an opponent is expected to score roughly 58 percent of the possible points.

 

Machines Like Us


But was he [HAL] actually a conscious being -- an emergent byproduct of some robust future algorithm -- awake and aware of his surroundings? Or was he instead a masterpiece of human simulation, produced by the interplay between cleverly designed software and extremely fast -- but conventional -- hardware?

 

Real consciousness, simulated consciousness. What's the difference?


If it quacks like a duck...

 

Consciousness could be considered an emergent property of the chemical reactions taking place in our brains, and this could be represented by an algorithm (in fact, it partially has been, in several models of neural tissue).

 

If you ran this algorithm on a computer instead of a chemical system, would it be consciousness?

 

I'd say yes.

 

HAL certainly had all the signs of being sentient: it could abstract, it had awareness through the ship's sensors and cameras (like our nerves and eyes), it had the ability to control various parts of the ship ("I'm afraid I can't do that, Dave"), and it made decisions based on that data.


I subscribe to functionalism, and by that standard, HAL is an adequate "duck type" of consciousness (to borrow a term from dynamically typed programming languages and AI).
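For readers who haven't met the term, duck typing makes the functionalist analogy concrete: the calling code never inspects what an object is made of, only whether it behaves as required. A toy Python illustration (the classes and the "test" are mine, and obviously not a real criterion for consciousness):

```python
# Duck typing as a programmer's functionalism: judge the behavior,
# never the substrate.

class Human:
    def respond(self, prompt: str) -> str:
        return f"Let me think about '{prompt}' for a moment."

class HAL9000:
    def respond(self, prompt: str) -> str:
        return f"I am completely operational. As for '{prompt}': yes."

def behaves_consciously(subject) -> bool:
    """A (toy) functional test: anything with a sensible respond() passes."""
    reply = subject.respond("Are you aware of your surroundings?")
    return isinstance(reply, str) and len(reply) > 0

for subject in (Human(), HAL9000()):
    print(type(subject).__name__, "->", behaves_consciously(subject))
```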

 

My opinion is that a computer simulation of a human (at the cellular/molecular level) would be every bit as conscious as a real human. Just how conscious an arbitrary computer program X is depends on what algorithm program X is using.

 

THE SINGULARITY IS NEAR! PHEER!


"When does a perceptual schematic become consciousness? When does a difference engine become the search for truth? When does a personality simulation become the bitter mote... of a soul?"-Dr Alfred Lanning(iRobot)


Real consciousness, simulated consciousness. What's the difference?

 

The difference is that with the former, the entity would be self-aware; with the latter, it would have no self-awareness.


The difference is that with the former, the entity would be self aware; with the latter, it would have no self-awareness.

 

You mean that it would have simulated self-awareness. But how would you (objectively) tell the difference?


3 weeks later...

Nice post, NLN. Said a lot.

 

I'm interested in the same thing. (If I ever have anything to add, I will...)

 

But I will note that there are *humans* who wouldn't pass a "Turing test".

And I think the protesters who say (in effect) that an AI should be "perfect" are wrong.

 

A.I. will 'think' like a 747 flies. Not like a sparrow. A 747 is not a perfect sparrow.

