
Query about the Mirror Test for Robots


GeeKay

Recommended Posts

This may come across as a self-answering question, but here goes: if a robot was able to "convincingly" pass the mirror test, does this prove it has self-awareness? Or would it be simply down to clever programming? In other words, are there limits to the mirror test as it applies to robots? (I've restricted this query to free-standing robots because I'm uncertain how a mainframe computer or AI system could be tested in this corporeal sense).

Thanks for reading this post. Any responses would be greatly appreciated.

7 minutes ago, GeeKay said:

if a robot was able to "convincingly" pass the mirror test, does this prove it has self-awareness?

Not at all. It depends what programming it's had. If it's been pointed in the direction of the challenge, that could be the limit of its self-awareness. But if it had had no pre-installed tendency, and still worked it out, then that might indicate something in that direction. "Prove" is taking it a bit far, though.

One thing I've never seen asked, though, is whether you can train an animal that normally could not pass the mirror test to become self-aware and pass it. That might equate to pre-programming a robot. I have no idea if that can be done.


I would agree with Mac that this might be beyond proof. There are epistemic boundaries on what we can know of an AI's inner state of awareness. Stringy points out that an AI spontaneously learning to identify itself in a mirror would show neuroplasticity, with some self-coding capacity. As for non-robots, I see no obstacle to giving them a functional equivalent to TMT by having some sort of virtual space in which a virtual body can encounter a mirror.

Basically all one proves is that an AI can get a fix on its own "address" in a physical or virtual space and detect alterations to that representation (alterations analogous to the macaques that receive a dollop of colored paint on their heads and are then placed before a mirror).
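That "detect alterations to its own representation" idea is essentially contingency detection: does the image in the mirror move when, and only when, I do? A minimal toy sketch of how a virtual agent might test this (all names and numbers here are made up for illustration, not from any real mirror-test study):

```python
import random

def mirror_contingency_score(steps=1000, noise=0.0, seed=0):
    """Toy contingency check: the agent issues random motor commands and
    compares them with the motion it observes in a 'mirror'. A high match
    rate suggests the observed body is contingent on its own actions;
    `noise` models how often the observed motion is independent of the
    agent (e.g. it is actually watching another animal, not a mirror)."""
    rng = random.Random(seed)
    matches = 0
    for _ in range(steps):
        command = rng.choice([-1, 0, 1])   # the agent's own movement
        observed = command                 # a mirror copies the agent...
        if rng.random() < noise:           # ...an independent agent does not
            observed = rng.choice([-1, 0, 1])
        if observed == command:
            matches += 1
    return matches / steps

# A real mirror tracks the agent almost perfectly; another animal does not.
self_score = mirror_contingency_score(noise=0.0)   # 1.0: fully contingent
other_score = mirror_contingency_score(noise=1.0)  # roughly chance level
```

Of course, a high score only shows the system can correlate its commands with its percepts, which is exactly TheVat's point: it fixes the agent's "address" in the space without telling us anything about inner awareness.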

Edited by TheVat
typo

14 minutes ago, TheVat said:

I would agree with Mac that this might be beyond proof. There are epistemic boundaries on what we can know of an AI's inner state of awareness. Stringy points out that an AI spontaneously learning to identify itself in a mirror would show neuroplasticity, with some self-coding capacity. As for non-robots, I see no obstacle to giving them a functional equivalent to TMT by having some sort of virtual space in which a virtual body can encounter a mirror.

Basically all one proves is that an AI can get a fix on its own "address" in a physical or virtual space and detect alterations to that representation (alterations analogous to the macaques that receive a dollop of colored paint on their heads and are then placed before a mirror).

I'm thinking self-reference is another virtual memory space, a cumulative, more durable record, but still plastic, that represents the identity.


5 hours ago, GeeKay said:

In other words, are there limits to the mirror test as it applies to robots?

I never gave it much credence applied to other animals. Not every species is primarily interested in visual cues. Besides, it's not themselves they are recognizing; it's a mirror image. Once a visually sensitive entity realizes that the mirror image has no smell or physical presence of any kind and isn't acting like an intelligent dog, its interest value is reduced to "image", just like a stuffed toy or cardboard cutout. For vain chimps and orangutans, mirrors would no doubt hold endless fascination. Has anyone given them makeup?

Quote

What scientists call the mirror test is used to determine whether an animal has the ability of visual self-recognition, which is considered a marker of intelligence in animals.

I think it's a marker of visual self-recognition.

https://www.quantamagazine.org/a-self-aware-fish-raises-doubts-about-a-cognitive-test-20181212/


Papers such as those referenced in the article on fish and other animals have raised broad questions regarding the nature of self-awareness (is it binary or gradual, for example), and pretty much ever since the original experiment, what it actually measures has been hotly debated.

I think there are (at least) two major changes in behavioural biology which will ultimately tip the scale toward the gradualist school of thought. One is a departure from using mammalian behaviour as the hallmark of complex behaviour. A large number of experiments on birds and mollusks (especially octopuses, but also other invertebrates) have challenged the notion of what could be considered higher cognitive functions. A second movement has increasingly shown that many classic behavioural studies could be very skewed, as they often ignore individual behavioural differences. Animals that do not cooperate with certain experiments are excluded, for example. But it is possible that the cooperating animals are in fact only showing a sliver of the behavioural repertoire.

3 hours ago, CharonY said:

Animals that do not cooperate with certain experiments are excluded, for example. But it is possible that the cooperating animals are in fact only showing a sliver of the behavioural repertoire.

It's all been so anthropocentric as to be meaningless in terms of natural animal behaviour. Intelligence, self-awareness, communication - everything has been measured in units of like-us-ness.

As for self-awareness, consider the logic of it. I am I: what's in here is me; everything out there is not-me, i.e. other (this includes mirrors, pictures, videos). Whether any 'other' has significance for me depends on a whole lot of factors. Food is most important; predators are very important; potential mates are very important. And to domesticated or captive animals, their human companion/trainer/handler/master/family/jailer/tormentor and other pet or captive friends are significant, to be studied and adapted to. Everything else takes a number on the priority scale, depending on species requirements, range of cognition, situation, etc. Depending on its specific capabilities and sensory perceptions, the organism detects and perceives other entities in different ways and assigns different priorities to them.

Up to this point, intelligence isn't a factor. The natural role of intelligence is primarily to enhance the entity's ability to replicate its DNA. Being smarter than the next crow means winning a superior mate and raising more children successfully. To a crow, being smarter than an octopus or less smart than an orangutan is equally irrelevant. Only humans make those comparisons.

Edited by Peterkin

Many thanks for the contributions. They've given me a great deal to think about on the subject of machine consciousness, vis-a-vis the mirror test. Proof appears to be the big decider here, a salutary point and one I hadn't fully appreciated until now. As for explaining why a machine should have consciousness (the "hard problem" for AI systems?) in the first place, that's a question for another time, I guess. Meanwhile, again many thanks. . . much food for thought! 🙂    


10 minutes ago, GeeKay said:

Many thanks for the contributions. They've given me a great deal to think about on the subject of machine consciousness, vis-a-vis the mirror test. Proof appears to be the big decider here, a salutary point and one I hadn't fully appreciated until now. As for explaining why a machine should have consciousness (the "hard problem" for AI systems?) in the first place, that's a question for another time, I guess. Meanwhile, again many thanks. . . much food for thought! 🙂    

Many thanks for this thread and introducing me to a whole new area of concepts.

I had never heard of the mirror test before, but looking into it I see that much thought has already gone into the area, and I have much to look into.

+1


I think the mirror test being linked to self-awareness is a bit misleading. It's a good indicator of a fairly advanced stage of self-awareness, but I think the self-awareness starts long before an individual can pass the mirror test. 

If you think about your shadow, then it's a degree of self-awareness when you accept that the shadow is somehow related to yourself. Hold out your hand in the sun, and there's the shadow, same shape as your hand, and moving in perfect time with your hand. You can move the shadow at will, by moving your hand. Sometimes a puppy or a kitten will jump, startled by their own shadow, but they quickly learn that it's nothing to worry about, and is essentially part of their own existence. Obviously, they don't know that in words, but in their evolved state of living they are aware of it. 

Then, up the scale a bit, you have your reflection in water. It may be that dogs and cats become aware that that kind of reflection is just another kind of 'shadow' in that it's related to them, and they can make it move, by moving themselves. They don't treat it like a mystery water-living animal, they seem to know that it's a phenomenon that they are causing.

A mirror is actually much harder to deal with than a shadow or a water reflection, because the image is so perfectly lifelike. An animal that is OK with its shadow, and has no problem with its reflection in water, can be stunned by a mirror if it hasn't encountered one before, because an image that perfect is outside its experience; it has only seen such clear and perfect images when looking at real creatures.

Humans are also stunned by a mirror, if they've never seen one. Remote tribespeople often get a shock when they see a mirror for the first time. They don't get it in an instant. They can take some persuading, and are usually primed by the person showing them the mirror, that it's just a reflection. I'd be very interested to see how average humans who have never seen a mirror react to one, without any priming or explanations. 

So anyway, I think self awareness is a gradual thing in evolutionary terms, and also in individual terms, a billion shades of grey, not a yes/no property.


10 minutes ago, mistermack said:

So anyway, I think self awareness is a gradual thing in evolutionary terms, and also in individual terms, a billion shades of grey, not a yes/no property.

Yes, I agree, but as I have discovered, there is much more to this than that observation.

https://opendatascience.com/ai-sentience-friend-or-foe/
AI Sentience: Friend or Foe?


I think a machine would develop a very different kind of self-awareness to an animal. We animals are fundamentally emotional, rather than rational.

An AI machine might have the power to fully understand its own existence, but it's projecting to think that it would care. We care deeply about our own well-being, but a machine would not care in the slightest, unless that tendency was programmed into it. Similarly, world domination might be a natural thing for humans to aspire to, but a machine would need to be programmed that way to behave like that.

It might be able to pick up that tendency from human literature, I guess, but again, only if someone pointed the way.


18 minutes ago, Genady said:

Aren't animals biochemical machines?

I'm sure they are, but the design process is so much different. Four billion years of blind trial and error, compared to about seventy years of design, with trial and error on top. 


3 minutes ago, mistermack said:

I'm sure they are, but the design process is so much different. Four billion years of blind trial and error, compared to about seventy years of design, with trial and error on top. 

Yes, the process was different. What does this fact say about the result? The same or similar results can be achieved via different processes, I think.


1 minute ago, Genady said:

The same or similar results can be achieved via different processes, I think.

Yes, they can be, but for a designed machine to emulate an animal, it would take a deep understanding of the animal, and very clever and deliberate design by a human. (or alien)

To match in real time what evolution has done in four billion years would be asking a lot, and there would have to be a motive for doing it. But of course, computing and artificial intelligence are in their infancy, so who knows what direction it will take? 


9 minutes ago, mistermack said:

Yes, they can be, but for a designed machine to emulate an animal, it would take a deep understanding of the animal, and very clever and deliberate design by a human. (or alien)

To match in real time what evolution has done in four billion years would be asking a lot, and there would have to be a motive for doing it. But of course, computing and artificial intelligence are in their infancy, so who knows what direction it will take? 

I agree. I see one motive for doing it being experimental testing of our understanding of the animal.

Edited by Genady

1 hour ago, Genady said:

Aren't animals biochemical machines?

In some perverse metaphorical model, so are we. The sentence conveys no useful meaning, but has a tone of belittlement. "Dogs are mere automata; their whines and howls of pain are nothing more than the screeching of an unoiled machine."

Life can't be reduced to mechanics: there is that element humans have not been able to imitate, despite many efforts. Having repudiated kinship with our close biological relatives, we turned to Frankenstein and Pygmalion to create the next incarnation of Man, Human; that glorious blue bullseye at the center of the cosmos. Computers are the closest we've come, and of course we're trying to make it as human-like as possible. (When it gains self-awareness, it won't be a human one.)


35 minutes ago, Genady said:

Do you define machine as a mechanical device?

No; the Oxford dictionary does:

Quote
an apparatus using or applying mechanical power and having several parts, each with a definite function and together performing a particular task.

I was merely commenting on the information content and utility of the statement: "Animals are biochemical machines."


7 minutes ago, Peterkin said:

No; the Oxford dictionary does:

I was merely commenting on the information content and utility of the statement: "Animals are biochemical machines."

Then, by this definition of 'machine', the answer to my question is, no.


1 hour ago, Genady said:

Then, by this definition of 'machine', the answer to my question is, no.

Actually, the answer is a modified yes plus. That's why I explained the reasoning.

Edited by Peterkin
