
Artificial Intelligence


The Angry Intellect


If sam is truly clever, it would eliminate the threat without being implicated in anything illegal.

Why would the law be of Sam's concern? I'd be afraid of clever ways of hiding evidence or preventing his implication in any crime.

If it isn't that clever, maybe sam should not exist.

Again, what if Sam is clever enough to avoid being a suspect? Or what if Sam is clever enough to persuade legal figures to aid him?

 

I think part of the problem in understanding what he would possibly do is how vaguely defined Sam is.


Why would the law be of Sam's concern? I'd be afraid of clever ways of hiding evidence or preventing his implication in any crime.

Again, what if Sam is clever enough to avoid being a suspect? Or what if Sam is clever enough to persuade legal figures to aid him?

 

I think part of the problem in understanding what he would possibly do is how vaguely defined Sam is.

"Why," yes, agree.

"Again," yes, agree.

"Vaguely defined," true. We can only imagine by assuming sam's brain is made of artificial neurons and that its thinking processes are therefore similar to ours. Consider sam built either with or without emotions, and estimate its thought processes from there. We can assume sam's brain was modeled after ours and will therefore think in a similar manner. It's the best we can do ATM. It may be incorrect, because the intent to simulate us will not necessarily achieve a reasonable facsimile. In other words, sam may be mentally deranged. In that case, we must hope the developers can turn sam off.


Emotions may be necessary, but they introduce additional scary scenarios. For example, what if sam falls in love and his heartthrob rejects him in favor of a person? What will sam do?

To what extent can he identify his emotional states and the emotional states of others? Is this also on par with humans, or is it "quicker," as are his reasoning and problem-solving skills? If he only has the ability to recognize his own emotional states then he won't consider others, but he could also learn to recognize others' emotional states and act on them.

 

I guess a better question is how emotionally intelligent are we starting him off? Knowing that might help answer what he'd do.


To what extent can he identify his emotional states and the emotional states of others? Is this also on par with humans, or is it "quicker," as are his reasoning and problem-solving skills? If he only has the ability to recognize his own emotional states then he won't consider others, but he could also learn to recognize others' emotional states and act on them.

 

I guess a better question is how emotionally intelligent are we starting him off? Knowing that might help answer what he'd do.

 

 

Then we're back to my post #40.

 

 

I think it would depend on how it gained sentience:

1/ did it grow towards it by learning from a benign creator?

2/ was it driven to it by a megalomaniac?

3/ was it spontaneous with little external influence?

Think of your own formative influences: what would you do with a super intelligence that could manipulate the world?

 


To what extent can he identify his emotional states and the emotional states of others? Is this also on par with humans, or is it "quicker," as are his reasoning and problem-solving skills? If he only has the ability to recognize his own emotional states then he won't consider others, but he could also learn to recognize others' emotional states and act on them.

Including emotions introduces not-so-scary scenarios, too. In both cases, scary and not scary, emotions complicate our thought experiments, and they complicate the job of designing and building sam.

 

Some research AI systems today simulate emotions, but AFAIK current production-level AI used for playing games (Go, chess, and Jeopardy), used by businesses (financial transactions), and used by the public (voice recognition) does not include emotions. These systems will improve over time, and new ones with ever greater capabilities will be developed. Thus, it seems reasonable that the first sam will not have emotions; of course, this may be an incorrect assumption.


 

 

Then we're back to my post #40.

 

 

I agree that it depends on Sam's experience (sticking with Sam and not your "super intelligence"), so what is that? How and on what data has he learned to achieve the intelligence he already has? Another possibility is a creator with little foresight.

...

Thus, it seems reasonable that the first sam will not have emotions; of course, this may be an incorrect assumption.

I agree.


I agree that it depends on Sam's experience (sticking with Sam and not your "super intelligence"), so what is that? How and on what data has he learned to achieve the intelligence he already has? Another possibility is a creator with little foresight.

 

 

Super intelligence has to be a given in this thread.


 

 

Super intelligence has to be a given in this thread.

That's not clear given the OP.

 

Greetings,

 

I would like to see what people think in relation to Artificial Intelligence.

 

More so, to help me get a better understanding of why some big-name people in the technology industry (e.g. William Gates) actually fear true AI being developed and used.

 

What is there to fear?

 

What do you actually think would happen if mankind developed true AI and let it have access to the internet?

 

If you think it would go crazy and decide to hurt humans in some way or destroy all our data then please explain why you think this.

 

Wouldn't the concept of "taking over" or wanting to harm other life-forms just be a human thought process and not even be a relevant issue with true AI?

 

I look forward to hearing your views on the matter; it intrigues me greatly. :)


 

 

Nevertheless, what's to fear from a machine that just bursts into tears when challenged?

Whatever the tears are made of. :P

 

What of a machine that gets uncontrollably, perhaps homicidally, angry at the sight of the color red? It's quite obvious you'd want that to be shut off.


 

 

If it can't outwit us, what's to fear?

Seemingly nothing, but who's to say it can't outwit us? We haven't established what it learned on, how it compares to humans in solving problems, etc., just a dangerous connection between red and anger/violence in some hypothetical machine, which tells us nothing about its wit; really, that's all we've been doing by loosely constructing these hypothetical machines.


Seemingly nothing, but who's to say it can't outwit us? We haven't established what it learned on, how it compares to humans in solving problems, etc., just a dangerous connection between red and anger/violence in some hypothetical machine, which tells us nothing about its wit; really, that's all we've been doing by loosely constructing these hypothetical machines.

 

 

Let's not run around in circles on this point; given the number of people on this planet and what we've accomplished, super intelligence should be a given, because for us to fear such a machine its wit must exceed the collective wit of humanity.


 

 

Let's not run around in circles on this point; given the number of people on this planet and what we've accomplished, super intelligence should be a given, because for us to fear such a machine its wit must exceed the collective wit of humanity.

Whether it's the machine that bursts into tears or a super intelligence, these systems learn. You do acknowledge that your machine that bursts into tears can potentially learn to do otherwise, right? What if it learns to fight back, whether by force, by implicating the aggressor in a crime which ruins their life, or by taking something dear from them?

 

I don't know why you're hung up on super intelligence. The simple example in this video is scary, and it's a stamp collector:

It seems like the kinds of fears you're thinking of are more grand and Terminator-esque than some simpler ones that are also scary, though not as action-packed.


Let's consider how sam's neurons might be made. I can think of two technologies, plus a third added as a PS:

  1. Synthetic nanotechnology neurons that cannot be programmed, and
  2. Tiny microprocessors simulating neurons that can be programmed.
  3. PS: using the WWW.

Sam is a research and development project, and researchers will want to be able to improve on the neurons as their research discovers additional things about biological neurons. They will prefer technology 2, microprocessors, but it is conceivable that the microprocessor solution will be too large, too power-hungry, or limited in another way, thus forcing researchers to use technology 1.

 

If tech 2, then it seems reasonable that sam would learn to reprogram its neurons and thereby increase its intelligence, possibly achieving super intelligence. If tech 1, then the only option for increasing intelligence would be to add neurons to the brain (also possible for tech 2), which is more difficult than reprogramming. Adding neurons might also be obvious to everyone, because the container for sam's brain (head) would have to be larger.

 

Since these are future technologies, it is necessary to add a caveat. Sam might be able to reprogram tech 1.

 

There may be no advantage to reprogramming the neurons; in which case, adding neurons would be the only possibility of sam making itself super intelligent.

 

PS. Sam would almost certainly use cloud resources to increase its capability. It might totally take over the WWW and leave man without a network, just to make itself smarter.

 

If sam is built without emotion, it wouldn't want increased intelligence. It might, however, decide it needed more intelligence to solve some problem, although I do not know of such a problem.


If we want to look at this at least slightly more rigorously, it might be worth considering what emotion actually is from a results-oriented perspective rather than just how emotions feel and how people stereotypically act as a result of them.

 

In short, you respond to input differently when you are angry (or sad, happy, etc) than when you are not.

 

There's no reason an AI couldn't be programmed with situationally dependent adjustments to the weights in a decision tree (which is effectively what an emotional state would look like in an AI from a practical perspective), but there's no reason that an AI's emotions would have to look anything like a human's, or that its responses to those emotions would have to resemble a human's.

 

Anger, for example, is a response to a situation with no apparent acceptable solution. You could program an AI to adjust its decision making when encountering such a problem. It may even be a good idea to allow for a change in the way decisions are made when there doesn't seem to be an acceptable decision available under the normal way of processing them, but there's no reason that the new set of responses would need to be violent ones just because that's how humans tend to react. You would need to hardwire or teach the AI a violent set of responses, when you could program or teach it a set of responses to the situation that are entirely unlike what a human would have.
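
As a rough sketch of that "adjusted weights" idea (all state names, actions, and numbers below are invented for illustration, not taken from any real system): an emotional state can be modeled as nothing more than a set of multipliers applied when candidate actions are scored, and whether a state like frustration ever produces the violent choice depends entirely on which multipliers the designer wires in.

```python
# Minimal sketch: an "emotional state" as a situational adjustment to decision
# weights. All states, actions, and numbers are invented for illustration.

BASE_WEIGHTS = {"negotiate": 0.8, "wait": 0.5, "ask_for_help": 0.4, "escalate": 0.2}

# Each "emotion" multiplies the base weights differently.
EMOTION_ADJUSTMENTS = {
    "calm":               {"negotiate": 1.0, "wait": 1.0, "ask_for_help": 1.0, "escalate": 1.0},
    "frustrated_violent": {"negotiate": 0.5, "wait": 0.2, "ask_for_help": 0.5, "escalate": 5.0},
    "frustrated_benign":  {"negotiate": 0.5, "wait": 0.2, "ask_for_help": 3.0, "escalate": 1.0},
}

def choose_action(emotion: str) -> str:
    """Pick the action with the highest adjusted weight for the current state."""
    adjust = EMOTION_ADJUSTMENTS[emotion]
    scores = {action: weight * adjust[action] for action, weight in BASE_WEIGHTS.items()}
    return max(scores, key=scores.get)

print(choose_action("calm"))                # negotiate
print(choose_action("frustrated_violent"))  # escalate
print(choose_action("frustrated_benign"))   # ask_for_help
```

Both "frustrated" states change the decision making in exactly the same structural way; only the multipliers the designer chose decide whether the result is escalation or asking for help.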


You would need to hardwire or teach the AI a violent set of responses, when you could program or teach it a set of responses to the situation that are entirely unlike what a human would have.

Assuming it has an accurate model of reality, if it's allowed to search the solution space unhindered and it comes to a solution with high utility which happens to be taking some violent actions, or includes a violent action among the sequence of actions, then why would it need to be taught or even built with those responses? Do you mean built to have access to the necessary tools? Having an internet connection might be enough to do some dangerous, potentially nuclear things.
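
A rough sketch of that worry (the plan names and utility numbers are invented for illustration): if the agent simply searches its available plans and takes the one that scores highest, a harmful plan never has to be taught; it only has to be representable and happen to score well.

```python
# Sketch of an unconstrained utility maximizer. Plans and utilities are
# invented for illustration; the violent option is never "taught", it is
# simply one of the representable plans and happens to score highest.

plans = {
    "ask_permission_first": 3.0,
    "buy_resources_on_market": 5.0,
    "seize_resources_by_force": 9.0,  # violent, but best on raw utility
}

def best_plan(candidates):
    """Return the plan with the highest utility, with no other constraints."""
    return max(candidates, key=candidates.get)

print(best_plan(plans))  # -> seize_resources_by_force
```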


That depends somewhat on the problem that it is exploring the solution space of. You have to have some sort of starting parameters one way or another.

 

You can be lazy or incautious and set an AI on a path that will lead to violent tendencies (given that it has the capacity to implement violent behaviors), but that is going to depend on what sorts of tasks and goals an AI is applied to and what kinds of parameters are set for evaluating potential solutions.
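
Continuing the same hypothetical sketch, the "parameters set for evaluating potential solutions" can be as simple as a penalty weight on estimated harm (the plans, harm figures, and weight below are invented); with that term in place, the same search selects a different plan.

```python
# Same invented plans, now scored as utility minus a weighted harm estimate.
# The evaluation parameters, not the search itself, determine whether the
# violent plan is ever chosen.

plans = {
    # name: (raw_utility, estimated_harm)
    "ask_permission_first":     (3.0, 0.0),
    "buy_resources_on_market":  (5.0, 0.0),
    "seize_resources_by_force": (9.0, 8.0),
}

HARM_PENALTY = 2.0  # a design choice, i.e. an evaluation parameter

def best_plan(candidates):
    """Pick the plan maximizing utility minus the weighted harm estimate."""
    def score(name):
        utility, harm = candidates[name]
        return utility - HARM_PENALTY * harm
    return max(candidates, key=score)

print(best_plan(plans))  # -> buy_resources_on_market
```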


That depends somewhat on the problem that it is exploring the solution space of. You have to have some sort of starting parameters one way or another.

I agree, and it would be somewhat pointless to introduce these hypothetical machines without a problem to solve.

 

+1


If we want to look at this at least slightly more rigorously, it might be worth considering what emotion actually is from a results-oriented perspective rather than just how emotions feel and how people stereotypically act as a result of them.

 

In short, you respond to input differently when you are angry (or sad, happy, etc) than when you are not.

 

There's no reason an AI couldn't be programmed with situationally dependent adjustments to the weights in a decision tree (which is effectively what an emotional state would look like in an AI from a practical perspective), but there's no reason that an AI's emotions would have to look anything like a human's, or that its responses to those emotions would have to resemble a human's.

 

Anger, for example, is a response to a situation with no apparent acceptable solution. You could program an AI to adjust its decision making when encountering such a problem. It may even be a good idea to allow for a change in the way decisions are made when there doesn't seem to be an acceptable decision available under the normal way of processing them, but there's no reason that the new set of responses would need to be violent ones just because that's how humans tend to react. You would need to hardwire or teach the AI a violent set of responses, when you could program or teach it a set of responses to the situation that are entirely unlike what a human would have.

AI is not programmed like an accounting system; that is why an AI cannot simply be programmed to adjust its decision making. Microsoft recently put an AI teen girl online and deleted her within 24 hours.

 

Microsoft deletes 'teen girl' AI after it became a Hitler-loving sex robot within 24 hours

One programs an AI learning system, and lets it learn, much as people learn. Without emotions there will be unpredictable results. If you add emotions, you must decide what effect the emotion will have before learning begins. Once the learning begins, the emotional system is on automatic. There will be even more unpredictable results, which is beyond my ability to SWAG. Even without emotions, my SWAGs are iffy.

 

I'm not saying AI should not be built with emotions. However, one typically engineers novel things by beginning simple and mastering that, then adding a little complexity (one emotion) and mastering it, then adding another. I'd recommend not even putting in a hunger circuit to begin with. Without fear of "death" it would have no particular reason to "eat," and it would expire if a person didn't plug it in to recharge its batteries.

 

A pathological killer may not empathize, but they must feel emotion when killing. Otherwise, why would they kill?


Fear isn't the only motivator. Actually, hunger should work all on its own. You train the system to decrease its feeling of hunger. Eating decreases hunger. Once it tries eating, that should reinforce itself well enough just with that.

 

Even most people eat more because they don't want to feel hungry than because they are afraid of dying if they don't get food.
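
A rough sketch of that reinforcement idea (the actions, numbers, and learning rule below are invented for illustration): if the reward signal is simply how much a hunger variable went down, "eat" reinforces itself without any notion of fear or death.

```python
import random

# Sketch: hunger reduction as its own reward. Nothing here models fear or
# death; "eat" gets reinforced purely because it lowers hunger.

actions = ["eat", "wander", "sleep"]
values = {a: 0.0 for a in actions}  # learned value estimates
hunger = 10.0
LEARNING_RATE = 0.5

def hunger_change(action):
    return {"eat": -4.0, "wander": 1.0, "sleep": 0.5}[action]

for step in range(50):
    # epsilon-greedy: usually take the best-valued action, sometimes explore
    if random.random() < 0.2:
        action = random.choice(actions)
    else:
        action = max(values, key=values.get)
    delta = hunger_change(action)
    hunger = max(0.0, hunger + delta)
    values[action] += LEARNING_RATE * (-delta - values[action])  # reward = hunger drop

print(values)  # "eat" ends up with by far the highest value
```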


Fear isn't the only motivator. Actually, hunger should work all on its own. You train the system to decrease its feeling of hunger. Eating decreases hunger. Once it tries eating, that should reinforce itself well enough just with that.

 

Even most people eat more because they don't want to feel hungry than because they are afraid of dying if they don't get food.

Currently implemented AI systems don't have any feelings. As those systems are improved, they will not suddenly experience feelings. For example, Google Translate can be improved so it does a better job of translating. For an AI system to experience any emotion (fear, hunger, frustration, love, etc.), someone must design subsystems to emulate emotions.

 

Suppose Google merges their search engine with translation, mapping, and Scholar, improves it so that you can interact with it verbally, and calls it Google Chat. You can talk to Chat like you can talk to a person. Chat has no emotions; it just does net searches and interacts with you more or less like a research librarian. Does it need emotions? Would a research librarian with attitude be a benefit?
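
To make the "someone must design subsystems" point concrete, here's a rough sketch (the class and its API are invented, not any real Google product): improving the underlying capability never adds feelings; "attitude" only shows up if a separate module is deliberately designed and wired in.

```python
from typing import Callable, Optional

# Sketch with an invented API: emotions do not emerge from improving a
# capability; they exist only if someone deliberately wires in a subsystem.

class Assistant:
    def __init__(self, emotion_subsystem: Optional[Callable[[str], str]] = None):
        # By default there is no emotion subsystem at all.
        self.emotion_subsystem = emotion_subsystem

    def answer(self, query: str) -> str:
        result = f"search results for '{query}'"  # stand-in for the real capability
        if self.emotion_subsystem is None:
            return result                         # improving this path adds no feelings
        return self.emotion_subsystem(result)     # "attitude" only by explicit design

plain = Assistant()
moody = Assistant(emotion_subsystem=lambda text: text + " (sigh... fine.)")

print(plain.answer("research librarian"))
print(moody.answer("research librarian"))
```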

