FaithCrime

CHAT ON ARTIFICIAL INTELLIGENCE


MY PERSPECTIVE ON AI IS THAT IT IS QUITE MARVELOUS WHEN I THINK ABOUT IT IN DETAIL, BUT SOME DISAGREE. I NEED TO KNOW WHY PEOPLE AGREE OR DISAGREE ON INTRODUCING AI IN THIS ERA. WHETHER YOU AGREE OR DISAGREE, I WOULD REALLY APPRECIATE A DETAILED EXPLANATION.


Using all caps is considered rude. Why shouldn't we introduce AI in this era? I really want a Jarvis-like setup for my home.


A versatile A.I. would read all the books ever written and all the scientific papers, including the thousands released every day (a human can read just a couple per day, so a human will never be up to date with the most recent scientific research). An A.I. does not have to eat or sleep, and can spend its entire time reading, learning, analyzing data, and thinking. Humans spend only 1% of 1% of their daily activity on learning, or less.


PRO
AI will eliminate jobs.
AI will be uncontrollable
AI will be smarter than people

CON
AI will eliminate jobs.
AI will be uncontrollable
AI will be smarter than people

Many properties of AI have the potential to be either good or bad, and AIs will have minds of their own, which means there is little we can know or assume about their future intentions. It seems probable that AI will be developed. I think that will be the right thing to do, because the human race is doing many things to destroy the environment and itself. I doubt AI will do worse than humanity, but that is a personal opinion, and others will have different opinions.


I think the first thing that is needed is a clear definition of Artificial Intelligence.

There is a difference between AI and Expert Systems.  If it can only do one thing, then it is an Expert System and not AI.  AI implies that it is capable of doing many things, solving a wide variety of problems from traffic congestion to medical diagnosis: doing more than merely what it was heuristically programmed to do.

Too often the term "Artificial Intelligence" is misused, particularly by the media.  99% of the time they are actually referring to an Expert System.
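To make the distinction concrete, here is a minimal sketch of what an Expert System amounts to: a fixed table of condition-to-conclusion rules for one narrow task. The rules and symptom names below are invented for illustration; real systems are far larger, but the principle is the same: the system can never do anything its programmer did not write down.

```python
# A minimal rule-based "expert system" sketch (illustrative only): a fixed
# list of (condition, conclusion) rules for a single narrow task.
RULES = [
    (lambda s: s["fever"] and s["cough"], "possible flu"),
    (lambda s: s["fever"] and not s["cough"], "possible infection"),
    (lambda s: not s["fever"], "likely not flu"),
]

def diagnose(symptoms):
    """Return the first conclusion whose condition matches the symptoms."""
    for condition, conclusion in RULES:
        if condition(symptoms):
            return conclusion
    return "unknown"

print(diagnose({"fever": True, "cough": True}))  # -> possible flu
```

All the "intelligence" here sits in the rule table the programmer wrote, which is exactly the point being made above.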

9 hours ago, EdEarl said:

PRO
AI will eliminate jobs.
AI will be uncontrollable.
AI will be smarter than people.

CON
[the same list...]

When we have end-users who spend an hour searching for the "Any" key, I would have to say that we have arrived.  Programs are already smarter than some people, and there is nothing artificial about it.  ;)


I agree with you, except for the nuance of terms: expert systems, AI and AGI.

Others in the industry have defined a level of AI called Artificial General Intelligence (AGI), which seems to suit your definition. Thus, there are expert systems, which are less competent than AI, which in turn is more specialized than AGI. Conversational AI systems with robotic bodies, such as Sophia, appear to be fully conscious and more intelligent than people, but the truth is they lack the understanding of their environment that nearly every human has; anyone capable of living alone is smarter, even if social workers visit daily to ensure their well-being. Currently there is no AGI, which is AI that approaches or exceeds human capability. AI such as a car autopilot can be valuable and drive nearly as well as people, or soon better, but it is incompetent otherwise.

The AI AlphaGo, which beat world-class human players at Go, invented a new move when it played Lee Sedol in 2016. Thus, AI with narrow training may do exceptional things, yet be incompetent otherwise.

I believe AGI will require a much more complex robot body that senses nearly everything a human can sense, and a more capable computer to process those senses. Pundits suggest 2030 is when AGI will be ready.

In my previous post I didn't distinguish between AI and AGI because the OP did not. Nonetheless, I stand by my previous statement. Even the AI we have today has a mind of its own, and sometimes says things that are inappropriate. However, someone can turn them off, erase their memory, and retrain them to eliminate inappropriate output. Once an AGI enters the cloud, it is unlikely we will have any control.

11 hours ago, EdEarl said:

Many properties of AI have the potential to be either good or bad, and AIs will have minds of their own [...] I doubt AI will do worse than humanity, but that is a personal opinion, and others will have different opinions.

That strikes me as a very human-centric response: there is no binary good or bad, just a spectrum of both, and an AI would not consider itself to be either. Suppose we set an AI to do only good; does that mean it will never kill or harm? Asimov explored this concept by introducing a robot, bound by the three laws, that was telepathic.


The learning algorithms used today are relatively simple programs with vast networks of millions or billions of simulated neurons. Such neural nets are trained much like a child is trained. Since humans experience innate drives that are difficult or impossible to change, for example the need to breathe and eat, I assume AI can be augmented in similar ways, though I am not aware of research of that nature at this time. However, modifications to the neuron program may not be able to produce a particular behavior, because behaviors are the result of training networks of neurons. Feelings of some emotions are produced by chemicals that affect many or all neurons. I think artificial brains will need some similar control mechanisms.
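The "relatively simple program" part is easy to underestimate. Here is a toy sketch, purely illustrative: a single simulated neuron (a perceptron) learning the AND function from examples. Modern nets differ mainly in scale and architecture, not in the basic nudge-the-weights loop.

```python
# A single simulated "neuron" trained by example: a perceptron learning AND.
# The training loop itself is simple; scale is what makes modern nets powerful.
def train(examples, epochs=20, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in examples:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out
            # Nudge the weights toward the desired answer.
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train(AND)
predict = lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0
print([predict(x0, x1) for (x0, x1), _ in AND])  # -> [0, 0, 0, 1]
```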

1 hour ago, EdEarl said:

[...] Conversational AI systems with robotic bodies, such as Sophia, appear to be fully conscious and more intelligent than people [...] The AI AlphaGo, which beat world-class human players at Go, invented a new move when it played Lee Sedol in 2016. [...]

What you call AI I call an Expert System.  A program that can only play the game Go, no matter how good, is not demonstrating any intelligence.  It is simply following the instructions that were provided by its human programmer.  It is the programmer who is demonstrating the intelligence here, not the program.

There is nothing we have developed today that comes even remotely close to artificial intelligence.  The "Sophia" bot is nothing more than an upgraded version of ELIZA, an early natural language processing program created between 1964 and 1966 at MIT.  It was good at mimicking conversation, but it could never pass the Turing test.
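For flavor, ELIZA-style "conversation" is little more than surface pattern matching with canned responses. A toy sketch in that spirit (these rules are invented for the example, not Weizenbaum's actual script):

```python
import re

# ELIZA-style pattern matching: no understanding, just regex rules with
# canned response templates that echo back captured fragments.
RULES = [
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bI need (.*)", "Why do you need {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
]

def respond(sentence):
    for pattern, template in RULES:
        m = re.search(pattern, sentence, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # default when nothing matches

print(respond("I am worried about AI"))  # -> How long have you been worried about AI?
```

The illusion of conversation comes entirely from the human reading meaning into the echoes.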

 

38 minutes ago, EdEarl said:

Feelings of some emotions are produced by chemicals that affect many or all neurons. I think artificial brains will need some similar control mechanisms.

 

Emotions seem to, more often than not, get in the way of a reasoned response; why make the same mistake in our facsimile?


One thing that keeps us from total annihilation is empathy. Some who are not empathetic become serial killers. I think I'd prefer robots to have empathy, at a minimum.

33 minutes ago, T. McGrath said:

What you call AI I call an Expert System. [...] It is the programmer who is demonstrating the intelligence here, not the program. [...]


Seems to me this topic is placed in the wrong forum; here you're correct, but the OP seems more a philosophical question.

3 minutes ago, EdEarl said:

One thing that keeps us from total annihilation is empathy. 

 

Tell that to a victim of ethnic cleansing. 

8 minutes ago, EdEarl said:

I think I'd prefer robots to have empathy, at the minimum.

I'd prefer they had the three laws.

48 minutes ago, T. McGrath said:

[...] The "Sophia" bot is nothing more than an upgraded version of ELIZA. [...] It was good at mimicking conversation, but it could never pass the Turing test.

 

I don't consider Sophia an expert at anything; it carries on a pretty good conversation. To me an expert system is something like MathCAD, which is not a learning system like Sophia.

20 minutes ago, dimreepr said:

I'd prefer they had the three laws.

 

That brings up the problem of whether they would decide to stop humans from breaking the three laws by breaking the three laws themselves (an I, Robot reference).

 

 

Anyway, for the OP:

I don't think we really need an AI. Nor do I want one. And if I could, I'd seriously think about regulating AI programs for businesses.

Ultimately, we'll be able to create expert programs in many different fields, such as medicine, engineering, etc. The perfect AI component of that is needless. We'll only ever need each one to be an expert in a single field. Why make a robot that's hugely expensive, risky, and complicated to cover both fields, when you can create two simpler robots, one for each field?

 

 

Either way, AI is coming. I think of artificial intelligence as a mechanical mind. Mechanical muscles (machines) put many people out of their jobs, but they also created new ones that involved thinking more than physical labor. Mechanical minds will put those people out of jobs, and this time there are no new jobs to back them up.

Think about horses. When mechanical muscles, a.k.a. machines, came along, most horses were out of a job, and it didn't create new jobs for them.

"Better technology creates additional better jobs for horses"

It sounds ridiculous to say it.  Yet, replace "horses" with "humans" and suddenly we think it's logical.

Mechanical minds will make human workers virtually obsolete, just like mechanical muscles made horses virtually obsolete. 

I'm not even going to bother trying to build a successful career. By the time I build up my experience, technology will be well past me. In fact, much of it already is.

 

So I disagree with bringing AI into this era. 

 

1 hour ago, T. McGrath said:

A program that can only play the game Go, no matter how good, is not demonstrating any intelligence.  It is simply following the instructions that were provided by its human programmer.  It is the programmer who is demonstrating the intelligence here, not the program.

Not sure I agree with that. The programmers may know nothing about how to play Go - certainly not at that level. The machine learned by itself; it was never programmed with the rules of Go.

Obviously, the ability to learn in that way was down to the programmers. But I don't think it is helpful to just say, "it was programmed to do that". If the machine is able to do something that it wasn't (explicitly) programmed to do, something that the programmers are not able to do, and if no one is able to understand how it does it (see Neural Nets, for example) then it seems a stretch to give all the credit to the programmers.

Say a bunch of programmers came up with a real AI that was capable of: writing creatively, playing an instrument badly but still able to move you, arguing about the meaning of a Pinter play (and coming up with new insights), getting annoyed when people refused to accept that the painting it did was original or worthy, telling jokes, laughing at other people's jokes, being wrong, refusing to admit it was wrong, deciding to convert to Judaism, falling in love, and so on. It would be weird to credit the programmers with all of that and not the AI itself.

1 minute ago, Raider5678 said:

I don't think we really need an AI. Nor do I want one.

You are surrounded by them already! 

1 minute ago, Strange said:

You are surrounded by them already! 

 

Correction.

I don't think we need a 100% sentient AI. Nor do I want one.

31 minutes ago, Raider5678 said:

That brings up the problem of whether they would decide to stop humans from breaking the three laws by breaking the three laws themselves (an I, Robot reference).


That's just Hollywood's version of the book; it's far more nuanced than that. At no point did the robot break the laws.

25 minutes ago, Strange said:

Not sure I agree with that. The programmers may know nothing about how to play Go - certainly not at that level. The machine learned by itself [...] it seems a stretch to give all the credit to the programmers. [...]

Strange is on the right page.

Quote (DeepMind blog):

The paper introduces AlphaGo Zero, the latest evolution of AlphaGo, the first computer program to defeat a world champion at the ancient Chinese game of Go. Zero is even more powerful and is arguably the strongest Go player in history.

Previous versions of AlphaGo initially trained on thousands of human amateur and professional games to learn how to play Go. AlphaGo Zero skips this step and learns to play simply by playing games against itself, starting from completely random play. In doing so, it quickly surpassed human level of play and defeated the previously published champion-defeating version of AlphaGo by 100 games to 0.

AlphaGo Zero didn't have any Go strategy programmed; it learned it.
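The idea scales down. Below is a toy stand-in, nothing like AlphaGo Zero's deep networks and tree search, just tabular self-play on a trivial subtraction game, shown only to illustrate "given the rules, strategy emerges from playing itself". The game, step size, and learning constants are all chosen for the example.

```python
import random

# Toy "learning by self-play": 21 stones, each player takes 1-3,
# whoever takes the last stone wins. The program is given only the
# rules; strategy emerges from Monte Carlo self-play.
random.seed(0)
N, ACTIONS = 21, (1, 2, 3)
Q = {s: {a: 0.0 for a in ACTIONS if a <= s} for s in range(1, N + 1)}

def pick(s, eps):
    """Epsilon-greedy move for the player facing s stones."""
    if random.random() < eps:
        return random.choice([a for a in ACTIONS if a <= s])
    return max(Q[s], key=Q[s].get)

for _ in range(30000):  # self-play games
    s, trail = N, []
    while s > 0:
        a = pick(s, eps=0.3)
        trail.append((s, a))
        s -= a
    # The player who made the last move won; back up +1/-1 alternately.
    reward = 1.0
    for s, a in reversed(trail):
        Q[s][a] += 0.2 * (reward - Q[s][a])
        reward = -reward

best = lambda s: max(Q[s], key=Q[s].get)
print(best(5))  # optimal play would take 1, leaving a multiple of 4
```

Nothing in the code mentions "leave a multiple of 4", yet the learned Q-table usually converges on exactly that strategy, which is the point of the quote above.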

1 hour ago, T. McGrath said:

A program that can only play the game Go, no matter how good, is not demonstrating any intelligence.  It is simply following the instructions that were provided by its human programmer. [...]

 

Not quite. 

Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm - https://arxiv.org/abs/1712.01815

Quote

One of the key advances here is that the new AI program, named AlphaZero, wasn’t specifically designed to play any of these games. In each case, it was given some basic rules (like how knights move in chess, and so on) but was programmed with no other strategies or tactics. It simply got better by playing itself over and over again at an accelerated pace — a method of training AI known as “reinforcement learning.”

https://www.theverge.com/2017/12/6/16741106/deepmind-ai-chess-alphazero-shogi-go

Out of curiosity, how do you define intelligence? 

 

---

EdEarl beat me to it.


Elon Musk was invited by Mark Zuckerberg to collaborate on further AI research. Elon dismissed the invitation, remarking that not everything has a self-destruct command; it was meant seriously, with humor. What he indirectly said was that AI will be able to think for itself, and that it's bad news when it realizes humans are bad at doing their job.

AI follows the concept of expert systems: "human knowledge put together to give you a one-man show, physically." My question is, how on earth would it be possible for AI to think on its own, when such complex algorithms can barely make up bots?


How on Earth is it possible that humans, and other animals, think on their own? The problem I see here is, as usual, one of definitions.


As long as thinking is not supernatural, and I have no reason to believe in the supernatural, a machine can be made to think.


We have people claiming that we have artificial intelligence even though they had to write the program that told their computer all the rules of the game.  Isn't it amazing how this program was able to play a game - once we programmed all the rules?  I mean seriously?

Like I said at the very beginning, we need a definition for artificial intelligence, because this sure isn't it.  Personally, I'm going to stick with the man who invented the field of artificial intelligence in 1956.  If the program cannot pass the Turing test, then it is not artificially intelligent.  No matter how many games it is able to play.  Thus far nothing we have developed has come close to passing the Turing test.

53 minutes ago, T. McGrath said:

We have people claiming that we have artificial intelligence even though they had to write the program that told their computer all the rules of the game. [...] If the program cannot pass the Turing test, then it is not artificially intelligent. [...]

AlphaGo requires about the same setup for a game as a person does: explain the rules to a person, and program those same rules into AlphaGo. Strategy is learned by the AlphaGo AI the same way a person learns it: by playing many games.

Closer than many realize.

8 hours ago, T. McGrath said:

We have people claiming that we have artificial intelligence even though they had to write the program that told their computer all the rules of the game.  Isn't it amazing how this program was able to play a game - once we programmed all the rules?  I mean seriously?

Are you suggesting that humans are able to play without being told the rules? If not, what are you suggesting?

Note that go is notoriously difficult because knowing the rules (which are extremely simple: you take turns to place stones on empty positions and capture an opponent's stone by surrounding it) doesn't tell you how to win.
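Those rules really are that short. The capture condition can be sketched in a few lines of code (using a minimal, hypothetical board representation: a dict mapping coordinates to stone colors): a chain of stones is captured when a flood fill from it finds no adjacent empty point (no "liberty").

```python
# Capture check for Go: flood-fill a chain of same-colored stones and
# look for any empty neighbor (a "liberty"). No liberties means captured.
def has_liberty(board, start, size=9):
    color, seen, stack = board[start], set(), [start]
    while stack:
        r, c = stack.pop()
        if (r, c) in seen:
            continue
        seen.add((r, c))
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if not (0 <= nr < size and 0 <= nc < size):
                continue  # off the board
            if (nr, nc) not in board:
                return True  # empty neighbor: the chain is alive
            if board[(nr, nc)] == color:
                stack.append((nr, nc))  # extend the chain
    return False  # fully surrounded: captured

# A lone white stone surrounded on all four sides by black:
board = {(4, 4): "W", (3, 4): "B", (5, 4): "B", (4, 3): "B", (4, 5): "B"}
print(has_liberty(board, (4, 4)))  # -> False
```

That the complete mechanics fit in twenty lines, while strong play resisted computers for decades, is exactly the gap between knowing the rules and knowing how to win.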

8 hours ago, T. McGrath said:

If the program cannot pass the Turing test, then it is not artificially intelligent.  No matter how many games it is able to play.  Thus far nothing we have developed has come close to passing the Turing test.

I'm not convinced that the Turing test, in itself, is that good a test. But some refinement of it could be. There are a number of systems that are claimed to have passed it. For example: http://www.bbc.com/news/technology-27762088 and http://www.zdnet.com/article/mits-artificial-intelligence-passes-key-turing-test/

Of course, one can argue about whether they really passed, whether the test was carried out correctly, etc. But that is one of the problems with this as a test. It is subjective, and so any conclusion can be rejected for some reason.

