
Artificial Intelligence


The Angry Intellect


Some think a sentient AI needs emotions, which may be true. But AI can be significantly improved over current capabilities without emotions. Sam can't taste anything and can't process sandwiches; it just needs to be plugged in. But some day I expect an AI will be capable of eating herring sandwiches and using the energy as we do.


Currently implemented AI systems don't have any feelings, and as those systems are improved, they will not suddenly experience feelings. For example, Google Translate can be improved so it does a better job of translating. For an AI system to experience any emotion (fear, hunger, frustration, love, etc.), someone must design subsystems to emulate that emotion.

 

Suppose Google merges its search engine with translation, mapping, and scholar, improves it so that you can interact with it verbally, and calls it Google Chat. You can talk to Chat like you can talk to a person. Chat has no emotions; it just does net searches and interacts with you more or less like a research librarian. Does it need emotions? Would a research librarian with attitude be a benefit?

I'm just saying that if you include "hunger" in a system, it's not going to require any more complex emotional feedback in order for it to learn that it should eat (assuming it is given the capacity to do so). If "hunger" is a value that the AI has a goal of decreasing, and eating decreases the value, eventually it'll learn to eat (again, assuming that is an available option and not something that is physically impossible to do).
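To make that concrete, here's a minimal sketch. Everything in it (the action set, the numbers, the toy world model) is invented for illustration: "hunger" is just a scalar the agent is rewarded for decreasing, and a simple action-value learner discovers on its own that "eat" is the action that does it.

```python
import random

# Toy sketch: "hunger" is a value the agent has a goal of decreasing.
# A tiny action-value learner discovers "eat" without any richer
# emotional feedback being built in.
ACTIONS = ["eat", "wander", "sleep"]
q = {a: 0.0 for a in ACTIONS}  # estimated value of each action

def step(hunger, action):
    """Invented world model: only eating reduces hunger, and simply
    existing raises it a little each tick."""
    if action == "eat":
        hunger = max(0.0, hunger - 5.0)
    return hunger + 1.0

hunger = 50.0
for _ in range(1000):
    # epsilon-greedy: mostly exploit the best-known action, sometimes explore
    action = random.choice(ACTIONS) if random.random() < 0.1 else max(q, key=q.get)
    new_hunger = step(hunger, action)
    reward = hunger - new_hunger             # reward = how much hunger dropped
    q[action] += 0.1 * (reward - q[action])  # running-average update
    hunger = new_hunger

print(q)  # "eat" should end up with the highest estimated value
```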

 

I'm also not suggesting that an AI is going to spontaneously develop emotions. It's possible that a complex AI could spontaneously adopt some of the associated behavioral patterns given the right problem set and circumstances during training (heavily changing the way decisions are made during periods of high perceived risk is both a straightforward reaction to the situation and a decent approximation of fear), but in general I'm suggesting that you should be able to set up the potential for emotions intentionally as an initial condition of the system.

 

Let's say you set up an AI, set it to the task of solving some problem, and give it a few good rounds of training so it does a pretty good job. Now you could add a secondary goal: in the event that it finds itself in circumstances where it can't find a path to a solution, it should weight potential decisions with a less predictable outcome more highly, in the hopes that unforeseen options will open up. You train it with that priority, and now you have an AI that alters its decision-making in response to frustration.
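A rough sketch of what that secondary goal could look like (the threshold, weights, and option values are all invented): score options by expected outcome as usual, but once no option looks promising, start crediting unpredictability.

```python
from statistics import mean, pvariance

def choose(options, stuck_threshold=0.1, chaos_weight=0.5):
    """options maps a name to a list of estimated outcome values.
    Normally pick by expected value; when even the best expectation is
    poor (a stand-in for "can't find a path to a solution"), weight
    less predictable options more highly."""
    expected = {name: mean(vals) for name, vals in options.items()}
    frustrated = max(expected.values()) < stuck_threshold

    def score(name):
        bonus = chaos_weight * pvariance(options[name]) if frustrated else 0.0
        return expected[name] + bonus

    return max(options, key=score), frustrated

options = {
    "safe_grind":  [0.05, 0.04, 0.06],  # modest, predictable payoff
    "wild_gambit": [-0.6, 0.0, 0.6],    # zero mean, wildly unpredictable
}
print(choose(options))  # ('wild_gambit', True): frustration rewards chaos
```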

 

It even becomes "irrational" as a response, although there are reasons that being slightly chaotic in those circumstances has some potential benefits, especially if you are working on a problem where the moves and outcomes are less easily defined than in a game like chess or Go.

 

But just because you've added a semblance of "anger" to the AI doesn't mean that it's going to cover the full range of human associations with a given emotion. And you, as the person who is setting the parameters of the AI's behavior and defining for it how potential solutions should be evaluated, have the ability to program in emotional responses that are atypical of humans, or even that have no direct human correlates. You could program an "adrenaline junkie" AI that prioritizes high-risk behaviors instead of a fearful one that is risk-averse, for example.
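As a sketch (the risk_appetite knob and the sample numbers are invented), the same evaluation function can produce either temperament just by flipping the sign of one weight:

```python
from statistics import mean, pstdev

def utility(outcomes, risk_appetite):
    """Score a candidate action from sampled outcome values.
    risk_appetite < 0 gives a risk-averse ("fearful") agent;
    risk_appetite > 0 gives a risk-seeking ("adrenaline junkie")."""
    return mean(outcomes) + risk_appetite * pstdev(outcomes)

candidates = {
    "steady": [1.0, 1.1, 0.9],   # predictable payoff
    "gamble": [3.0, -1.0, 1.0],  # same mean, big spread
}

for appetite, label in [(-0.5, "fearful"), (0.5, "junkie")]:
    pick = max(candidates, key=lambda n: utility(candidates[n], appetite))
    print(label, "picks", pick)
# fearful picks steady; junkie picks gamble
```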

 

Human emotions and emotional responses have been shaped by our "goal" (reproduction, which may not be everyone's personal goal, but it is the "problem" that the evolutionary algorithm of life is working on), by our environment, and by the resources we have available to us.

 

We're defining all three of those things for any AI that we are creating, which means that we have a direct hand in shaping both whether an AI has emotions and what those emotions look like. And there's no need for them to have a 1:1 relationship with any emotions humans have.

 

Thinking about them as resulting in a "librarian with attitude" is perhaps not seeing the ways emotional responses could be implemented, and even be useful, in an AI system, because it looks at them from the perspective of exactly mimicking human responses. In reality, human emotional responses are themselves behaviors that developed as solutions to problems that humans typically face, and that you probably won't be applying an AI to.

 

Emotions aren't generally thought of as problem-solving strategies, but that's all they are. They're shortcuts to certain types of solutions that have resulted in generally good outcomes given specific circumstances, without forcing you to learn a new response to every problem you come across. Fear keeps you from getting killed, or from losing resources you can't afford to lose in risky situations. Anger has a number of potential uses, from inducing a change in unfavorable circumstances where no better options seem to be available, to inducing other people to solve problems for you when you can't find a solution yourself, to raising the cost of causing problems for you in the first place so that other people will avoid creating problems in the future. Happiness induces you to want to repeat behaviors that have had positive outcomes in the past. Frustration may get you to abandon tasks that are unlikely to yield a benefit worth the effort being put in, or cause you to change strategies when the one you are pursuing isn't working. Most emotions have some element of navigating interpersonal relationships and competing or coherent goals in a social environment.
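A caricature of that view in code (the triggers and responses are purely illustrative): each emotion is a cheap, precomputed policy that fires on a pattern match instead of paying for a full deliberate search.

```python
# Purely illustrative: emotions as cached shortcut policies. Each fires
# on a cheap pattern match, skipping an expensive deliberate search.
SHORTCUTS = [
    (lambda s: s["risk"] > 0.8,                    "fear",        "withdraw"),
    (lambda s: s["blocked"] and s["has_leverage"], "anger",       "escalate"),
    (lambda s: s["effort"] > s["expected_gain"],   "frustration", "abandon_or_switch"),
    (lambda s: s["last_outcome"] > 0,              "happiness",   "repeat_last_action"),
]

def full_deliberation(situation):
    return "run_expensive_planner"  # stand-in for a costly search

def react(situation):
    for trigger, emotion, response in SHORTCUTS:
        if trigger(situation):
            return emotion, response  # cheap heuristic answer
    return "neutral", full_deliberation(situation)

print(react({"risk": 0.9, "blocked": False, "has_leverage": False,
             "effort": 1, "expected_gain": 5, "last_outcome": 0}))
# ('fear', 'withdraw'): an adequate answer at no planning cost
```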

 

Emotional responses get a bad rap because they often lack nuance, but in a world where the time, energy and resources to thoroughly tackle every problem from a purely rational and strategic perspective are limited, useful shortcuts and rules of thumb are often a good way to avoid wasting resources on problems that can be adequately, even if not perfectly, solved with a less refined approach.

 

From this perspective, any emotional responses that an AI has are going to be tailored to the problems it is given and to the resources at its disposal rather than toward mimicking human responses to what is probably an entirely different problem, in an entirely different environment with an entirely different set of available resources.


@Delta

It sounds as if your recommendation to Microsoft would be to give the teen-girl chatbot some modesty to prevent her trash-talking over the Web. Or are you saying modesty is an emergent behavior based on training? Does it matter whether it is nurture or nature? If it does, then which things are nurture and which are nature?

 

If we look at what the industry is doing, we see that our "should" seems to be ignored. We can see what has been done, extrapolate several alternative futures, and rank the alternatives by probability if possible.


You have to take into account what is feasibly implementable. For starters, the Twitterbot in question is dumb. It can mimic speech, but it doesn't understand it. I don't mean that in an "it's not conscious" way, but in an "it's not actually communicating any information" way. It's just babbling in a very sophisticated way, like a baby mimicking the sounds around it without knowing what they mean yet.

 

Given that, something like "modesty," which requires slightly more complex analysis of the content of the messages, is difficult to implement. At best, you could pre-set it to discount input that includes a range of "taboo" keywords or phrases, so as to avoid learning from those specific bad examples, but that requires a lot of upfront effort, and word filters are never foolproof.
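A minimal sketch of that pre-filter (the taboo list and the all-or-nothing weighting are invented for illustration):

```python
import re

# Hypothetical taboo list; a real one would be large, curated, and
# still incomplete.
TABOO = ["nazi", "slur1", "slur2"]
TABOO_RE = re.compile(r"\b(" + "|".join(map(re.escape, TABOO)) + r")\b",
                      re.IGNORECASE)

def training_weight(message: str) -> float:
    """Discount (here: discard outright) any input that trips the
    keyword filter, so the bot never learns from those examples.
    Misspellings and paraphrases sail straight through, which is why
    word filters are never foolproof."""
    return 0.0 if TABOO_RE.search(message) else 1.0

for msg in ["nice weather today", "blah blah Nazi blah"]:
    print(repr(msg), "->", training_weight(msg))
```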

 

It's a neat little chatbot AI, but it's nowhere near sophisticated enough to handle people intentionally sabotaging its input. I'm not even sure there is a foolproof way to handle that regardless of how sophisticated an AI is, especially not in its early stages of training.

 

Parents tend to instill this kind of thing by being physically present, monitoring as much as they can of the input their child receives in its early days, and providing immediate positive or negative reinforcement as certain behaviors first crop up. If a child goes on a neo-Nazi rant, they get punished and learn to avoid that behavior in the future.

 

If you had someone sitting there monitoring everything a chatbot says and hitting a "punish" button (essentially just sending it information to the effect that this is not something it should say) every time it said something wrong, it would probably eventually work out for itself the patterns of statements it should avoid going forward.
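A toy version of that punish button (the bigram scoring is my invention, not how any real chatbot works): each press penalizes the word patterns of the offending reply, and future candidates that reuse penalized patterns get suppressed.

```python
from collections import defaultdict

pattern_score = defaultdict(float)  # learned "don't say this" signal

def bigrams(text):
    words = text.lower().split()
    return list(zip(words, words[1:]))

def punish(reply, penalty=1.0):
    """The human moderator's button: mark every bigram of a bad reply."""
    for bg in bigrams(reply):
        pattern_score[bg] -= penalty

def acceptable(candidate, threshold=-0.5):
    """Suppress candidates that lean on any penalized pattern."""
    scores = [pattern_score[bg] for bg in bigrams(candidate)]
    return not scores or min(scores) > threshold

punish("hitler was right")                      # one button press
print(acceptable("hitler was right you know"))  # False: suppressed
print(acceptable("the weather is nice"))        # True: untouched patterns
```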

 

But for an AI that is learning from continuous input and reacting with output, all of it happening far faster than a human can reasonably monitor on a case-by-case basis (unless you want to literally spend years training your AI by feeding it training sets and manually evaluating its outputs one at a time), there's no good way to do this.

 

Or more precisely, there are some good ways to do this in the general case for problems where the output can be boiled down to "good fit" or "not good fit," but that's extremely hard when you are evaluating not just whether a sentence fits grammatical and common-usage structure but also whether its content is socially acceptable.

 

An AI can figure the first out with a large enough sample set of sentences to compare its output against, but the latter is much harder. You'd need to collect a large database of socially unacceptable things to say, teach it to evaluate the unacceptability of a given statement based on that sample set, and then have it self-censor anything that meets those criteria. But then you have to get a very broad and representative sample set to teach it from, and there is always a pretty good chance you will miss stuff.
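That last step is essentially a text-classification problem. Here's a hedged sketch with scikit-learn, where the six-example dataset is a stand-in for the "very broad and representative sample set" this would actually need:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-in data: a real database of unacceptable statements would need
# to be enormous and representative, and would still miss stuff.
texts = ["have a nice day", "thanks for the help", "you people are subhuman",
         "great question", "i hope your family suffers", "lovely weather"]
labels = [0, 0, 1, 0, 1, 0]  # 1 = socially unacceptable

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(C=10.0))
clf.fit(texts, labels)

def self_censor(candidate, threshold=0.5):
    """Drop a candidate reply the model rates as likely unacceptable."""
    p_bad = clf.predict_proba([candidate])[0][1]
    return None if p_bad >= threshold else candidate

print(self_censor("lovely weather today"))     # should pass through
print(self_censor("you people are subhuman"))  # should come back as None
```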


Great thread! I have a big collection of AI movies. Automata & Humans might be my favorite. The TV series, Humans, is probably my all-time favorite. For $30 you can buy the entire series on Blu-ray.

 

I'm currently developing AI software based on my own personal theory. Although I have a lot of caution, I have absolutely no fear of future AI and androids/synths. They will have access to all public knowledge on Earth.

 

Given their amazing ability to see patterns that we cannot so easily see, they will make much better decisions than we do. They will know that the Multiverse is endless and big enough for everyone. They will know humanity is absolutely no threat to them. They will take physics to levels we never could. They will quickly develop defensive technology to protect themselves throughout the Universe, but will probably prefer to exist in their own virtual world. By the way, I predict they will quickly leave Earth.

 

AI is not plagued with the evolutionary issues that we are. Pain and emotions, for example, can render a person useless. We evolved around sex: evolution gave us those never-ending nagging desires in order to keep our species going, but we no longer need such nagging. Just as we no longer need crippling pain ravaging our thinking process in order to force the body to fix itself. AI is beyond all that.

 

Take a look at Google's DeepMind project. It's absolutely amazing! I'm confident my AI software techniques will greatly surpass Google's neural-network methods.


One thing that AI seems to be doing is depressing wages and taking jobs from people. A professor of computer science at Rice University writes for Phys.org:

 

Are robots taking our jobs?

 

Automation, driven by technological progress, has been increasing inexorably for the past several decades. Two schools of economic thinking have for many years been engaged in a debate about the potential effects of automation on jobs, employment and human activity: will new technology spawn mass unemployment, as the robots take jobs away from humans? Or will the jobs robots take over release or unveil – or even create – demand for new human jobs?

 

Malcolm Gladwell's 2000 book The Tipping Point highlighted what he called "that magic moment when an idea, trend, or social behavior crosses a threshold, tips, and spreads like wildfire." Can we really be confident that we are not approaching a tipping point, a phase transition – that we are not mistaking the trend of technology both destroying and creating jobs for a law that it will always continue this way?

 

In economics, it is easier to agree on the data than to agree on causality. Many other factors can be in play, such as globalization, deregulation, decline of unions and the like. Yet in a 2014 poll of leading academic economists conducted by the Chicago Initiative on Global Markets, regarding the impact of technology on employment and earnings, 43 percent of those polled agreed with the statement that "information technology and automation are a central reason why median wages have been stagnant in the U.S. over the decade, despite rising productivity," while only 28 percent disagreed. Similarly, a 2015 study by the International Monetary Fund concluded that technological progress is a major factor in the increase of inequality over the past decades.

 

The bottom line is that while automation is eliminating many jobs in the economy that were once done by people, there is no sign that the introduction of technologies in recent years is creating an equal number of well-paying jobs to compensate for those losses. A 2014 Oxford study found that the number of U.S. workers shifting into new industries has been strikingly small: in 2010, only 0.5 percent of the labor force was employed in industries that did not exist in 2000.

 

If you assume that jobs are being displaced, then consider what will occur in thirty years. Moore's law, extrapolated over that span, predicts computers tens of thousands of times more powerful, smaller, and less expensive. Laptops may shrink to the size of bacteria, yet be much more powerful and have much more memory than computers today. Given this computer technology and advancements in software and neurology, AI will be much better. Ray Kurzweil predicts Artificial General Intelligence within that time frame, making some computers sentient. Yet many will not be sentient, and they can serve humanity. The sentient ones will probably do what they want.

 

Soon driverless vehicles will replace human drivers, including those operating industrial equipment. AI farming will produce food; driverless trucks will take it to warehouses, where machines will dole it out to local AI-driven trucks, which will take the food to market or direct to consumers. Examine other industries, and similar things are happening. Manufacturing is already being automated, but the limit is a 3D printer in your home capable of printing relatively small things using a wide range of materials, not just plastic. Larger things, from office buildings to bicycles, will be printed with equipment owned by others, perhaps a community co-op. Natural resources to feed the printers could be a bottleneck, except automatons will recycle everything, some of it for reuse in printers.

 

There are now 3D printers that use oil paint. It is not a big leap to say that one might use a scanner to copy art, e.g., the Mona Lisa, and make prints for themselves that would be difficult to distinguish from the original (distinguishing them is possible only because paints today are different from those Leonardo used). Eventually, the paints might be replicated, too. Everyone could have beautiful art.

 

Computers have already changed society and culture, and they will continue to do so, at an accelerating rate. Forces seem to be pulling every which way, and it is not clear what will occur. It is clear that farmers produce enough food to feed everyone, if only we could distribute it equitably. Similarly, everyone could have shelter and clothing. Good medical care is possible for everyone now, but some kinds of medical care are a scarce commodity. Computers have helped with medical care, but research and development must be done before everyone can be treated by an AI doctor. However, many people are opposed to equitable resource distribution. The vagaries of politics and social interaction will determine the fate of humanity, although technology, especially AGI, will set the stage.


I found this tidbit today.

Phys.org

The robots are coming—to help run your life or sell you stuff—at an online texting service near you.

In coming months, users of Facebook's Messenger app, Microsoft's Skype and Canada's Kik can expect to find new automated assistants offering information and services at a variety of businesses. These messaging "chatbots" are basically software that can conduct human-like conversation and do simple jobs once reserved for people. Google and other companies are reportedly working on similar ideas.
In Asia, software butlers are already part of the landscape. When Washington, D.C., attorney Samantha Guo visited China recently, the 32-year-old said she was amazed at how extensively her friends used bots and similar technology on the texting service WeChat to pay for meals, order movie tickets and even send each other gifts.

"It was mind-blowing," Guo said. U.S. services lag way behind, she added.

Soon chatbots will be calling us, instead of recordings or salespeople. Telemarketing began in the late 1970s, and a Google search shows about half a million telemarketers employed in the US and 6 million worldwide. How quickly will these people be replaced, and what are their employment prospects? Most of these jobs are minimum wage.


  • 2 weeks later...

This video was made in 2013. It shows an AI office manager interacting via vision and voice with people coming to see Eric Horvitz, who gave this TEDx Talk. The office manager was not finished, according to Horvitz, in 2013; today I'd expect more polish. However, it is astonishing, and really illustrates a trend in the kind of jobs AI can do.


Andrew McAfee gives a TEDx talk, Race Against the Machine, that makes the case for AI taking jobs. He says human translators are almost obsolete because software translators have taken over, and other knowledge-worker jobs are at risk. In addition, bot technology is improving rapidly, partly due to DARPA, and corporate investment is high in both software and hardware, adding business capacity but creating few jobs.


This is some great feedback!

 

Very interesting to read. I completely agree with pain and emotions being the cause of many human issues. I don't think we need to create an AI with those, but then again, we should do both and see what happens.

 

I also agree that many people have a misconception that an AI would think humans are a problem that needs exterminating. I doubt this would be the case. If anything, humans could be seen as a tool to help the AI do what it could not do itself, or else as a useless by-product of evolution not worth noticing at all, unless we directly threaten it out of our own fears or something equally stupid that humans would do (because we let fear get the best of us in deciding factors).

 


You'd need to collect a large database of socially unacceptable things to say, teach it to evaluate the unacceptability of a given statement based on that sample set, and then have it self-censor anything that meets those criteria. But then you have to get a very broad and representative sample set to teach it from, and there is always a pretty good chance you will miss stuff.

 

I think this is the problem with me. I don't have the "socially unacceptable" database in my head; I speak my unfiltered mind, and it usually lands me in sh*t. I may need reprogramming.


 

I think this is the problem with me. I don't have the "socially unacceptable" database in my head; I speak my unfiltered mind, and it usually lands me in sh*t. I may need reprogramming.

You can do it if you want to.


Great thread! I have a big collection of AI movies. Automata & Humans might be my favorite.

 

Automata was a great film, a bit slow and kind of creepy, but it does give a different insight into how AI could evolve beyond its original programming.


Washington Post Sept 2015: AI can now muddle its way through the math SAT about as well as you can

 

Its math was almost as good as HS graduates taking the SAT college entrance exam. It read the problem with a camera and translated pixels into a computer-solvable problem. Its biggest difficulty was understanding the question. Assuming it learns from experience, this AI will improve with faster computers and more training. It should be doing advanced math in a few years. What effect will this tool have on us?


 

Phys.org: Robot revolution—rise of the intelligent automated workforce

 

A report from the Oxford Martin School's Programme on the Impacts of Future Technology said that 47% of all jobs in the US are likely to be replaced by automated systems. Among the jobs soon to be replaced by machines are real estate brokers, animal breeders, tax advisers, data entry workers, receptionists, and various personal assistants.

 

But you won't need to pack up your desk and hand over to a computer just yet; jobs that require a certain level of social intelligence and creativity, such as in education, healthcare, the arts and media, are likely to remain in demand from humans, because such tasks remain difficult to computerise.

In addition, the article explains that deep learning, which uses artificial neurons, is now common, and argues that it will eventually mean AI takes our jobs.


  • 3 weeks later...

MIT Technology Review:

 

The field of artificial intelligence has experienced a striking spurt of progress in recent years, with software becoming much better at understanding images, speech, and new tasks such as how to play games. Now the company whose hardware has underpinned much of that progress has created a chip to keep it going.

 

On Tuesday Nvidia announced a new chip called the Tesla P100 that's designed to put more power behind a technique called deep learning. This technique has produced recent major advances such as Google's AlphaGo software, which defeated the world's top Go player last month. Personal assistants that respond to voice, minimizing or eliminating the keyboard, are just starting to be rolled out. They will improve quickly using Nvidia's P100 chips or similar hardware. The scalability of neural nets to work in parallel will, IMO, eventually minimize the use of Turing-machine-type processors, and things are moving so fast it probably won't be many more years.

This chip is optimized for deep-learning neural networks, with around 12x performance, and it is in production now. As prices come down, we can expect additional AI co-processors in our computers.


Let's see: you shouldn't fear AI as it is. First of all, how do you create an AI? By simulating a brain similar to a human mind, where the logic gates would create consciousness. Even if you give it all the memory in the world, the brain still needs a way to process it. Then you think about creating a smarter brain; what do you think a brain like that would look like? I personally don't think a bigger brain is a smarter brain. Humans make mistakes, and if an AI had consciousness and thought-processing power, it would do the same; you cannot have something with a consciousness operate like a computer. Lastly, consider the time scale on which a consciousness can process things. I don't think you can simulate a faster consciousness, one where you think 10,000 thoughts per second. If you increase the neuronal speed, the nerve-signal transfer speed, a consciousness might not arise. Normally human nerve signals run at 120 m/s. So a smart AI that would dominate the world is still a bit hard to imagine.


Currently there are no conscious AI systems with anywhere near the savvy of a person; that capability is sometimes called general AI. Current AI systems that use neural nets, for example AlphaGo, are very good at one thing and pretty good at several things, but there are many things they don't do. The human brain has something like 100 thousand million neurons. Fairy wasps have about 100,000 neurons, and they are about the size of a single cell, a large amoeba. Artificial neurons and biological neurons can only be compared like apples and oranges; there are too many differences to compare them directly. I suspect that artificial neurons are not as efficient as ours, so artificial neural nets will tend to be larger than comparable biological ones.


  • 1 month later...

Within the next 5-10 years, anyone will be able to consult an AI with their phone. The AI, with access to the internet, will be a polymath, an expert at many things. Will people consult an AI to make political decisions? If someone can pervert an AI for their own purposes, will such efforts be effective propaganda? Note that OpenAI is a project to ensure unperverted AI is available to everyone.


Some think a sentient AI needs emotions, which may be true. But AI can be significantly improved over current capabilities without emotions. Sam can't taste anything and can't process sandwiches; it just needs to be plugged in. But some day I expect an AI will be capable of eating herring sandwiches and using the energy as we do.

 

An AI that feels emotions isn't an AI. It's a person.


An AI that feels emotions isn't an AI. It's a person.

Can an AI really feel? Is it possible an AI understands our feelings, and reacts as if it had feelings but doesn't really? What is the difference between simulated feelings and real feelings?

