
How does Artificial Intelligence learn?


The DarkWiki


Today, artificial intelligence helps doctors diagnose patients, pilots fly commercial aircraft, and city planners predict traffic. 

But no matter what these AIs are doing, the computer scientists who designed them likely don’t know exactly how they’re doing it. This is because artificial intelligence is often self-taught, working off a simple set of instructions to create a unique array of rules and strategies. 

So how exactly does a machine learn? There are many different ways to build self-teaching programs. But they all rely on the three basic types of machine learning: unsupervised learning, supervised learning, and reinforcement learning.

To see these in action, let’s imagine researchers are trying to pull information from a set of medical data containing thousands of patient profiles. 

First up, unsupervised learning. This approach would be ideal for analyzing all the profiles to find general similarities and useful patterns. Maybe certain patients have similar disease presentations, or perhaps a treatment produces specific sets of side effects. This broad pattern-seeking approach can be used to identify similarities between patient profiles and find emerging patterns, all without human guidance. 
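
To make this concrete, here is a minimal Python sketch of unsupervised learning using k-means clustering. Everything in it is a hypothetical stand-in: the "profiles" are random numbers playing the role of patient measurements, and the choice of four groups is arbitrary.

```python
# Unsupervised learning sketch: group similar "patient profiles" with
# k-means. No labels are provided; the algorithm finds structure itself.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
profiles = rng.normal(size=(1000, 3))   # stand-in for age, lab values, etc.

# Standardize so no single measurement dominates the distance metric.
X = StandardScaler().fit_transform(profiles)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])        # which group each patient landed in
print(kmeans.cluster_centers_)    # the "typical" profile of each group
```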

But let's imagine doctors are looking for something more specific. These physicians want to create an algorithm for diagnosing a particular condition. They begin by collecting two sets of data: medical images and test results from both healthy patients and those diagnosed with the condition. Then, they input this data into a program designed to identify features shared by the sick patients but not the healthy patients. Based on how frequently it sees certain features, the program will assign values to those features’ diagnostic significance, generating an algorithm for diagnosing future patients.
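
A minimal sketch of that supervised setup, assuming a hypothetical numeric feature matrix in place of the real images and test results; the labels, and the features' relationship to them, are simulated for illustration.

```python
# Supervised learning sketch: fit a classifier on labeled examples
# (healthy = 0, diagnosed = 1) and read off each feature's weight,
# a rough analogue of "diagnostic significance".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))   # stand-in for image features and test results
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
# Larger absolute weights ~ features the model leans on most.
print("feature weights:", clf.coef_[0])
```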

However, unlike in unsupervised learning, doctors and computer scientists play an active role in what happens next. Doctors will make the final diagnosis and check the accuracy of the algorithm’s prediction. Then computer scientists can use the updated datasets to adjust the program’s parameters and improve its accuracy. This hands-on approach is called supervised learning.
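
Continuing the sketch above, that feedback loop might look something like this; the doctors' "confirmed" labels are simulated here, and the variables carry over from the previous snippet.

```python
# Human-in-the-loop sketch: the model predicts, the doctors confirm or
# correct each diagnosis, and the verified cases are folded back into
# the training data before refitting.
new_cases = rng.normal(size=(100, 5))
predictions = clf.predict(new_cases)

# Stand-in for the doctors' final, verified diagnoses.
confirmed_labels = (new_cases[:, 0] + 0.5 * new_cases[:, 2] > 0).astype(int)
print("agreement with doctors:", (predictions == confirmed_labels).mean())

# Update the dataset and retrain to improve accuracy.
X_train = np.vstack([X_train, new_cases])
y_train = np.concatenate([y_train, confirmed_labels])
clf = clf.fit(X_train, y_train)
```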

Now, let’s say these doctors want to design another algorithm to recommend treatment plans. Since these plans will be implemented in stages and may change depending on each individual's response to treatment, the doctors decide to use reinforcement learning.

This program uses an iterative approach to gather feedback about which medications, dosages, and treatments are most effective. Then, it compares that data against each patient’s profile to create their unique, optimal treatment plan. As the treatments progress and the program receives more feedback, it can continually update the plan for each patient.
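
As a rough illustration, that pick-a-treatment, observe-the-outcome loop can be modeled as a multi-armed bandit, one of the simplest reinforcement-learning setups. The treatment plans and their success rates below are invented, and a real clinical system would be vastly more careful.

```python
# Reinforcement learning sketch (epsilon-greedy bandit): try treatment
# plans, observe success/failure feedback, and drift toward whichever
# plan has worked best so far.
import numpy as np

rng = np.random.default_rng(2)
treatments = ["plan_A", "plan_B", "plan_C"]
true_success = [0.4, 0.7, 0.5]    # hidden from the learner

counts = np.zeros(3)
values = np.zeros(3)              # running estimate of each plan's payoff
epsilon = 0.1                     # fraction of the time we explore

for step in range(5000):
    if rng.random() < epsilon:
        i = int(rng.integers(3))          # explore a random plan
    else:
        i = int(np.argmax(values))        # exploit the current best plan
    reward = float(rng.random() < true_success[i])   # 1 = treatment helped
    counts[i] += 1
    values[i] += (reward - values[i]) / counts[i]    # incremental average

print(dict(zip(treatments, values.round(2))))  # plan_B should rank highest
```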

None of these three techniques is inherently smarter than the others. While some require more or less human intervention, each has its own strengths and weaknesses that make it best suited to certain tasks.

However, by using them together, researchers can build complex AI systems, where individual programs can supervise and teach each other. 

For example, when our unsupervised learning program finds groups of patients that are similar, it could send that data to a connected supervised learning program. That program could then incorporate this information into its predictions. Or perhaps dozens of reinforcement learning programs might simulate potential patient outcomes to collect feedback about different treatment plans. 
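
A toy version of that hand-off, reusing the variables from the earlier supervised snippets: each patient's cluster assignment becomes one more input column for the classifier.

```python
# Combining learners sketch: unsupervised cluster ids feed the
# supervised model as an extra feature.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_train)
X_aug = np.column_stack([X_train, kmeans.labels_])
clf_aug = LogisticRegression(max_iter=1000).fit(X_aug, y_train)

# New patients are assigned a cluster the same way before being classified.
X_new = np.column_stack([new_cases, kmeans.predict(new_cases)])
print(clf_aug.predict(X_new)[:10])
```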

There are numerous ways to create these machine-learning systems, and perhaps the most promising models are those that mimic the relationship between neurons in the brain. These artificial neural networks can use millions of connections to tackle difficult tasks like image recognition, speech recognition, and even language translation. 
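
For a taste of what that looks like, here is a minimal neural-network sketch on the same hypothetical data from the supervised example: a small multilayer perceptron whose connection weights between "neurons" are learned from the training examples. Real networks for image or speech recognition are enormously larger.

```python
# Neural network sketch: two small hidden layers of artificial neurons,
# trained on the same hypothetical features and labels as before.
from sklearn.neural_network import MLPClassifier

net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("network accuracy:", net.score(X_test, y_test))
```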

However, the more self-directed these models become, the harder it is for computer scientists to determine how these self-taught algorithms arrive at their solutions.

Researchers are already looking at ways to make machine learning more transparent. But as AI becomes more involved in our everyday lives, these enigmatic decisions have increasingly large impacts on our work, health, and safety. 

So as machines continue learning to investigate, negotiate and communicate, we must also consider how to teach them to teach each other to operate ethically.

Source: url deleted
