T. McGrath
Posted December 29, 2017

11 hours ago, EdEarl said:

AlphaGo requires about the same programming for a game as a person: explain the rules to a person, and program those same rules for AlphaGo. Strategy is learned by the AlphaGo AI the same way a person learns, by playing many games. Closer than many realize.

The ability to learn is not an indication of intelligence, just clever programming. Intelligence begins when you apply what you have learned, and to more than just one thing. When you can show me a program that can play Chess/Go, drive me to work in congested traffic, and diagnose any medical problems I might have, all without having to be reprogrammed, then you will have achieved artificial intelligence. Doing just one thing, no matter how well, doesn't cut it.

3 hours ago, Strange said:

Are you suggesting that humans are able to play without being told the rules? If not, what are you suggesting? Note that Go is notoriously difficult because knowing the rules (which are extremely simple: you take turns placing stones on empty positions, and you capture an opponent's stones by surrounding them) doesn't tell you how to win.

I'm not convinced that the Turing test, in itself, is that good a test, but some refinement of it could be. There are a number of systems that are claimed to have passed it. For example: http://www.bbc.com/news/technology-27762088 and http://www.zdnet.com/article/mits-artificial-intelligence-passes-key-turing-test/ Of course, one can argue about whether they really passed, whether the test was carried out correctly, etc. But that is one of the problems with this as a test: it is subjective, and so any conclusion can be rejected for some reason.

I'm saying that developing an application that does just one thing, no matter how well it does it, is not artificial intelligence. It is an expert system. MIT has been trying to beat the Turing test since the 1960s, and failing.
So I'm not surprised to see that, in their desperation, they made up their own test, which they could pass, and then associated it with Alan Turing's name. I agree with you that some refinement of the Turing test could be in order, but the rules and conditions of the test would have to be established first, not after the fact, as is prone to happen with the media. The goal is not to fool the observer, but rather to make it so the observer cannot distinguish between human intelligence and artificial intelligence. The problem is that there is a subjective component to this test.