Updated AlphaGo Ready to Beat World's Best

Earlier this month, Google DeepMind chief executive Demis Hassabis revealed that an updated version of AlphaGo, a program capable of learning and playing the board game Go, has achieved remarkable success against the world's best Go players in online matches. For decades, Go has been the game to beat for artificial intelligence (AI) researchers. Considerably more complex than chess, Go has stumped computer scientists seeking to develop a program that could compete with high-level human players. In Go, two players alternately place black and white stones on a 19-by-19 board in order to surround more territory than the opponent. In addition to the large board, the enormous number of possible positions and the intuition required to evaluate them make Go an extremely complex game.

Prior to last year, the best Go programs, such as Zen from Japan, could compete only with high-level amateur players. AlphaGo's recent victories against the best human players have blown away the expectations of computer scientists and Go players worldwide. Most recently, AlphaGo has been playing under the online name "Master(P)," handily defeating the world's best in a 50-game win streak on Tygem and FoxGo, two of the major online Go servers.

Google's AI branch, DeepMind, unveiled AlphaGo early last year. The London-based team built AlphaGo on the principles of "deep learning" in neural networks, in which connections between simulated neurons are strengthened through examples and experience. AlphaGo "trained" by adjusting its simulated network of neurons as it analyzed 30 million positions from expert games. The program then played against itself, improving with each game through a process known as reinforcement learning.
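The self-play idea can be illustrated on a much smaller scale. The sketch below is not AlphaGo's actual algorithm (which combines deep neural networks with Monte Carlo tree search); it is a minimal, hypothetical example of reinforcement learning through self-play, using tabular Q-learning on a toy "subtraction game": players alternately remove one to three stones from a pile, and whoever takes the last stone wins. The game, function names, and parameter values are illustrative choices, not anything from DeepMind's system.

```python
import random
from collections import defaultdict

def train(episodes=20000, alpha=0.5, eps=0.2, seed=0):
    """Learn the subtraction game purely by self-play.

    Q[(stones, action)] is the value of taking `action` stones,
    always scored from the perspective of the player about to move.
    """
    rng = random.Random(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        n = 21  # starting pile size
        while n > 0:
            actions = [a for a in (1, 2, 3) if a <= n]
            # epsilon-greedy: mostly play the best known move, sometimes explore
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda x: Q[(n, x)])
            n_next = n - a
            if n_next == 0:
                target = 1.0  # took the last stone: the mover wins
            else:
                # zero-sum game: the opponent moves next, so their best
                # achievable value is exactly our loss (negamax backup)
                target = -max(Q[(n_next, b)] for b in (1, 2, 3) if b <= n_next)
            Q[(n, a)] += alpha * (target - Q[(n, a)])
            n = n_next
    return Q

def best_move(Q, n):
    """Greedy move for the current player with n stones remaining."""
    return max((a for a in (1, 2, 3) if a <= n), key=lambda a: Q[(n, a)])
```

Starting from no knowledge of the game, the program rediscovers the known optimal strategy of always leaving the opponent a multiple of four stones; for example, from 21 stones it learns to take one. AlphaGo applies the same self-improvement loop, but with a neural network standing in for the lookup table on Go's astronomically larger state space.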

Its debut match against Fan Hui in January 2016 was met with much fanfare as the program defeated the European Go champion in all five games of an exhibition match. The victory marked the first time a computer had defeated a professional human Go player. Its rise to fame, however, came in March 2016 in a landmark match against world Go champion Lee Sedol of South Korea. While the first three games were decisive victories for AlphaGo, games four and five demonstrated that its play was imperfect and that it could "bug out."

Throughout the match, AlphaGo often played seemingly meaningless early moves that spelled doom for the opponent much later in the game, stunning Lee and the audience. However, Lee Sedol's aggressive and creative play in the later games seemed to confound the AI during the crucial early stages of each match. Nevertheless, the 4-1 victory marked a great milestone in AI research and development. Upon learning of AlphaGo's victory, Go players and computer scientists around the world expressed shock and amazement at the program's capabilities. An astounded and disappointed Lee later stated in a press conference, "there was never a case [where I] felt this amount of pressure."

After publicly unveiling the updated AlphaGo, Hassabis stated that he was "excited by the results and also by what … the Go community can learn from … the new version of AlphaGo." However, the DeepMind team recognizes that there is still much work to be done with AlphaGo. For example, while humans require only a few thousand games to learn Go efficiently, the program needs to play hundreds of millions of games to reach a comparable level. Additionally, AlphaGo is still a long way from mimicking the full capacity of the human brain. Although it can "learn" Go, AlphaGo cannot apply its simulated neural network to learning other games or tasks. Further research must be conducted to develop a fully general AI capable of rivaling the human brain.

DeepMind confirmed that AlphaGo will participate in more professional tournaments this year. Most notably, it will play a tournament match against Ke Jie, another contender for the title of world's best player. Many experts had believed it would take at least another decade before a computer program could compete with the best human Go players. AlphaGo's early victories are a testament to the ever-increasing rate of technological advancement in machine learning. Only time will tell what artificial intelligence has in store for the future.

Sources

http://www.nature.com/news/google-reveals-secret-test-of-ai-bot-to-beat-top-go-players-1.21253

http://www.nature.com/news/google-ai-algorithm-masters-ancient-game-of-go-1.19234

http://www.nature.com/news/the-go-files-ai-computer-wraps-up-4-1-victory-against-human-champion-1.19575

http://www.nature.com/news/what-google-s-winning-go-algorithm-will-do-next-1.19573

http://venturebeat.com/2016/03/12/go-board-game-champion-lee-sedol-apologizes-for-losing-to-googles-ai/

http://www.computer-go.info/h-c/index.html

https://www.theatlantic.com/technology/archive/2016/03/the-invisible-opponent/475611/
