In news reminiscent of the initial AlphaZero shockwave last December, the artificial intelligence company DeepMind released astounding results from an updated version of the machine-learning chess project today.

The results leave no question, once again, that AlphaZero plays some of the strongest chess in the world.

The updated AlphaZero crushed Stockfish 8 in a new 1,000-game match, scoring +155 -6 =839. (See below for three sample games from this match with analysis by Stockfish 10 and video analysis by GM Robert Hess.)

AlphaZero also bested Stockfish in a series of time-odds matches, soundly beating the traditional engine even at time odds of 10-to-1.

In additional matches, the new AlphaZero beat the "latest development version" of Stockfish, with virtually identical results as the match vs Stockfish 8, according to DeepMind. The pre-release copy of the journal article, which is dated Dec. 7, 2018, does not specify the exact development version used.

The machine-learning engine also won all matches against "a variant of Stockfish that uses a strong opening book," according to DeepMind.

AlphaZero's results (wins green, losses red) vs the latest Stockfish and vs Stockfish with a strong opening book.

Adding the opening book did seem to help Stockfish, which finally won a substantial number of games when AlphaZero was Black, but not enough to win the match.

The results will be published in an upcoming article by DeepMind researchers in the journal Science and were provided to selected chess media by DeepMind, which is based in London and owned by Alphabet, the parent company of Google.

The 1,000-game match was played in early 2018. In the match, both AlphaZero and Stockfish were given three hours per game plus a 15-second increment per move.

This time control would seem to make obsolete one of the biggest arguments against the impact of last year's match, namely that the 2017 time control of one minute per move played to Stockfish's disadvantage. With three hours plus the 15-second increment, no such argument can be made, as that is an enormous amount of playing time for any computer engine.

In the time-odds games, AlphaZero was dominant up to 10-to-1 odds. Stockfish only began to outscore AlphaZero when the odds reached 30-to-1.

AlphaZero's results (wins green, losses red) vs Stockfish 8 in time-odds matches.

AlphaZero's results in the time-odds matches suggest it is not only much stronger than any traditional chess engine, but that it also uses a much more efficient search for moves. According to DeepMind, AlphaZero uses a Monte Carlo tree search and examines about 60,000 positions per second, compared to 60 million for Stockfish.

An illustration of how AlphaZero searches for chess moves.

What can computer chess fans conclude after reading these results? AlphaZero has solidified its status as one of the elite chess players in the world. According to the journal article, the updated AlphaZero algorithm is identical in three challenging games: chess, shogi, and go. But the results are even more intriguing if you're following the ability of artificial intelligence to master general gameplay. This version of AlphaZero was able to beat the top computer players of all three games after just a few hours of self-training, starting from just the basic rules of the games.

The updated AlphaZero results come exactly one year to the day since DeepMind unveiled the first, historic AlphaZero results in a surprise match vs Stockfish that changed chess forever.
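The Monte Carlo tree search mentioned above explains how AlphaZero can afford to look at only 60,000 positions per second: it spends its budget on promising lines instead of enumerating everything. As a rough sketch of the idea only, not DeepMind's implementation (which guides the search with a neural network rather than random playouts), here is a minimal UCT-style MCTS in Python for a toy take-the-last-stone game; the game, constants, and names are all illustrative assumptions:

```python
import math
import random

# Toy game for the demo: a pile of stones, players alternately remove
# 1-3 stones, and whoever takes the last stone wins.
# (Perfect play: always leave the opponent a multiple of 4.)

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

class Node:
    def __init__(self, pile, parent=None, move=None):
        self.pile = pile            # stones left after `move` was played
        self.parent = parent
        self.move = move            # move that led to this node
        self.children = []
        self.untried = legal_moves(pile)
        self.visits = 0
        self.wins = 0.0             # wins for the player who played `move`

def uct_child(node, c=1.4):
    # UCT rule: exploitation term plus an exploration bonus.
    return max(node.children,
               key=lambda ch: ch.wins / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

def random_playout(pile):
    # Random game from `pile`; True if the player to move there wins.
    mover_is_first = True
    while True:
        pile -= random.choice(legal_moves(pile))
        if pile == 0:
            return mover_is_first
        mover_is_first = not mover_is_first

def best_move(pile, iterations=4000):
    root = Node(pile)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down fully expanded nodes via UCT.
        while not node.untried and node.children:
            node = uct_child(node)
        # 2. Expansion: add one untried child.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.pile - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation, scored for the player who moved into `node`.
        if node.pile == 0:
            mover_won = True        # that player took the last stone
        else:
            mover_won = not random_playout(node.pile)
        # 4. Backpropagation: flip the perspective at every level.
        win = mover_won
        while node is not None:
            node.visits += 1
            node.wins += win
            win = not win
            node = node.parent
    # The most-visited child is the recommended move.
    return max(root.children, key=lambda ch: ch.visits).move

if __name__ == "__main__":
    random.seed(0)
    print(best_move(5))  # converges to 1, i.e. leave the opponent 4
```

With this budget the visit counts pile up on the move that leaves a multiple of four, which is the point of the technique: strength comes from where the search looks, not from how many positions it counts.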
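For readers who want to translate the +155 -6 =839 score into conventional terms, the standard logistic Elo model gives an approximate rating gap from the score fraction. This model is an assumption on my part; the article itself quotes no ratings:

```python
import math

# Reported match score: +155 -6 =839 over 1,000 games.
wins, losses, draws = 155, 6, 839
games = wins + losses + draws
score = (wins + 0.5 * draws) / games        # fraction of points scored

# Logistic Elo model: expected score s implies a rating gap of
# -400 * log10(1/s - 1).
elo_gap = -400 * math.log10(1 / score - 1)

print(f"score: {score:.1%}, implied Elo edge: {elo_gap:+.0f}")
```

A 57.45 percent score works out to an edge of roughly 50 Elo points, with the caveat that the huge draw rate makes any single-number summary of engine strength a rough one.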