Until very recently, the machines that could trounce champions were at least respectful enough to start by learning from human experience. To beat Garry Kasparov at chess in 1997, IBM engineers distilled centuries of chess wisdom into a formula that was hard-wired into their Deep Blue computer.
In 2016, Google DeepMind’s AlphaGo thrashed champion Lee Sedol at the ancient board game Go after poring over millions of positions from tens of thousands of human games. But now artificial intelligence researchers are rethinking the way their bots incorporate the totality of human knowledge.
The current trend is: Don’t bother. Last October, the DeepMind team published details of a new Go-playing system, AlphaGo Zero, that studied no human games at all.
Instead, it started with the game’s rules and played against itself. The first moves it made were completely random.
After each game, it folded in new knowledge of what led to a win and what didn’t. At the end of these scrimmages, AlphaGo Zero went head to head with the already superhuman version of AlphaGo that had beaten Lee Sedol.

Read more from quantamagazine.org…
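The self-play recipe described above — start from the rules, play randomly, then fold each game’s outcome back into the player’s knowledge — can be sketched in miniature. This is a toy illustration, not DeepMind’s actual method (AlphaGo Zero pairs a deep neural network with Monte Carlo tree search): here an agent learns a simple take-away game purely by playing against itself and nudging a value table toward each game’s result. All names and parameters are invented for the example.

```python
import random

TAKE = (1, 2)   # legal moves: remove 1 or 2 stones
START = 10      # stones at the start; whoever takes the last stone wins

# V[s] ~ estimated probability that the player to move with s stones left wins.
# Like AlphaGo Zero's first moves, play starts out essentially random (all 0.5).
V = {s: 0.5 for s in range(1, START + 1)}

def choose(s, eps):
    """Epsilon-greedy: pick the move that leaves the opponent the worst position."""
    moves = [m for m in TAKE if m <= s]
    if random.random() < eps:
        return random.choice(moves)            # occasional exploration
    # Taking the last stone wins outright (opponent value 0.0);
    # otherwise minimise the opponent's estimated winning chances.
    return min(moves, key=lambda m: 0.0 if m == s else V[s - m])

def self_play(games=20000, eps=0.1, lr=0.05):
    for _ in range(games):
        s, visited, player = START, [], 0
        while s > 0:
            visited.append((s, player))        # remember who faced which state
            s -= choose(s, eps)
            player ^= 1
        winner = player ^ 1                    # the player who just moved took the last stone
        # "Fold in" the result: pull each visited state's value toward the outcome.
        for state, p in visited:
            target = 1.0 if p == winner else 0.0
            V[state] += lr * (target - V[state])

random.seed(0)
self_play()
```

After training, the table reflects the game’s known structure: positions that are multiples of 3 are losses for the player to move (V near 0), and all others are wins (V near 1) — knowledge the agent was never given, only discovered through self-play.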