In 2013, DeepMind Technologies, then a little-known company, published a groundbreaking paper showing how a neural network could learn to play 1980s video games the way humans do—by looking at the screen. These networks then went on to thrash the best human players.

A few months later, Google bought the company for $400 million. DeepMind has since gone on to apply deep learning in a range of situations, most famously to outperform humans in the ancient game of Go.

Why do humans pick up new games so much faster than machines? Today we get an answer of sorts, thanks to the work of Rachit Dubey and colleagues at the University of California, Berkeley. They have studied the way humans interact with video games to find out what kind of prior knowledge we rely on to make sense of them.

It turns out that humans use a wealth of background knowledge whenever we take on a new game. And this makes the games significantly easier to play.

But faced with games that make no use of this knowledge, humans flounder, whereas machines plod along in exactly the same way. Take a look at the computer game shown above on the left (the original game).
