Since the early days of computing, there has been a persistent idea that artificial intelligence would one day change the world. Pop culture and futurist thinkers have depicted this future for decades, yet the technology itself remained elusive.

Incremental progress was mostly confined to fringe academic circles and corporate research departments regarded as expendable. That all changed five years ago.

With the advent of modern deep learning, we’ve seen a real glimpse of this technology in action: Computers are beginning to see, hear, and talk. For the first time, AI feels tangible and within reach.

AI development today is centered on deep learning algorithms such as convolutional networks, recurrent networks, generative adversarial networks, reinforcement learning, capsule nets, and others. The one thing all of these have in common is that they demand an enormous amount of computing power.
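To make "enormous" concrete, here is a back-of-the-envelope sketch in plain Python. The layer dimensions are illustrative assumptions (loosely modeled on a mid-network convolution in a VGG-style model), not figures from any specific system in this article:

```python
# Rough estimate of the arithmetic one convolutional layer performs on
# a single image. The dimensions below are illustrative assumptions.

def conv_macs(h_out, w_out, c_in, c_out, k):
    """Multiply-accumulate ops for a k x k convolution that maps a
    c_in-channel input to an h_out x w_out x c_out output."""
    return h_out * w_out * c_out * (k * k * c_in)

# Hypothetical mid-network layer: 56x56 output, 256 -> 256 channels, 3x3 kernel.
per_layer = conv_macs(56, 56, 256, 256, 3)
print(f"~{per_layer / 1e9:.1f} billion multiply-accumulates per image, per layer")

# A deep network stacks dozens of such layers, the backward pass roughly
# doubles the cost, and training visits millions of images over many
# epochs -- hence the appetite for specialized hardware.
```

A single layer on a single image already runs to billions of operations, which is why the field converged on accelerators rather than general-purpose CPUs.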

To make real progress toward generalizing this kind of intelligence, we need to overhaul the computational systems that fuel this technology. The 2009 demonstration that GPUs, originally built for graphics, could be repurposed as general-purpose compute devices for training neural networks is often viewed as a critical juncture that helped usher in the Cambrian explosion around deep learning.

Read more from venturebeat.com…
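As a rough illustration of what treating the GPU as a compute device looks like in practice, here is a minimal sketch. It assumes PyTorch is installed, and the 4096 x 4096 matrix size is an arbitrary example; it dispatches a dense matrix multiply, the core workload of deep learning, to a GPU when one is available:

```python
# Minimal sketch of using the GPU as a general compute device: the same
# matrix multiply runs on a GPU when one is present, on the CPU otherwise.
# Assumes PyTorch is installed; the matrix size is an arbitrary example.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

_ = a @ b                       # warm-up run so timing excludes one-time setup
if device == "cuda":
    torch.cuda.synchronize()

start = time.perf_counter()
c = a @ b                       # dense matmul: the core workload of deep learning
if device == "cuda":
    torch.cuda.synchronize()    # GPU kernels run asynchronously; wait for completion
elapsed = time.perf_counter() - start
print(f"{device}: 4096 x 4096 matmul in {elapsed * 1000:.2f} ms")
```

On typical hardware the GPU path finishes this workload one to two orders of magnitude faster than the CPU path, which is the essence of why GPUs unlocked practical deep learning.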
