Soon enough, most of your home electronics could be embedded with the artificial intelligence needed to know, say, when to turn on the air-conditioning, or even how chunky you enjoy your morning smoothie. All it would take is a special piece of hardware that lets devices run neural networks — algorithms loosely modeled on the human brain — locally.

“Neural networks are often implemented in a digital fashion,” Avishek Biswas, a researcher at the Massachusetts Institute of Technology, tells Inverse. “But in the end we want to implement this in actual hardware, instead of just always running simulations on CPUs or GPUs, for broader applications.” Biswas and his colleagues at MIT have done just that by developing a chip that can run machine-learning algorithms without needing to send data to supercomputers in the cloud.

In a paper Biswas presented this week at the International Solid-State Circuits Conference in San Francisco, he explained how he developed a prototype chip that can increase the speed of machine-learning computations by up to 700 percent while reducing power consumption by 93 to 96 percent. He said an updated version with more computational capabilities could be ready in a few years.

The best neural nets in the game are housed inside powerful computers unlike any that most people ever see. Devices like the Amazon Echo beam data to these supercomputers over the cloud, the neural net does its computations, and the output is sent back to the device.

This process is slow, presents a security risk, and creates bandwidth traffic, says Biswas. “Depending on the cloud creates a latency issue that could affect something that requires fast decision-making,” he explains. Read more from inverse.com…
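The round trip Biswas describes can be illustrated with a toy simulation. This is purely a sketch: the functions, weights, and delay values below are hypothetical stand-ins, not MIT's actual pipeline, and the "neural net" is reduced to a single dot product for clarity.

```python
import time

def run_neural_net(features):
    # Stand-in for a tiny neural net: one dot product with fixed weights.
    weights = [0.2, -0.5, 0.8]
    return sum(w * x for w, x in zip(weights, features))

def cloud_inference(features, network_delay_s=0.05):
    # Simulate the cloud round trip: upload, remote compute, download.
    time.sleep(network_delay_s)   # upload latency
    result = run_neural_net(features)
    time.sleep(network_delay_s)   # download latency
    return result

def local_inference(features):
    # On-device inference: same computation, no network hops.
    return run_neural_net(features)

features = [1.0, 2.0, 3.0]

start = time.perf_counter()
cloud_out = cloud_inference(features)
cloud_time = time.perf_counter() - start

start = time.perf_counter()
local_out = local_inference(features)
local_time = time.perf_counter() - start

print(local_out == cloud_out)   # identical answer
print(local_time < cloud_time)  # but no round-trip latency
```

The point of a chip like the one Biswas presented is to make `local_inference` fast and power-efficient enough that the cloud hop, and the latency and privacy exposure that come with it, can be skipped entirely.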

thumbnail courtesy of inverse.com