John Timmer – 2/24/2018, 4:00 PM

The idea behind using a neural network for image recognition is that you don’t have to tell it what to look for in an image. You don’t even need to care about what it looks for.

With enough training, the neural network should be able to pick out details that allow it to make accurate identifications. For tasks like figuring out whether there’s a cat in an image, neural networks don’t provide much, if any, advantage over the actual neurons in our visual system.

But where they can potentially shine is in cases where we don’t know what to look for. An image may carry subtle information that a human doesn’t know how to read but that a neural network, with the appropriate training, could pick up on.
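The core idea here can be illustrated with a toy sketch (not the paper’s actual method, which used deep networks on real retinal scans): train a simple logistic-regression “network” on synthetic images where a single, unannounced pixel carries a subtle signal. The model is never told which pixel matters; gradient descent discovers the informative feature from the data alone. The pixel index and data sizes below are arbitrary choices for the demonstration.

```python
# Toy sketch: a model finds a predictive feature we never pointed it at.
import numpy as np

rng = np.random.default_rng(0)
n, pixels, signal_pixel = 2000, 16, 5  # 4x4 "images", flattened

# Random images; the label depends (noisily) on one hidden pixel.
X = rng.normal(size=(n, pixels))
y = (X[:, signal_pixel] + 0.3 * rng.normal(size=n) > 0).astype(float)

# Logistic regression trained by plain gradient descent on log-loss.
w, b, lr = np.zeros(pixels), 0.0, 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    w -= lr * (X.T @ (p - y)) / n           # gradient w.r.t. weights
    b -= lr * np.mean(p - y)                # gradient w.r.t. bias

# The largest learned weight lands on the informative pixel.
learned = int(np.argmax(np.abs(w)))
preds = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
accuracy = float(np.mean(preds == y))
print(learned, round(accuracy, 2))
```

The same principle scales up: a deep network trained on labeled retinal images can latch onto vascular details that correlate with cardiovascular risk, without anyone specifying those details in advance.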

Now, researchers have done just that, getting a deep-learning algorithm to identify risks of heart disease using an image of a patient’s retina. The idea isn’t quite as nuts as it might sound.

The retina has a rich collection of blood vessels, and it’s possible to detect issues in those vessels that also affect the circulatory system as a whole; things like high cholesterol or elevated blood pressure leave a mark on the eye. So a research team consisting of people at Google and Verily Life Sciences decided to see just how well a deep-learning network could do at figuring those out from retinal images. Read more from arstechnica.com…

thumbnail courtesy of arstechnica.com