We are back with some highlights from the second day of NIPS. A lot of fascinating research was showcased today, and we are excited to share some of our favorites with you. If you missed them, feel free to check our Day 1 Highlights!

One of the most memorable sessions of the first two days was today's invited talk by Kate Crawford on bias in Machine Learning. We recommend taking a look at the feature image of this post, which represents modern Machine Learning datasets as an attempt at creating a taxonomy of the world. Since we already covered a talk on this topic yesterday, we'll give the spotlight to some other topics below.

Capturing uncertainty

Model uncertainty from "What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?"

For many Deep Learning applications, receiving a single point prediction from our model is insufficient. When making high-stakes decisions, such as diagnosing patients or steering a self-driving car, we would like a measure of how confident the model is in its predictions. Unfortunately, most Deep Learning models aren't good at quantifying the certainty of their predictions. Recently, the field of Bayesian Deep Learning has been growing, in part because it can address these questions by measuring the variance of a model's predictions. Read more here…
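One popular and lightweight way to get such a variance estimate is Monte Carlo dropout: keep dropout active at test time, run several stochastic forward passes, and treat the spread of the predictions as a proxy for model uncertainty. The sketch below illustrates the idea on a hypothetical toy regression network in NumPy; the weights, layer sizes, and `mc_dropout_predict` helper are our own illustrative assumptions, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy weights for a one-hidden-layer regression network
# (stand-ins for a trained model; not from the paper).
W1 = rng.normal(size=(1, 16))
W2 = rng.normal(size=(16, 1))

def mc_dropout_predict(x, n_samples=100, p_drop=0.5):
    """Run n_samples stochastic forward passes with dropout kept ON
    at test time (Monte Carlo dropout). The mean of the passes is the
    prediction; the variance serves as a rough uncertainty estimate."""
    preds = []
    for _ in range(n_samples):
        h = np.maximum(x @ W1, 0.0)           # ReLU hidden layer
        mask = rng.random(h.shape) > p_drop   # sample a fresh dropout mask
        h = h * mask / (1.0 - p_drop)         # inverted-dropout scaling
        preds.append(h @ W2)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.var(axis=0)

x = np.array([[0.5]])
mean, var = mc_dropout_predict(x)
```

Inputs the network has rarely seen tend to produce larger variance across the stochastic passes, which is exactly the kind of "I'm not sure" signal a diagnosis or driving system needs before acting on a prediction.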

thumbnail courtesy of insightdatascience.com