“We demonstrate the use of a simple, plug-and-play neural network module for solving tasks that demand complex relational reasoning. This module, called a Relation Network, can receive unstructured inputs – say, images or stories – and implicitly reason about the relations contained within.” Read more on the blog.

Quantifying predictive uncertainty in neural networks (NNs) is a challenging and as yet unsolved problem. Most work focuses on Bayesian solutions; however, these are computationally intensive and require significant modifications to the training pipeline. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelisable, requires very little hyperparameter tuning, and yields high-quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates that are as good as or better than those of approximate Bayesian NNs.

We revisit the structure of value approximators for RL, based on the observation that typical approximators change smoothly as a function of their input, whereas the true value changes abruptly when a reward arrives. Our proposed method is designed to fit such asymmetric discontinuities by interpolating with a projected value estimate. Read more here…
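
For readers who want the shape of these ideas in code, here are three toy sketches in plain numpy; shapes, parameters, and helper names are illustrative assumptions, not the published architectures. The Relation Network paper's core is the composite function RN(O) = f_φ(Σ_{i,j} g_θ(o_i, o_j)): a relation function g_θ scores every pair of objects and an aggregator f_φ consumes the sum. The sketch below shows only that structure, with single-layer stand-ins for the paper's MLPs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights standing in for the paper's MLPs; the 4-d objects and 8-d
# relation features are illustrative assumptions, not the published sizes.
W_g = rng.normal(size=(8, 8))   # g_theta: concatenated object pair (4+4) -> 8
W_f = rng.normal(size=(8, 2))   # f_phi: aggregated relation features (8) -> 2 logits

def g_theta(o_i, o_j):
    # Score one ordered pair of objects (a single ReLU layer for brevity).
    return np.maximum(np.concatenate([o_i, o_j]) @ W_g, 0.0)

def relation_network(objects):
    # Core structure: RN(O) = f_phi( sum over all pairs of g_theta(o_i, o_j) ).
    pair_sum = sum(g_theta(o_i, o_j) for o_i in objects for o_j in objects)
    return pair_sum @ W_f

# "Objects" could be CNN feature-map cells (images) or encoded sentences
# (stories); here they are five random 4-d vectors.
objects = rng.normal(size=(5, 4))
print(relation_network(objects))
```

Because the sum runs over all pairs, the module is permutation-invariant in the objects, which is what lets it plug into unstructured inputs.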
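The uncertainty blurb never names its method, but one simple, readily parallelisable alternative to Bayesian NNs of exactly this flavour is an ensemble: train M networks independently and pool their predictions. Assuming each member outputs a Gaussian mean and variance for a regression target, the standard uniform-mixture identity pools them like this:

```python
import numpy as np

def ensemble_predict(mus, sigma2s):
    # Pool M Gaussian predictions (mu_m, sigma2_m) as a uniform mixture:
    #   mu* = mean(mu_m);  sigma*^2 = mean(sigma2_m + mu_m^2) - mu*^2
    # The pooled variance grows when members disagree, which is where the
    # uncertainty signal comes from.
    mus, sigma2s = np.asarray(mus), np.asarray(sigma2s)
    mu_star = mus.mean(axis=0)
    sigma2_star = (sigma2s + mus**2).mean(axis=0) - mu_star**2
    return mu_star, sigma2_star

# Hypothetical outputs of M=3 networks trained independently from different
# random initialisations; the training itself is embarrassingly parallel.
mu, var = ensemble_predict(mus=[1.9, 2.1, 2.6], sigma2s=[0.10, 0.12, 0.30])
print(mu, var)  # disagreement between members inflates var
```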
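For the value-approximation blurb, one plausible reading of "interpolating with a projected value estimate" is to back a projection out of the Bellman identity and gate between it and the network's direct output. The recursion and the fixed gate beta below are our assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def blended_values(direct_values, rewards, beta, gamma=0.99):
    # Bellman identity: V(s_{t-1}) = r_t + gamma * V(s_t), so a projection of
    # the current value from the previous estimate is (V_{t-1} - r_t) / gamma.
    # The gate beta interpolates between the smooth direct estimate v(s_t)
    # and that projection, which can jump exactly when a reward arrives.
    v_hat = direct_values[0]
    out = [v_hat]
    for v_t, r_t in zip(direct_values[1:], rewards[1:]):
        projected = (v_hat - r_t) / gamma
        v_hat = beta * v_t + (1.0 - beta) * projected
        out.append(v_hat)
    return np.array(out)

# A smooth approximator predicts a constant 0.5, but a reward of 1 arrives at
# t=2; the projected estimate lets the blended value drop sharply after it.
print(blended_values(direct_values=[0.5] * 5, rewards=[0, 0, 1.0, 0, 0], beta=0.5))
```

The asymmetry in the blurb is visible here: the direct estimate changes smoothly across the reward, while the projection absorbs the discontinuity.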

thumbnail courtesy of deepmind.com