Deep Learning, Practice and Trends

Nando de Freitas, Scott Reed, Oriol Vinyals

[Figure: chart comparing a regular program (purple) with neural versions, from the RobustFill paper]

This talk gave a solid overview of the current state and recent advances in Deep Learning. Convolutional Neural Networks (CNNs) and autoregressive models are starting to see ubiquitous use in production, a fast transition from research to industry. These models have taught us that introducing inductive biases such as translation invariance (CNNs) or time recurrence (Recurrent Neural Networks) can be extremely useful. We’ve also found that simple “tricks” such as residual connections or attention can lead to tremendous leaps in performance (both are sketched below), and there are good reasons to believe we will find more such “tricks”. Looking ahead, a few exciting research areas were mentioned:

- Weakly supervised domain mappings: learning to translate from one domain to another without explicit input/output pairs. Particular examples include auto-encoders and recent variants of Generative Adversarial Networks (GANs), such as CycleGAN or DiscoGAN; see the cycle-consistency sketch below.

- Deep Learning for graphs: a lot of input data, such as friend networks, product recommendations, or representations of molecules in chemistry, can naturally be thought of in graph form. As a data type, graphs are a generalization of sequences, which makes them widely applicable but leads to problems similar to those of sequence data, such as inefficient batching; see the message-passing sketch below. Read more here…
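
To make the residual “trick” concrete, here is a minimal PyTorch sketch of a residual block. The two-layer body is an arbitrary illustrative choice, not the architecture from any particular paper; the essential part is the single addition in `forward`:

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Computes y = x + F(x). The identity skip connection gives
    gradients a direct path through the network, which is what makes
    very deep stacks of these blocks trainable."""

    def __init__(self, dim: int):
        super().__init__()
        # The two-layer body is an arbitrary choice for illustration.
        self.body = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x):
        return x + self.body(x)  # the residual "trick" is this one addition
```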
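The core of attention is similarly compact. This is a generic scaled dot-product formulation; the tensor shapes and the absence of masking or multiple heads are simplifying assumptions:

```python
import math
import torch

def attention(q, k, v):
    """Scaled dot-product attention: each query produces a softmax
    distribution over the keys and returns the corresponding weighted
    sum of values. Shapes assumed: q (n_q, d), k (n_k, d), v (n_k, d_v)."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    weights = torch.softmax(scores, dim=-1)  # how much each query attends to each key
    return weights @ v
```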
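For the weakly supervised domain mappings above, here is a sketch of the cycle-consistency idea behind CycleGAN and DiscoGAN. `g_xy` and `g_yx` are hypothetical generator networks supplied by the caller, and the adversarial terms of the full training objective are omitted:

```python
import torch.nn.functional as F

def cycle_consistency_loss(g_xy, g_yx, x, y):
    """g_xy and g_yx are hypothetical generators mapping domain X -> Y
    and Y -> X. With no paired examples available, we instead require
    that translating a sample to the other domain and back reproduces it."""
    loss_x = F.l1_loss(g_yx(g_xy(x)), x)  # x -> Y -> X should recover x
    loss_y = F.l1_loss(g_xy(g_yx(y)), y)  # y -> X -> Y should recover y
    return loss_x + loss_y
```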
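And for Deep Learning on graphs, a bare-bones message-passing layer, assuming a dense adjacency matrix for clarity. Practical implementations use sparse operations, in part because of the batching difficulties noted above when graphs in a batch have different sizes:

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One round of neighborhood aggregation: each node's new feature
    is a learned transform of the sum of its neighbors' features."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj, h):
        # adj: (n, n) dense adjacency matrix; h: (n, in_dim) node features.
        # Adding self-loops (adj + I) is a common variant.
        messages = adj @ h  # aggregate features from neighboring nodes
        return torch.relu(self.linear(messages))
```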

thumbnail courtesy of insightdatascience.com