The first UWIN seminar of 2019 features a talk by visiting speaker Guillaume Lajoie from Université de Montréal’s Department of Mathematics and Statistics. The talk is titled “Successful learning in artificial networks thanks to individual neuron failure”.
Guillaume is an Assistant Professor in the Department of Mathematics and Statistics at the Université de Montréal, and is also an Associate Member of Mila, the Quebec Institute for Learning Algorithms. We are especially excited to welcome Guillaume back to UW as he was previously a UWIN postdoctoral fellow!
The seminar is on Wednesday, January 9, 2019 at 3:30 in Husky Union Building (HUB) 337. Refreshments will be served prior to the talk.
This talk will outline work in progress.

Not unlike the brain, artificial neural networks can learn complex computations by extracting information from several examples of a task. Typically, this is achieved by adjusting the parameters of the network to minimize a loss function via gradient descent methods. It is known that introducing artificial failure of single neurons during a deep network’s training, a procedure known as dropout, helps promote robustness. While dropout methods and variants thereof have been successfully employed in a variety of contexts, their effect is not entirely understood, and they rely on stochastic processes to select which units to drop. Here, I will discuss two methods designed to purposely select which units would best benefit learning if dropped or temporarily modified, based on their tuning, activation, and the current network state: the first method aims to improve generalization in deep networks, and the second combats exploding and vanishing gradients in recurrent networks when learning long-range temporal relations. While gradient descent methods for artificial networks are not biologically plausible, I will discuss how relationships between neural tuning and failure during training can inform exploration of learning mechanisms in the brain.
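For readers unfamiliar with dropout, here is a minimal sketch of the standard *stochastic* version the abstract contrasts against (not the speaker's targeted unit-selection methods). It uses the common "inverted dropout" formulation; the function name and rescaling convention are illustrative assumptions, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, drop_prob=0.5, train=True):
    """Standard (inverted) dropout: randomly zero units during training.

    Each unit is kept with probability (1 - drop_prob); surviving
    activations are rescaled by 1 / (1 - drop_prob) so their expected
    value is unchanged, and no scaling is needed at test time.
    This stochastic selection is what the talk's methods replace with
    a purposeful, state-dependent choice of units.
    """
    if not train or drop_prob == 0.0:
        return activations
    keep_prob = 1.0 - drop_prob
    mask = rng.random(activations.shape) < keep_prob  # random unit "failures"
    return activations * mask / keep_prob

h = np.ones(1000)                       # one layer of unit activations
h_dropped = dropout(h, drop_prob=0.5)   # ~half zeroed, survivors scaled to 2.0
```

Because the mask is drawn independently of the units' tuning or the network state, every unit is equally likely to be dropped; the methods in the talk instead select units based on those quantities.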