David Kristjanson Duvenaud
I'll be joining the faculty of the University of Toronto this fall, in both CS and statistics.
In the meantime, I'm finishing a postdoc in the Harvard Intelligent Probabilistic Systems group, working with Prof. Ryan Adams on model-based optimization, synthetic chemistry, Bayesian numerics, and neural networks. I did my Ph.D. at the University of Cambridge, where my advisors were Carl Rasmussen and Zoubin Ghahramani. My M.Sc. advisor was Kevin Murphy at the University of British Columbia, where I worked mostly on machine vision.
I spent a summer at the Max Planck Institute for Intelligent Systems, and the two summers before that at Google Research, doing machine vision.
I co-founded Invenia, an energy forecasting and trading firm, where I still consult.
Email: duvenaud@cs.toronto.edu

Preprints
Composing graphical models with neural networks for structured representations and fast inference
We propose a general modeling and inference framework that composes probabilistic graphical models with deep learning methods and combines their respective strengths. Our model family augments graphical structure in latent variables with neural network observation models. For inference we extend variational autoencoders to use graphical model approximating distributions, paired with recognition networks that output conjugate potentials. All components of these models are learned simultaneously with a single objective, giving a scalable algorithm that leverages stochastic variational inference, natural gradients, graphical model message passing, and the reparameterization trick. We illustrate this framework with several example models and an application to mouse behavioral phenotyping. Matthew Johnson, David Duvenaud, Alex Wiltschko, Bob Datta, Ryan P. Adams. Submitted. preprint  code  bibtex
Black-box stochastic variational inference in five lines of Python
We emphasize how easy it is to construct scalable inference methods using only automatic differentiation. We present code that computes stochastic gradients of the evidence lower bound for any differentiable posterior. For example, we do stochastic variational inference in a deep Bayesian neural network. David Duvenaud, Ryan P. Adams. NIPS Workshop on Black-box Learning and Inference, 2015. preprint  code  bibtex  video
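The core estimator is easy to sketch in plain NumPy. The function below is illustrative, not the paper's code: it uses the reparameterization trick for a diagonal-Gaussian posterior, and takes the gradient of the log-posterior as an input rather than deriving it by automatic differentiation as the paper does.

```python
import numpy as np

def elbo_grad(grad_logprob, mu, log_sigma, n_samples=100000, rng=None):
    """Pathwise (reparameterization-trick) Monte Carlo estimate of the
    gradient of the ELBO for a Gaussian variational posterior N(mu, sigma^2).
    `grad_logprob` is the gradient of the (unnormalized) log-posterior."""
    rng = np.random.default_rng(0) if rng is None else rng
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal((n_samples,) + np.shape(mu))
    z = mu + sigma * eps                     # reparameterized samples
    g = grad_logprob(z)
    grad_mu = g.mean(axis=0)
    # The Gaussian entropy term contributes +1 per dimension w.r.t. log sigma.
    grad_log_sigma = (g * eps * sigma).mean(axis=0) + 1.0
    return grad_mu, grad_log_sigma

# Example: standard-normal target, whose optimal posterior is mu=0, sigma=1,
# so both gradient components should be near zero there.
g_mu, g_ls = elbo_grad(lambda z: -z, mu=0.0, log_sigma=0.0)
```

Because the estimator is just an average of differentiable terms, an autodiff tool can produce these gradients directly from the ELBO, which is what makes the five-line version possible.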
Autograd: Reverse-mode differentiation of native Python
Autograd automatically differentiates native Python and NumPy code. It can handle loops, ifs, recursion and closures, and it can even take derivatives of its own derivatives. It uses reverse-mode differentiation (a.k.a. backpropagation), which means it's efficient for gradient-based optimization. Check out the tutorial and the examples directory. Dougal Maclaurin, David Duvenaud, Matthew Johnson. code  bibtex
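Autograd's machinery is much more general, but the essence of reverse-mode differentiation fits in a short sketch. This toy scalar implementation (illustrative only, not Autograd's actual code) records the computation graph as values are computed, then accumulates gradients in reverse topological order.

```python
class Var:
    """A toy scalar that records its computation graph for reverse mode."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # Local derivatives of a product: d(uv)/du = v, d(uv)/dv = u.
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def backward(out):
    """Accumulate d(out)/d(node) for every node, in reverse topological order."""
    order, seen = [], set()
    def topo(node):
        if id(node) not in seen:
            seen.add(id(node))
            for parent, _ in node.parents:
                topo(parent)
            order.append(node)
    topo(out)
    out.grad = 1.0
    for node in reversed(order):
        for parent, local in node.parents:
            parent.grad += local * node.grad

x = Var(3.0)
y = x * x + x      # y = x^2 + x, so dy/dx = 2x + 1 = 7 at x = 3
backward(y)
```

One backward pass yields gradients with respect to every input at once, which is why reverse mode is the right tool for optimizing functions with many parameters.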
Selected papers
Early Stopping as Nonparametric Variational Inference
Stochastic gradient descent samples from a nonparametric distribution, implicitly defined by the transformation of an initial distribution by a sequence of optimization updates. We track the change in entropy during optimization to get a scalable approximate lower bound on the marginal likelihood. This Bayesian interpretation of SGD gives a theoretical foundation for popular tricks such as early stopping and ensembling. We evaluate our marginal likelihood estimator on neural network models. David Duvenaud, Dougal Maclaurin, Ryan P. Adams. To appear in Artificial Intelligence and Statistics, 2016. preprint  slides  code  bibtex
Convolutional Networks on Graphs for Learning Molecular Fingerprints
We introduce a convolutional neural network that operates directly on graphs, allowing end-to-end learning of the entire feature pipeline. This architecture generalizes standard molecular fingerprints. These data-driven features are more interpretable, and have better predictive performance on a variety of tasks. David Duvenaud, Dougal Maclaurin, Jorge Aguilera-Iparraguirre, Rafa Gómez-Bombarelli, Timothy Hirzel, Alán Aspuru-Guzik, Ryan P. Adams. To appear in Neural Information Processing Systems, 2015. pdf  slides  code  bibtex
Gradient-based Hyperparameter Optimization through Reversible Learning
Tuning hyperparameters of learning algorithms is hard because gradients are usually unavailable. We compute exact gradients of cross-validation performance with respect to all hyperparameters by chaining derivatives backwards through the entire training procedure. This lets us optimize thousands of hyperparameters, including step-size and momentum schedules, weight initialization distributions, richly parameterized regularization schemes, and neural net architectures. We compute hyperparameter gradients by exactly reversing the dynamics of stochastic gradient descent with momentum. Dougal Maclaurin, David Duvenaud, Ryan P. Adams. International Conference on Machine Learning, 2015. pdf  slides  code  bibtex
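The paper computes hypergradients exactly by reversing SGD with momentum; as a simpler illustration of the underlying idea, the sketch below treats an unrolled training run as an ordinary function of its step size and optimizes that step size by gradient descent. The quadratic loss and finite-difference gradient are illustrative stand-ins, not the paper's method.

```python
def train(step_size, n_steps=20):
    """Plain SGD on the toy loss 0.5 * w^2, starting from w = 1.
    Returns the final loss, viewed as a function of the step size."""
    w = 1.0
    for _ in range(n_steps):
        w -= step_size * w          # gradient of 0.5 * w^2 is w
    return 0.5 * w ** 2

def hypergrad(step_size, eps=1e-5):
    """Finite-difference gradient of the final loss w.r.t. the step size
    (the paper computes this exactly by reversing the training dynamics)."""
    return (train(step_size + eps) - train(step_size - eps)) / (2 * eps)

# Gradient descent on the hyperparameter itself:
alpha = 0.05
for _ in range(100):
    alpha -= 0.5 * hypergrad(alpha)
```

Exact reversal matters at scale: finite differences need one extra training run per hyperparameter, while reverse-mode differentiation through training gets all of them from a single backward pass.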
Probabilistic ODE Solvers with RungeKutta Means
We show that some standard differential equation solvers are equivalent to Gaussian process predictive means, giving them a natural way to handle uncertainty. This work is part of the larger probabilistic numerics research agenda, which interprets numerical algorithms as inference procedures so they can be better understood and extended. Michael Schober, David Duvenaud, Philipp Hennig. Neural Information Processing Systems, 2014. Oral presentation. pdf  slides  bibtex
PhD Thesis: Automatic Model Construction with Gaussian Processes
The individual chapters summarize it pretty well:
Automatic Construction and Natural-Language Description of Nonparametric Regression Models
We wrote a program which automatically writes reports summarizing automatically constructed models. A prototype for the automatic statistician project. James Robert Lloyd, David Duvenaud, Roger Grosse, Joshua B. Tenenbaum, Zoubin Ghahramani. Association for the Advancement of Artificial Intelligence (AAAI), 2014. pdf  code  slides  example report (airline)  example report (solar)  more examples  bibtex
Avoiding Pathologies in Very Deep Networks
To help suggest better deep neural network architectures, we analyze the related problem of constructing useful priors on compositions of functions. We study deep Gaussian processes, a type of infinitely-wide, deep neural net. We also examine deep covariance functions, obtained by composing infinitely many feature transforms. Finally, we characterize the model class you get if you do dropout on Gaussian processes. David Duvenaud, Oren Rippel, Ryan P. Adams, Zoubin Ghahramani. Artificial Intelligence and Statistics, 2014. pdf  code  slides  video of 50-layer warping  bibtex
Raiders of the Lost Architecture: Kernels for Bayesian Optimization in Conditional Parameter Spaces
To optimize the overall architecture of a neural network along with its hyperparameters, we must be able to relate the performance of nets having differing numbers of hyperparameters. To address this problem, we define a new kernel for conditional parameter spaces that explicitly includes information about which parameters are relevant in a given structure. Kevin Swersky, David Duvenaud, Jasper Snoek, Frank Hutter, Michael Osborne. NIPS workshop on Bayesian optimization, 2013. pdf  code  bibtex
Warped Mixtures for Nonparametric Cluster Shapes
If you fit a mixture of Gaussians to a single cluster that is curved or heavy-tailed, your model will report that the data contains many clusters! To fix this problem, we simply warp a latent mixture of Gaussians to produce nonparametric cluster shapes. The low-dimensional latent mixture model summarizes the properties of the high-dimensional clusters (or density manifolds) describing the data. Tomoharu Iwata, David Duvenaud, Zoubin Ghahramani. Uncertainty in Artificial Intelligence, 2013. pdf  code  slides  talk  bibtex
Structure Discovery in Nonparametric Regression through Compositional Kernel Search
How could an AI do statistics? To search through an open-ended class of structured, nonparametric regression models, we introduce a simple grammar which specifies composite kernels. These structured models often allow an interpretable decomposition of the function being modeled, as well as long-range extrapolation. Many common regression methods are special cases of this large family of models. David Duvenaud, James Robert Lloyd, Roger Grosse, Joshua B. Tenenbaum, Zoubin Ghahramani. International Conference on Machine Learning, 2013. pdf  code  short slides  long slides  bibtex
Active Learning of Model Evidence using Bayesian Quadrature
Instead of the usual Monte Carlo methods for computing integrals of likelihood functions, we construct a model of the likelihood function and infer its integral conditioned on a set of evaluations. This allows us to evaluate the likelihood wherever is most informative, instead of running a Markov chain. The upshot is that we need many fewer samples to estimate integrals. Michael Osborne, David Duvenaud, Roman Garnett, Carl Rasmussen, Stephen Roberts, Zoubin Ghahramani. Neural Information Processing Systems, 2012. pdf  code  slides  related talk  bibtex
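A one-dimensional sketch of the idea, under assumptions chosen so the kernel integrals have a closed form: a squared-exponential kernel and a Gaussian integration measure. The function and its defaults are illustrative, not the paper's code.

```python
import numpy as np

def bayes_quad(f, nodes, ell=1.0, sigma=1.0, jitter=1e-8):
    """Bayesian quadrature estimate of Z = integral of f(x) N(x; 0, sigma^2) dx,
    with a GP prior on f using a squared-exponential kernel of lengthscale ell."""
    x = np.asarray(nodes)
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell ** 2)
    K += jitter * np.eye(len(x))      # stabilize the solve
    # Closed-form integral of the kernel against the Gaussian measure:
    s2 = ell ** 2 + sigma ** 2
    z = ell / np.sqrt(s2) * np.exp(-0.5 * x ** 2 / s2)
    # Posterior mean of the integral: z^T K^{-1} f(x).
    return z @ np.linalg.solve(K, f(x))

# E[x^2] under a standard normal is exactly 1:
Z = bayes_quad(lambda x: x ** 2, np.linspace(-4, 4, 30))
```

Note that the estimate is a fixed linear combination of the function evaluations, so the evaluation points can be chosen actively to shrink the posterior variance of the integral.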
OptimallyWeighted Herding is Bayesian Quadrature
We prove several connections between a numerical integration method that minimizes a worst-case bound (herding), and a model-based way of estimating integrals (Bayesian quadrature). It turns out that both optimize the same criterion, and that Bayesian quadrature does this optimally. The talk slides also contain equivalences between Bayesian quadrature and kernel two-sample tests, the Hilbert-Schmidt Independence Criterion, and the determinantal point process MAP objective. Ferenc Huszár and David Duvenaud. Uncertainty in Artificial Intelligence, 2012. Oral presentation. pdf  code  slides  talk  bibtex
Additive Gaussian Processes
When functions have additive structure, we can extrapolate further than with standard Gaussian process models. We show how to efficiently integrate over exponentially many ways of modeling a function as a sum of low-dimensional functions. David Duvenaud, Hannes Nickisch, Carl Rasmussen. Neural Information Processing Systems, 2011. pdf  code  slides  bibtex
Multi-scale Conditional Random Fields for Semi-supervised Labeling and Classification
How can we take advantage of images labeled only by what objects they contain? By combining information across different scales, we use image-level labels (such as …). Canadian Conference on Computer and Robot Vision, 2011. pdf  code  slides  bibtex
Causal Learning without DAGs
When predicting the results of new actions, it's sometimes better to simply average over flexible conditional models than to attempt to identify a single causal structure as embodied by a directed acyclic graph (DAG). David Duvenaud, Daniel Eaton, Kevin Murphy, Mark Schmidt. Journal of Machine Learning Research, W&CP, 2010. pdf  code  slides  poster  bibtex
Videos
Visualizing draws from a deep Gaussian process
By viewing deep networks as a prior on functions, we can ask which architectures give rise to which sorts of mappings. Here we visualize a mapping drawn from a deep Gaussian process, using the input-connected architecture described in this paper. mapping video  density video  code

Machine Learning to Drive
Andrew McHutchon and Carl Rasmussen are working on a model-based reinforcement learning system that can learn from small amounts of experience. For fun, we hooked up a 3D physics engine to the learning system, and tried to get it to learn to drive a simple two-wheeled car in a certain direction, starting with no knowledge of the dynamics. It only took about 10 seconds of practice to solve the problem, although not in real-time. Details are in the video description. by Andrew McHutchon and David Duvenaud. youtube  related paper
HarlMCMC Shake
Two short animations illustrate the differences between a Metropolis-Hastings (MH) sampler and a Hamiltonian Monte Carlo (HMC) sampler, to the tune of the Harlem shake. This inspired several follow-up videos: benchmark your MCMC algorithm on these distributions! by Tamara Broderick and David Duvenaud. youtube  code
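For reference, the Metropolis-Hastings half of the pair fits in a few lines. This is a generic random-walk MH sketch, not the code behind the animations.

```python
import numpy as np

def metropolis_hastings(logp, x0, n_steps=20000, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings sampler targeting exp(logp)."""
    rng = np.random.default_rng(seed)
    x, lp, samples = x0, logp(x0), []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal()   # symmetric proposal
        lp_prop = logp(prop)
        # Accept with probability min(1, p(prop) / p(x)):
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return np.array(samples)

# Sample a standard normal (log-density up to a constant is -x^2 / 2):
s = metropolis_hastings(lambda x: -0.5 * x ** 2, x0=0.0)
```

HMC differs by proposing moves via simulated Hamiltonian dynamics using the gradient of `logp`, which is what gives it the gliding trajectories seen in the video, versus MH's diffusive shuffle.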
Evolution of Locomotion
A fun project from undergrad: using the genetic algorithm (a terrible algorithm!) to learn locomotion strategies. The plan was for the population to learn to walk, but instead they evolved falling, rolling and shaking strategies. Eventually they exploited numerical problems in the physics engine to achieve arbitrarily high fitness, without ever having learned to walk! youtube  
Misc
Kernel Cookbook
Have you ever wondered which kernel to use for Gaussian process regression? This tutorial goes through the basic properties of functions that you can express by choosing or combining kernels, along with lots of examples. html  
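In the spirit of the tutorial, here is a small sketch of how combining kernels combines the properties of the functions they describe. The kernel forms are the standard squared-exponential and periodic kernels; the function names are illustrative.

```python
import numpy as np

def se(x, y, ell=1.0):
    """Squared-exponential kernel: smooth functions."""
    return np.exp(-0.5 * (x - y) ** 2 / ell ** 2)

def periodic(x, y, period=1.0, ell=1.0):
    """Periodic kernel: functions that repeat exactly."""
    return np.exp(-2.0 * np.sin(np.pi * np.abs(x - y) / period) ** 2 / ell ** 2)

def locally_periodic(x, y):
    # Product of kernels: repeating structure whose shape can drift slowly.
    return se(x, y, ell=5.0) * periodic(x, y)

def smooth_plus_periodic(x, y):
    # Sum of kernels: a smooth trend plus an exactly repeating component.
    return se(x, y, ell=5.0) + periodic(x, y)
```

Sums and products of kernels are themselves valid kernels, which is what makes this compositional style (and the grammar-based kernel search above) possible.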
Talks
Fast Random Feature Expansions
The Johnson-Lindenstrauss lemma states that you can randomly project a collection of data points into a lower-dimensional space while mostly preserving pairwise distances. Recent developments have gone even further: nonlinear randomized projections can be used to approximate kernel machines and scale them to datasets with millions of features and samples. In this talk we explore the theoretical aspects of the random projection method, and demonstrate its effectiveness on nonlinear regression problems. We also motivate and describe the recent Fastfood method. With David Lopez-Paz, November 2013. slides  video  code
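A sketch of the nonlinear version of this idea, random Fourier features, which approximate an RBF kernel with a finite random projection. (This is the basic construction, not the Fastfood method, which replaces the dense Gaussian matrix with structured transforms; all names here are illustrative.)

```python
import numpy as np

def random_fourier_features(X, n_features=2000, lengthscale=1.0, seed=0):
    """Map data so that dot products of the features approximate the RBF
    kernel k(x, y) = exp(-||x - y||^2 / (2 * lengthscale^2))."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density, plus random phases:
    W = rng.standard_normal((d, n_features)) / lengthscale
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.array([[0.0], [1.0]])
Z = random_fourier_features(X)
approx = Z[0] @ Z[1]     # should be close to exp(-0.5), the exact kernel value
```

With the kernel reduced to an explicit finite-dimensional feature map, a linear model on the features stands in for the kernel machine at a fraction of the cost.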
Introduction to Probabilistic Programming and Automated Inference
What is the most general class of statistical models? And how can we perform inference without designing custom algorithms? Just as automated inference algorithms make working with graphical models easy (e.g. BUGS), a new class of automated inference procedures is being developed for the more general case of Turing-complete generative models. In this tutorial, we introduce the practice of specifying generative models as programs which produce a stochastic output, and then automatically performing inference on the execution trace of that program, conditioned on the program having produced a specific output. related links  With James Robert Lloyd, March 2013. slides
Metareasoning and Bounded Rationality
Metareasoning is simply decision theory applied to choosing computations: thinking about what to think about. It formalizes an otherwise ad-hoc part of inference and decision-making, and is presumably necessary for practical automated modeling. HTML slide software thanks to Christian Steinruecken. Tea talk, February 2013. slides
Sanity Checks
When can we trust our experiments? We've collected some simple sanity checks that catch a wide class of bugs. Roger Grosse and I also wrote a short tutorial on testing MCMC code. Related: Richard Mann wrote a gripping blog post about the aftermath of finding a subtle bug in one of his landmark papers. Tea talk, April 2012. slides  paper