Scott W. Linderman

Postdoctoral Fellow, Columbia University

I'm a postdoctoral fellow in the labs of Liam Paninski and David Blei at Columbia University. I completed my Ph.D. in Computer Science at Harvard University under the supervision of Ryan Adams and Leslie Valiant. My research focuses on computational neuroscience, machine learning, and the general question of how computer science and statistics can help us decipher neural computation. I've worked on bottom-up methods for discovering interpretable structure in large-scale neural recordings as well as top-down models of biological computation.

I completed my B.S. in Electrical and Computer Engineering at Cornell University in 2008. Before graduate school, I spent three years as a software development engineer at Microsoft, where I worked on the Windows networking stack.


slinderman at seas.harvard.edu
curriculum vitae

Projects

Discovering latent network structure in point process data

We develop a probabilistic model combining mutually-exciting point processes with random graph models, and derive a fully-Bayesian, parallel inference algorithm based on an auxiliary variable formulation.
Joint work with Ryan Adams.
ICML '14 | arXiv:1402.0914 [stat.ML] | ICML Talk
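
As a concrete sketch (my notation, simplified from the paper): each of the K processes has a conditionally Poisson intensity excited by past events on its upstream neighbors in a latent graph,

    \lambda_k(t) = \lambda_k^{(0)} + \sum_{k'=1}^K \sum_{s \in S_{k'}, s < t} A_{k',k} W_{k',k} h_{k',k}(t - s)

where A is a binary adjacency matrix drawn from a random graph model, W contains nonnegative interaction weights, and h is a normalized impulse response. The auxiliary variables attribute each event either to the baseline rate or to a single preceding event; given these attributions, the conditional distributions become conjugate and the Gibbs updates parallelize.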

Building on this work, we proposed a scalable stochastic variational inference (SVI) algorithm for discovering latent networks from excitatory point process observations. A discrete-time formulation and a weak spike-and-slab approximation to the network prior are what make the resulting algorithm highly scalable.
arXiv:1507.03228 [stat.ML]
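
A minimal sketch of the discrete-time rate computation, with hypothetical names and a single shared impulse-response filter in place of the per-pair basis a full model would use:

    import numpy as np

    # Sketch: S is a (T, K) array of binned spike counts, A a binary
    # (K, K) adjacency matrix, W a nonnegative (K, K) weight matrix,
    # h a length-D impulse response over past bins, lam0 a (K,) baseline.
    def discrete_time_rates(S, A, W, h, lam0):
        T, K = S.shape
        rates = np.tile(lam0, (T, 1))              # baseline rate in every bin
        for d in range(1, len(h) + 1):
            # counts d bins in the past excite the current bin through h[d-1]
            rates[d:] += S[:-d] @ (A * W * h[d - 1])
        return rates

Conditioned on the past, the counts in each bin are independent Poisson draws with these rates, which is what lets SVI operate on minibatches of time bins.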

I've also extended these ideas to neural spike trains, which exhibit nonlinear dynamics and a mix of excitatory and inhibitory interactions. Leveraging and extending recent work on Pólya-gamma augmentation schemes, I have developed efficient Bayesian inference algorithms for discovering structured patterns of connectivity underlying spike trains.
Joint work with Ryan Adams and Jonathan Pillow.
Cosyne 2015 Abstract
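
The workhorse is the Pólya-gamma integral identity of Polson, Scott, and Windle (2013). For b > 0,

    (e^\psi)^a / (1 + e^\psi)^b = 2^{-b} e^{\kappa\psi} \int_0^\infty e^{-\omega\psi^2/2} p_{PG}(\omega \mid b, 0) \, d\omega,   \kappa = a - b/2

Negative binomial (and logistic) likelihoods have exactly this form in the activation \psi, so conditioned on the auxiliary variable \omega the likelihood is Gaussian in \psi, and Gaussian priors on the connectivity weights remain conjugate.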

Nonparametric state space models for neural spike trains

My collaborators and I have been developing scalable, nonparametric latent state space models like hidden Markov and semi-Markov models (HMMs and HSMMs), linear dynamical systems (LDSs), and switching linear dynamical systems (SLDSs) to probe the low-dimensional discrete and continuous states of neural systems. These models push the frontier of computational modeling, allowing us to express sophisticated hypotheses about neural dynamics in the form of hierarchical, probabilistic models. As the sophistication of our models and the size of our datasets grow, inference becomes a major challenge. We have developed novel augmentation schemes and fast Bayesian inference algorithms to fit these models at scale.
Joint work with Matt Johnson, Aaron Tucker, Matt Wilson, Zhe Chen, Bob Datta, and Ryan Adams.
JNM paper | Cosyne 2016 Abstract | Cosyne 2015 Abstract
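
As one example from this family, here is a hypothetical generative sketch of a Poisson-emission switching linear dynamical system (names and parameterization are mine; real implementations add priors over all of these quantities):

    import numpy as np

    # z_t: discrete state indexing the dynamics; x_t: continuous latent
    # state; y_t: observed spike counts. pi0 is the initial state
    # distribution, P the (K, K) transition matrix, As/bs/Qs the
    # per-state dynamics, C and d the log-linear emission parameters.
    def sample_slds(T, pi0, P, As, bs, Qs, C, d, seed=None):
        rng = np.random.default_rng(seed)
        K, D = len(As), As[0].shape[0]
        z = np.zeros(T, dtype=int)
        x = np.zeros((T, D))
        z[0] = rng.choice(K, p=pi0)
        x[0] = rng.multivariate_normal(bs[z[0]], Qs[z[0]])
        for t in range(1, T):
            z[t] = rng.choice(K, p=P[z[t - 1]])        # discrete switch
            mean = As[z[t]] @ x[t - 1] + bs[z[t]]      # per-state linear dynamics
            x[t] = rng.multivariate_normal(mean, Qs[z[t]])
        rates = np.exp(x @ C.T + d)                    # log-linear firing rates
        y = rng.poisson(rates)                         # count observations
        return z, x, y

The inference problem is the reverse: given only y, recover the discrete and continuous states and the dynamics, which is where the augmentation schemes and fast message-passing algorithms come in.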

Dependent multinomial models made easy: Stick breaking with the Pólya-gamma augmentation

Many practical modeling problems involve discrete multinomial data with underlying dependencies that the Dirichlet-multinomial formulation cannot capture. We leverage a logistic stick-breaking representation and recent innovations in Pólya-gamma augmentation to perform efficient, fully Bayesian inference in models with latent Gaussian structure and multinomial observations, such as correlated topic models, multinomial Gaussian processes, and multinomial linear dynamical systems.
Joint work with Matthew Johnson and Ryan Adams.
NIPS '15 | arXiv:1506.05843 [stat.ML]
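
The core of the representation fits in a few lines. Here is a sketch of the logistic stick-breaking map from a real vector to the probability simplex (my implementation, not the paper's code):

    import numpy as np

    def stick_breaking_probs(psi):
        # psi: real vector of length K-1, e.g. latent Gaussian activations.
        # Returns pi of length K with
        #   pi_k = sigma(psi_k) * prod_{j<k} (1 - sigma(psi_j)),
        # i.e. each sigma(psi_k) claims a fraction of the remaining stick.
        sigma = 1.0 / (1.0 + np.exp(-psi))
        remaining = np.concatenate(([1.0], np.cumprod(1.0 - sigma)))
        return np.concatenate((sigma, [1.0])) * remaining

Because each psi_k enters only through a logistic function, the Pólya-gamma identity shown earlier converts a multinomial likelihood into K-1 conditionally Gaussian ones, so any latent Gaussian model (a GP, an LDS, a correlated topic model) can serve as the prior on psi.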

A framework for studying synaptic plasticity with neural spike train data

Learning and memory in the brain are implemented by complex, time-varying changes in neural circuitry. We present a framework for incorporating synaptic plasticity rules into popular generalized linear models (GLMs) of neural spike trains, and derive Bayesian inference algorithms based on particle MCMC.
Joint work with Chris Stock and Ryan Adams.
NIPS '14 | arXiv:1411.4077 [stat.ML]
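
Schematically (my notation, not the paper's), the conditional intensity of neuron n is a GLM whose weights drift under a parametric learning rule:

    \lambda_n(t) = g( b_n + \sum_m w_{m \to n}(t) (h \ast s_m)(t) ),
    dw_{m \to n}/dt = \ell(w_{m \to n}, s_m, s_n; \theta)

where g is a rectifying nonlinearity, h a causal filter convolved with the presynaptic spike train s_m, and \ell a learning rule such as STDP with parameters \theta. The weight trajectories are latent and depend nonlinearly on \theta, which is why inference proceeds by particle MCMC rather than closed-form updates.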

Publications

Conference and Journal Papers

  • Linderman, Scott W., Johnson, Matthew J., Wilson, Matthew W., and Chen, Zhe. A Bayesian nonparametric approach for uncovering rat hippocampal population codes during spatial navigation. Journal of Neuroscience Methods, 2016.
    JNM paper
  • Linderman, Scott W., Johnson, Matthew J., and Adams, Ryan P. Dependent Multinomial Models Made Easy: Stick-Breaking with the Pólya-gamma Augmentation. Neural Information Processing Systems, 2015.
    NIPS paper | arXiv:1506.05843 [stat.ML]
  • Linderman, Scott W., Stock, Christopher H., and Adams, Ryan P. A Framework for Studying Synaptic Plasticity with Neural Spike Train Data. Neural Information Processing Systems (NIPS), 2014.
    NIPS paper | arXiv:1411.4077 [stat.ML]
  • Linderman, Scott W. and Adams, Ryan P. Discovering Latent Network Structure in Point Process Data. International Conference on Machine Learning (ICML), 2014.
    ICML paper | arXiv:1402.0914 [stat.ML] | ICML Talk

Workshop Papers, Abstracts, and Posters

  • Linderman, Scott W., Tucker, Aaron, and Johnson, Matthew J. Bayesian latent state space models of neural activity. Computational and Systems Neuroscience (Cosyne) Abstracts, 2016, Salt Lake City, UT USA.
    abstract
  • Linderman, Scott W., Adams, Ryan P., and Pillow, Jonathan W. Inferring structured connectivity from spike trains under negative-binomial generalized linear models. Computational and Systems Neuroscience (Cosyne) Abstracts, 2015, Salt Lake City, UT USA.
    abstract
  • Johnson, Matthew J., Linderman, Scott W., Datta, Sandeep R., and Adams, Ryan P. Discovering switching autoregressive dynamics in neural spike train recordings. Computational and Systems Neuroscience (Cosyne) Abstracts, 2015, Salt Lake City, UT USA.
    abstract
  • Linderman, Scott W., Stock, Christopher H., and Adams, Ryan P. Fully-Bayesian inference of time-varying synaptic weights from neural spike trains. Annual Meeting of the Society for Neuroscience (SfN), 2014.
    abstract
  • Linderman, Scott W. Discovering Latent States of the Hippocampus with Bayesian Hidden Markov Models. Abstracts of the 2014 Brains, Minds, and Machines Summer School, 2014.
    paper
  • Nemati, Shamim, Linderman, Scott W., and Chen, Zhe. A Probabilistic Modeling Approach for Uncovering Neural Population Rotational Dynamics. Computational and Systems Neuroscience (Cosyne) Abstracts, 2014, Salt Lake City, UT USA.
    abstract
  • Linderman, Scott W. and Adams, Ryan P. Fully-Bayesian Inference of Structured Functional Networks in GLMs. Acquiring and Analyzing the Activity of Large Neural Ensembles, Workshop at Neural Information Processing Systems (NIPS), 2013, Lake Tahoe, NV USA.
    poster
  • Linderman, Scott W. and Adams, Ryan P. Discovering Structure in Spiking Data. New England Machine Learning Day, 2013, Cambridge, MA USA.
    poster
  • Linderman, Scott W. and Adams, Ryan P. Inferring functional connectivity with priors on network topology. Computational and Systems Neuroscience (Cosyne) Abstracts, 2013, Salt Lake City, UT USA.
    abstract | poster

Invited Talks

  • Discovering Latent Network Structure in Neural Spike Trains. Machine Learning and Friends, University of Massachusetts at Amherst. February 12th, 2015.
    slides
  • Discovering Interpretable Structure in Neural Spike Trains with Negative Binomial GLMs. Applied Math Seminar, University of Washington. January 8th, 2015.
  • Discovering Interpretable Structure in Neural Spike Trains with Negative Binomial GLMs. Harvard Center for Brain Science Neurolunch. December 3rd, 2014.
  • Discovering Latent Network Structure in Point Process Data. Lazer Lab Meeting, Northeastern University. September 4th, 2014.
  • Discovering Latent Network Structure in Point Process Data. Harvard Computer Science Colloquium. July 24th, 2014.
  • Discovering Latent Network Structure in Spiking Data. Boston Data Mining Meetup. May 1st, 2014.
  • Discovering Latent Network Structure in Spiking Data. Applied Statistics Workshop, Harvard University. September 4th, 2013.

Thesis

Neuroscience is entering an exciting new age. Modern recording technologies enable simultaneous measurements of thousands of neurons in organisms performing complex behaviors. Such recordings offer an unprecedented opportunity to glean insight into the mechanistic underpinnings of intelligence, but they also present an extraordinary statistical and computational challenge: how do we make sense of these large-scale recordings? This thesis develops a suite of tools that instantiate hypotheses about neural computation in the form of probabilistic models and a corresponding set of Bayesian inference algorithms that efficiently fit these models to neural spike trains. From the posterior distribution of model parameters and variables, we seek to advance our understanding of how the brain works.

Chapter 1: Introduction
Chapter 2: Background
Chapter 3: Hawkes Processes with Latent Network Structure
Chapter 4: Discrete-Time Linear Autoregressive Poisson Models
Chapter 5: Networks with Nonlinear Autoregressive Dynamics
Chapter 6: Dynamic Network Models
Chapter 7: Bayesian Nonparametric Hidden Markov Models
Chapter 8: Switching Linear Dynamical Systems with Count Observations
Chapter 9: Reverse Engineering Bayesian Computations from Spike Trains
Chapter 10: Conclusion

Or, if you prefer the full document, here is the complete thesis. The source is available at https://github.com/slinderman/thesis.