Scott W. Linderman

Ph.D. Candidate, Harvard University

I am a Computer Science Ph.D. candidate at Harvard University, where I am advised by Leslie Valiant and Ryan Adams. My research focuses on computational neuroscience and machine learning, and on the general question of how computer science can help us decipher neural computation. I've worked on bottom-up methods for discovering interpretable structure in large-scale neural recordings, as well as on top-down models of probabilistic reasoning in neural circuits.

I completed my B.S. in Electrical and Computer Engineering at Cornell University in 2008. Before beginning graduate school, I spent three years as a software development engineer at Microsoft, working on the Windows networking stack.

Maxwell-Dworkin, Room 209
33 Oxford Street
Cambridge, MA 02138
slinderman at


Scalable Bayesian inference for excitatory point process networks

Building on our previous work from ICML '14, we develop a scalable stochastic variational inference (SVI) algorithm for discovering latent networks from excitatory point process observations. A discrete-time formulation and a weak spike-and-slab approximation together enable highly scalable inference.
Joint work with Ryan Adams.
arXiv:1507.03228 [stat.ML]
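As a rough illustration of the discrete-time formulation (with made-up parameters and kernel, not those from the paper), each process's rate in a time bin is a background rate plus the filtered, weighted spike history of the network:

```python
import numpy as np

rng = np.random.default_rng(0)

N, T, L = 3, 200, 10                 # processes, time bins, kernel length
bias = np.full(N, 0.05)              # background rates
W = np.array([[0.0, 0.4, 0.0],       # W[m, n]: excitatory influence of m on n
              [0.0, 0.0, 0.4],
              [0.4, 0.0, 0.0]])
kernel = np.exp(-np.arange(L) / 3.0) # exponential impulse response
kernel /= kernel.sum()               # normalize so W sets the total influence

S = np.zeros((T, N))                 # spike counts per bin
rate = np.zeros((T, N))
for t in range(T):
    # rate = background + kernel-filtered recent spikes, routed through W
    hist = S[max(0, t - L):t][::-1]                  # most recent bin first
    filt = (hist * kernel[:len(hist), None]).sum(0)  # (N,) filtered history
    rate[t] = bias + filt @ W
    S[t] = rng.poisson(rate[t])

print(S.sum(0))   # total spikes per process
```

Because the weights are nonnegative, past spikes only ever excite future activity, which is what makes the spike-and-slab prior over entries of W a natural way to encode a sparse latent network.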

Dependent multinomial models made easy: Stick breaking with the Pólya-gamma augmentation

Many practical modeling problems involve discrete multinomial data with underlying dependencies that cannot be captured by the Dirichlet-multinomial formulation. We leverage a logistic stick-breaking representation and recent innovations in Pólya-gamma augmentation to perform efficient, fully Bayesian inference in models with latent Gaussian structure and multinomial observations, such as correlated topic models, multinomial GPs, and multinomial linear dynamical systems.
Joint work with Matthew Johnson and Ryan Adams.
arXiv:1506.05843 [stat.ML]
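The stick-breaking map at the heart of this representation is easy to state: K-1 real-valued (e.g., latent Gaussian) coordinates are passed through logistic functions and peeled off a unit-length "stick," yielding a point on the simplex. A minimal sketch (illustrative code, not from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stick_breaking(psi):
    """Map psi in R^{K-1} to a length-K probability vector.

    pi_k = sigmoid(psi_k) * prod_{j<k} (1 - sigmoid(psi_j)),
    with the last entry taking whatever stick length remains.
    """
    s = sigmoid(psi)
    # remaining[k] = fraction of the stick left before breaking piece k
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - s)))
    pi = np.append(s, 1.0) * remaining
    return pi

psi = np.array([0.5, -1.0, 2.0])   # K-1 = 3 real-valued coordinates
pi = stick_breaking(psi)           # length-4 probability vector
print(pi, pi.sum())
```

Each coordinate enters through a logistic function of a scalar, which is exactly the form the Pólya-gamma augmentation handles, so Gaussian priors on psi remain conditionally conjugate.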

Discovering latent network structure in point process data

We develop a probabilistic model combining mutually-exciting point processes with random graph models, and derive a fully-Bayesian, parallel inference algorithm based on an auxiliary variable formulation.
Joint work with Ryan Adams.
ICML '14 | arXiv:1402.0914 [stat.ML] | ICML Talk

A framework for studying synaptic plasticity with neural spike train data

Learning and memory in the brain are implemented by complex, time-varying changes in neural circuitry. We present a framework for incorporating synaptic plasticity rules into the popular generalized linear models for neural spike trains, and derive Bayesian inference algorithms based on particle MCMC.
Joint work with Chris Stock and Ryan Adams.
NIPS '14 | arXiv:1411.4077 [stat.ML]
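To give a flavor of the setup (not the model from the paper; the Hebbian-style rule and rates below are invented for illustration), one can simulate a Bernoulli GLM whose coupling weight drifts under a simple plasticity rule:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

T = 1000
bias = -2.0                    # baseline log-odds of postsynaptic spiking
w = np.zeros(T)                # time-varying synaptic weight, w[0] = 0
eta, decay = 0.05, 0.001       # Hebbian learning rate, drift back toward zero
pre = rng.random(T) < 0.2      # presynaptic spikes (fixed rate, for illustration)
post = np.zeros(T, dtype=bool)

for t in range(1, T):
    # Bernoulli-GLM spike probability given the current weight and history
    p = sigmoid(bias + w[t - 1] * pre[t - 1])
    post[t] = rng.random() < p
    # Hebbian-style rule: potentiate on pre->post coincidence, slowly decay
    w[t] = w[t - 1] + eta * (pre[t - 1] & post[t]) - decay * w[t - 1]

print(w[-1], post.mean())
```

In the paper's framework the weight trajectory is a latent stochastic process governed by a parameterized plasticity rule, and particle MCMC is used to infer both the trajectory and the rule's parameters from the observed spikes.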

Fully-Bayesian inference of latent networks underlying neural spike trains

Advances in neural recording technologies demand novel machine learning methods for discovering interpretable structure in high-dimensional data. We have developed a framework for performing fully-Bayesian inference in generalized linear models (GLMs) of spike trains, with a particular emphasis on inferring latent structure in the functional connectivity network. Recent work on random graph models is integrated into the GLM in a flexible and general manner. This enables the discovery of interpretable structure like clusters of functionally similar neurons and distance-dependent connectivity. Our framework is demonstrated on spike train recordings from a population of retinal ganglion cells (RGCs).
Joint work with Ryan Adams and Jonathan Pillow.
Cosyne 2015 Abstract | NIPS 2013 poster | Full draft in preparation

Fitting biophysical models to optically stimulated and recorded neural populations

Optogenetic methods offer unprecedented access to neural activity. Novel voltage-indicating proteins developed in the Cohen Lab at Harvard allow both fast and sensitive recording of membrane potentials across populations of neurons. These indicators have been paired with a novel channelrhodopsin variant to deliver arbitrary patterns of optical stimulation to multiple neurons while simultaneously recording the activity of each, allowing rapid testing under a variety of experimental conditions. Given these high-fidelity fluorescence recordings, we aim to infer latent biophysical properties such as ion channel densities and kinetics. We are extending the probabilistic framework of Huys and Paninski (2009) to infer these latent properties using particle MCMC.
Joint work with Aaron Tucker, Adam Cohen, Daniel Hochbaum, and Ryan Adams.
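Particle MCMC methods embed a particle filter inside an MCMC sweep. As a self-contained sketch of the particle-filtering ingredient (a toy linear-Gaussian model, not the biophysical model described above), a bootstrap filter looks like:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy state-space model standing in for a slowly varying membrane
# potential observed through noisy fluorescence.
T, P = 100, 500                      # time steps, particles
a, q, r = 0.95, 0.1, 0.5             # dynamics, process noise std, obs noise std

x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + q * rng.standard_normal()
y = x_true + r * rng.standard_normal(T)

# Bootstrap particle filter: propagate particles through the dynamics,
# reweight by the observation likelihood, and resample.
particles = rng.standard_normal(P)
x_hat = np.zeros(T)
for t in range(T):
    particles = a * particles + q * rng.standard_normal(P)
    logw = -0.5 * ((y[t] - particles) / r) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    x_hat[t] = w @ particles                          # posterior mean estimate
    particles = rng.choice(particles, size=P, p=w)    # multinomial resampling

rmse = np.sqrt(np.mean((x_hat - x_true) ** 2))
print(rmse)
```

In the biophysical setting the linear dynamics are replaced by (discretized) conductance-based equations, and the filter's marginal likelihood estimate drives MCMC moves over parameters such as channel densities.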

Nonparametric time series models for neural spike train data

I have been studying nonparametric hidden Markov and semi-Markov models (HMMs and HSMMs) for neural spike trains. There has been great interest in latent state space models of neural systems, and HMMs are the discrete analog of such approaches. From a modeling perspective, discrete latent states may yield more interpretable results, and by modeling state durations with semi-Markov models we can naturally capture long-range correlations in spiking activity. Our initial work has applied these models to rat hippocampal recordings.
Joint work with Matt Johnson, Matt Wilson, Zhe Chen, Bob Datta, and Ryan Adams.
Cosyne 2015 Abstract | arXiv:1411.7706 [stat.ML] | Manuscript in submission
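One way to see what semi-Markov durations buy: an HMM state's dwell time is necessarily geometric (memoryless), while an HSMM can use, say, a negative binomial duration distribution with a matched mean but far less dispersion, i.e., states that are reliably long-lived. The parameters below are illustrative, not those from the manuscript:

```python
import numpy as np

rng = np.random.default_rng(2)

# HMM: self-transition probability p_self implies geometric dwell times.
p_self = 0.9
geom_durs = rng.geometric(1.0 - p_self, size=10000)

# HSMM: explicit duration distribution, here a shifted negative binomial
# chosen to roughly match the geometric mean (~10 bins).
nb_durs = 1 + rng.negative_binomial(5, 0.35, size=10000)

print(geom_durs.mean(), nb_durs.mean())   # similar means...
print(geom_durs.std(), nb_durs.std())     # ...but very different spreads
```

The geometric durations have a mode at one bin and a long tail, whereas the negative binomial durations concentrate around their mean, which is how semi-Markov models induce long-range temporal structure in the spiking activity.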


Conference and Journal Papers

  • Linderman, Scott W., Johnson, Matthew J., Wilson, Matthew A., and Chen, Zhe. A Nonparametric Bayesian Approach to Uncovering Rat Hippocampal Population Codes During Spatial Navigation. In submission, 2014.
    arXiv:1411.7706 [stat.ML]
  • Linderman, Scott W., Stock, Christopher H., and Adams, Ryan P. A Framework for Studying Synaptic Plasticity with Neural Spike Train Data. Neural Information Processing Systems (NIPS), 2014.
    arXiv:1411.4077 [stat.ML]
  • Linderman, Scott W. and Adams, Ryan P. Discovering Latent Network Structure in Point Process Data. International Conference on Machine Learning (ICML), 2014.
    abstract | arXiv:1402.0914 [stat.ML] | ICML Talk.

Workshop Papers, Abstracts, and Posters

  • Linderman, Scott W., Adams, Ryan P., and Pillow, Jonathan W. Inferring structured connectivity from spike trains under negative-binomial generalized linear models. Computational and Systems Neuroscience (Cosyne) Abstracts, 2015, Salt Lake City, UT USA.
  • Johnson, Matthew J., Linderman, Scott W., Datta, Sandeep R., and Adams, Ryan P. Discovering switching autoregressive dynamics in neural spike train recordings. Computational and Systems Neuroscience (Cosyne) Abstracts, 2015, Salt Lake City, UT USA.
  • Linderman, Scott W., Stock, Christopher H., and Adams, Ryan P. Fully-Bayesian inference of time-varying synaptic weights from neural spike trains. Annual Meeting of the Society for Neuroscience (SfN), 2014.
  • Linderman, Scott W. Discovering Latent States of the Hippocampus with Bayesian Hidden Markov Models. Abstracts of the 2014 Brains, Minds, and Machines Summer School, 2014.
  • Nemati, Shamim, Linderman, Scott W., and Chen, Zhe. A Probabilistic Modeling Approach for Uncovering Neural Population Rotational Dynamics. Computational and Systems Neuroscience (Cosyne) Abstracts, 2014, Salt Lake City, UT USA.
  • Linderman, Scott W. and Adams, Ryan P. Fully-Bayesian Inference of Structured Functional Networks in GLMs. Acquiring and Analyzing the Activity of Large Neural Ensembles, Workshop at Neural Information Processing Systems (NIPS), 2013, Lake Tahoe, NV USA. poster
  • Linderman, Scott W. and Adams, Ryan P. Discovering Structure in Spiking Data. New England Machine Learning Day, 2013, Cambridge, MA USA.
  • Linderman, Scott W. and Adams, Ryan P. Inferring functional connectivity with priors on network topology. Computational and Systems Neuroscience (Cosyne) Abstracts, 2013, Salt Lake City, UT USA.

Invited Talks

  • Discovering Latent Network Structure in Neural Spike Trains. Machine Learning and Friends, University of Massachusetts at Amherst. February 12th, 2015.
  • Discovering Interpretable Structure in Neural Spike Trains with Negative Binomial GLMs. Applied Math Seminar, University of Washington. January 8th, 2015.
  • Discovering Interpretable Structure in Neural Spike Trains with Negative Binomial GLMs. Harvard Center for Brain Science Neurolunch. December 3rd, 2014.
  • Discovering Latent Network Structure in Point Process Data. Lazer Lab Meeting, Northeastern University. September 4th, 2014.
  • Discovering Latent Network Structure in Point Process Data. Harvard Computer Science Colloquium. July 24th, 2014.
  • Discovering Latent Network Structure in Spiking Data. Boston Data Mining Meetup. May 1st, 2014.
  • Discovering Latent Network Structure in Spiking Data. Applied Statistics Workshop, Harvard University. September 4th, 2013.