23. Deep Learning Models in Computational Neuroscience

Abstract Summary

Imran Thobani (Stanford University)

The recent development of deep learning models of parts of the brain such as the visual system raises exciting philosophical questions about how these models relate to the brain. Answering these questions could help guide future research in computational neuroscience as well as provide new philosophical insights into the various ways that scientific models relate to the systems they represent or describe. 

By being trained to solve difficult computational tasks such as image classification, some of these deep learning models have been shown to predict neural response behavior successfully without simply being fit to the neural data (Yamins 2016). This suggests that these models are more than just phenomenological models of neural response behavior. The implication is that a deeper similarity holds between the deep learning model and the neural system it represents, one that goes beyond shared neural response properties. But what exactly is this similarity relationship?
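For concreteness, the following is a minimal sketch of how responses might be read out from a task-trained model without any fitting to neural data. The pretrained network (a torchvision ResNet-50 trained on ImageNet classification), the choice of layer, and the stimulus file name are illustrative assumptions, not the specific setup used in Yamins (2016).

    # Sketch: extract "model neuron" activations from a task-trained vision model.
    # The network is trained only on image classification, never on neural data.
    # Layer choice, preprocessing, and "stimulus.png" are hypothetical placeholders.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    model.eval()

    # Capture the activations of an intermediate layer with a forward hook.
    activations = {}
    def hook(module, inputs, output):
        activations["layer4"] = output.detach()

    model.layer4.register_forward_hook(hook)

    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    img = preprocess(Image.open("stimulus.png").convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        model(img)

    # One feature vector per stimulus; these model responses can later be
    # compared against recorded firing rates for the same stimuli.
    features = activations["layer4"].flatten(start_dim=1)
    print(features.shape)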

I argue that there are three distinct similarity relationships that can hold between a deep learning model and a target system in the brain, and I explicate each relationship. The first is surface-level similarity between the activation patterns of the model's neurons in response to a range of sensory inputs and the firing rates of actual neurons in response to the same (or sufficiently similar) sensory stimuli. The second is architectural similarity between the neural network model and the actual neural circuit in the brain: the model is similar to the brain in this sense to the extent that the mathematical relationships that hold among the activations of model neurons resemble the actual relationships among the firing rates of neurons in the brain. The third is similarity between the coarse constraints used in the design of the model and the constraints that the target system in the brain obeys. These constraints include, among other things, the objective function the model is trained to optimize, the number of neurons in the model, and the learning rule used to train the model.
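As an illustration of the first kind of similarity, the sketch below shows one common way surface-level similarity is operationalized in practice: cross-validated linear predictivity of recorded firing rates from model-neuron activations. The arrays, shapes, and regression settings here are hypothetical placeholders rather than anything drawn from the abstract, and this is only one possible operationalization, not the author's definition.

    # Sketch: quantify surface-level similarity as cross-validated linear
    # predictivity of recorded firing rates from model activations.
    # All data below are synthetic placeholders for illustration only.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    n_stimuli, n_model_units, n_neurons = 200, 512, 50

    model_acts = rng.normal(size=(n_stimuli, n_model_units))   # model responses
    firing_rates = rng.normal(size=(n_stimuli, n_neurons))     # recorded rates

    # Predict each recorded neuron's responses from the model activations,
    # scoring by the correlation between held-out predictions and actual rates.
    preds = cross_val_predict(Ridge(alpha=1.0), model_acts, firing_rates, cv=5)
    per_neuron_r = [np.corrcoef(preds[:, i], firing_rates[:, i])[0, 1]
                    for i in range(n_neurons)]
    print("median predictivity:", np.median(per_neuron_r))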

Having distinguished these three kinds of similarity, I address which kind is most relevant to what counts as a good model of the brain. I argue that similarity at the level of coarse constraints is a necessary criterion for a good model of the brain. While architectural and surface-level similarity are also relevant criteria, their relevance should be understood in terms of providing evidence for similarity at the level of coarse constraints.

Abstract ID: NKDR61444