Discovering the geometry of neural representations via topological tools.
Presenter
October 16, 2023
Abstract
Neural representations of stimulus spaces often come with a natural geometry. Perhaps the most salient examples of such neural populations are those with convex receptive fields (or tuning curves), such as place cells in the hippocampus or neurons in V1. The geometry of neural representations is understood in only a small number of well-studied neural circuits; in most other parts of the brain it remains poorly understood. This raises a natural question: can one infer such a geometry from the statistics of the neural responses alone?
A crucial tool for inferring a geometry is a basis of coordinate functions that "respects" the underlying geometry while providing meaningful low-dimensional approximations. Eigenfunctions of a Laplacian, derived from the underlying metric, serve as such a basis in many scientific fields. However, spike trains and other derived features of neural activity do not come with a natural metric, although they do come with an "intrinsic" probability distribution over neural activity patterns.
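(For illustration only, a standard instance of this idea, as in Laplacian eigenmaps: given a symmetric similarity matrix $W$ on data points and its diagonal degree matrix $D$, one takes
\[
L = D - W, \qquad L f_i = \lambda_i f_i, \qquad x \mapsto \bigl(f_1(x), \dots, f_d(x)\bigr),
\]
keeping the eigenfunctions with the smallest nonzero eigenvalues to obtain a $d$-dimensional embedding that respects the metric. This construction presupposes a metric or similarity structure, which is precisely what is missing here.)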
Building on tools from combinatorial topology, we introduce Hodge Laplacians associated with probability distributions on sequential data, such as spike trains. We demonstrate that these Laplacians have desirable properties with respect to natural null models, in which the underlying neurons are independent. Our results establish a foundation for dimensionality reduction and Fourier analyses of probabilistic models that are common in theoretical neuroscience and machine learning.
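(Background, for readers unfamiliar with the combinatorial construction: the $k$-th Hodge Laplacian of a simplicial complex is $L_k = \partial_k^{\top}\partial_k + \partial_{k+1}\partial_{k+1}^{\top}$, where $\partial_k$ is the $k$-th boundary map, and $L_0$ recovers the ordinary graph Laplacian. The sketch below computes $L_0$ and $L_1$ for a toy complex in NumPy; it illustrates only the standard unweighted construction, not the probability-weighted Laplacians introduced in the talk, and the toy complex is chosen purely for exposition.)

import numpy as np

# Toy simplicial complex: a filled triangle (0,1,2) plus a dangling edge (2,3).
vertices = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]   # 1-simplices, vertices sorted
triangles = [(0, 1, 2)]                    # 2-simplices, vertices sorted

# Boundary matrix d1 (vertices x edges): entries +/-1 by orientation.
d1 = np.zeros((len(vertices), len(edges)))
for j, (u, v) in enumerate(edges):
    d1[u, j] = -1.0   # edge leaves u
    d1[v, j] = +1.0   # edge enters v

# Boundary matrix d2 (edges x triangles): faces with alternating signs,
# boundary of (a,b,c) = (b,c) - (a,c) + (a,b).
edge_index = {e: i for i, e in enumerate(edges)}
d2 = np.zeros((len(edges), len(triangles)))
for j, (a, b, c) in enumerate(triangles):
    d2[edge_index[(b, c)], j] = +1.0
    d2[edge_index[(a, c)], j] = -1.0
    d2[edge_index[(a, b)], j] = +1.0

# Hodge Laplacians: L_k = d_k^T d_k + d_{k+1} d_{k+1}^T.
L0 = d1 @ d1.T            # equals the ordinary graph Laplacian D - W
L1 = d1.T @ d1 + d2 @ d2.T

# dim ker(L_k) equals the k-th Betti number: here one connected component
# (rank 3 of 4) and no 1-dimensional holes (full rank 4).
print(np.linalg.matrix_rank(L0), np.linalg.matrix_rank(L1))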