Dimensionality reduction is the process of transforming high-dimensional neural data — the simultaneous firing rates of dozens to hundreds of neurons — into a lower-dimensional representation that captures the essential structure of the population activity. A recording from 96 electrodes produces 96-dimensional data at each time point, but the meaningful neural dynamics often unfold on a much lower-dimensional manifold (typically 5-20 dimensions), reflecting the constrained patterns of coordinated neural activity.
Why Neural Data Is Low-Dimensional
Neural populations do not fire independently. Neurons in motor cortex share inputs and recurrent connectivity that constrain their activity to a subspace of the full 96-dimensional space. When you reach for a cup, the 96 neurons do not take 96 independent paths — they move together along a low-dimensional trajectory determined by the underlying motor plan. This low-dimensional structure is both a fundamental property of cortical computation and a practical opportunity for BCI design.
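The claim that coordinated activity confines the population to a subspace can be illustrated with a small NumPy simulation. Everything here is a hypothetical sketch — the 5-factor generative model, the sizes, and the noise level are illustrative, not drawn from any particular dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 96 recorded neurons driven by 5 shared latent factors.
n_time, n_latents, n_neurons = 200, 5, 96

# Random-walk latent trajectories stand in for the underlying motor plan.
latents = np.cumsum(rng.normal(size=(n_time, n_latents)), axis=0)

# A fixed loading matrix maps the 5 latents into 96-dimensional firing
# rates, plus a little independent per-neuron noise.
loadings = rng.normal(size=(n_latents, n_neurons))
rates = latents @ loadings + 0.1 * rng.normal(size=(n_time, n_neurons))

# Nearly all variance of the 96-D activity lies in a 5-D subspace: the top
# 5 singular values of the centered data matrix carry almost all the energy.
X = rates - rates.mean(axis=0)
svals = np.linalg.svd(X, compute_uv=False)
var = svals**2
frac = var[:5].sum() / var.sum()
print(round(frac, 4))  # very close to 1
```

Even though the data are nominally 96-dimensional, the measured activity never leaves (a noisy neighborhood of) the 5-dimensional subspace spanned by the latents.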
Common Methods
- Principal Component Analysis (PCA): Linear method that finds orthogonal directions of maximum variance. Fast, well-understood, widely used as a first analysis step. Typically captures 70-90% of neural variance in 10-20 dimensions.
- Factor Analysis (FA): Separates shared variance (neural signals) from private variance (noise) in each neuron's activity. More principled than PCA for neural data, where per-neuron noise is significant, because PCA does not distinguish shared from private variance.
- GPFA (Gaussian Process Factor Analysis): Combines factor analysis with temporal smoothing (Yu et al., 2009), producing smooth latent trajectories that are easier to interpret and decode.
- LFADS (Latent Factor Analysis via Dynamical Systems): A deep learning approach (Pandarinath et al., 2018) that infers latent neural dynamics by fitting a recurrent neural network to population activity. Produces denoised, low-dimensional representations of neural population dynamics.
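As a sketch of how PCA is typically applied as a first analysis step, the following NumPy-only implementation (PCA via SVD of the centered data matrix) recovers the dimensionality of a synthetic recording from the cumulative variance curve. The sizes and the 10-factor generative model are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a recording: 96 neurons, 500 time bins, generated
# from 10 shared latent factors plus independent per-neuron noise.
T, N, K = 500, 96, 10
latents = np.cumsum(rng.normal(size=(T, K)), axis=0)
rates = latents @ rng.normal(size=(K, N)) + rng.normal(size=(T, N))

# PCA via SVD of the mean-centered data matrix.
X = rates - rates.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)
cumulative = np.cumsum(explained)

# Number of principal components needed to capture 90% of the variance;
# for this synthetic data it is at most the 10 planted latent dimensions.
n_dims = int(np.searchsorted(cumulative, 0.90) + 1)
print(n_dims)

# Low-dimensional latent trajectories: project onto the top components.
Z = X @ Vt[:n_dims].T  # shape (T, n_dims)
```

The same scree/cumulative-variance inspection is what motivates the "70-90% of variance in 10-20 dimensions" figure quoted above for real recordings.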
Impact on BCI
Dimensionality reduction improves BCI performance in several ways. Decoding from low-dimensional latent factors rather than raw spike counts reduces noise, handles electrode dropout gracefully, and produces smoother control signals. It also enables transfer learning — if the latent space is consistent across sessions despite electrode drift, decoders trained in latent space may generalize better over time, reducing the need for daily recalibration.
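A minimal sketch of decoding from latent factors rather than raw spike counts, again on synthetic data — the linear velocity model, the 8-factor generative process, and all sizes are assumptions for illustration, not a particular BCI pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical session: 96 neurons whose rates reflect a 2-D cursor
# velocity through 8 shared latent factors.
T, N, K = 400, 96, 8
latents = np.cumsum(rng.normal(size=(T, K)), axis=0)
rates = latents @ rng.normal(size=(K, N)) + rng.normal(size=(T, N))

# Suppose the first two latents drive velocity (mean-centered).
velocity = latents[:, :2] - latents[:, :2].mean(axis=0)

# Step 1: reduce the raw rates to K latent factors (PCA via SVD).
X = rates - rates.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
Z = X @ Vt[:K].T  # (T, K) latent trajectories

# Step 2: fit a linear decoder in the latent space by least squares.
W, *_ = np.linalg.lstsq(Z, velocity, rcond=None)
pred = Z @ W

# Relative decoding error: small, because the velocity signal lives in the
# latent subspace that PCA recovers while averaging away per-neuron noise.
err = np.mean((pred - velocity) ** 2) / np.var(velocity)
print(round(err, 4))
```

Because the decoder weights attach to latent directions rather than to individual electrodes, losing a channel perturbs the projection step rather than zeroing out a decoder input — one concrete sense in which latent-space decoding handles electrode dropout gracefully.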