A recurrent neural network (RNN) decoder is a deep learning model that processes neural signal time series by maintaining an internal hidden state that evolves over time, capturing temporal dependencies in the data. RNN decoders have achieved breakthrough results in brain-computer interfaces (BCIs), most notably the 2021 Stanford/BrainGate handwriting BCI (Willett et al.), which decoded imagined handwriting at 90 characters per minute — a result that fundamentally changed expectations for BCI communication speed.
Architecture
RNN decoders for BCI typically use gated recurrent units (GRUs) or long short-term memory (LSTM) cells rather than vanilla RNNs, as gated architectures better handle the long-range temporal dependencies present in neural signals. A typical BCI RNN decoder consists of:
- Input layer: Receives neural features (spike counts, threshold crossing rates, or LFP power) from each electrode at each time step
- Recurrent layers: One or more GRU/LSTM layers that process the temporal sequence, building up context over time
- Output layer: Produces decoded variables — character probabilities, phoneme logits, or continuous kinematic parameters
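The three-stage stack above can be sketched as a single-layer GRU decoder. The following is a minimal illustrative implementation in NumPy, not the architecture of any published decoder; the dimensions (96 electrodes, 64 hidden units, 31 output classes) and the random initialization are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ELECTRODES = 96  # input features per time step (e.g., binned spike counts); assumed
N_HIDDEN = 64      # recurrent hidden-state size; assumed
N_CLASSES = 31     # output logits (e.g., character classes); assumed

def glorot(shape):
    """Glorot-uniform initialization, for illustration only."""
    limit = np.sqrt(6.0 / sum(shape))
    return rng.uniform(-limit, limit, shape)

# GRU parameters: update gate (z), reset gate (r), candidate state
W_z, U_z, b_z = glorot((N_HIDDEN, N_ELECTRODES)), glorot((N_HIDDEN, N_HIDDEN)), np.zeros(N_HIDDEN)
W_r, U_r, b_r = glorot((N_HIDDEN, N_ELECTRODES)), glorot((N_HIDDEN, N_HIDDEN)), np.zeros(N_HIDDEN)
W_h, U_h, b_h = glorot((N_HIDDEN, N_ELECTRODES)), glorot((N_HIDDEN, N_HIDDEN)), np.zeros(N_HIDDEN)
W_out, b_out = glorot((N_CLASSES, N_HIDDEN)), np.zeros(N_CLASSES)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h):
    """One GRU update: blend the previous hidden state h with new neural features x."""
    z = sigmoid(W_z @ x + U_z @ h + b_z)             # update gate
    r = sigmoid(W_r @ x + U_r @ h + b_r)             # reset gate
    h_cand = np.tanh(W_h @ x + U_h @ (r * h) + b_h)  # candidate state
    return (1.0 - z) * h + z * h_cand

def decode(neural_features):
    """Run the GRU over a (T, N_ELECTRODES) sequence; return per-step class logits."""
    h = np.zeros(N_HIDDEN)
    logits = []
    for x in neural_features:
        h = gru_step(x, h)               # hidden state accumulates temporal context
        logits.append(W_out @ h + b_out) # output layer reads out the current state
    return np.stack(logits)

# Toy input: 50 time steps of Poisson "spike counts" from 96 electrodes
spikes = rng.poisson(lam=2.0, size=(50, N_ELECTRODES)).astype(float)
out = decode(spikes)
print(out.shape)  # (50, 31): one logit vector per time step
```

In a real decoder the weights would be trained (e.g., with a cross-entropy or CTC loss against the target character sequence) rather than randomly initialized, and multiple recurrent layers are often stacked.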
Key Results
- Handwriting BCI (Willett et al., 2021): An RNN decoder converted motor cortex activity during imagined handwriting into text at 90 characters/minute (approximately 18 WPM) with 94% accuracy. The RNN learned to recognize the neural signatures of individual letter strokes.
- Speech BCI (Willett et al., 2023): An RNN decoded attempted speech phonemes from motor cortex at 62 WPM, a record at the time. The RNN output was refined by a language model for final text generation.
- Speech neuroprosthesis (Moses et al., 2021): At the UCSF Chang lab, an RNN decoded attempted speech from ECoG recordings over speech motor cortex.
Advantages Over Linear Decoders
RNNs capture nonlinear relationships between neural activity and behavior that linear decoders (Kalman filter, linear regression) miss. They naturally handle the sequential, context-dependent nature of behaviors like handwriting and speech — where the neural pattern for a letter depends on what letters preceded it. RNNs also scale well with increasing electrode counts and can learn complex feature representations automatically.
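One way to see the context dependence described above: a memoryless linear decoder maps a given time step's features to the same output regardless of what came before, while a recurrent decoder's hidden state makes the output at time t depend on the preceding sequence. The sketch below uses hypothetical dimensions and untrained random weights (a vanilla RNN for brevity) purely to illustrate this structural difference, not decoding performance.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid = 8, 16  # hypothetical feature and hidden dimensions

# Memoryless linear decoder: y_t = W_lin x_t (no dependence on history)
W_lin = rng.normal(size=(1, n_in))

# Minimal recurrent decoder: h_t = tanh(W x_t + U h_{t-1}), y_t = W_out h_t
W = rng.normal(size=(n_hid, n_in)) * 0.3
U = rng.normal(size=(n_hid, n_hid)) * 0.3
W_out = rng.normal(size=(1, n_hid))

def linear_decode(x_t):
    return W_lin @ x_t

def rnn_decode(seq):
    h = np.zeros(n_hid)
    for x in seq:
        h = np.tanh(W @ x + U @ h)  # hidden state carries context forward
    return W_out @ h

x_now = rng.normal(size=n_in)     # identical current-time-step features
hist_a = [rng.normal(size=n_in)]  # two different preceding contexts
hist_b = [rng.normal(size=n_in)]

# The linear decoder's output for x_now is fixed, whatever the history;
# the RNN's output for the same x_now differs with the preceding context.
y_lin = linear_decode(x_now)
y_a = rnn_decode(hist_a + [x_now])
y_b = rnn_decode(hist_b + [x_now])
```

This is exactly the property that matters for handwriting and speech, where the neural pattern for a letter or phoneme is shaped by what preceded it.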
Challenges
RNN decoders require more training data than linear models, are more computationally expensive (though still feasible for real-time operation on modern hardware), and are less interpretable — making it harder to understand what neural features drive their predictions. They can also overfit to individual sessions, requiring recalibration strategies for day-to-day use.