273 + CO-8-Auditorio | Self-supervised learning approach for inter-subject transfer learning in motor imagery brain-computer interfaces

Theoretical and Computational Neuroscience

Author: Catalina María Galván | Email: catalinamgalvan@gmail.com

Catalina M. Galván, Ruben D. Spies, Diego H. Milone, Victoria Peterson

1° Instituto de Matemática Aplicada del Litoral, IMAL, UNL, CONICET, Argentina
2° Instituto de Investigación en Señales, Sistemas e Inteligencia Computacional, sinc, FICH-UNL/CONICET, Argentina

Reducing calibration time is crucial for enhancing the usability of brain-computer interfaces based on motor imagery (MI). Due to the high inter-user variability of electroencephalography (EEG) signals, a user traditionally has to endure long and tedious calibration sessions to collect enough personalized training data before using the system. This need has become even more evident with the advent of deep learning decoding models, whose performance strongly depends on the volume of data available for training. Inter-user transfer learning, in which other users' data is used to train the model, reduces the required amount of personalized training data. In this context, self-supervised learning strategies can be used to pretrain the first stages of the model on a pretext task and then adapt it to the task of interest through fine-tuning with a small amount of data from the target user.
Here, we propose a self-supervised learning approach based on a fully convolutional encoder-decoder network. The reconstruction of single-channel EEG segments is used as the pretext task. An ensemble of the pretrained encoders, one per EEG channel, followed by a classification block, forms the final decoding model. This model is fine-tuned on the final MI-classification task with a small dataset from the target user.
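The pipeline described above can be sketched roughly as follows. This is an illustrative PyTorch outline, not the authors' implementation: the layer sizes, kernel widths, MSE reconstruction loss, and the pooling-based classification head are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class ChannelEncoder(nn.Module):
    """Fully convolutional encoder for single-channel EEG segments (sketch)."""
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=7, padding=3), nn.ReLU(),
        )

    def forward(self, x):          # x: (batch, 1, time)
        return self.net(x)         # (batch, hidden, time)

class ChannelDecoder(nn.Module):
    """Reconstructs the input segment from the encoder features."""
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Conv1d(hidden, 1, kernel_size=7, padding=3)

    def forward(self, z):
        return self.net(z)

def pretext_step(encoder, decoder, segment, opt):
    """One self-supervised step: reconstruct the EEG segment.
    The MSE reconstruction loss is an assumption of this sketch."""
    opt.zero_grad()
    recon = decoder(encoder(segment))
    loss = nn.functional.mse_loss(recon, segment)
    loss.backward()
    opt.step()
    return loss.item()

class EnsembleClassifier(nn.Module):
    """One pretrained encoder per EEG channel + a classification block."""
    def __init__(self, encoders, n_classes=2, hidden=16):
        super().__init__()
        self.encoders = nn.ModuleList(encoders)
        self.head = nn.Sequential(          # hypothetical classification head
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(hidden * len(encoders), n_classes),
        )

    def forward(self, x):                   # x: (batch, channels, time)
        feats = [enc(x[:, c:c + 1]) for c, enc in enumerate(self.encoders)]
        return self.head(torch.cat(feats, dim=1))
```

In use, each `ChannelEncoder`/`ChannelDecoder` pair would first be pretrained with `pretext_step` on segments from the source users; the encoders are then assembled into `EnsembleClassifier` and fine-tuned end to end on the target user's labeled MI trials.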