Abstract:
A brain-computer interface (BCI) based on functional magnetic resonance imaging (fMRI) noninvasively records brain activity at high spatial resolution. Like other BCI systems, an fMRI-based BCI requires a decoding method to translate fMRI patterns of neural activation into BCI output. Previous studies have focused on building fMRI decoding models that identify specific features of the corresponding outputs. However, it remains unclear what kind of features are suitable for encoding numerous labels and how to prevent such models from overfitting given the limited amount of fMRI data. This thesis builds a computational model that learns to decode labels from the neural activations captured in fMRI data using a deep learning approach. Unlike conventional models that use text-derived features, this research demonstrates that the corresponding labels can be accurately decoded from fMRI activity using visual features extracted from deep convolutional neural networks trained on general object recognition datasets. To reduce overfitting, our approach trains the model with a multi-task learning scheme. Experiments conducted on CMU's fMRI datasets show that the multi-task models not only match state-of-the-art performance but also yield decoding features that can be easily derived.