Abstract
The human brain follows a modular approach: different classes of stimuli are processed in
different brain regions. The processing of auditory stimuli also depends on the stimulus
category (e.g., sentences and music evoke different neural patterns) (Rogalsky et al., 2011).
Furthermore, music perception depends strongly on behavioural traits such as prior
musical experience (Chapin et al., 2010).
Nakai, Koide-Majima, and Nishimoto (2020) conducted an experiment with 10 music genres
and analysed the fMRI scans of five subjects. They reported that the superior temporal
gyrus (STG) showed distinct activation patterns for different music genres, which made it
possible to classify the neural activity with data-analysis approaches.
We obtained the fMRI data from their experiment (from
https://openneuro.org/datasets/ds003720/versions/1.0.0) and applied machine learning
to automatically classify the neural activity evoked by different music genres. The dataset
contains fMRI scans of five subjects; each subject completed 12 training runs and 6
test runs. In each training run, the 10 music genres were played in random order for 15
seconds each, with the genre name announced beforehand. In each test run, the 10 genres
were likewise played for 15 seconds each, but without announcing the genre name.
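The run structure above implies a simple mapping from time within a run to a genre label. A minimal sketch, assuming 15-second blocks and a placeholder genre list (the actual randomized order for each run is recorded in the design matrix, and the genre names here are hypothetical):

```python
# Sketch: map scan time within a run to the genre being played,
# assuming 10 genres in 15 s blocks. The genre list and block order
# below are placeholders; the real order comes from the design matrix.

GENRES = ["blues", "classical", "country", "disco", "hiphop",
          "jazz", "metal", "pop", "reggae", "rock"]  # assumed genre set
BLOCK_SECONDS = 15

def label_at(run_order, t_seconds):
    """Return the genre playing at t_seconds into a run."""
    block = int(t_seconds // BLOCK_SECONDS)
    if block >= len(run_order):
        raise ValueError("time exceeds run length")
    return run_order[block]

order = GENRES[:]  # placeholder order for one run
print(label_at(order, 0))   # first block
print(label_at(order, 31))  # third block (30-45 s)
```

This mapping is what turns a continuous run into one labelled feature vector per stimulus block.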
The data were preprocessed in SPM12, and PRoNTo v3.0 was used for machine learning.
To keep the feature set efficient, feature extraction was restricted to the STG. The STG
mask was created with the SPM Anatomy Toolbox. For each training run, a voxel-based
feature set was prepared, and, using the labels from the design matrix, a supervised
machine learning algorithm (a support vector machine, SVM) was applied.
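The classification step described above was carried out in PRoNTo (a MATLAB toolbox); as a rough Python analogue only, a linear SVM can be trained on voxel feature vectors with scikit-learn. The data below are synthetic stand-ins for the real STG voxel features, with per-genre means chosen so the toy problem is learnable:

```python
# Sketch of the supervised classification step, using scikit-learn in
# place of PRoNTo. X simulates one voxel feature vector per stimulus
# block; y holds the genre labels from the design matrix.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_genres, blocks_per_genre, n_voxels = 10, 12, 500  # assumed sizes
# Each genre gets a distinct mean activation pattern plus noise.
means = rng.normal(0, 1, size=(n_genres, n_voxels))
X = np.vstack([means[g] + 0.5 * rng.normal(size=(blocks_per_genre, n_voxels))
               for g in range(n_genres)])
y = np.repeat(np.arange(n_genres), blocks_per_genre)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

In practice the SVM would be evaluated on the held-out test runs rather than on the training data, mirroring the train/test run split of the experiment.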
The results show that the SVM was able to classify the music genres with accuracy
significantly above chance level.