Chinese Journal of Magnetic Resonance


Semantic Audio-Visual Single-Trial Detection Based on the New Generation of Magnetoencephalography

GUO Xu 1,2, WANG Chenxu 1,2, ZHANG Xin 2,3, CHANG Yan 2,3, CUI Feng 2, GUO Qingqian 2,3, HU Tao 2,3, YANG Xiaodong 1,2,3*   

  • Received: 2024-01-04 Revised: 2024-01-19 Published: 2024-01-19 Online: 2024-01-19
  • Contact: YANG Xiaodong E-mail: xiaodong.yang@sibet.ac.cn

Abstract: To decode differences in brain responses between audio-visual bimodal and unimodal stimulation in a semantic context, this study designed a task paradigm and applied a new-generation magnetoencephalography (MEG) system combined with machine learning models to analyze the recorded signals from three perspectives: behavioral response, event-related field (ERF), and single-trial detection. Results show that the unimodal semantic response is concentrated mainly in the occipital cortex, while the bimodal semantic response is concentrated mainly in the parietal cortex. Furthermore, participants' response rate and single-trial detection accuracy in the bimodal condition are significantly higher than in the unimodal condition. Among the four machine learning models evaluated, the support vector machine (SVM) shows the best classification performance, with an average classification accuracy of 75.16% within-subject and 80.56% between-subject. This study demonstrates that combining OPM-MEG with machine learning provides an efficient method for decoding the differences in brain responses between bimodal and unimodal audio-visual stimulation in a semantic context.
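The single-trial detection pipeline described in the abstract (epochs classified with an SVM, accuracy estimated by cross-validation) can be sketched as follows. This is a minimal illustration only: the study's actual OPM-MEG data, preprocessing, and model settings are not given here, so the data is synthetic, and the channel/trial counts and SVM parameters are assumptions.

```python
# Hedged sketch of single-trial SVM classification of MEG epochs.
# Synthetic data stands in for the study's OPM-MEG recordings;
# trial counts, channel counts, and SVM settings are illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# 120 trials x (32 channels x 50 time points), two conditions
# (e.g. unimodal vs. bimodal semantic stimulation)
n_trials, n_channels, n_times = 120, 32, 50
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)
# Inject a small class-dependent "evoked" effect so the task is learnable
X[y == 1, :, 20:30] += 0.5

# Flatten each trial into one feature vector for the classifier
X_flat = X.reshape(n_trials, -1)

# Standardize features, then fit an RBF-kernel SVM;
# 5-fold cross-validation approximates within-subject accuracy
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X_flat, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

In practice, features would come from preprocessed ERF epochs rather than raw flattened channels, and between-subject accuracy would be estimated by training on some participants and testing on held-out ones.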

Key words: OPM-MEG, semantics, audio-visual, machine learning, ERF
