Source
ECAI
Publication date
19.10.2024
Authors
Ilya Makarov, Andrey Savchenko, Bashar M. Deeb

CA-SER: Cross-Attention Feature Fusion for Speech Emotion Recognition

Abstract

In this paper, we introduce CA-SER, a novel tool for speech emotion recognition that leverages self-supervised learning to extract semantic speech representations from a pre-trained wav2vec 2.0 model and combines them with spectral audio features to improve recognition accuracy. Our approach applies a self-attention encoder to MFCC features to capture meaningful patterns in audio sequences. These encoded MFCC features are then combined with the high-level wav2vec 2.0 representations using a multi-head cross-attention mechanism. Evaluation on the IEMOCAP speech emotion recognition dataset shows that our system achieves a weighted accuracy of 74.6%, outperforming most existing techniques.
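The core of the described fusion step is multi-head cross-attention in which MFCC-derived frames act as queries and wav2vec 2.0 representations supply keys and values. The following is a minimal numpy sketch of that mechanism only; all shapes, the head count, and the feature dimension are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_cross_attention(queries, keys_values, n_heads):
    """Fuse two feature sequences via scaled dot-product cross-attention.

    queries: (T_q, d) — e.g. MFCC frames after a self-attention encoder
    keys_values: (T_kv, d) — e.g. wav2vec 2.0 representations
    (learned query/key/value projections are omitted for brevity)
    """
    T_q, d = queries.shape
    T_kv, _ = keys_values.shape
    d_h = d // n_heads
    # split the feature dimension into heads: (n_heads, T, d_h)
    q = queries.reshape(T_q, n_heads, d_h).transpose(1, 0, 2)
    kv = keys_values.reshape(T_kv, n_heads, d_h).transpose(1, 0, 2)
    # each query frame attends over all key/value frames
    scores = q @ kv.transpose(0, 2, 1) / np.sqrt(d_h)  # (n_heads, T_q, T_kv)
    out = softmax(scores) @ kv                          # (n_heads, T_q, d_h)
    # concatenate heads back into a single feature dimension
    return out.transpose(1, 0, 2).reshape(T_q, d)

mfcc_encoded = rng.standard_normal((100, 64))  # 100 MFCC frames, 64-dim (assumed)
w2v_features = rng.standard_normal((50, 64))   # 50 wav2vec 2.0 frames (assumed)
fused = multi_head_cross_attention(mfcc_encoded, w2v_features, n_heads=4)
print(fused.shape)  # (100, 64)
```

The fused sequence keeps the query-side time axis, so each MFCC frame is enriched with information gathered from the whole wav2vec 2.0 sequence; in a full system this output would typically be pooled over time and passed to an emotion classifier.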
