Sensory and Motor Systems
Author: Juan Esteban Kamienkowski | Email: jkamienk@gmail.com
Juan Esteban Kamienkowski¹, Nicolás Nieto³, Pablo Brusco², Agustín Gravano⁴, Joaquín Ezequiel González¹
¹ Instituto de Ciencias de la Computación (Universidad de Buenos Aires – Consejo Nacional de Investigaciones Científicas y Técnicas), Argentina
² Departamento de Computación, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Argentina
³ Instituto de Investigación en Señales, Sistemas e Inteligencia Computacional, sinc(i) (Universidad Nacional del Litoral – Consejo Nacional de Investigaciones Científicas y Técnicas), Argentina
⁴ Universidad Torcuato Di Tella, Argentina
When engaged in a conversation, one receives auditory information not only from the other's speech but also from one's own speech. However, these two streams are processed differently, due to an effect called Speech-Induced Suppression (SIS). Here, we studied the brain's representation of the acoustic properties of speech in natural, unscripted dialogues, using concurrent electroencephalography (EEG) and high-quality speech recordings from both speakers. First, we reproduced a broad range of previous findings on listening to another's speech, using encoding models based on different speech features (spectrogram, envelope, phonemes, etc.). Moreover, we achieved better performance when predicting the EEG signal, even in a scenario as complex as natural dialogue, particularly in the theta band. Second, we found no response when listening to one's own speech across different frequency bands, evidencing a strong SIS effect. The present work shows that this mechanism is present, and even stronger, during natural dialogues. Furthermore, the methodology presented here opens the possibility of a deeper understanding of the related mechanisms in a wider range of contexts and with increasingly complex speech features.
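The encoding approach mentioned above (predicting the EEG signal from speech features) is commonly implemented as a linear temporal response function fit with ridge regression. The following is a minimal, self-contained sketch of that idea on simulated data; it is not the authors' pipeline, and all variable names, the toy signal, and the regularization value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (not the study's recordings): one EEG channel simulated as the
# speech envelope convolved with a short neural response kernel, plus noise.
n_samples, n_lags, alpha = 2000, 20, 1.0
envelope = rng.standard_normal(n_samples)          # simulated speech envelope
true_trf = np.exp(-np.arange(n_lags) / 5.0)        # simulated response kernel
eeg = np.convolve(envelope, true_trf)[:n_samples]  # simulated EEG channel
eeg += 0.5 * rng.standard_normal(n_samples)        # measurement noise

# Time-lagged design matrix: row t holds envelope[t], envelope[t-1], ...
X = np.zeros((n_samples, n_lags))
for lag in range(n_lags):
    X[lag:, lag] = envelope[: n_samples - lag]

# Closed-form ridge solution: w = (X'X + alpha*I)^-1 X'y
w = np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ eeg)

# Encoding performance: correlation between predicted and actual EEG
pred = X @ w
r = np.corrcoef(pred, eeg)[0, 1]
print(f"prediction correlation r = {r:.2f}")
```

In practice the same construction extends to multi-feature inputs (spectrogram bands, phoneme indicators) by stacking their lagged columns in `X`, and performance is assessed on held-out data rather than the training set as in this sketch.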