Author: Cristina Daiana Duarte | Email: email@example.com
Cristina Daiana Duarte 1°, Francisco Ramiro Iaconis 1°, Marianela Pacheco 3°, Juan Ignacio Ruíz Díaz 2°, Ángel Javier Tabullo 4°, Claudio Augusto Delrieux 2°, Gustavo Gasaneo 1°
1° Instituto de Física del Sur (Ifisur), Departamento de Física, Universidad Nacional del Sur, CONICET, Bahía Blanca
2° Departamento de Ingeniería Eléctrica y Computadoras, Universidad Nacional del Sur, CONICET, Bahía Blanca
3° Universidad Nacional de Córdoba, CONICET
4° INCIHUSA (Instituto de Ciencias Humanas Sociales y Ambientales), CONICET
We propose novel visual-auditory multisensory signal analysis techniques to explore how the processing of auditory and visual stimuli interacts, and how one modality is handled when attention is captured by the other. We conducted an experiment that modified traditional speech audiometry by introducing visual stimuli: images were integrated into the conventional setup. The experiment combines eye tracking with EEG recordings, with the aim of discriminating different auditory thresholds from brain activity and eye movements.
Six participants completed two experimental phases. In the first, disyllabic words were presented auditorily at an intensity high enough to allow accurate discrimination. At the same time, two images were displayed on a computer screen, and the subjects were instructed to choose the image corresponding to the word they heard. In the second phase, a similar sequence was repeated at a lower sound level, at which only fifty percent of the words could be recognized.
Pupil size and gaze paths were recorded synchronously with the EEG signal throughout the experiments. Novel evaluation methods based on statistical complexity were applied to the fused signals, and the resulting parameters were used as feature vectors with which classification models were trained. We compared three classifiers (Decision Trees, Random Forest, K-Nearest Neighbors); the best model achieved 79% accuracy in threshold determination.
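As a concrete illustration of statistical-complexity features of the kind described above, the sketch below computes Bandt-Pompe ordinal-pattern probabilities, normalized permutation entropy, and Jensen-Shannon statistical complexity for a one-dimensional signal segment. The choice of these particular measures, the embedding dimension, and the feature construction are assumptions for illustration; the abstract does not specify which complexity quantifiers were used.

```python
import numpy as np
from itertools import permutations

def ordinal_distribution(x, d=3):
    """Bandt-Pompe ordinal-pattern probabilities of a 1-D signal.

    Each window of length d (embedding dimension, assumed d=3 here)
    is mapped to the permutation that sorts it; the histogram of the
    d! patterns, normalized, is the ordinal distribution.
    """
    counts = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - d + 1):
        counts[tuple(np.argsort(x[i:i + d]))] += 1
    p = np.array(list(counts.values()), dtype=float)
    return p / p.sum()

def permutation_entropy(p):
    """Shannon entropy of the ordinal distribution, normalized to [0, 1]."""
    nz = p[p > 0]
    return float(-np.sum(nz * np.log(nz)) / np.log(len(p)))

def statistical_complexity(p):
    """Jensen-Shannon statistical complexity C = Q_JS[p, u] * H[p],

    where u is the uniform distribution and Q_JS is the Jensen-Shannon
    divergence normalized by its maximum value for dimension n.
    """
    n = len(p)
    u = np.full(n, 1.0 / n)
    m = 0.5 * (p + u)
    def shannon(q):
        nz = q[q > 0]
        return -np.sum(nz * np.log(nz))
    jsd = shannon(m) - 0.5 * shannon(p) - 0.5 * shannon(u)
    q0 = -2.0 / (((n + 1.0) / n) * np.log(n + 1) - 2 * np.log(2 * n) + np.log(n))
    return float(q0 * jsd * permutation_entropy(p))

# Example: entropy/complexity of a noisy segment (synthetic stand-in
# for a pupil-size or EEG trace; not real experimental data).
rng = np.random.default_rng(0)
segment = rng.normal(size=1000)
p = ordinal_distribution(segment)
features = [permutation_entropy(p), statistical_complexity(p)]
```

Per-trial feature vectors of this kind, computed from pupil-size and EEG segments, could then be fed to scikit-learn's `DecisionTreeClassifier`, `RandomForestClassifier`, and `KNeighborsClassifier` (the three model families mentioned above) to discriminate the comfortable-level from the near-threshold condition.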