The brain's auditory circuits are biologically structured to recognize and localize sounds by encoding a combination of cues that help individuals interpret them. Computational methods inspired by these human capacities have opened opportunities for improving machine hearing. Recent deep-learning studies show that convolutional recurrent neural networks (CRNNs) are a promising approach for sound event detection and localization in spatial audio. Nevertheless, depending on the acoustic environment, the performance of these systems still falls short of perfect metrics. This work therefore aims to boost the performance of state-of-the-art (SOTA) systems by using bio-inspired gammatone auditory filters and intensity vectors (IVs) in the acoustic feature extraction stage, and by adding a temporal convolutional network (TCN) block to a CRNN model to capture long-term dependencies. Three data augmentation techniques are applied to compensate for the small number of samples in spatial audio datasets. These stages constitute the proposed Gammatone-based Sound Events Localization and Detection (G-SELD) system, which exceeded SOTA results on four spatial audio datasets of varying acoustic complexity, with up to three sound sources overlapping in time.
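The gammatone front end mentioned in the abstract can be illustrated with a minimal sketch of an ERB-spaced gammatone filterbank. This is not the paper's implementation: the function names, band count, and filter length below are illustrative assumptions, using the standard Glasberg–Moore ERB scale and a 4th-order gammatone impulse response.

```python
import numpy as np

def erb_center_freqs(n_bands, f_low, f_high):
    """Center frequencies equally spaced on the ERB-rate scale (Slaney's formulation)."""
    ear_q, min_bw = 9.26449, 24.7
    i = np.arange(1, n_bands + 1)
    cfs = -(ear_q * min_bw) + np.exp(
        i * (np.log(f_low + ear_q * min_bw) - np.log(f_high + ear_q * min_bw)) / n_bands
    ) * (f_high + ear_q * min_bw)
    return cfs[::-1]  # ascending order

def gammatone_ir(cf, fs, order=4, dur=0.05):
    """Impulse response t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*cf*t); b from the ERB at cf."""
    t = np.arange(int(dur * fs)) / fs
    b = 1.019 * 24.7 * (4.37 * cf / 1000.0 + 1.0)  # 1.019 * ERB(cf)
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * cf * t)
    return g / np.max(np.abs(g))

def gammatone_features(x, fs, n_bands=64, f_low=50.0):
    """Filter a mono signal through the bank; returns an (n_bands, len(x)) array."""
    cfs = erb_center_freqs(n_bands, f_low, fs / 2)
    return np.stack([np.convolve(x, gammatone_ir(cf, fs), mode="same") for cf in cfs])
```

In a SELD pipeline of this kind, such band-filtered outputs would typically be reduced to per-band energies over short frames and stacked with the intensity-vector channels before being fed to the CRNN.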
|IEEE/ACM Transactions on Audio Speech and Language Processing
|Published - 2023
Press/Media: Data on Networks Discussed by Researchers at University of Campinas (Sound Events Localization and Detection Using Bio-inspired Gammatone Filters and Temporal Convolutional Neural Networks) — 1 item of media coverage