Information retrieval from the marine soundscape using machine learning-based source separation

In remote sensing of the marine ecosystem, visual information retrieval is limited by the low visibility of the ocean environment. In recent years, the marine soundscape has come to be regarded as an acoustic sensing platform for the marine ecosystem. By listening to environmental sounds, biological sounds, and human-made noises, it is possible to acoustically identify various geophysical events, soniferous marine animals, and anthropogenic activities. However, sound detection and classification remain challenging due to the lack of an underwater audio recognition database and the simultaneous interference of multiple sound sources. To facilitate the analysis of the marine soundscape, we have employed information retrieval techniques based on non-negative matrix factorization (NMF) to separate sound sources with unique spectral-temporal patterns in an unsupervised manner. NMF is a self-learning algorithm that decomposes an input matrix into a spectral feature matrix and a temporal encoding matrix. Therefore, we can stack two or more layers of NMF to learn the spectral-temporal modulation of k sound sources without any learning database [1]. In this presentation, we will demonstrate the application of NMF to the separation of simultaneous sound sources that appear on a long-term spectrogram. In a shallow-water soundscape, the relative change of a fish chorus can be effectively quantified even during periods with strong mooring noise [2]. In a deep-sea soundscape, cetacean vocalizations, an unknown biological chorus, environmental sounds, and systematic noises can be efficiently separated [3]. In addition, the features learned during blind source separation can be used as prior information for supervised source separation. The self-adaptation mechanism during iterative learning can help retrieve similar sound sources from other acoustic datasets containing unknown noise types.
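As a minimal sketch of the core idea, the following example applies a single layer of NMF to a synthetic magnitude spectrogram containing two overlapping sources (a continuous tonal band and a periodic broadband click train). The spectrogram construction and component count are illustrative assumptions, not the authors' pipeline; scikit-learn's NMF is used here as a stand-in for the unsupervised decomposition described above.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic magnitude spectrogram V (frequency bins x time frames)
# mixing two sources with distinct spectral-temporal patterns.
rng = np.random.default_rng(0)
freqs, frames = 64, 200
tonal = np.zeros((freqs, frames))
tonal[20:24, :] = 1.0 + 0.1 * rng.random((4, frames))  # continuous tonal band
clicks = np.zeros((freqs, frames))
clicks[:, ::20] = 1.0                                   # periodic broadband clicks
V = tonal + clicks

# NMF decomposes V ~ W @ H without any labeled training data:
# W holds spectral features, H their temporal encodings.
model = NMF(n_components=2, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(V)   # (freqs, k) spectral feature matrix
H = model.components_        # (k, frames) temporal encoding matrix

# Reconstruct each separated source from its own component pair.
sources = [np.outer(W[:, k], H[k]) for k in range(2)]
```

Each entry of `sources` is a rank-one spectrogram attributable to one learned component; stacking further NMF layers on `H`, as in [1], additionally captures the temporal modulation of each source.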
Our results suggest that NMF-based source separation can facilitate the analysis of soundscape variability and the establishment of an audio recognition database. It will therefore be feasible to investigate the acoustic interactions among geophysical events, soniferous marine animals, and anthropogenic activities from long-duration underwater recordings.

REFERENCES:
1. Lin, T.-H., Fang, S.-H., Tsao, Y. 2017. Improving biodiversity assessment via unsupervised separation of biological sounds from long-duration recordings. Sci. Rep., 7: 4547.
2. Lin, T.-H., Tsao, Y., Akamatsu, T. 2018. Comparison of passive acoustic soniferous fish monitoring with supervised and unsupervised approaches. J. Acoust. Soc. Am. Express Letters, 143: EL278.
3. Lin, T.-H., Tsao, Y. 2018. Listening to the deep: Exploring marine soundscape variability by information retrieval techniques. OCEANS'18 MTS/IEEE Kobe / Techno-Ocean 2018, in press.

