Linking the sounds of dolphins to their locations and behavior using video and multichannel acoustic recordings. Academic Article

abstract

  • It is difficult to attribute underwater animal sounds to the individuals producing them. This paper presents a system developed to solve this problem for dolphins by linking acoustic locations of the sounds of captive bottlenose dolphins with an overhead video image. A time-delay beamforming algorithm localized dolphin sounds obtained from an array of hydrophones dispersed around a lagoon. The localized positions of vocalizing dolphins were projected onto video images. The performance of the system was measured for artificial calibration signals as well as for dolphin sounds. The performance of the system for calibration signals was analyzed in terms of acoustic localization error, video projection error, and combined acoustic localization and video error. The 95% confidence bounds for these were 1.5, 2.1, and 2.1 m, respectively. Performance of the system was analyzed for three types of dolphin sounds: echolocation clicks, whistles, and burst-pulsed sounds. The mean errors for these were 0.8, 1.3, and 1.3 m, respectively. The 95% confidence bound for all vocalizations was 2.8 m, roughly the length of an adult bottlenose dolphin. This system represents a significant advance for studying the function of vocalizations of marine animals in relation to their context, as the sounds can be identified to the vocalizing dolphin and linked to its concurrent behavior.
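The localization step described above — attributing a sound to a position using arrival-time differences across a dispersed hydrophone array — can be illustrated with a simplified time-difference-of-arrival (TDOA) grid search. This is a minimal sketch, not the authors' time-delay beamforming algorithm: the hydrophone layout, sound speed, and grid spacing are all hypothetical, and a real system would estimate the TDOAs by cross-correlating the hydrophone signals rather than simulating them from a known source position.

```python
import numpy as np

# Nominal speed of sound in seawater, m/s (assumed value; the paper's
# calibration details are not reproduced here).
SOUND_SPEED = 1500.0

def propagation_delays(source, hydrophones, c=SOUND_SPEED):
    """Travel time from a 2-D source position to each hydrophone."""
    return np.linalg.norm(hydrophones - source, axis=1) / c

def localize_tdoa(tdoas, hydrophones, grid, c=SOUND_SPEED):
    """Return the grid point whose predicted time differences of arrival
    (relative to hydrophone 0) best match the measured ones,
    in a least-squares sense."""
    best_point, best_err = None, np.inf
    for point in grid:
        d = np.linalg.norm(hydrophones - point, axis=1) / c
        err = np.sum((d[1:] - d[0] - tdoas) ** 2)
        if err < best_err:
            best_point, best_err = point, err
    return best_point

# Four hydrophones at the corners of a hypothetical 20 m x 20 m lagoon.
hydros = np.array([[0.0, 0.0], [20.0, 0.0], [20.0, 20.0], [0.0, 20.0]])

# Simulate a vocalizing dolphin and the TDOAs its sound would produce.
true_pos = np.array([6.0, 11.0])
d = propagation_delays(true_pos, hydros)
tdoas = d[1:] - d[0]

# Exhaustive search over a 0.25 m grid covering the lagoon.
xs = np.linspace(0.0, 20.0, 81)
grid = np.array([[x, y] for x in xs for y in xs])
est = localize_tdoa(tdoas, hydros, grid)
```

A grid search is the simplest way to show the geometry; the estimated position could then be projected into overhead video coordinates, as the system described in the abstract does. Production systems typically replace the exhaustive search with a closed-form or iterative least-squares solver.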

publication date

  • October 2002