
North America's Four Automated
Bat Call Analysis Apps

 

SonoBat
SonoBat, Inc. • Arcata, CA

SonoBat software provides a comprehensive tool for analyzing and comparing high-resolution, full-spectrum sonograms of bat echolocation calls recorded from time-expansion bat detectors. With its intuitive, direct interface, SonoBat makes it easy to record, process, display, and analyze calls with great sophistication.

 

 

BCID
Bat Call Identification, Inc. • Kansas City, MO

BCID East allows users to automate the identification of bats in the Midwest and northeastern United States. The application interacts with an AnalookW filter to extract call parameters from bat calls, then uses several different algorithms to determine which bat is most likely responsible for producing the echolocation file. It can be used with Anabat files, converted full-spectrum files, SCAN'R parameter files, and Analook parameter files.

 

 

EchoClass
US Army Engineer R&D Center • Vicksburg, MS


EchoClass - The US Fish and Wildlife Service funded Dr. Eric Britzke to develop an automated acoustic bat identification program suitable for use throughout the range of the Indiana bat. Note: use of this software is generally limited to that range.

 

 

Kaleidoscope Pro
Wildlife Acoustics, Inc. • Concord, MA


Kaleidoscope Pro is a zero-cross analysis tool for people who need to quickly analyze terabytes of call data. The software recognizes many North American bat species and can work with either full-spectrum or zero-crossing source files. The program features a handy tool for converting full-spectrum files so they may be used in BCID and/or EchoClass.

Automated Bat Call
Analysis Software Review

As you know, BCID and EchoClass can only use zero-cross data (ZC, or "frequency-division") and therefore look ONLY at the frequency-time data from bat calls, since the amplitude-time data is unavailable in these recordings. What you may not know is that Kaleidoscope Pro (KaPRO), even though it accepts full-spectrum recordings in addition to zero-cross recordings, also discards the amplitude-time data during analysis and considers only the frequency-time data when rendering auto-classification decisions. (Which is why it can render decisions so quickly and provide an output in minutes instead of hours.) The benefit of using true full-spectrum data for analysis comes not only from including the amplitude-time data, but also from using lower-amplitude components at multiple frequencies to better extract the time-frequency trends of the calls: better extraction of time-frequency data, and more robust extraction from lower-amplitude signal components. In other words, even for the time-frequency data that ZC and full-spectrum both measure, full-spectrum still does it better, producing higher-quality, more faithful renderings of call data. There is only ONE automated software application for use in North America that actually uses full-spectrum analysis to auto-classify bats, and that is SonoBat.
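
To make this concrete, here is a minimal Python sketch of how frequency-division/zero-crossing processing reduces a recorded waveform to a frequency-time track. This is ours, for illustration only; the function name, the divide-by-8 default, and the threshold-free crossing detection are simplifying assumptions, not any vendor's actual implementation.

    # A simplified model of a frequency-division (zero-crossing) detector.
    # All that survives of the waveform is the timing of zero crossings;
    # the samples' amplitudes never appear in the output.
    import numpy as np

    def zero_crossing_track(samples, sample_rate, division_ratio=8):
        """Return (times, frequencies) derived only from zero crossings."""
        # Indices where the signal crosses zero going positive.
        crossings = np.where((samples[:-1] < 0) & (samples[1:] >= 0))[0]
        # Keep every Nth crossing, as a divide-by-N detector would.
        kept = crossings[::division_ratio]
        times = kept[:-1] / sample_rate
        periods = np.diff(kept) / sample_rate   # seconds per N cycles
        freqs = division_ratio / periods        # inferred frequency, Hz
        return times, freqs                     # amplitude-time data: gone

This is also, in essence, all that a full-spectrum-to-ZC conversion tool preserves, which is why converted files carry no more information than native ZC recordings.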

Based on our personal experience, we at BCM have significant reservations about trusting the results of any auto-classifier (even SonoBat, but especially the other three, since they essentially throw away half the content of any recording, i.e., the amplitude-time data, during analysis). We would hesitate to assign definitive occupancy, based on an echolocation recording alone, for all but the most fully-rendered recording of an acoustically distinct species. And acoustic distinctness depends upon the diversity of the species in an area (i.e., LASCIN is easy to disambiguate acoustically as long as you don't have TADBRA flying around too, like in Hawaii; likewise MYOSOD can be identified in the absence of MYOLUC, in whichever tiny pocket of North America that might be). So accepting acoustic results as evidence of occupancy must be done only with the most archetypal recordings, in areas where the entire acoustic repertoire of the local bat diversity is well understood, and where auto-classifier outputs are verified by manual vetting.

What about using two classifiers, as per USFWS Indiana Bat Acoustic Protocols?
In our considerable experience, applying a second classifier to a data set does not provide a more robust decision. Nothing is gained, confidence-wise, by having a second opinion from a computer-generated output that is only as good as the inputs it was trained on. Instead, the second classifier only introduces additional confusion about occupancy decisions, because each classifier is "built" differently and two classifiers rarely agree on the confidence of the ID for the same recording. This trickles down to the Maximum Likelihood Estimate (MLE) decision: how can you trust the MLE results when the same data set produces disparate results depending on which classifier processed it?
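
To illustrate with numbers: the sketch below is a deliberately simplified binomial stand-in for the actual MLE calculation used in the protocol (the counts, error rates, and the p_absent helper are all invented for illustration). Two classifiers run on the same 200-file night can reach opposite occupancy conclusions purely because of their different label counts and error rates.

    # Simplified presence test: if the species were absent, how likely is
    # it that misclassification alone produced this many files labeled as it?
    from scipy.stats import binom

    def p_absent(n_target_labels, n_other_files, false_positive_rate):
        """P(at least n_target_labels spurious labels | species absent)."""
        # Survival function: binom.sf(k - 1, n, p) = P(X >= k).
        return binom.sf(n_target_labels - 1, n_other_files, false_positive_rate)

    # Classifier A: 12 of 200 files labeled MYOSOD, 2% false-positive rate.
    print(p_absent(12, 188, 0.02))   # ~0.0005 -> presence "supported"
    # Classifier B, same night: 1 file labeled MYOSOD, 0.5% rate.
    print(p_absent(1, 199, 0.005))   # ~0.63   -> presence NOT supported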

So, what to do about the FWS "protocol" requiring a second classifier, or about using acoustics to document occupancy for a species, especially the acoustically ambiguous MYOSOD? We've already answered this (as have numerous other biologists and developers intimately acquainted with actually doing acoustic surveys in the field) by penning and revising an exhaustive treatise on the subject during the two comment periods for the FWS protocol, to little avail considering the content of the final guidance document. (Comments on the protocol were voluntarily archived by many authors at http://batprotocol.info/batprotocol.info/Comments/Comments.html, and one of our colleagues has obtained the rest, confirming that the FWS received input with similar conclusions to those voluntarily submitted.)

The only responsible (and scientifically defensible) thing to do at this point is to use the "best" classifier and disregard the requirement for a second classifier, in favor of manually vetting the recordings for any species of interest identified by the classifier (with concurrence from the regional FWS office, of course).

So what is required of a "best" automated classifier?

1) The classifier must output a file-by-file decision for every recording thrown at it, in addition to an MLE result for species occupancy. All four classifiers do this.

2) Someone with years of acoustic experience must perform the manual vetting and provide file-by-file results, accepting or rejecting the computer-generated output. This is made easier by a classifier that makes it easy to track MLE decisions for species of interest back to individual files. KaPRO is the best at this, with SonoBat not far behind.

3) Manual vetting MUST be done by someone who completely understands the meaning of the results provided by the auto-classifiers, how the MLE value is calculated, and how to attribute significance (confidence) to the computer-generated decisions. NONE of the classifiers come with sufficient documentation on these details. SonoBat has the most robust information available, if you know where to look for it online or from colleagues. When we asked the developers of BCID, EchoClass, and KaPRO to explain some of these details for their respective outputs, we either got no response, or responses that were inconclusive, obfuscated the intent of the original question, or were simply shocking admissions that the metrics reported in the classifier outputs had no relevance! Additionally, the person tasked with manual vetting should be prepared to defend their classifications, and should expect that the original files will be requested by agencies and clients and that a subset of those files will be sent off for third-party verification.

4) Moreover, manually vetting the species of interest from an auto-classifier output ONLY addresses the false positives. A user would have to manually review the entire data set of recordings to identify any false negatives. False negatives are a SIGNIFICANT problem in echolocation call recordings because none of the computer-generated classifications are accurate when it comes to multiple species recorded in a single file. SonoBat is the only program that attempts to address multiple species, so such recordings can be easily identified in the output and vetted for false negatives. (The sketch after this list shows why reviewing only the positives cannot catch false negatives.)

5) The classifier's performance should have a third-party review for every release version that quantifies the accuracy and precision of the auto-generated classifications for each species considered, so users can appreciate the error rates for decisions and, by extension, the confidence of occupancy decisions and the need for additional manual scrutiny (see the sketch after this list). As the FWS has proven with its efforts to date, this is a heroic task that takes more time and talent than anyone could have anticipated. SonoBat is the only software that has been tested in this manner with results currently available, but they reside on the SonoBat website, which is not necessarily an example of a third-party tester (http://www.sonobat.com/About_SonoBat_Classification_Performance.html). BCM is actively working to publish some of our colleagues' testing on this website, but only a fraction of our results are presently available; so far this information has been presented only at BCM training workshops and certain bat working group meetings. So stay tuned, but our personal experience is that SonoBat comes closest to "reality" when tested against manually vetted echolocation call recordings from the wild.

6) The classifier should be conservative in assigning confidence to species decisions, out of an appreciation of the very real (and often under-acknowledged) inherent variability in bat echolocation calls; the generally poor quality of many passive recordings; and the effects that bat behavior can have on the reliability of classification outputs. EchoClass is the most conservative program, with SonoBat not far behind. And SonoBat is the only program that addresses the very real problem of disambiguating the acoustically indistinct MYOLUC/MYOSOD species pair.
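
As a companion to points 4 and 5 above, the following Python sketch shows how per-species precision and recall, computed from a confusion matrix of manually vetted recordings, expose both kinds of error. The tiny matrix is fabricated for illustration; real third-party testing would use thousands of vetted files per species.

    # Per-species error rates from a confusion matrix of vetted recordings.
    import numpy as np

    species = ["MYOLUC", "MYOSOD", "EPTFUS"]
    # Rows = true species (from manual vetting); columns = classifier label.
    confusion = np.array([
        [80, 15,  5],
        [20, 70, 10],
        [ 2,  3, 95],
    ])

    for i, sp in enumerate(species):
        tp = confusion[i, i]
        recall = tp / confusion[i, :].sum()     # 1 - recall = false-negative rate
        precision = tp / confusion[:, i].sum()  # 1 - precision = false-positive share
        print(f"{sp}: precision={precision:.2f}, recall={recall:.2f}")

    # MYOSOD here: precision 0.80, so vetting the positives catches the other
    # 20%; but recall is 0.70, meaning 30% of true MYOSOD files carry some
    # other label and will never be seen if only the positives are reviewed.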

So, our 10-second analysis of the four software programs and their usefulness is as follows:

BCID - somewhat accurate, somewhat precise using the default settings and an appropriate species set. There is no way to evaluate a pulse-level classification or the process for consensus decisions, and there is no display output for manual vetting (use AnaLook for detailed inspection of calls; it has the best manual vetting tools for ZC files). Cost: $1500.

EchoClass - the most recent version is the most conservative of the four programs mentioned here. This might not be a bad thing: it renders decisions only on the most archetypal recordings, which are rare in passively collected field data, but it provides the most accuracy and precision on recordings that do receive a classification decision. Again, there is no way to evaluate a pulse-level classification or the process for rendering consensus decisions, nor is there any display output for manual vetting or any way to actually display bat calls (use AnaLook for detailed inspection of calls; it has the best manual vetting tools for ZC files). And as this is a government-funded program, it is unlikely to be updated or revised based on new data as rapidly as the commercial offerings. Cost: FREE.

KaPRO - extremely uneven accuracy and precision at the species-classification level (i.e., it does well with some species but scores less than 50% on others, so you might as well save $1500 and flip a coin for species presence). KaPRO provides pulse-level classifications but does not easily address multiple species in a single recording. The metrics provided in classification outputs to determine confidence are spurious because they differ depending on the species classified. There are far too many versions "out there," with knee-jerk releases/updates based on feedback from users; this has encouraged the developers to tweak the classifiers to be more accurate at the species level using their "crowd-sourced" library data (i.e., GIGO). However, users only receive a year of free updates, so early adopters end up using an inferior product for surveys and have no choice but to re-do analysis or accept false results. KaPRO provides a display output for manual vetting, but it is cumbersome at best for performing qualitative analysis compared with the more mature, refined tools provided with SonoBat, AnaLook, and BatSound. Cost: $1500.

SonoBat - our personal tests show it to be the most precise and accurate when applied to real field data. It is also the most mature program available with auto-classification capabilities (it has been in development for over a decade) and is the only one that renders classifications based on true full-spectrum data. The version 3 series is very slow (but can run in the background); however, pre-release demos of version 4 show something like a 10x speed increase. The traceable library data was collected by numerous professionals with years of experience recording, analyzing, and studying bat echolocation calls. The display output for manual vetting is the best out there, allowing for patent-pending standard views and same-screen comparisons with a built-in library of reference recordings. Cost: $1500. ZC users are out of luck; presently SonoBat only accepts full-spectrum .wav files, but those files can originate from a wide variety of hardware manufacturers, such as Pettersson, Binary Acoustics, Elekon, Avisoft, and Wildlife Acoustics.