Habitat-Net: Habitat interpretation using deep neural nets

Vashishtha, Anand; Abrams, Jesse F.; Mohamed, Azlan; Wilting, Andreas; Mukhopadhyay, Anirban

Biological diversity is decreasing at a rate 100-1000 times pre-human rates [1, 2], and tropical rainforests are among the most vulnerable ecosystems. To avoid species extinction, we need to understand the factors influencing the occurrence of species. Fast, reliable computer-assisted tools can help to describe the habitat and thus to understand species-habitat associations. This understanding is of utmost importance for more targeted species conservation efforts. Due to logistical challenges and time-consuming manual processing of field data, months to years are often needed to progress from data collection to data interpretation. Deep learning can shorten this time substantially while maintaining a similar level of accuracy.

Here, we propose Habitat-Net: a novel convolutional neural network (CNN)-based method to segment habitat images of rainforests. Habitat-Net takes color images as input and, after multiple layers of convolution and deconvolution, produces a binary segmentation of an image. The primary contribution of Habitat-Net is the translation of medical imaging knowledge (inspired by U-Net [3]) to ecological problems. The entire Habitat-Net pipeline works automatically without any user interaction. Our only assumption is the availability of annotated images, from which Habitat-Net learns the most distinguishing features automatically.

In our experiments, we use two habitat datasets: (1) canopy and (2) understory vegetation. We train the model separately with 800 canopy images and 700 understory images. Our testing dataset has 150 canopy and 170 understory images. We use the Dice coefficient and the Jaccard index to quantify the overlap between ground-truth segmentations and those produced by Habitat-Net. This yields a mean Dice score (mean Jaccard index) for the segmentation of canopy and understory images of 0.89 (0.81) and 0.79 (0.69), respectively.
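The two overlap metrics above have a standard definition for binary masks: Dice = 2|A∩B| / (|A|+|B|) and Jaccard = |A∩B| / |A∪B|. A minimal sketch of how they could be computed for a predicted and a ground-truth mask (function names and the toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0

def jaccard_index(pred, truth):
    """Jaccard index: |A ∩ B| / |A ∪ B| for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

# Toy 4x4 masks (1 = e.g. canopy pixel)
truth = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
print(dice_score(pred, truth))     # 2*3 / (3+4) ≈ 0.857
print(jaccard_index(pred, truth))  # 3 / 4 = 0.75
```

Note that the two metrics are monotonically related (J = D / (2 - D)), which is why a Dice of 0.89 corresponds to a Jaccard of about 0.81, as reported.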
Compared to manual segmentation, Habitat-Net prediction is approximately 3,000-150,000 times faster. For a typical canopy dataset of 335 images, Habitat-Net reduces total processing time from 4 hours (45 seconds/image) to 5 seconds (15 milliseconds/image). In this study, we show that it is possible to speed up the data pipeline in the ecological domain using deep learning. In the future, we plan to create a freely available mobile app based on Habitat-Net technology to characterize habitat directly and automatically in the field. In combination with ecological models, our tools will help to understand the ecology of some poorly known, but often highly threatened, species and thus contribute to more timely conservation interventions.

REFERENCES:
1. Sachs, Jeffrey D., et al. "Biodiversity conservation and the millennium development goals." Science 325.5947 (2009): 1502-1503.
2. Chapin, F. Stuart, III, et al. "Consequences of changing biodiversity." Nature 405.6783 (2000): 234.
3. Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. "U-Net: Convolutional networks for biomedical image segmentation." International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2015.
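The throughput claim for the canopy dataset can be checked with simple arithmetic, using only the per-image timings quoted in the abstract:

```python
# Figures reported in the abstract for a typical canopy dataset.
images = 335
manual_s_per_image = 45       # manual segmentation: 45 s/image
net_s_per_image = 0.015       # Habitat-Net: 15 ms/image

manual_total_h = images * manual_s_per_image / 3600   # ≈ 4.2 hours
net_total_s = images * net_s_per_image                # ≈ 5 seconds
speedup = manual_s_per_image / net_s_per_image        # 3000x

print(f"manual: {manual_total_h:.1f} h, "
      f"Habitat-Net: {net_total_s:.1f} s, speedup: {speedup:.0f}x")
```

This recovers the lower end of the reported 3,000-150,000x range; the upper end presumably corresponds to images that take far longer to segment manually.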

Citation:

Vashishtha, Anand / Abrams, Jesse / Mohamed, Azlan / et al: Habitat-Net: Habitat interpretation using deep neural nets. Jena 2018.
