A new paper in the Proceedings of the National Academy of Sciences (PNAS) reports how a cutting-edge artificial intelligence technique called deep learning can automatically identify, count and describe animals in their natural habitats. Photographs collected by motion-sensor cameras are then described automatically by deep neural networks.

The result is a system that can automate animal identification for up to 99.3 percent of images while matching the 96.6 percent accuracy of crowdsourced teams of human volunteers.

“This technology lets us accurately, unobtrusively and inexpensively collect wildlife data, which could help catalyze the transformation of many fields of ecology, wildlife biology, zoology, conservation biology and animal behavior into ‘big data’ sciences. This will dramatically improve our ability to both study and conserve wildlife and precious ecosystems,” says Jeff Clune, the senior author of the paper. He is the Harris Associate Professor at the University of Wyoming and a senior research manager at Uber’s Artificial Intelligence Labs.

The paper was written by Clune; his Ph.D. student Mohammad Sadegh Norouzzadeh; his former Ph.D. student Anh Nguyen (now at Auburn University); Margaret Kosmala (Harvard University); Ali Swanson (University of Oxford); and Meredith Palmer and Craig Packer (both from the University of Minnesota).

Deep neural networks are a form of computational intelligence loosely inspired by how animal brains see and understand the world.

They require vast amounts of training data to work well, and the data must be accurately labeled (e.g., each image correctly tagged with which species of animal is present, how many there are, etc.). This study obtained the necessary data from Snapshot Serengeti, a citizen science project on the http://www.zooniverse.org platform. Read more from phys.org…
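To illustrate the kind of labeling such training data requires, here is a minimal sketch of labeled camera-trap records; the field names, file names, and species values are hypothetical illustrations, not the actual Snapshot Serengeti schema:

```python
# Minimal sketch of labeled camera-trap training records.
# All field names and example values are hypothetical, not taken
# from the Snapshot Serengeti dataset itself.
from dataclasses import dataclass

@dataclass
class LabeledImage:
    path: str      # camera-trap photograph
    species: str   # which species of animal is present
    count: int     # how many individuals are visible

def is_usable(record: LabeledImage) -> bool:
    """A record is usable for training only if a species is named
    and at least one animal was counted."""
    return bool(record.species) and record.count > 0

dataset = [
    LabeledImage("img_0001.jpg", "zebra", 3),
    LabeledImage("img_0002.jpg", "wildebeest", 12),
    LabeledImage("img_0003.jpg", "", 0),  # unusable: missing label
]

clean = [r for r in dataset if is_usable(r)]
print(len(clean))  # → 2
```

In practice, Snapshot Serengeti aggregates classifications from many volunteers per image before such labels are treated as ground truth.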
