Scientists have developed an artificial intelligence (AI) system that can automatically identify, count and describe animals in their natural habitats. Photographs collected by motion-sensor cameras are then described by deep neural networks without human intervention. The result is a system that can automate animal identification for up to 99.3 per cent of images while matching the 96.6 per cent accuracy of crowdsourced teams of human volunteers. “This technology lets us accurately, unobtrusively and inexpensively collect wildlife data, which could help catalyse the transformation of many fields of ecology, wildlife biology, zoology, conservation biology and animal behaviour into ‘big data’ sciences,” said Jeff Clune, an associate professor at the University of Wyoming in the US. “This will dramatically improve our ability to both study and conserve wildlife and precious ecosystems,” said Clune.
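The article does not describe how the system decides which images to handle automatically, but one plausible mechanism (an assumption here, not a detail from the article) is confidence thresholding: the model's label is accepted only when its confidence clears a cutoff, and low-confidence images are routed to human volunteers. A minimal sketch, with hypothetical labels and confidence scores:

```python
def triage(predictions, threshold=0.9):
    """Split (label, confidence) pairs into an auto-accepted list and a
    human-review queue, based on a confidence cutoff."""
    auto, review = [], []
    for label, confidence in predictions:
        # Accept the model's answer only when it is sufficiently confident.
        (auto if confidence >= threshold else review).append(label)
    return auto, review

# Hypothetical model outputs for four camera-trap images.
preds = [("lion", 0.98), ("cheetah", 0.55), ("elephant", 0.93), ("leopard", 0.71)]
auto, review = triage(preds)
print("automated:", auto)
print("sent to volunteers:", review)
```

Raising the threshold trades coverage for accuracy: fewer images are automated, but the automated ones are the model's most certain calls, which is how a system can keep human-level accuracy on the large fraction it does handle alone.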
Deep neural networks are a form of computational intelligence loosely inspired by how animal brains see and understand the world. They require vast amounts of training data to work well, and the data must be accurately labelled. This study obtained the necessary data from Snapshot Serengeti, a citizen science project that has deployed a large number of “camera traps” (motion-sensor cameras) in Tanzania that collect millions of images of animals in their natural habitat, such as lions, leopards, cheetahs and elephants.
Crowdsourced teams of human volunteers were asked to label each image manually. The study harnessed 3.2 million labelled images produced in this manner by more than 50,000 human volunteers over several years. “Not only does the artificial intelligence system tell you which of 48 different species of animal is present, but it also tells you how many there are and what they are doing. It will tell you if they are eating, sleeping, if babies are present, etc,” said Margaret Kosmala from Harvard University in the US. “We estimate that the deep learning technology pipeline we describe would save more than eight years of human labeling effort for each additional 3 million images. That is a lot of valuable volunteer time that can be redeployed to help other projects,” said Kosmala.
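A network that reports species, count and behaviour at once is a multi-output classifier: a shared image representation feeds several prediction heads. The sketch below is purely illustrative, not the study's architecture; the embedding size, count bins and behaviour list are hypothetical, and random weights stand in for a trained model (only the 48 species classes come from the article).

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 512      # hypothetical size of a shared image embedding
N_SPECIES = 48       # species classes, as reported in the article
N_COUNT_BINS = 11    # hypothetical count buckets, e.g. 1..10 and "11+"
N_BEHAVIOURS = 3     # hypothetical flags, e.g. eating, sleeping, babies present

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Randomly initialised linear heads stand in for the trained network's final layers.
W_species = rng.normal(size=(EMBED_DIM, N_SPECIES))
W_count = rng.normal(size=(EMBED_DIM, N_COUNT_BINS))
W_behaviour = rng.normal(size=(EMBED_DIM, N_BEHAVIOURS))

def describe(embedding):
    """Return (species index, count bin, behaviour flags) for one image embedding."""
    species = int(np.argmax(softmax(embedding @ W_species)))     # one of 48 classes
    count_bin = int(np.argmax(softmax(embedding @ W_count)))     # one count bucket
    behaviours = sigmoid(embedding @ W_behaviour) > 0.5          # independent yes/no flags
    return species, count_bin, behaviours

embedding = rng.normal(size=EMBED_DIM)  # stand-in for a real camera-trap image
species, count_bin, behaviours = describe(embedding)
print(species, count_bin, behaviours.tolist())
```

The key design point is that species and count are mutually exclusive choices (softmax over classes), while behaviours can co-occur, so each gets an independent yes/no decision (sigmoid per flag).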