Between 2010 and 2013, the website Snapshot Serengeti presented citizen scientists with the opportunity to help organize more than 1.2 million images (including some hilarious animal selfies) captured by camera traps in Serengeti National Park in Tanzania. These volunteers helped identify which images in the dataset contained animals and classified the species pictured, the number of animals, and any behaviors on display.

A new analysis of the work of these citizen scientists shows that trusting untrained strangers with scientific data isn’t a terrible idea. As the study in Conservation Biology notes, overall, volunteers classified 98 percent of the images accurately, based on a comparison with expert answers. 

On average, 27 volunteers viewed each image, but even images classified by only five volunteers were labeled accurately 90 percent of the time. Even the most knowledgeable expert can make mistakes, and the pooled judgment of multiple volunteers was slightly more accurate (98 percent compared to less than 97 percent) than the classification of a single expert.
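The gain from pooling many volunteers comes down to consensus: each image's final label is the answer most viewers agree on. Below is a minimal sketch of that idea as a plurality vote, with hypothetical volunteer answers; the actual project's aggregation algorithm was more involved (handling ties, empty images, and confidence scoring).

```python
from collections import Counter

def consensus_label(votes):
    """Return the plurality (most common) label among volunteer votes.

    Tie-breaking and confidence scoring are omitted in this sketch.
    """
    counts = Counter(votes)
    label, _ = counts.most_common(1)[0]
    return label

# Hypothetical volunteer answers for one camera-trap image:
votes = ["wildebeest", "wildebeest", "buffalo", "wildebeest", "zebra"]
print(consensus_label(votes))  # → wildebeest
```

With enough independent votes, occasional individual mistakes get outvoted, which is why the pooled answer can edge out a single expert.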

This is exciting news for scientists, who don't always have the manpower to go through and catalog their entire archive of images from camera traps like those in the Serengeti. If the judgment of regular volunteers can be trusted, a whole lot more of that data can be put to use.

All images via Snapshot Serengeti