
Data Guys

By Eric Wallace


Female Mallard and Hatchlings (© Rob Bielawski)

A team of dedicated eBird reviewers is working behind the scenes to ensure the accuracy of the eBird database in Virginia. Spreadsheet wizard Rob Bielawski is helping to revolutionize the data filtering process.

Augmenting a trio of statewide reviewers is a group of seven regional experts. Together, the volunteers serve as the atlas’s first line of quality-control agents.

“In order for us to maintain the integrity of the database, and for it to be put to use by science and conservation communities, our data quality has to be high,” says VABBA2 director, Dr. Ashley Peele. “Without the help of these committed guys and gals, that would be an impossibility.”

Today, the process is very different than it was 30 years ago. “Back then, everything was done by hand,” says regional eBird reviewer Mike Stinson. A veteran of early breeding bird atlases, Stinson volunteered for the first VABBA in 1985. Sifting through handwritten records for anomalies was painstaking and time-consuming; integrating eBird’s online database and digital filtering technology led to a marked increase in efficiency.

However, there has been a concurrent rise in entries. Compared with the first VABBA’s roughly 400 birder participants, more than 950 have contributed to the VABBA2. By the close of the atlas’s second season in 2017, the boost had already yielded more than 500,000 records.

“It’s a tremendous amount of data and it’s growing daily,” says Peele. “If we were still reviewing every record by hand, we’d need a lot more than eleven people to get it all done.”

This is where eBird’s data filters come in. Like a team of automated fact-checkers, they help the state reviewers home in on potentially problematic entries.


Blue-gray Gnatcatcher on nest (© Rob Bielawski)

“When you submit a checklist to eBird, each record” (i.e., each species entry) “is compared against a data filter to see if it falls within certain expected criteria,” says Rob Bielawski. A statewide reviewer, Bielawski has worked to maintain and improve eBird data filters in the Commonwealth since 2015. “If the entry falls outside those criteria, it gets flagged for review and is sent to the review queue for a local reviewer to look further into.”

Typical flags involve birds sighted in locations where they are unlikely to occur, in unusually high numbers, or at times of year when they shouldn’t be present.

For example, “it would be remarkable to see a Black-throated Blue Warbler near Blacksburg in December (although that very thing occurred in 2017!), but one wouldn’t bat an eye—or an ear—at several of them being present in early to mid-May,” explains Bielawski. Likewise, “observing a flock of 1,000 Snow Geese wouldn’t strike any coastal birder as unusual during the month of March but seeing even one in August would certainly be noteworthy.”
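The seasonal and count limits Bielawski describes can be sketched as a simple rule check. The species windows and maximum counts below are illustrative assumptions, not eBird’s actual filter values, and real filters are far more detailed:

```python
from datetime import date

# Hypothetical filter rows: expected season as (month, day) tuples, plus a
# maximum expected count. Purely illustrative numbers.
FILTERS = {
    "Black-throated Blue Warbler": {"season": ((4, 15), (10, 15)), "max_count": 20},
    "Snow Goose": {"season": ((10, 1), (4, 30)), "max_count": 5000},
}

def in_season(obs: date, start, end) -> bool:
    """True if obs falls in the window; handles windows that wrap New Year."""
    key = (obs.month, obs.day)
    if start <= end:
        return start <= key <= end
    return key >= start or key <= end  # e.g., October through April

def review_record(species: str, count: int, obs: date) -> str:
    rule = FILTERS.get(species)
    if rule is None:
        return "flagged"  # unexpected species for the region
    start, end = rule["season"]
    if not in_season(obs, start, end) or count > rule["max_count"]:
        return "flagged"  # goes to the local reviewer's queue
    return "accepted"

# A December warbler gets flagged; several in mid-May pass.
print(review_record("Black-throated Blue Warbler", 1, date(2017, 12, 10)))  # flagged
print(review_record("Black-throated Blue Warbler", 3, date(2018, 5, 10)))   # accepted
# 1,000 Snow Geese in March pass; even one in August is flagged.
print(review_record("Snow Goose", 1000, date(2018, 3, 5)))  # accepted
print(review_record("Snow Goose", 1, date(2018, 8, 5)))     # flagged
```

Note that a flag is not a rejection: as described above, it only routes the record to a human reviewer.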

As eBird is open to birders of all levels, potential misidentification of similar species is another frequent trigger for flags. Subtle differences in coloration and physical features can lead well-meaning novice birders to mistake one species for another.

Though Bielawski is quick to say that “weird things happen in the birding world all the time and that’s half the fun,” he emphasizes that the point of eBird’s filters is “to catch the outlier reports so they can be investigated by local experts.” The subsequent review process is what maintains the integrity of the eBird database.

What happens when an entry gets flagged?

“The hope is always that the person has a photo that makes it a clear-cut ‘yes’ or ‘no,’” says Bielawski. If not, he’ll comb the birder’s notes, looking for anomalies and clues that will help him understand what the person saw. Sometimes, the mistake is as simple as a typo. “Right now, if I were reviewing a flagged entry where a person said they saw ten kingfishers in Virginia Beach, I’d suspect that might be an accidental mis-entry—perhaps an intended ‘1’ that was typed in as ‘10.’”

In instances where the decision isn’t so straightforward, Bielawski will send an email with questions about the sighting. Most birders, he says, understand the importance of the reviews and are happy to help.

Rob Bielawski, VA’s eBird Filter Expert


“The goal is to work together to create a more realistic picture of what’s actually out there,” Bielawski says. Committed to what he describes as a “noble cause,” he spends upward of 20 hours a week, on top of a full-time engineering job, volunteering for eBird and VABBA2 tasks: maintaining data filters, reviewing eBird records, managing the Atlas’s social media outlets, and running the website of the Virginia Society of Ornithology, one of the project’s primary sponsors. On top of all this, he still finds time to get out and atlas in the field. “One of the biggest reasons I do this is you can’t help a species unless you know the full story.” To help birds with declining populations, conservationists need “to know where they are and where they’re not.”

Above all, to get an accurate picture of the avian landscape, you have to have accurate records.



Bielawski’s work with eBird data filters goes beyond simple maintenance—as a civil engineer with a mathematical mind and a knack for manipulating spreadsheets, he has ambitious goals for improvement.

“When I came onboard in 2015, we had about 20 regional data filters,” he says. The 34-year-old hopes to “expand the system to include custom filters for smaller regions, as well as independent cities and counties where warranted.” Doing so would lead to records being “more accurately flagged for review” and would minimize inefficiency for eBird users and reviewers alike.

Possible future enhancements might include the ability to filter by elevation. This would substantially reduce false flags in areas like the Blue Ridge Mountains, where many birds breed at higher elevations but not in the nearby valleys. For instance, “though Dark-eyed Juncos will breed along Skyline Drive, they’re rare in about 99 percent of the filter area in summer,” says Bielawski. As a result, even when birders enter accurate sightings from high-elevation sites, their observations are flagged by eBird. “Elevation issues like this add unnecessary effort to both observers and reviewers alike, so our goal is to minimize these wherever possible.”
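The elevation-aware filtering Bielawski envisions could look something like the sketch below, where a species rule carries a minimum breeding elevation so that an accurate ridge-top record passes while the same record from a valley is flagged. The rule names and the 900 m cutoff are assumptions for illustration, not eBird’s design:

```python
# Hypothetical elevation-aware summer filter. Thresholds are illustrative.
RULES = {
    # Dark-eyed Juncos breed along high ridges like Skyline Drive but are
    # rare at low elevations in Virginia in summer (assumed cutoff).
    "Dark-eyed Junco": {"min_breeding_elevation_m": 900},
}

def review_summer_record(species: str, elevation_m: float, rules: dict) -> str:
    rule = rules.get(species)
    if rule is None:
        return "flagged"  # species not expected here in summer
    min_elev = rule.get("min_breeding_elevation_m")
    if min_elev is not None and elevation_m < min_elev:
        return "flagged"  # plausible species, implausible elevation
    return "accepted"

print(review_summer_record("Dark-eyed Junco", 1050, RULES))  # accepted
print(review_summer_record("Dark-eyed Junco", 200, RULES))   # flagged
```

The point of the design is exactly what the article describes: fewer false flags on accurate high-elevation sightings, and less wasted effort for observers and reviewers.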

“My goal is continual refinement,” Bielawski says. Although he reluctantly concedes the filters will “probably never be perfect,” customizations at the county and city level would be nothing short of revolutionary. For birds, however, he says manmade boundaries will remain largely arbitrary. “It would be great to create filters based on specific habitats.” While he’s doubtful about the viability of such a project—habitat is so fragmented that micro-level refinement on that scale may prove impossible—Bielawski nonetheless plans to get us “as close as we can get.”

Dr. Peele also hopes to one day build breeding code filters into the standard eBird review system. For now, an intensive layer of Atlas data review occurs outside of the eBird system every winter. “With the help of Dr. Lewis Barnett, an Atlas regional coordinator and University of Richmond professor, we developed code for reviewing breeding data specifically.” This code flags eBird records for potential breeding code errors, which are then reviewed by hand in the off-season. This annual process covers tens of thousands of records and is the Atlas’s second line of data quality control.
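A breeding-code check of this kind might work along the lines sketched below. This is not the Atlas team’s actual script; the species, safe-date window, and code list are assumptions chosen only to illustrate the idea of flagging a confirmed-breeding code reported outside a species’ plausible breeding window:

```python
# Hypothetical off-season breeding-code review. "Safe dates" are the window
# inside which a breeding code is plausible, as (month, day) tuples.
SAFE_DATES = {
    "Dark-eyed Junco": ((5, 20), (7, 31)),  # assumed window, for illustration
}

# A few of eBird's confirmed-breeding codes: nest with young, nest with
# eggs, feeding young, occupied nest.
CONFIRMED_CODES = {"NY", "NE", "FY", "ON"}

def flag_breeding_code(species: str, code: str, month: int, day: int) -> bool:
    """Return True if the record should go to a human reviewer."""
    window = SAFE_DATES.get(species)
    if window is None:
        return True  # no safe dates on file: send to review
    start, end = window
    out_of_window = not (start <= (month, day) <= end)
    # A confirmed-breeding code outside the safe dates needs a closer look.
    return code in CONFIRMED_CODES and out_of_window

print(flag_breeding_code("Dark-eyed Junco", "NY", 6, 15))  # False (plausible)
print(flag_breeding_code("Dark-eyed Junco", "NY", 3, 1))   # True (flag it)
```

As with the eBird filters, a flag here only queues the record for the winter by-hand review, not rejection.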

~ Eric Wallace, VABBA2 Communications

July 19, 2018