There is no mandatory federal crash-reporting standard; the standard that NHTSA recommends is deeply flawed, and much is left up to officer discretion. We can fix this!


Excerpt from https://usa.streetsblog.org/2021/07/20/why-u-s-car-crash-reporting-is-broken/

The national Fatality Analysis Reporting System contains a universe of useful data about the 6.5 million car crashes that happen on U.S. roads in an average year. But that data is all built on crash reports from tens of thousands of police departments — and each of those departments gets to decide which details about those crashes they think should count.

Not surprisingly, a lot of the critical information falls through the cracks. A 2016 report from the National Safety Council found that crash reports in 26 states lacked a field for officers to record whether drivers were texting, and that no state crash reports included a field to record whether drivers were fatigued, despite the fact that the National Highway Traffic Safety Administration had conducted major public awareness campaigns on both issues.

Some states have updated their reporting standards recently — but not all cities and municipalities in those states have followed suit. In 2019, for instance, 44 percent of officers simply didn’t report whether a driver was distracted in the event of a crash, let alone note which common distracting behavior, such as texting, was involved — likely because local forms don’t require them to do so.
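The statistical consequence of those blank fields is worth making concrete. The sketch below (hypothetical numbers, not drawn from the article) shows how treating an unreported "distracted" field as "not distracted" — rather than as missing data — understates distraction's role in crashes:

```python
# Illustrative only: five hypothetical crash records. `None` means the
# officer left the "distracted" field blank on the form.
crashes = [
    {"id": 1, "distracted": True},
    {"id": 2, "distracted": None},
    {"id": 3, "distracted": False},
    {"id": 4, "distracted": None},
    {"id": 5, "distracted": True},
]

# Naive rate: blanks are silently counted as "not distracted".
naive = sum(c["distracted"] is True for c in crashes) / len(crashes)

# More honest rate: computed only over crashes where the field was
# actually filled in, with the share of unknowns reported alongside it.
known = [c for c in crashes if c["distracted"] is not None]
honest = sum(c["distracted"] for c in known) / len(known)
unknown_share = 1 - len(known) / len(crashes)

print(f"naive rate: {naive:.0%}")                # 40%
print(f"known-only rate: {honest:.0%}")          # 67%
print(f"share unreported: {unknown_share:.0%}")  # 40%
```

With 40 percent of fields blank, the two estimates diverge sharply — which is why analysts need the "not reported" share surfaced, not buried.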

2. The standard that NHTSA recommends is deeply flawed

States aren’t totally flying blind on crash reporting, but federal guidelines are voluntary — and they’re not especially forward-thinking.

Since its initial publication in 1998, the little-known Model Minimum Uniform Crash Criteria has advised states on the collection of hundreds of crash data points, including details on roadway features, vehicles, drivers, and other road users. But some of the most widely recognized causes of our traffic violence epidemic still aren’t included in the 236-page tome. For instance,

  • the MMUCC doesn’t ask officers to note how far a walker who’s struck by a driver was from the nearest unobstructed crosswalk; it simply advises them to note whether the walker was in a crosswalk or not, even if the closest one was miles away;
  • vehicle height and weight aren’t requested, either, despite the fact that the growing bulk of SUVs and pick-ups is well known to be accelerating U.S. walking fatalities.

Comparatively, the details that officers are advised to collect about vulnerable road users are weirdly granular. There’s even a separate form to detail whether pedestrians and cyclists were wearing safety equipment, with separate codes for “reflectors,” “reflective clothing,” and “lighting,” such as flashlights.

There are some efforts to paint a fuller picture of the factors commonly involved in car crashes — but they’re not based on police reports alone.

A recent study conducted by advocates in Portland, Ore., for instance, compiled law-enforcement data alongside hospital records, media reports, roadway measurements, and even manual inspection of adjacent land uses on Google Maps. Similar efforts are underway in San Francisco. Some advocates say the MMUCC should include many of those factors in its next edition (2022) — and the burden of collecting the data should be shared among departments of transportation, health, and other city offices, rather than left to the cops.
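The core of that multi-source approach is record linkage: keying each data source to a common incident identifier so that, for example, hospital records can correct an officer's on-scene severity guess. A minimal sketch, with entirely hypothetical field names and records:

```python
# Hypothetical data: sparse police reports and hospital records keyed
# by a shared incident ID (all values invented for illustration).
police = {
    "PDX-001": {"mode": "pedestrian", "officer_severity": "minor"},
    "PDX-002": {"mode": "cyclist", "officer_severity": "unknown"},
}
hospital = {
    "PDX-002": {"admitted": True, "injury_severity": "serious"},
}

linked = {}
for incident_id, report in police.items():
    record = dict(report)
    # Where hospital data exists, it can fill in or override the
    # officer's on-scene assessment.
    if incident_id in hospital:
        record.update(hospital[incident_id])
    linked[incident_id] = record

print(linked["PDX-002"])
```

In practice such linkage is much harder than a dictionary merge — real datasets rarely share clean identifiers, so studies typically match probabilistically on date, location, and victim characteristics — but the principle is the same: no single agency's records tell the whole story.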

3. A lot is left up to officer discretion

Of course, even the perfect crash report might not give us the data we need to really understand our traffic violence crisis — at least if the officials filling it out rely on opinion rather than fact.

Law enforcement officers are often given wide latitude to guess at whether behaviors like speeding, distraction and erratic driving contributed to crashes, even when harder data about what actually happened is readily available. Last year, reps for the U.S. Department of Transportation even acknowledged that as many as two-thirds of the “drunk walking” deaths the agency mentions in its safety campaigns were not verified by blood tests of the dead pedestrians, but instead based on “imputed blood-alcohol content” or an officer’s professional opinion of a walker’s sobriety — opinions presumably based on the testimony of the driver and other witnesses, because, of course, the walker died.

Simply put, officers don’t always have the tools or the bandwidth to make accurate crash reports — which is why some advocates want to give reporting responsibilities to agencies with better resources and specialized training. But until that happens, police may keep leaning on the same simplistic explanations that perpetuate the corrosive myth that individual road-user error accounts for almost all crashes, rather than collecting data that could identify more complex, systemic factors.

“At least anecdotally, a lot of officers will default to explanations [for crashes] that frankly, often eye witnesses don’t even agree with,” said Rohit T. Aggarwala, senior fellow at Cornell Tech and author of an op-ed encouraging Secretary Buttigieg to reform FARS. “Excessive speed seems to be the default for many things; driver inattention is another one you hear a lot, too.”

But Aggarwala hopes that, with the right reforms, much subjectivity can be stripped from crash reporting — especially with former management consultant Pete Buttigieg at the helm of the DOT.

“Guys who work at places like McKinsey are usually pretty obsessed with data,” said Aggarwala. “To bring a former consultant’s kind of perspective to this could be great to help identify where there are gaps that need to be filled…It would definitely be great to see the data cleaned up, and more importantly, to see the data matter.”