Fallible Fingerprints: Law Professor Seeks to Shore Up the Science Used in Courts

UVA law professor Brandon Garrett says that while no two fingerprints may be alike, two interpretations of fingerprint evidence may be quite different.

University of Virginia law professor Brandon Garrett is on a mission to see that the legal system presents science accurately when it introduces forensic evidence into the courtroom.

Garrett is a principal investigator of UVA’s year-old Center for Statistics and Applications in Forensic Evidence, which is generating new research about forensic analysis and sharing best practices in order to facilitate justice.

The author of “Convicting the Innocent: Where Criminal Prosecutions Go Wrong,” Garrett has spent much of his career studying how jurors can reach false conclusions based on bad evidence and other misleading factors.

But in a recent Q&A, he said he’s optimistic that junk-science convictions can become a thing of the past.

Q. Why should we be more skeptical of forensics?

A. Many ubiquitous types of forensics, such as fingerprint comparisons, tool-mark evidence and ballistic evidence, have been criticized as lacking a sufficiently reliable scientific foundation.

Still worse, some of those forensics were traditionally presented in court in a misleading way, as if they were foolproof. No human endeavor is error-free. Crime laboratories across the country have been plagued by scandal, with thousands of cases reopened due to poor quality control, contamination, falsification of results or improper methods.

Q. The White House recently released a report on forensics. Can you describe the findings and recommendations?

A. The President’s Council of Advisors on Science and Technology report forcefully states something very simple: We should not use forensics that are not known to be reliable to convict people of crimes.

For years, courts have grandfathered in techniques like fingerprinting that provide valuable information, but with error rates and probative value that have simply not been adequately studied. For example, the White House report noted a study showing a one-in-18 error rate for fingerprint comparison and another showing a one-in-six error rate for bite-mark comparison. A 2009 report by the National Academy of Sciences carefully detailed how much of the forensic evidence used was without “any meaningful scientific validation.” Little changed.

Hopefully, the White House report adds more impetus for change.

Q. What has your own work uncovered that even you were surprised by?

A. Few studies had been done examining how jurors appreciate some of the most commonly used forensics, including fingerprint evidence. [UVA law professor] Greg Mitchell and I embarked several years ago on a series of studies, and what we discovered surprised me: I expected that when analysts gave conclusions exaggerating their certainty that prints came from a defendant, jurors would place more weight on the evidence. In fact, jurors were not much affected by those overstatements. Instead, it seemed as if just hearing the word “fingerprint” was enough to convince jurors that the defendant did it.

In more hopeful news, though, we discovered in a second experiment that jurors were affected by hearing that there is a possibility of error in fingerprinting. Still more promising, in a detailed follow-up experiment we are finding that jurors can be highly sensitive to information about error rates, or about the proficiency of the particular fingerprint examiner. We plan to explore these findings further and make practical recommendations for testimony, reports and regulation of forensics in the courtroom.

Media Contact

Mary Wood

Chief Communications Officer, University of Virginia School of Law