Forensic Science: Fact or Fiction?
Published Jul 1, 2010 9:30 AM
Fingerprints may prove guilt or innocence beyond a reasonable doubt in TV crime shows, but while that makes for a great plot device, it isn't necessarily science, says UCLA Law Professor Jennifer Mnookin.
"There's no information about error rates, we don't know how often fingerprint examiners get things wrong, and there's no underlying statistical model to justify their claims," explains Mnookin, a legal scholar specializing in scientific evidence and forensic science. "I've witnessed all kinds of practices in and around the courtroom that don't have any substantial research basis at all."
Now, thanks to a grant from the National Institute of Justice (NIJ), the research and development arm of the U.S. Department of Justice, Mnookin is hoping to change the status quo. Together with UCLA Cognitive Psychology Professor Philip Kellman, latent fingerprint expert David Charlton and forensic science researcher Itiel Dror, she is attempting to establish a scientific method for quantifying the accuracy and error rates of latent fingerprint examination. Charlton and Dror are based at UK firm Cognitive Consultants International.
The funding, part of a larger $10 million NIJ grant, came in response to last year's landmark National Academy of Sciences report on the state of forensic science in U.S. crime laboratories, which validated the views of Mnookin and others. The report questioned the reliability and accuracy of forensic methods across the spectrum, from ballistics to handwriting analysis, citing a serious lack of empirical research.
"The pattern identification sciences grew up out of police forces, rather than universities," the legal expert says. "There's never been a robust research tradition."
With fingerprint analysis specifically, the advent of computerized databases has in some ways made errors more likely. Previously, crime scene fingerprints were compared against a relatively small pool of prints from likely suspects. "Chances are you wouldn't find two prints that matched with too much similarity unless you had actually found the right person," explains Mnookin.
Now, the computer "spits out a set of possibilities, many of which are wrong, and it takes a careful human eye to decide whether or not they match," she says. "You could very well get a degree of similarity that could fool the human observer into thinking two prints came from the same person, when they really didn't. There are no formal standards … Our hope is that we're a starting point, but we sure shouldn't be an ending point. We'd be thrilled if our research helps spur further interest by other academics."
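The search process Mnookin describes can be sketched with a toy example. Nothing below reflects how a real AFIS system actually works; the feature sets, candidate names, and the simple overlap-based similarity score are all invented purely to illustrate her point that a database search returns a ranked list of candidates, some of which can look deceptively similar to the crime scene print:

```python
def similarity(latent, candidate):
    """Toy similarity: fraction of (made-up) minutiae features two prints share.
    Prints are modeled as plain sets of feature labels."""
    shared = len(latent & candidate)
    return shared / max(len(latent), len(candidate))

# Hypothetical feature sets; real examiners compare ridge detail, not labels.
latent_print = {"ridge_ending_3", "bifurcation_7", "island_2", "bifurcation_9"}
database = {
    "true_source": {"ridge_ending_3", "bifurcation_7", "island_2", "bifurcation_9"},
    "lookalike_a": {"ridge_ending_3", "bifurcation_7", "island_2", "delta_1"},
    "lookalike_b": {"ridge_ending_3", "bifurcation_7", "loop_4", "delta_1"},
}

# The search "spits out a set of possibilities" ranked by score; a human
# examiner must still judge each one, and a close-but-wrong candidate
# (here, lookalike_a at 0.75) can score high enough to mislead.
ranked = sorted(database.items(),
                key=lambda kv: similarity(latent_print, kv[1]),
                reverse=True)
for name, features in ranked:
    print(name, round(similarity(latent_print, features), 2))
```

The sketch also shows what is missing in practice: there is no agreed threshold at which a score like 0.75 should count as a "match," which is exactly the absence of formal standards Mnookin's research aims to address.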