
Here come the appeals: White House report attacks modern forensic science

Criminal forensics have been under fire for decades, but the fairly recent rise of true crime from trashy daytime TV fodder to trendy pop culture phenomenon has seen that criticism spread to the general population. Now it's no longer just criminology students and minority activists claiming to have found crippling flaws in much of the scientific basis of the Western justice system. With public interest comes political attention, and on Tuesday the President's Council of Advisors on Science and Technology released its long-awaited report on the subject.

In the council's opinion, a large portion of criminal forensics sits somewhere between questionable and full-on junk science. This includes famously contested techniques like bite mark analysis, hair matching, and more. The report is making waves in the justice and legal communities, as it could be the start of a large-scale revisiting of decades of convictions.

The Justice Department and the FBI have both expressed concerns with the report. US Attorney General Loretta Lynch has said that the report's recommendations will not be adopted, pushing against the report's core claim by insisting that modern forensics have been born of the most reliable scientific methods available. The FBI said that it "disagrees with many" of the report's claims, and that the report "makes broad, unsupported assertions regarding science and forensic science practice."

This will be an uphill battle for those who agree with this White House report. They’ll be facing not only genuine belief in these methods, but a justified fear of widespread appeals of formerly slam-dunk convictions. The report claims to have “no view on the legal question of whether any past cases were ‘erroneously decided,’” but it does go on to state that “from a scientific standpoint, subsequent events have indeed undermined the continuing validity of conclusions that were not based on appropriate empirical evidence.” So… it is expressing an opinion on past convictions.

The report identified two general types of problem. One, there’s “the need for clarity about the scientific standards for the validity and reliability of forensic methods.” Two, there’s “the need to evaluate specific forensic methods to determine whether they have been scientifically established to be valid and reliable.”

The first of these two recommendations feeds directly into the second, via the group it most directly attacks: expert witnesses. The report is a direct criticism of the enormous business of expert testimony, in particular taking issue with the ability of judges to essentially unilaterally decide what makes an expert an expert. This gives the judge undue power over the evidence presented in a case, especially considering that the judge is generally not an expert in the field. The report calls for the development of hard standards that would take much of the burden for this decision off the judge and put it onto more data-focused standards bodies.

But this raises a problem: How do you set a standard for, say, a boot-print? Percentage match? As measured how? Should this match percentage be calibrated across different types of terrain? How much does moisture content affect it? How much weight should be given to the presence or absence of “randomly acquired characteristics” in a specific shoe?

In the current system, these and many other questions can be left to a particular expert witness to determine, and there’s a lot of disagreement. To create a standard would mean settling that argument, determining one side to be objectively, provably correct. This isn’t just a politically difficult thing to do in any group of self-interested professionals, it’s also scientifically difficult to achieve. Accurately appraising the ability of footwear experts to match shoes to shoe-prints could mean conducting a separate study on every question like those above, then more studies on how to integrate those results into a single standard, then more studies sifting between disagreements within those studies.

If this is all sounding a bit laborious, ask yourself this: Why has this not been done before? The answer, in some cases, is that it has been. Certainly, fingerprinting and DNA analysis have both been subject to this sort of focused research, but not every investigation has led to positive results for forensic scientists. The report notes that when the FBI conducted a study checking over a hundred hair follicle matches against DNA comparisons, 11% of visually confirmed matches turned out to be from different people. The FBI claimed this result confirmed the validity of the technique.

Most surprisingly for those of us whose views are still on some level influenced by memories of Matlock, there’s a critical overview of firearms analysis. This is the discipline in which “toolmarks” on the bullet and the barrel of a gun are compared to determine whether the gun fired the bullet. It’s been widely sold as a firearm equivalent to a fingerprint, but the report claims that both the National Research Council and the President’s Council have failed to find sufficient evidence confirming the reliability of firearms analysis.

This is a broadly familiar narrative in the American justice system. The polygraph is probably the most famous example of judicial technology that has been institutionally over-trusted, and which has led to convictions of innocent people. But more relevant is the story of forensic handwriting analysis, which has unquestionably sent innocent people to prison, but which also unquestionably has some validity. It is a simple fact that a person's handwriting generally conforms to broad patterns, and unlike a hacked-together bio-monitor like a polygraph, handwriting is an intrinsic part of our world whether we use it or not. Do we really want to make our courts incapable of considering something that could be so obviously relevant?

The report makes a number of recommendations to Justice, the FBI, NIST, and more, and they basically boil down to: Figure it out. Spend the money, and do the tests. The problem is that this isn't just about developing better tools for future prosecutions, but developing post-hoc defenses for past ones. When will it ever make sense to risk being the Attorney General whose decision lets out a few percent of the country's prisoners?

Thus, it seems most likely that any real progress on this topic will have to come from non-governmental research bodies like universities, though government funding budgets still hold a lot of power in that area as well. If such independent reviews end up confirming every one of the techniques currently under scrutiny, then this government hesitance to perform the research is going to seem pretty silly. If those same studies confirm the report's concerns, however, it could be seen as bordering on sinister.
