Precedent without Proof: How Legislative Silence Enabled Junk Forensics
- Leah Rose
Despite mounting scientific consensus that forensic bite mark analysis is fundamentally unreliable, it continues to surface in American courtrooms, reflecting not merely a judicial shortcoming but also a legislative one. Originating in the 1970s and legitimised by People v. Marx (1975), bite mark analysis gained judicial acceptance through precedent rather than through any legislatively mandated system for assessing scientific validity. Courts, operating under the Frye “general acceptance” test and later the Daubert reliability standard, became de facto arbiters of scientific credibility, even though these rules were procedural rather than substantive and provided no mechanism to ensure scientific rigour. As wrongful convictions later revealed, the absence of legislative safeguards allowed faulty forensic methods to masquerade as reliable science. The rise and steady decline of bite mark analysis thus expose a deeper systemic problem: without legislative standards governing the admissibility and validation of forensic evidence, the justice system remains vulnerable to pseudoscience disguised as expertise.
Bite mark analysis is a forensic method in which experts attempt to identify a suspect by comparing the suspect’s dentition to impressions found on a victim’s skin. The technique emerged in the 1970s and 1980s as part of the broader rise of forensic science in criminal investigations, offering what appeared to be a tangible, visual form of evidence that could link a perpetrator to a violent crime. Bite marks were often found in sexual assault cases or violent homicides, and the method was initially embraced because it seemed objective and scientifically grounded. However, later research and case reviews exposed fundamental flaws in the practice. Human skin is a highly variable and deformable medium, prone to stretching, compression, and healing, which can distort bite marks and make accurate comparison extremely difficult. Factors such as the angle of the bite, the force applied, and postmortem changes can further compromise the reliability of impressions. Moreover, there is no standardised methodology for analysing marks, and forensic examiners have often relied on subjective judgement rather than quantifiable criteria, producing widespread disagreement even among trained experts. Empirical studies, as well as investigative reports like the 2009 National Research Council report, have demonstrated that bite mark analysis lacks foundational validity, reproducibility, and a known error rate. The method’s failure to meet basic scientific standards has led to numerous documented wrongful convictions, including those of Levon Brooks and Kennedy Brewer, who were implicated by bite mark testimony and subjected to decades-long imprisonment for crimes they did not commit. Yet despite overwhelming documentation discrediting bite mark analysis as a reliable science, the technique has persisted in courtrooms, valued more for its apparent evidentiary force than for any proven scientific reliability.
The path to judicial acceptance of bite mark analysis in American courts highlights the tension between legal standards and scientific validation. It reveals how courts often rely on precedent rather than empirical proof to determine admissibility – a pattern that reflects the broader absence of legislative oversight in forensic regulation. The foundational Frye v. United States (1923) decision established the “general acceptance” test, requiring that scientific techniques be widely endorsed within their relevant fields before admission in court. Under this standard, bite mark evidence was initially met with scepticism, as courts questioned its reliability and the lack of consensus among forensic experts. This approach shifted dramatically with People v. Marx (1975), where the California Court of Appeal admitted bite mark testimony, reasoning that the method applied well-established tools – photographs, casts, and microscopy – and could be independently assessed by the judge and jury; consequently, Frye did not bar its acceptance. Further, the bite in Marx was highly unusual: the defendant’s distinctive dentition, coupled with the bite’s position on the victim’s nose, made the mark highly identifiable. Scientists at the time considered the case an exception, arguing that bite mark identification should only be admissible under similarly extraordinary circumstances. However, instead of remaining an outlier, Marx effectively created a practical precedent, and courts began treating bite mark analysis as generally admissible even when the dental pattern was less distinctive. In this manner, a norm was established where judicial discretion overrode scientific validation.
The later Daubert v. Merrell Dow Pharmaceuticals (1993) decision further transformed the legal framework for admitting expert testimony. The Supreme Court held that the Federal Rules of Evidence, enacted in 1975, had superseded the strict Frye general-acceptance requirement with a flexible, reliability-focused standard under Federal Rule of Evidence 702, which governs expert testimony through procedural guidance. Rule 702 allows a qualified expert to testify if their specialised knowledge will help the trier of fact understand the evidence or determine a fact in issue, provided that the testimony is based on sufficient data and the methods are reliable and reliably applied to the facts of the case. Daubert clarified how Rule 702 should be applied: it instructed trial judges to act as gatekeepers, evaluating whether an expert’s methodology is scientifically valid, testable, peer-reviewed, and subject to known error rates, while still considering the degree of acceptance in the relevant scientific community. The ruling marked a significant shift: admissibility was no longer determined solely by consensus but by a combination of scientific rigour and logical reasoning, leaving the validation of forensic techniques like bite mark analysis largely to the discretion of judges rather than to standardised scientific regulation. Daubert set a binding precedent for federal courts and influenced many state courts, contributing to the transition of bite mark analysis from a cautiously regarded technique to a routinely admitted form of evidence. This underscores how the absence of statutory standards allows unvalidated forensic methods to gain legitimacy in the courtroom.
As evidenced above, judges have relied on case law and their own discretion to determine the admissibility of forensic techniques because, unlike other scientific or technical disciplines, no federal or state statute has ever required forensic methods to undergo standardised validation. While the statutory 1975 Federal Rules of Evidence, which include Rule 702, govern how evidence is admitted, presented, and evaluated in U.S. federal courts, they are largely procedural rather than substantive. Recognising this gap, authoritative reports – including the 2009 National Research Council report and the 2016 report from the President's Council of Advisors on Science and Technology (PCAST) – have repeatedly called for the establishment of more rigorous standards of scientific validity through legislative reform, yet Congress and state legislatures have largely failed to act. This legislative void has allowed bite mark analysis and other unvalidated methods to gain credibility through courtroom practice alone, exposing systemic vulnerabilities in the justice system and perpetuating the risk of wrongful convictions. Although the use of bite mark analysis has declined significantly in recent years and has not been upheld on appeal in any recent U.S. case, its occasional admission at trial demonstrates that these risks persist wherever statutory reform remains absent.
Addressing the legislative gap in forensic regulation requires proactive statutory reform to ensure that courtroom science is reliable and independently verified. One approach would be passing a statute like the United Kingdom’s Forensic Science Regulator Act 2021, requiring forensic science providers to comply with a Code of Practice that ensures all methods are scientifically validated and reliably fit for use in criminal investigations before being admitted as evidence. Rule 702 could be amended to explicitly codify criteria for scientific reliability, rather than leaving this assessment largely to judicial discretion. Additionally, the creation of an independent National Institute of Forensic Science, as recommended by the 2009 National Research Council report, could oversee standardisation, accreditation, and ongoing research, reducing the reliance on precedent or individual expert authority. Finally, legislation could require post-conviction review mechanisms for cases built on discredited or unvalidated forensic methods, helping to prevent wrongful imprisonment and restore public confidence in the justice system. Together, these reforms would establish a formal, enforceable framework for forensic evidence, addressing the deficiencies that allow unreliable methods like bite mark analysis to flourish.
The persistence of bite mark analysis in American courtrooms underscores a fundamental legislative deficiency: courts have been forced to serve as de facto regulators of scientific validity, relying on precedent and judicial discretion rather than codified standards. While case law – from People v. Marx to Daubert – has shaped the parameters of admissibility, these decisions are procedural stopgaps. Without legislative mandates requiring rigorous, peer-reviewed validation of forensic methods, untested or unreliable techniques can gain legitimacy simply through courtroom usage. The resulting reliance on outdated or flawed science has led to wrongful convictions and continues to threaten the integrity of the justice system. Ultimately, the problem is not merely judicial shortcoming but legislative neglect; until federal or state lawmakers establish clear statutory standards for the scientific reliability of forensic evidence, the courts alone cannot prevent junk science from influencing life-altering decisions.