Shortly after Ronald Reagan’s administration came to power in the United States in 1981, Health and Human Services secretary Richard Schweiker began expelling scores of individuals from the Social Security disability rolls. His argument: their pain was not real. Instead, he argued—along with Social Security commissioner John Svahn and White House policy analyst Peter Ferrara—benefits should be available only to those whose pain could be proven “on medical grounds.” Keith Wailoo, in his 2015 Pain: A Political History, notes that “This was a pivotal moment, a fulcrum in American pain politics” (239).
He goes on:
Framing the issues in these stark black-and-white terms worked politically for Reagan and emboldened his followers. Launching a stinging and dismissive critique of the liberal pain standard, Reagan’s mantra was simple: he was attempting to correct the course of the nation with a conservative formula for judging people’s pain. (243)
In the nearly 12 months since Donald Trump became president of the United States, countless members of the administration, voices in right-wing media, and even members of Congress and local governments have leaned on the rhetoric of the “fake” in the service of neoliberal and isolationist ideologies.
I do not, of course, seek to argue that the Trump administration is itself relying on—or, for that matter, seeking out—actual findings from the scientific community when making statements to the press or, worse, enacting legislation. But then again, Reagan’s portrayal of the “Welfare Queen”—cheating the government out of money, “faking” pain for the sake of gaming the system—lacked empirical evidence. In fact, when Reagan signed the Social Security Disability Benefits Reform Act in 1984 [PDF link], he authorized a congressionally sponsored investigation into the ways that pain should be evaluated for the purposes of benefit distribution. The eventual output [PDF link] of the so-called “Pain Commission” (made up of physicians, nurses, lawyers, representatives from the insurance industry, and others) was a report noting that “there is a clear Commission consensus that malingering [that is, faking pain] is not a significant problem” (16).
The Commission’s findings may have reopened the opportunity for thousands who had been purged from the rolls to receive benefits. Still, it was not until the Clinton era that the number of disability awards returned to pre-Reagan levels (and Clinton was no champion of the welfare state). Nevertheless, Reagan-era neoliberalism ushered in a rhetoric meant to rally a base around punishing the fakers, the soft ones. And it led to real policy change—much like “fake news” rhetoric has acted as justification for censoring National Park Service social media efforts or rolling back more than 60 EPA regulations. Trump’s absurd and dangerous rhetoric is by no means a direct result of Reagan’s. The relationship between the two, however, can be better understood when we take a brief look at the mechanisms of diagnosis and classification of diseases that were proliferating in the late Cold War era United States.
Until 1980, the Diagnostic and Statistical Manual was a seldom-used reference for psychiatrists. The first edition, published in 1952, was “cheap, slim, and ring-bound,” while psychiatrists could “spend their working lives blissfully unaware of” the existence of the 1968 DSM-II (Cooper 2005, 1). In 1980, however, the DSM-III was released, containing a vast array of new classifications, diagnoses, and—importantly—validating data. Per Rachel Cooper, “within a few short years [of the DSM-III release], psychiatrists in the U.S. were using [it] on a daily basis” (2005, 1). This was a major edition, not only because of its eventual widespread use, but also because of the sorts of debates that were hashed out in its development: as two examples, PTSD was added to the DSM-III, while homosexuality was (eventually, through a revised edition in 1987) removed—thanks to immense pressure from veterans’ and gay advocacy groups, respectively (Kirk and Kutchins 1992).
The development of the DSM-III was spearheaded by a psychiatrist named Robert Spitzer, who was originally trained as a psychoanalyst but was eventually attracted to a more diagnostic-focused approach (Lane 2007, 41). One main concern of Spitzer’s was that the studies used to develop the underlying nosology needed to demonstrate a strong level of reliability. To that end, he and a number of colleagues published a paper in 1967 that proposed steps to increase reliability, representing a reliability metric with the Greek letter kappa (κ). The paper included calculations made by a computer program called KAPPA, written in Fortran IV for an IBM 7094 computer (Kirk and Kutchins 1992, 42). As Stuart Kirk and Herb Kutchins argue in The Selling of DSM: The Rhetoric of Science in Psychiatry (1992), not only was the reference to a computer in the article meant to shore up support for the kappa metric, but it “also alerted clinicians that reliability problems henceforth were likely to be the province of a few research and statistical specialists who had the capability and resources to study these matters [by computer]” (42). In a later work on subsequent editions of the DSM, the authors argue that “Now, at the close of the 20th century, a revolution of similar proportions [to that of the Freudian turn] is reshaping our thinking, [one] that de-emphasizes case analysis in favor of using checklists of everyday feelings and behaviors to identify and classify disorders” (246-7).
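The kappa statistic itself is simple: it corrects raw inter-rater agreement for the agreement two raters would reach by chance alone. A minimal sketch in Python of how Cohen’s kappa is computed (the rater data and diagnostic labels below are hypothetical, and this is a modern reconstruction of the statistic, not Spitzer’s original Fortran IV program):

```python
# Cohen's kappa: chance-corrected agreement between two raters
# assigning categorical diagnoses to the same set of cases.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is the agreement expected by chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of cases on which the raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: for each category, the product of the two
    # raters' marginal probabilities, summed over all categories.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two clinicians diagnosing ten cases.
a = ["dep", "dep", "anx", "anx", "dep", "ptsd", "anx", "dep", "ptsd", "anx"]
b = ["dep", "anx", "anx", "anx", "dep", "ptsd", "dep", "dep", "ptsd", "anx"]
print(round(cohens_kappa(a, b), 3))  # → 0.688
```

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance, which is why the metric lent itself to the claims of scientific rigor Kirk and Kutchins describe.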
The DSM acts here as representative of the turn towards computational classification—exemplified further by the International Statistical Classification of Diseases and Related Health Problems, a publication whose proliferation accelerated in post-World War II western cultures thanks, primarily, to the availability of computer-based research and publication systems (see Geoffrey Bowker and Susan Leigh Star’s 2000 Sorting Things Out: Classification and Its Consequences). When what is “real” is delineated by a formulaic checklist of conditions or visible signs, then the possibility of variation and outliers is jettisoned. In the same vein, when authorities declare what these conditions and signs are, we risk falling victim to the motives of these regimes. We see the definition of pain limited in order to rile up resentment towards those “faking it”; we see limits to admissible evidence that supports environment-sparing policies.
Certainly, the category of “provability” is a mainstay of STS scholarship, and Lorraine Daston and Peter Galison’s 2007 pillar of the field, Objectivity, explores questions of realness and truth across the expansive history the authors take on. But at a time when our current administration is making moves to define what “truth” is, what language is valid, and which press outlets are arbiters of “fake news,” I see a strong connection between these efforts and those of the Reagan presidency introduced above. The authoritarianism exemplified by the actions of the Trump government is rooted in classificatory regimes, based on the checklists and programs of the systems guiding our everyday lives.
Works cited and not linked: