Brain Power: Intelligence in the Age of Neuroscience

Ittai Orr // When I set out to take the LSAT, the law school admissions test, I believed it was an IQ test that would finally lay bare the limits of my inherent brain power. According to the organization that administers the exam, it “measures the reading and comprehension of complex texts with accuracy and insight; the organization and management of information and the ability to draw reasonable inferences from it; the ability to think and write critically; and the analysis and evaluation of the reasoning and arguments of others.” This was not a test of legal knowledge or craft; it purported to measure something that, like your height or weight, is really there: the bedrock capacity to comprehend, draw inferences, think, write, and analyze. Fantasies of hyper-competence in high-rise buildings come to mind: super-human jurists from primetime TV laying out a clever and unexpected counter-argument that humiliates the stupefied opposition; a fashionably unshaven man whose mental agility makes him indispensable at a law firm despite the fact that he’s not a lawyer at all; Poe’s C. Auguste Dupin or Doyle’s Sherlock Holmes, demystifying a crime for their astonished, and cognitively limited, audiences.

Several months of classes, failures, and frustrations later, I managed to score in the highest percentile, but the experience left a distinctly bad aftertaste that extended well beyond an egalitarian aversion to hierarchies or survivor’s guilt. Along with my sense of pride at the hard-earned triumph and the thrill of getting into top law schools came an awareness that I had narrowly escaped the all-too-common, crippling assumption that I was pre-wired to be less-than: that those capacities of analysis, thinking, and comprehending were programmed by my genes, or at most achieved permanently in early education, and that they could never be improved or honed beyond what my relatively unimpressive math scores in high school and college bespoke. The LSAT, perhaps alone among standardized tests, gets at functions of the mind that come closest to what we’ve come to refer to as “intelligence” or “intellectual capacity,” which we attribute to a relatively stable chemical reaction, or to that electrified sponge we call the brain. Logical reasoning, it turns out, is something you learn – something everyone should learn. I was afraid to take the test because it would expose me as an intellectual fraud, but I came out the other end realizing that the fraud was the invention of intellectual ability itself.

In the wake of Michel Foucault’s critiques of science, concepts like insanity, sexuality, gender, intelligence, and even selfhood came under obliterating scrutiny. Despite these important philosophical interventions, from the 1990s onward we have also lived in the era of the “Neuro,” which locates all the features of our personalities squarely in our genetic, neurological makeup. Scholars have called this the turn to the “cerebral subject,” or the “neurostructural self.”[1] The danger of this sort of reification is not just the foreclosure of a radically vital, plastic notion of human nature, but, more practically, what psychologists have called the “Pygmalion effect”: the determinative power of the belief in inferiority. Though the initial studies were flawed, subsequent experiments corroborate that children perform better on IQ tests when their teachers are primed with higher expectations of them.[2] Obviously, this cannot account for all negative student outcomes, and a belief in inferiority doesn’t guarantee failure, but I do wonder what kind of effect the neurostructural self has on our performance and our decisions.

Relatedly, the faith we put in the existence of innately unequal abilities undoubtedly affects how much we acquiesce to economic and political inequalities. Just as we credit our successes to our innate talents, we are taught to attribute our defeats to our own inadequacies, rather than blame bad outcomes on chance or foul play. This is, after all, what the idea of intelligence was invented to do: lend a sense of meritocracy to the lived reality of inequality. As historians have shown, it only entered seriously into public discourse in the nineteenth century, in the context of debates over the degree to which African Americans and Native Americans belonged to the same species, and deserved the same rights, as whites. In the same period, it justified the abject inequality produced by the industrial revolution. The key axiom was, and continues to be, that in the game of life (figured as natural and not man-made), it is inevitable—though perhaps regrettable—that some people will succeed where others fail, and that this inequality must be attributable to the natural programming we were born with. Never mind, by the way, the question of whether intelligence and desert are really the same; by the century’s end, nature’s preference moved quietly away from the good and attached instead to excellence.

Now, after a long period of invasive state interventions, psychology, and psychoanalysis, we’ve returned to the darkly fatalistic nineteenth century, with all of its dissected brains and skull collections. The Neuroscience of Intelligence (Richard J. Haier, Cambridge Univ. Press, 2016) cites a 1904 study showing how positive correlations tie together mental abilities of various kinds of thinking in a “positive manifold” that would hint at a Platonic hidden factor, “g,” or intelligence, that is supposedly responsible for all of them. G, Haier posits, is correlated in turn with educational outcomes, and why not? “It would be unusual if learning and intelligence were unrelated” (17). Just how mental efficiency is related to education, however, becomes a matter of guesswork. After inferring from this impressive “manifold” of correlations that g is responsible for all the other things (educational excellence, math skills, memorization skills, etc.), rather than the other way around (who knows whether education wasn’t mostly responsible for g?), Haier arrives at what is, given his scant evidence, an unwarranted conclusion: “Given the powerful influence of g on educational outcomes, it is surprising that intelligence is rarely considered explicitly in vigorous debates about why pre-college education appears to be failing many students. The best teachers can maximize a student’s learning, but the intelligence level of the student creates some limitations, although it is fashionable to assert that no student has inherent limitations” (20). Haier, who elsewhere writes that his goal is to maximize intelligence, reveals his hand. The endpoint of the argument is not to help people gain skills, but to question the premise underpinning desperate efforts to adequately fund public education: that a quality learning environment results in better outcomes for students.
Maybe he imagines a pharmacological utopia where every child is on that magic pill from the 2011 film Limitless, sitting in damp, dirty, and overcrowded high schools. He then floats a “fashionable” straw man: that no student has any limitations of any kind, as if this were the only other possible belief besides his own. Readers who paid attention during a logic course (and those who were magically born with a lot of g) will not be fooled by Haier’s argument: that some students have limitations (who doubts that all students have some?) does not in any way suggest that the reason our education system is failing is a lack of inherent ability on the part of students.

The book goes on to deluge its readers with even more correlations: brain scan images reveal the well-oiled inner workings of high-IQ brains; there are stories about proteins and neurons working at varying degrees of efficiency; and, of course, there is the quintessential autistic savant test subject, whose presumably unstudied particularity (always understood as outside of education and state-provided resources) stands in for the stubborn particularity of all brains. There is no doubt that there are some differences among people’s capacities, but to conclude from a manifold of correlations, and from the existence of diagnosable mental disabilities, that education is probably a useless endeavor, and that inherent g factors in our brains are responsible for everything from the failing school system to the existence of poverty and racial inequality, as Charles Murray infamously posits wherever he can avoid a riot, is, as the LSAT itself will tell you, an unjustified leap of logic.

[1] Elizabeth Fein, “Innocent Machines: Asperger’s Syndrome and the Neurostructural Self.”

[2] Robert Rosenthal and Lenore Jacobson, Pygmalion in the Classroom: Teacher Expectation and Pupils’ Intellectual Development, newly expanded ed. (Bancyfelin, Carmarthen, Wales: Crown House Publishing, 1992).
