Key Takeaways
1. Biological Determinism: A Persistent Fallacy of Reification and Ranking
This book, then, is about the abstraction of intelligence as a single entity, its location within the brain, its quantification as one number for each individual, and the use of these numbers to rank people in a single series of worthiness, invariably to find that oppressed and disadvantaged groups—races, classes, or sexes—are innately inferior and deserve their status.
The core argument. Biological determinism posits that social and economic differences among human groups stem from inherent, inborn distinctions, implying that society accurately reflects biology. This pervasive idea, often cloaked in scientific language, serves to validate existing hierarchies and justify the status quo. It suggests that if the poor suffer due to "laws of nature," then societal efforts to alleviate their misery are futile.
Two fundamental fallacies. This deterministic view relies on two deep errors of thought:
- Reification: The tendency to convert abstract concepts, like "intelligence," into concrete, measurable entities. This transforms a complex set of human capabilities into a single, quantifiable "thing."
- Ranking: The propensity to order complex, continuous variation into a simple, linear scale of worth. This often leads to a "great chain of being" where certain groups are placed at the top and others at the bottom.
Quantification as a weapon. Historically, the allure of numbers has been used to lend an air of objectivity and irrefutable precision to these fallacious arguments. Scientists, often unconsciously, gathered and interpreted data selectively to confirm preconceived notions, thereby providing "scientific" validation for social prejudices against marginalized groups. This process, though seemingly objective, often travels in a circle, starting with prejudice and returning to it, reinforced by the mystique of numerical data.
2. 19th-Century Craniometry: Numbers Masking Preconceived Prejudice
Morton’s summaries are a patchwork of fudging and finagling in the clear interest of controlling a priori convictions. Yet—and this is the most intriguing aspect of the case—I find no evidence of conscious fraud; indeed, had Morton been a conscious fudger, he would not have published his data so openly.
The "American School" of racism. In the early to mid-19th century, American scientists like Samuel George Morton and Louis Agassiz championed polygeny, the idea that human races were separate species. This theory provided a convenient biological justification for slavery and the displacement of indigenous populations. Morton, a Philadelphia physician, amassed the world's largest collection of human skulls, aiming to objectively rank races by brain size.
Morton's unconscious bias. Morton's meticulous measurements, using mustard seed and later lead shot to determine cranial capacity, consistently placed whites on top, Indians in the middle, and blacks at the bottom. However, a re-analysis of his raw data reveals a pattern of unconscious manipulation (a numerical sketch follows this list):
- He included small-brained Inca Peruvians to lower the Indian average but excluded equally small-brained Hindus to raise the Caucasian mean.
- His subjective mustard seed measurements yielded systematically lower capacities for "inferior" races compared to his later, more objective lead shot method.
- He failed to correct for body size or sex, even though his own data showed that these factors, not intelligence, drive differences in brain size.
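A minimal Python sketch of the first point, using invented capacities rather than Morton's actual measurements: including a small-brained subgroup on one side of a comparison while silently dropping its counterpart on the other manufactures a gap.

```python
import numpy as np

# Invented cranial capacities (cubic inches), for illustration only.
inca = np.full(30, 75.0)              # small-brained Indian subgroup
other_indians = np.full(30, 85.0)
hindus = np.full(20, 80.0)            # small-brained Caucasian subgroup
other_caucasians = np.full(40, 90.0)

# Asymmetric treatment: keep the Incas, drop the Hindus.
indian_mean = np.concatenate([inca, other_indians]).mean()              # 80.0
caucasian_mean = other_caucasians.mean()                                # 90.0

# Symmetric treatment: include the small-brained subgroup on both sides.
caucasian_mean_all = np.concatenate([hindus, other_caucasians]).mean()  # ~86.7

print(indian_mean, caucasian_mean, caucasian_mean_all)
```

The manufactured 10-point gap shrinks once both sides receive the same treatment; no individual number had to be falsified.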
Broca's meticulous prejudice. Paul Broca, a leading French craniometrician, also used precise measurements to "prove" the inferiority of women, the poor, and non-European races. While his data were generally accurate, his interpretations were driven by social assumptions. He would adopt or abandon criteria (like forearm length or the cranial index) depending on whether they supported his preconceived ranking, often resorting to convoluted explanations to explain away anomalies. For instance, he argued that large-brained Germans were simply more "brawny," and that the relatively "apish" position of the foramen magnum in blacks was due to a loss of frontal brain matter.
3. Recapitulation Theory: "Inferior" Groups as Perpetual Children
The adults of inferior groups must be like children of superior groups, for the child represents a primitive adult ancestor.
Evolutionary justification for ranking. The theory of recapitulation, epitomized by "ontogeny recapitulates phylogeny" (individual development mirrors evolutionary history), became a powerful tool for biological determinism in the late 19th century. It suggested that "higher" creatures repeat the adult stages of "lower" animals during their own growth. This provided a seemingly scientific basis for ranking human groups: adults of "inferior" races, sexes, or classes were deemed analogous to the children of "superior" white males, representing an ancestral, less developed stage.
Anatomical and psychological comparisons. Scientists like E.D. Cope and Carl Vogt used this framework to highlight "primitive" traits in marginalized groups:
- Anatomical: Deficient calf musculature, flattened noses, and brain structures in blacks were compared to those of white children or apes.
- Psychological: "Savages" and women were characterized as emotionally childlike, lacking judgment, or prone to "caprice," reinforcing stereotypes.
- Social implications: This theory justified imperialism, arguing that "undeveloped races" were incapable of self-government and required tutelage, as famously articulated in Kipling's "White Man's Burden."
Neoteny's ironic reversal. The collapse of recapitulation theory in the early 20th century led to the rise of neoteny, the idea that humans retain juvenile traits into adulthood. This reversed the ranking: now, retaining childlike features was "superior." This presented a dilemma for racists, as Orientals and women often exhibited more neotenous traits than white males. Rather than abandoning their prejudices, proponents like Louis Bolk simply found new, often contradictory, anatomical features to re-assert white superiority, demonstrating how scientific theories were bent to fit pre-existing social biases.
4. Lombroso's Criminal Anthropology: Atavism as a Stigma of Deviance
At the sight of that skull, I seemed to see all of a sudden, lighted up as a vast plain under a flaming sky, the problem of the nature of the criminal—an atavistic being who reproduces in his person the ferocious instincts of primitive humanity and the inferior animals.
The "born criminal" concept. Cesare Lombroso, an Italian physician, founded criminal anthropology with his theory of "l'uomo delinquente" (the criminal man). His "flash of inspiration" led him to believe that criminals were evolutionary throwbacks, or "atavistic beings," who possessed the "ferocious instincts of primitive humanity and inferior animals." This theory provided a biological, deterministic explanation for criminal behavior, shifting focus from social causes to inherent individual pathology.
Anatomical and social stigmata. Lombroso claimed that "born criminals" could be identified by "stigmata": anatomical features recalling an apish past, together with telltale behavioral signs. These included:
- Enormous jaws, high cheekbones, prominent brow arches
- Relatively long arms, large ears, flattened noses
- Insensitivity to pain, excessive idleness, love of orgies
He also cited social traits like criminal argot (the specialized slang of the underworld) and tattooing as signs of atavism, comparing them to the behavior of "savages" and children.
Immunity to disproof. Lombroso's arguments were constructed to be unfalsifiable. When confronted with contradictory evidence, he would simply expand his categories of criminality (e.g., "criminals of passion," "epileptic criminals") or discount worthy behavior in "primitives" as mere insensitivity to pain. His data on criminal brain size, for instance, showed little difference from "normal" brains, yet he continued to assert that "small capacities dominate" in criminals. This circular reasoning, combined with copious but often misleading numerical data, allowed his theory to persist despite its scientific vacuity.
5. Alfred Binet's IQ Test: A Tool for Help, Perverted into a Label
We must protest and react against this brutal pessimism; we must try to demonstrate that it is founded upon nothing.
A pragmatic tool for educational support. Alfred Binet, a French psychologist, developed his intelligence scale in 1904 for a specific, humane purpose: to identify children in French schools who needed special educational assistance. He created a diverse set of tasks, emphasizing that the test was a practical, empirical guide, not a measure of innate, fixed intelligence. Binet explicitly rejected the idea that intelligence could be captured by a single, immutable number.
Binet's three cardinal principles (later disregarded):
- Practical device, not theory: Scores were a guide, not a definition of innate or permanent "intelligence."
- For identifying need, not ranking: The scale was for mildly retarded or learning-disabled children, not for ranking normal children.
- Emphasis on improvement: Low scores indicated a need for special training, not an indelible mark of incapacity. Binet vehemently opposed the "brutal pessimism" of those who claimed intelligence was a fixed quantity.
The American perversion. American psychologists, particularly H.H. Goddard and Lewis M. Terman, fundamentally distorted Binet's intentions. They reified his scores as measures of a single, innate entity called "intelligence" and used them to justify social stratification and eugenics. Goddard, for example, coined the term "moron" for high-grade defectives and advocated their segregation and sterilization, believing "normal intelligence" was a single Mendelian gene. This transformation of Binet's ameliorative tool into a deterministic label represents a tragic misuse of science.
6. American Hereditarians: Mass-Marketing Innate, Immutable IQ
The common opinion that the child from a cultured home does better in tests solely by reason of his superior home advantages is an entirely gratuitous assumption.
Goddard's "moron" menace. H.H. Goddard, a key figure in bringing Binet's tests to America, reified intelligence as a single, inborn, Mendelian trait. He believed that "morons" (his coined term for high-grade defectives) were the greatest threat to American society, linking low intelligence to immorality, crime, and pauperism. His infamous "Kallikak family" study, based on dubious visual identification and later revealed to contain altered photographs, purported to show the hereditary transmission of feeble-mindedness; on its strength he advocated institutionalization and the prevention of breeding.
Terman's technocracy of innateness. Lewis M. Terman, developer of the Stanford-Binet IQ test, envisioned a rational society where IQ scores would determine an individual's place in life. He argued that low IQ was the root cause of social pathology, justifying the permanent removal of "feeble-minded" individuals from society and their exclusion from reproduction. Terman also believed that IQ scores should dictate vocational placement, suggesting that:
- IQ below 75: unskilled labor
- IQ 75-85: semi-skilled labor
- Anything above IQ 85 for a barber: "dead waste"
His "fossil IQ" study, retrospectively assigning IQs to historical geniuses, was a methodological absurdity, reflecting available biographical information rather than actual intellect.
Yerkes and the Army Tests. Robert M. Yerkes, driven to establish psychology as a "hard science," led the administration of mental tests to 1.75 million WWI recruits. The results, interpreted by Yerkes and his colleagues, claimed:
- Average mental age of white Americans: a shocking 13 years (see the conversion note after this list).
- European immigrants: graded by country of origin, with southern and eastern Europeans deemed less intelligent.
- African Americans: at the bottom with an average mental age of 10.41.
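For context, IQ in this era was defined as a ratio: mental age divided by chronological age, times 100, with adult chronological age conventionally capped at 16 on the Stanford-Binet (the cap is a standard convention we assume here). On that definition, the claimed averages translate into startlingly low adult IQs:

$$\mathrm{IQ} = 100 \times \frac{\text{mental age}}{\text{chronological age}}, \qquad 100 \times \frac{13}{16} \approx 81, \qquad 100 \times \frac{10.41}{16} \approx 65$$

which is exactly why the figure of 13 years read as so alarming.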
These "facts," despite being riddled with cultural bias, inadequate testing conditions, and statistical finagling, profoundly influenced the discriminatory Immigration Restriction Act of 1924.
7. The Army Mental Tests: Flawed Data Driving Discriminatory Immigration Policy
The decline of American intelligence will be more rapid than the decline of the intelligence of European national groups, owing to the presence here of the negro.
A scientific facade for prejudice. The Army Mental Tests, administered to 1.75 million WWI recruits, were presented as objective measures of "native intellectual ability." However, their content was culturally biased, including questions about American commercial products, sports figures, and social customs. Testing conditions were chaotic, with many recruits unable to understand instructions due to illiteracy or language barriers, leading to a high frequency of zero scores.
Statistical manipulation and environmental denial. Despite overwhelming evidence of environmental influence, Yerkes and his statisticians, particularly E.G. Boring, systematically distorted the data:
- Zero scores: Instead of being read as signs of confusion, zero scores were "corrected" downward into negative ranges, penalizing men who simply could not understand the tests (a toy example follows this list).
- Environmental correlates: Strong correlations between test scores and education, health (e.g., hookworm), and years of residence in America were dismissed or reinterpreted as evidence of innate differences. For example, higher scores for immigrants with longer residency were attributed to "more intelligent immigrants" remaining in the country, rather than acculturation.
- Racial and national rankings: The tests "confirmed" a hierarchy: Nordic Europeans > Alpine/Mediterranean Europeans > African Americans, with the average white American deemed to have a mental age of 13.
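A toy Python illustration of the zero-score problem, with invented numbers: when men who never understood the instructions are recorded as zeros rather than excluded, a group's average collapses for reasons that have nothing to do with ability.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Hypothetical underlying performance, identical for everyone.
true_scores = rng.normal(100, 15, size=n)

# Suppose 30% of one group never understood the instructions
# (illiteracy, language barriers) and were recorded as zero.
confused = rng.random(n) < 0.30
recorded = np.where(confused, 0.0, true_scores)

print("mean with zeros kept:   ", round(recorded.mean(), 1))            # ~70
print("mean excluding confused:", round(true_scores[~confused].mean(), 1))  # ~100
```

Pushing the zeros further down into negative territory, as the "corrections" described above did, only widens the artificial gap.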
Political triumph of scientific racism. These flawed results were aggressively promoted by eugenicists like C.C. Brigham, who used them to lobby for the Immigration Restriction Act of 1924. This act imposed harsh quotas on southern and eastern European nations, drastically reducing immigration from these "inferior" stocks. Brigham later recanted, admitting the tests were worthless as measures of innate intelligence and that his study "collapses completely," but the damage was done, with tragic consequences for millions seeking refuge.
8. Cyril Burt's Fraud and the Reification of 'g'
The concept of an innate, general, cognitive ability, which follows from these two assumptions, though admittedly a sheer abstraction, is thus wholly consistent with the empirical facts.
The doyen of mental testing. Sir Cyril Burt, a preeminent British educational psychologist, was a staunch advocate for the hereditarian view of intelligence. From his earliest papers in 1909 to his posthumous publications in 1972, he consistently argued that intelligence was innate, largely inherited, and measurable as a single, general factor ('g'). He believed this "innate, general, cognitive ability" was the primary determinant of an individual's intellectual rank and social destiny.
Fraudulent data, enduring influence. Burt's later work, particularly his studies of identical twins reared apart, which reported IQ correlations so high that they stayed identical to the third decimal place even as his sample grew, was exposed as fraudulent. His "collaborators" were found to be nonexistent, and his data fabricated. However, even his earlier, "honest" work was riddled with fundamental flaws:
- Circular reasoning: He "proved" innateness by correlating test performance with parental intelligence, which he assessed subjectively based on social standing, not actual tests.
- Data "adjustment": He admitted to "correcting" test results based on teachers' subjective assessments, ensuring they aligned with his preconceived notions of innate ability.
- Idée fixe: His unwavering belief in the innateness of intelligence blinded him to environmental influences, which he acknowledged in other areas like juvenile delinquency or left-handedness.
Political impact on education. Burt's ideas profoundly influenced British educational policy, particularly the "11+ examination" system. This test, designed to measure Spearman's 'g', streamed children at age 10 or 11 into different secondary schools, effectively labeling 80% as unfit for higher education based on supposed innate intellectual limits. Burt, despite his claims of promoting social mobility, ultimately reinforced class stratification by providing a "scientific" justification for it.
9. Factor Analysis: The Mathematical Illusion of a Single Intelligence
How can we argue that g has any claim to reified status as an entity if it represents but one of numerous possible ways to position axes within a set of vectors?
Spearman's quest for 'g'. Charles Spearman, a pioneer in psychology and statistics, invented factor analysis in 1904 to identify the underlying structure of mental abilities. Observing that scores on various mental tests were almost always positively correlated, he sought a single "general factor" (g) to explain this common variance. He reified 'g' as "general intelligence," a fundamental, quantifiable "mental energy" residing in the brain, believing it would elevate psychology to the rigor of physics.
The reification fallacy. Spearman's crucial error was reifying 'g': treating a mathematical abstraction (the first principal component of a correlation matrix) as a real, causal entity. While factor analysis can simplify complex data, a strong first component does not by itself imply a single underlying cause; systems built from many independent causes can produce one just as readily, as the sketch below illustrates. Furthermore, 'g' never accounted for all variance, leaving "specific factors" (s) unique to each test.
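A minimal Python sketch of this point, using synthetic data (an assumption of the illustration, not Spearman's procedure): six "tests" are generated from four independent causes, yet because all the correlations come out positive, a single dominant first principal component appears anyway.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Four uncorrelated causes feed six tests with all-positive weights;
# there is no single underlying factor by construction.
causes = rng.normal(size=(n, 4))
mixing = rng.uniform(0.2, 1.0, size=(4, 6))
scores = causes @ mixing + rng.normal(scale=0.5, size=(n, 6))

R = np.corrcoef(scores, rowvar=False)       # all-positive correlations
eigvals = np.linalg.eigvalsh(R)             # eigenvalues in ascending order

# Share of total variance carried by the first principal component.
print("first component's share:", round(eigvals[-1] / eigvals.sum(), 2))
```

A strong first component is a summary of positive correlations, not proof of a single causal entity behind them.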
Ambiguity of interpretation. Even if 'g' were a real entity, its causal meaning remains ambiguous. Its presence is consistent with both:
- Hereditarian view: People do well on tests because they are born smarter.
- Environmentalist view: People do well on tests due to enriched upbringing and education.
This inherent ambiguity means that 'g' alone cannot justify claims of innate intelligence. The temptation to reify 'g' as a "thing" reflects an ancient philosophical prejudice, not a scientific truth.
10. Thurstone's Challenge: Undermining 'g' with Multiple Abilities
We cannot report any general common factor in the battery of 56 tests that have been analyzed in the present study.
Rotating 'g' away. L.L. Thurstone, an American psychologist, challenged Spearman's 'g' by demonstrating that factor analysis could yield radically different interpretations of intelligence. He argued that Spearman's method of principal components, which produced 'g' as a grand average, was arbitrary and psychologically meaningless. Thurstone developed "simple structure" rotation, a technique that repositioned factor axes to align with clusters of correlated tests, thereby identifying distinct "primary mental abilities" (PMAs) rather than a single general factor.
Multiple intelligences, not a single hierarchy. Thurstone's PMAs (e.g., verbal comprehension, numerical ability, spatial visualization) were conceived as independent, irreducible mental entities. This model shattered the idea of a single, linear hierarchy of intelligence, replacing it with a profile of individual strengths and weaknesses. The mathematical equivalence of Spearman's and Thurstone's solutions highlighted that the choice of factor model was often driven by theoretical preference rather than empirical necessity, undermining the claim that 'g' was an ineluctable reality.
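A minimal numerical sketch of that equivalence, with made-up loadings: rotating the factor axes changes every individual loading, yet reproduces exactly the same correlations among the tests, so the data alone cannot decide between Spearman's and Thurstone's pictures.

```python
import numpy as np

# Hypothetical loadings of four tests on two factors (invented numbers).
L = np.array([[0.8, 0.3],
              [0.7, 0.4],
              [0.3, 0.8],
              [0.2, 0.9]])

theta = np.deg2rad(30)                       # any rotation angle works
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
L_rotated = L @ R                            # rotated loadings

# Both solutions imply identical correlations between the tests,
# because (L R)(L R)^T = L R R^T L^T = L L^T for orthogonal R.
print(np.allclose(L @ L.T, L_rotated @ L_rotated.T))   # True
```

The choice of axes, and hence of 'g' versus primary mental abilities, is a matter of interpretive preference, exactly as the paragraph above argues.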
Egalitarian implications. Thurstone's work had profound implications for education and social policy. By dismantling the hegemony of 'g', he argued against unilinear ranking of students and advocated for individualized education tailored to a child's unique profile of PMAs. While Thurstone still believed his PMAs were real, identifiable entities (and even acknowledged a "second-order g" from correlations among PMAs), his emphasis on multiple, independent abilities provided a powerful counter-argument to the deterministic sorting of individuals based on a single, supposedly innate, IQ score.
11. The Bell Curve: A Modern Echo of Old Fallacies and Disingenuousness
The Bell Curve contains no new arguments and presents no compelling data to support its anachronistic social Darwinism.
Recycling old arguments. Richard Herrnstein and Charles Murray's The Bell Curve (1994) rehashes the core tenets of biological determinism, presenting them as startling new insights. The book's central argument, divided into two parts, first revives social Darwinism by claiming that true equality of opportunity leads to a rigid class structure based on innate cognitive ability. Second, it extends this to inherited racial differences in IQ, with a significant gap between Caucasians and African Americans.
Disingenuousness and statistical misdirection. Despite its voluminous data and complex appearance, The Bell Curve is a "rhetorical masterpiece of scientism" that avoids crucial discussions and misuses statistics:
- Ignoring 'g's theoretical basis: The authors assert the reality of 'g' (general intelligence) as a single, measurable entity without adequately discussing or defending factor analysis, its sole theoretical justification.
- Confusing bias: They conflate the statistical sense of test bias (S-bias), which the tests may well lack, with the vernacular sense (V-bias), the unequal social circumstances that depress scores, and then declare the tests "unbiased" as though both questions were settled.
- Hiding weak correlations: The book's numerous graphs show only the form of the relationships between IQ, socioeconomic status, and social behaviors, omitting their strength. The R-squared values, relegated to an appendix, are overwhelmingly weak (usually under 10% of variance explained), masking the fact that IQ is not a major determinant of most social outcomes; the toy regression after this list makes the point concrete.
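A toy regression in Python (synthetic data, not the book's) showing how a fitted line can look decisive while the R-squared value reveals how little is actually explained:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

iq = rng.normal(100, 15, size=n)
# Outcome only weakly tied to IQ; most of its variance lies elsewhere.
outcome = 0.04 * iq + rng.normal(scale=2.0, size=n)

slope, intercept = np.polyfit(iq, outcome, 1)
residuals = outcome - (slope * iq + intercept)
r2 = 1 - residuals.var() / outcome.var()

print(f"R^2 = {r2:.3f}")   # roughly 0.08: under 10% of variance explained
```

Plot only the fitted line, omit the cloud of points and the R-squared, and a trivial relationship masquerades as destiny.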
A manifesto for conservative policy. The Bell Curve is ultimately a political manifesto advocating for conservative social policies: reducing welfare, ending affirmative action, cutting preschool programs, and shifting funds from "slowest learners" to the "gifted." Its apocalyptic vision of a growing, low-IQ underclass requiring a "custodial state" (a "high-tech and more lavish version of the Indian reservation") is a direct consequence of its flawed deterministic model, which, if accepted, would justify abandoning efforts to improve social conditions.
12. Human Unity: Genetic Evidence Undermines Traditional Racial Categories
If the misery of our poor be caused not by the laws of nature, but by our institutions, great is our sin.
Beyond biological determinism. The consistent failure of biological determinism to find scientific support for its claims does not mean biology is irrelevant to human nature. Instead, it points to a different, more nuanced role for genetics: one of potentiality and flexibility, rather than rigid programming. Human uniqueness lies in our large, flexible brains, which enable cultural evolution—a rapid, Lamarckian mode of inheritance that transmits learned knowledge across generations, far outpacing Darwinian biological change.
The astonishing unity of humanity. Modern paleoanthropology and human genetics increasingly support the "Out of Africa" hypothesis, indicating that all non-African human racial diversity is relatively recent (around 100,000 years old). This means:
- African diversity: Genetic variation among Africans alone exceeds the total genetic diversity of all non-African peoples combined.
- Racial categories: "African black" cannot be considered a coherent racial group in the same way as "European Caucasian" or "East Asian," as it encompasses far greater evolutionary and genetic breadth.
This profound genetic unity undermines any coherent claim for innate, group-specific traits like intelligence or athleticism based on conventional racial categories.
Flexibility as our biological hallmark. Our evolution, particularly through neoteny (retention of juvenile traits into adulthood), has maximized behavioral flexibility. Intelligence is the ability to solve problems creatively, not a fixed program for specific behaviors. Natural selection likely favored learning rules that allow for diverse responses (aggression or peacefulness) depending on circumstances, rather than coding for specific, immutable traits. This biological potentiality, rather than determinism, is the true message of human evolution, urging us to challenge institutions that perpetuate misery rather than blaming "laws of nature."
Review Summary
The Mismeasure of Man receives polarized reviews on Goodreads (4.06/5). Supporters praise Gould's documentation of scientific racism's history, from 19th-century craniometry to IQ testing, exposing how scientists manipulated data to justify racial hierarchies and eugenics. Critics argue the book is ideologically biased, outdated, technically dense, and inadequately addresses modern intelligence research. Some note Gould's own measurements of Morton's skulls were later disputed. Reviewers appreciate his examination of scientist bias and his challenge to biological determinism, though many find chapters on factor analysis mathematically overwhelming. The book remains influential in critiquing pseudoscientific attempts to quantify intelligence hierarchically.