How We Know What Isn't So

The Fallibility of Human Reason in Everyday Life
by Thomas Gilovich · 1993 · 216 pages
3.96 (3.2K ratings)

Key Takeaways

1. Humans are wired to perceive order, even in purely random events.

The human understanding supposes a greater degree of order and equality in things than it really finds; and although many things in nature be sui generis and most irregular, will yet invent parallels and conjugates and relatives where no such thing is.

Order from chaos. Our minds naturally seek patterns, meaning, and predictability, even in truly random data. This inherent tendency, often adaptive for discovery, can lead us to "see" order where only chance operates. Examples include seeing faces in the moon or canals on Mars, or hearing satanic messages in backward music.

Clustering illusion. We have faulty intuitions about what random sequences look like, expecting more alternation than chance actually produces. This "clustering illusion" makes truly random sequences appear "lumpy" or too ordered. For instance, basketball players and fans perceive "hot hands" or "streaks" in shooting, even though statistical analysis reveals that a player's success on one shot is independent of previous shots.
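
The misreading of randomness is easy to see with a short simulation. This is only an illustrative sketch, not an analysis from the book: the 50% hit rate, the 1,000 independent shots, and the streak bookkeeping below are all stipulated assumptions.

```python
import random

# Illustrative sketch: a "50% shooter" taking 1,000 independent shots.
random.seed(1)
shots = [random.random() < 0.5 for _ in range(1000)]  # True = hit

# Longest run of consecutive hits in a purely random sequence.
longest, current = 0, 0
for hit in shots:
    current = current + 1 if hit else 0
    longest = max(longest, current)

# For independent shots, the chance of a hit should not depend on the previous shot.
after_hit = [b for a, b in zip(shots, shots[1:]) if a]
after_miss = [b for a, b in zip(shots, shots[1:]) if not a]

print(f"Longest hit streak: {longest}")
print(f"P(hit | previous hit):  {sum(after_hit) / len(after_hit):.2f}")
print(f"P(hit | previous miss): {sum(after_miss) / len(after_miss):.2f}")
```

Running it shows long "streaks" arising from pure chance while the two conditional hit rates stay roughly equal, which is the pattern the hot-hand studies found in real shooting data.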

Misinterpreting randomness. This misperception extends to spatial distributions, as seen in Londoners believing V-1 bombs fell in clusters during WWII, despite random impact points. Our tendency to retrospectively identify "anomalous" features in data, without proper statistical testing, leads us to detect patterns where none exist, cementing erroneous beliefs.

2. We misinterpret statistical phenomena like regression, leading to false causal beliefs.

When one score is extreme, its counterpart tends to be closer to the average. It is a simple statistical fact.

Regression to the mean. When two variables are imperfectly related, extreme values on one variable tend to be followed by less extreme values on the other. For example, exceptionally tall parents tend to have tall children, but not as tall as themselves, and a company's banner year is often followed by a less profitable one. This statistical fact is often overlooked.
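
A hedged sketch of why this happens: whenever an observed score mixes stable ability with luck, the cases that were extreme once tend to be less extreme the next time. The ability-plus-luck model and the cutoff used below are illustrative assumptions, not figures from the book.

```python
import random

random.seed(2)

def two_scores():
    ability = random.gauss(0, 1)           # stable underlying skill
    first = ability + random.gauss(0, 1)   # observed score = skill + luck
    second = ability + random.gauss(0, 1)  # fresh luck on the second occasion
    return first, second

pairs = [two_scores() for _ in range(100_000)]

# Select the cases that were extreme the first time (roughly the top 5%).
extreme = [(f, s) for f, s in pairs if f > 2.3]

avg_first = sum(f for f, _ in extreme) / len(extreme)
avg_second = sum(s for _, s in extreme) / len(extreme)

print(f"Average first score in the extreme group: {avg_first:.2f}")
print(f"Average second score in the same group:   {avg_second:.2f}")
```

The second average falls well below the first with no slump, jinx, or intervention needed to explain the drop; that gap is regression to the mean.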

Non-regressive predictions. People tend to make predictions that are insufficiently regressive, expecting future performance to match initial extreme performance without accounting for the natural pull toward the average. This can lead to poor decisions in business, hiring, and risk assessment, as people fail to incorporate the likely effects of regression.

Regression fallacy. We often fail to recognize statistical regression when it occurs, instead inventing complex causal theories to explain predictable fluctuations. The "Sports Illustrated jinx," where athletes' extraordinary performances are followed by poorer ones, is a classic example. Similarly, rewards appear ineffective after good performance (which naturally regresses), while punishments seem effective after bad performance (which naturally improves), leading to misguided beliefs about their efficacy.

3. Our judgments are heavily biased by an over-reliance on confirmatory information.

It is the peculiar and perpetual error of the human understanding to be more moved and excited by affirmatives than negatives.

Confirmatory bias. When evaluating beliefs, people disproportionately focus on instances that confirm their hypothesis, often neglecting or downplaying disconfirming evidence. This is evident in how people assess relationships between variables, giving excessive weight to "positive instances" (e.g., couples who adopt and then conceive) while ignoring other crucial data.

Seeking confirmation. We actively seek information that would potentially confirm our beliefs, rather than information that might disconfirm them. In hypothesis testing, people tend to ask questions for which a "yes" response would support their initial idea. This one-sided questioning can elicit biased information, leading to a spurious sense of confirmation, even if the hypothesis is false.

Memory and interpretation. This bias extends to memory and interpretation. When assessing similarity, we focus on shared features; when assessing dissimilarity, we focus on differences. Our expectations can also influence how we perceive ambiguous information, such as judging aggressive plays more harshly when a team wears black uniforms. Even unambiguous contradictory evidence is often subjected to intense scrutiny and reinterpretation, rather than being simply ignored, to maintain existing beliefs.

4. Hidden or incomplete information often creates an illusion of validity for our beliefs.

Without information about how members of the “rejected” group would have performed had they not been rejected, the only way to evaluate the effectiveness of the selection test is to look at the success rate among those who are “accepted”—a comparison that is inadequate to do the job.

Invisible data. Many conclusions are drawn from incomplete or unrepresentative data, leading to an "illusion of validity." When evaluating selection criteria (e.g., job interviews, SAT scores), we often only see the performance of those who were "accepted," lacking crucial information about how the "rejected" group would have performed. This absence of a proper baseline can make even ineffective criteria appear highly successful.
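
A minimal sketch of how this illusion arises (the 80% base rate, the pool size, and the pure-noise "test" are invented for illustration, not taken from the book):

```python
import random

random.seed(3)

# 10,000 applicants; 80% would succeed on the job regardless of the test,
# and the "test score" is pure noise with no relation to success at all.
applicants = [(random.random(), random.random() < 0.80) for _ in range(10_000)]

accepted = [ok for score, ok in applicants if score >= 0.5]   # the visible group
rejected = [ok for score, ok in applicants if score < 0.5]    # the invisible group

print(f"Success rate among accepted: {sum(accepted) / len(accepted):.0%}")
print(f"Success rate among rejected: {sum(rejected) / len(rejected):.0%}")
```

Seen alone, the 80% success rate among those accepted makes a worthless test look impressively valid; only the normally invisible rejected group reveals that they would have done just as well.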

Policy evaluation challenges. Similarly, evaluating policies is difficult because we rarely observe what would have happened if a different policy had been implemented. If a policy is followed by positive outcomes, it's often deemed successful, even if the base rate of success was high regardless of the policy. This makes it hard to distinguish effective strategies from mere good fortune.

Self-fulfilling prophecies. Our expectations can alter the world we observe, creating seemingly fulfilled prophecies. If we believe someone is unfriendly, we might avoid them, preventing any opportunity for them to demonstrate friendliness. This limits the information we receive, confirming our initial (potentially false) expectation, not because the target actively conforms, but because they lack the chance to disconfirm it.

5. Our desires and preferences subtly shape what evidence we accept as true.

Man prefers to believe what he prefers to be true.

The wish to believe. We are inclined to adopt self-serving and comforting beliefs, often making optimistic assessments of our own abilities, traits, and future prospects (the "Lake Wobegon effect"). We attribute successes to ourselves and failures to external circumstances. This "wish to believe" is not about blind fantasy, but a subtle influence on how we process information.

Cognition and motivation collude. Our motivations don't simply override reality; they work through cognitive processes. When we prefer a belief, we ask "what evidence supports this?" rather than "what evidence contradicts this?" This biased framing, along with selectively consulting sources and stopping our search for evidence once a comforting conclusion is found, allows us to justify desired beliefs.

Self-based definitions. We often define ambiguous traits or abilities in ways that highlight our own strengths, allowing us to believe we are "above average." For example, a careful driver might emphasize "care" as the most important driving skill. This juggling of criteria, while appearing objective, is a subtle way our preferences influence our self-assessments, making our beliefs feel objectively warranted.

6. Secondhand information is frequently distorted, making it an unreliable source.

What ails the truth is that it is mainly uncomfortable, and often dull. The human mind seeks something more amusing, and more caressing.

Sharpening and leveling. When information is relayed secondhand, it's rarely transmitted verbatim. Details deemed less essential are "leveled" (de-emphasized), while the perceived "gist" is "sharpened" (emphasized). This simplifies stories, removing inconsistencies and ambiguities, but often distorts the original truth, as seen in exaggerated accounts of the "Little Albert" experiment.

Informativeness and entertainment. Speakers often distort information to make it more informative or entertaining, sometimes exaggerating immediacy or omitting qualifications. News media, under pressure to attract audiences, may sensationalize stories, downplay skeptical views, or even propagate fabrications, leading the public to believe misleading claims about phenomena like UFOs or health risks.

Self-interest and plausibility. Distortions also arise from a speaker's self-interest, such as promoting an ideological agenda or enhancing their public image. Claims about drug dangers or AIDS risks, for example, might be exaggerated to motivate specific behaviors. Additionally, stories that seem plausible, even if untrue (e.g., Bobby McFerrin's suicide rumor), are readily accepted and spread, especially if they are also entertaining.

7. We overestimate social agreement, bolstering our beliefs with false consensus.

My opinion, my conviction, gains infinitely in strength and success, the moment a second mind has adopted it.

False consensus effect. We tend to exaggerate the extent to which others share our beliefs, attitudes, and habits. Francophiles believe more people like French culture than Francophobes do, and those who agree to wear a "REPENT" sign estimate more peers would agree than those who refuse. This makes our own beliefs seem more mainstream and justified.

Multiple causes. This effect stems from several factors: a motivational desire to validate our judgment, selective exposure to information and people who agree with us, and attributing our own beliefs to external, universal factors. We also often fail to recognize that others may interpret the same issue or situation differently, assuming a shared "object of judgment."

Inadequate feedback. Our misconceptions about others' beliefs are rarely corrected because people are generally reluctant to openly express disagreement. We often feign agreement to avoid conflict or to be liked, depriving others of crucial corrective feedback. This hidden dissent leads us to overestimate social support, making our beliefs more resistant to change.

8. The nature of illness and human psychology makes ineffective health practices seem potent.

If a person a) is poorly, b) receives treatment intended to make him better, and c) gets better, then no power of reasoning known to medical science can convince him that it may not have been the treatment that restored his health.

Spontaneous remission and regression. Many illnesses are self-limiting or fluctuate naturally. When a treatment, even a worthless one, is applied during a low point or before natural recovery, any subsequent improvement is often mistakenly attributed to the treatment (post hoc ergo propter hoc). This regression effect makes ineffective remedies appear successful.

Snatching success from failure. Failures of ineffective treatments are often rationalized away. Practitioners might blame the patient's lack of spiritual purity or proper mindset, or the treatment's incorrect administration. Patients themselves may internalize this blame, protecting their belief in the treatment's general effectiveness. Ambiguous criteria for success (e.g., "wellness" instead of a cure) further facilitate this biased evaluation.

Aura of plausibility. Many questionable health practices gain traction because they seem intuitively plausible, often based on superficial theories like the representativeness heuristic ("like goes with like"). Homeopathic medicine, with its "law of similia" and "infinitesimals," or dietary fads based on the idea that food properties are directly transferred, appeal to this flawed sense of natural logic, ignoring complex bodily transformations.

9. Ineffective interpersonal strategies persist due to biased feedback and selective interpretation.

Our excuses sometimes “work” because it can be difficult for a person to determine whether they are genuine.

Self-handicapping and boasting. People frequently employ social strategies like self-handicapping (creating obstacles or making excuses for potential failure) or boasting, believing they will manage others' impressions favorably. However, these strategies often backfire, being perceived as disingenuous or alienating, yet they persist.

Biased feedback loop. The ineffectiveness of these strategies is rarely communicated directly to the perpetrator. People tend to avoid confronting others with negative feedback, instead muttering under their breath or expressing disgust to third parties. This lack of direct, honest feedback prevents individuals from learning that their strategies are counterproductive.

Selective interpretation. The occasional success of an ineffective strategy, even if rare, is often overemphasized and taken as confirmation of its efficacy. Failures, on the other hand, are easily attributed to external factors rather than a flaw in the strategy itself. This partial reinforcement and asymmetrical evaluation of outcomes reinforce the belief that these dysfunctional strategies are effective, leading to their continued use.

10. The sheer volume of apparent evidence, often sensationalized, fuels belief in ESP.

It is not my contention that any of the aforegoing experiments were perfect … or beyond criticism…. Nevertheless, it is my personal opinion that these … investigations represent an overwhelming case for accepting the reality of psi phenomena.

"Where there's smoke..." fallacy. Despite a lack of scientifically verifiable evidence, belief in Extra-Sensory Perception (ESP) is widespread, largely due to the perceived abundance of supporting evidence from everyday life and media. People often conclude that "there must be something there" if so many claims exist, even if individual claims are flawed or fraudulent.

Flawed scientific history. The history of parapsychological research is riddled with studies initially hailed as definitive proof of ESP (e.g., J.B. Rhine's card-guessing, Soal-Goldney experiments, Remote Viewing at SRI) that were later discredited due to methodological flaws, lack of replicability, or outright fraud. Yet, these discredited findings often leave a lasting impression on public belief.

Everyday "evidence." Personal experiences like coincidences and premonitions are powerful drivers of ESP belief. The "birthday problem" illustrates how seemingly improbable coincidences are statistically common. Premonitions are often one-sided events, where successes are remembered and failures forgotten, or are retrospectively shaped by actual events, making them appear more profound than they are.
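
The birthday calculation mentioned above can be written out directly. This sketch uses the usual idealization of 365 equally likely birthdays:

```python
def shared_birthday_probability(n: int) -> float:
    """Chance that at least two of n people share a birthday (365 equally likely days)."""
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365
    return 1.0 - p_all_distinct

for group_size in (10, 23, 30, 50):
    p = shared_birthday_probability(group_size)
    print(f"{group_size} people: {p:.0%} chance of a shared birthday")
```

With only 23 people the probability already exceeds 50%, and with 50 it is near certainty, which is why "amazing" coincidences are in fact commonplace.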

11. Cultivating scientific habits of mind is essential to challenge dubious beliefs.

The real purpose of [the] scientific method is to make sure Nature hasn’t misled you into thinking you know something you actually don’t know.

Compensatory mental habits. Since the underlying causes of faulty reasoning are inherent, we must develop compensatory mental habits to counteract them. This involves actively questioning our assumptions and challenging what we think we know, rather than passively accepting apparent evidence.

Questioning evidence. A crucial habit is to appreciate the folly of drawing conclusions from incomplete or unrepresentative evidence. We must ask, "What do the other three cells look like?" and actively seek out "invisible" data that might contradict our initial impressions, recognizing how our social roles can limit our exposure to information.
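
To make "the other three cells" concrete, here is a tiny worked example using the adoption-and-conception belief mentioned earlier. The counts are invented for illustration only:

```python
# All four cells of the 2x2 table, not just the memorable
# "adopted and then conceived" cases. Counts are hypothetical.
adopt_conceive       = 30
adopt_no_conceive    = 70
no_adopt_conceive    = 300
no_adopt_no_conceive = 700

p_given_adopt    = adopt_conceive / (adopt_conceive + adopt_no_conceive)
p_given_no_adopt = no_adopt_conceive / (no_adopt_conceive + no_adopt_no_conceive)

print(f"P(conceive | adopted):     {p_given_adopt:.0%}")     # 30%
print(f"P(conceive | not adopted): {p_given_no_adopt:.0%}")  # 30%
```

With all four cells in view, the apparent relationship disappears; attending only to the vivid confirming cell is what keeps the belief alive.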

"Consider the opposite" strategies. To combat our talent for ad hoc explanation, we should employ "consider the opposite" strategies. Asking "If the opposite had occurred, would I still find support for my belief?" or "What alternative theories could explain this?" helps reveal the looseness between evidence and belief, encouraging more rigorous testing.

Skepticism and science education. We must cultivate skepticism towards secondhand information, questioning its origin and potential distortions. Familiarity with scientific methods, particularly from probabilistic sciences like psychology, can instill these critical habits of mind. Science teaches us to distinguish random from ordered, understand regression, and appreciate the provisional nature of knowledge, fostering a healthy skepticism against dubious claims.

Review Summary

3.96 out of 5
Average of 3.2K ratings from Goodreads and Amazon.

How We Know What Isn't So examines cognitive biases and erroneous beliefs, earning a 3.96/5 rating. Reviewers praise Gilovich's exploration of how we misinterpret random data, overlook statistical effects like regression to the mean, and hold questionable beliefs despite lacking evidence. The first two-thirds, covering cognitive and social biases, receive particular acclaim. However, many note that the book, published in 1991, feels dated despite its relevant concepts. Some find it dry and textbook-like. Reviewers appreciate it as a foundation for critical thinking but suggest newer works may be more engaging and current.

About the Author

Thomas D. Gilovich (born 1954) is a psychology professor at Cornell University specializing in judgment, decision making, and behavioral economics. He has collaborated with notable researchers including Daniel Kahneman, Lee Ross, and Amos Tversky. Gilovich earned his B.A. from the University of California, Santa Barbara and completed his Ph.D. in psychology at Stanford University in 1981. His research focuses on how cognitive biases affect human reasoning, and he has written popular books that make psychological research accessible to general audiences, particularly on why people form and maintain erroneous beliefs despite contrary evidence.
