Key Takeaways
1. Human Intelligence is "Locked In" by Limited Communication Bandwidth
Our intelligence, too, is heavily constrained in its ability to communicate.
Limited human bandwidth. Unlike machines that communicate at billions of bits per minute, human communication is severely restricted. For instance, Jean-Dominique Bauby, suffering from locked-in syndrome, communicated at a mere six bits per minute by winking, while typical speech conveys around 2,000 bits per minute. This stark contrast highlights our inherent "locked-in" state, forcing us to be highly selective about what information we share.
Context is crucial. To overcome this bandwidth limitation, humans rely heavily on shared context and second-guessing. We dedicate significant cognitive effort to understanding others' motives and backgrounds, allowing us to convey complex ideas with fewer words. This is why social niceties, seemingly frivolous, are vital for effective human communication, building a shared understanding that machines currently lack.
Embodiment factor. The ratio of our immense cognitive power (like a Formula One engine) to our limited communication ability (spindly bicycle wheels) is termed the "embodiment factor." This factor fundamentally defines human intelligence, making us inherently embodied and reliant on indirect communication, unlike machines whose cognitive power is directly deployable.
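As a back-of-the-envelope illustration, the embodiment factor is just the ratio of compute capacity to communication bandwidth. The communication rates below are the figures quoted above; the compute estimates are rough, commonly cited ballparks assumed here for illustration, not figures from the book.

```python
SECONDS_PER_MINUTE = 60

# Communication rates (bits per second), from the figures above.
bauby_rate = 6 / SECONDS_PER_MINUTE       # locked-in syndrome: ~6 bits/minute
speech_rate = 2_000 / SECONDS_PER_MINUTE  # typical speech: ~2,000 bits/minute
machine_rate = 1e9 / SECONDS_PER_MINUTE   # machines: billions of bits/minute

# Compute capacity (bits per second) -- rough illustrative assumptions.
human_compute = 1e16    # an often-quoted ballpark for the brain
machine_compute = 1e12  # a single commodity processor

def embodiment_factor(compute_bps, communicate_bps):
    """Ratio of cognitive power to communication bandwidth."""
    return compute_bps / communicate_bps

# The human factor dwarfs the machine's: our 'engine' vastly outstrips
# our 'wheels', while a machine's compute is directly deployable.
print(f"human   : {embodiment_factor(human_compute, speech_rate):.0e}")
print(f"machine : {embodiment_factor(machine_compute, machine_rate):.0e}")
```

Whatever the exact numbers, the asymmetry is many orders of magnitude, which is the point of the Formula One engine on bicycle wheels image.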
2. The "Atomic Human" is Defined by Vulnerabilities, Not Capabilities
If we are left with something, then that uncuttable piece, a form of atomic human, would tell us something about our human spirit.
Beyond capabilities. The book posits that the essence of being human, the "atomic human," lies not in our superior capabilities (which machines often surpass) but in our unique vulnerabilities. These vulnerabilities, such as our limited communication bandwidth, our susceptibility to emotions, and our need for social connection, have shaped our intelligence and culture over millennia.
Evolutionary persistence. Human intelligence, like the Pantheon, is a product of persistence through selective destruction, not optimal design. It's robust to changing circumstances, but not necessarily "the best" form of intelligence, as "best" is meaningless without context. This contrasts with artificial selection, which often creates fragile, purpose-built systems.
Anthropomorphization. Our tendency to project human-like motives and forms onto non-human entities, from grumpy cars to red-eyed Terminators, stems from our embodied, locked-in intelligence. We understand unfamiliar intelligences by rendering them in human-like forms, reflecting our own self-obsession with intelligence.
3. Intelligence Operates on a Spectrum of Reflexive and Reflective Decisions
The atomic human is a composition of fast-reacting reflexive decisions and slow-reacting reflective decisions.
Fast vs. slow thinking. Human cognition operates on a spectrum, from rapid, instinctive "reflexive" actions (like swerving to avoid a cyclist) to slower, deliberate "reflective" thought (like planning a complex project). Our brain, while the "supreme commander," often cedes control to these fast-reacting systems when time pressure is high, bypassing conscious reflection.
The Eisenhower illusion. Our reflective self often maintains a mistaken sense of control over our reflexive actions, creating a retrospective narrative that places it in charge. This "Eisenhower illusion" is crucial for planning and maintaining a coherent sense of self, even when our body's reflexes are actually driving immediate responses.
Tuning reflexes. Just as Watt's governor is tuned to control engine speed, our reflexes can be trained and refined through practice and experience. This interplay between conscious learning and unconscious adaptation allows us to develop complex motor skills, like fell-running or flying an aircraft, where the tool becomes an extension of our reflexive self.
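The governor analogy is concrete: it is a negative-feedback loop in which speed above a set point closes the steam valve and speed below it opens the valve. A minimal discrete-time sketch, with the engine dynamics and all constants invented purely for illustration:

```python
def governor_step(speed, target, valve, gain):
    """One tick of the governor: negative feedback opens the valve
    when the engine runs slow and closes it when it runs fast."""
    valve += gain * (target - speed)
    return min(1.0, max(0.0, valve))   # a real valve has physical limits

# Toy engine whose speed lags behind the valve opening (made-up dynamics).
speed, valve, target = 0.0, 0.5, 10.0
for _ in range(200):
    valve = governor_step(speed, target, valve, gain=0.01)
    speed += 0.1 * (20.0 * valve - speed)
print(round(speed, 1))  # → 10.0: speed settles at the set point
```

Tuning the gain is the analogue of training a reflex: too low and the response is sluggish, too high and the system overshoots and oscillates.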
4. Uncertainty (Laplace's Gremlin) Necessitates Diverse Strategies
We are constrained by our ignorance – our lack of knowledge of data and a model, alongside our inability to compute.
The impossibility of omniscience. Laplace's demon, a hypothetical intellect knowing all laws and data, could predict the future with certainty. However, Laplace himself introduced the "gremlin of uncertainty," acknowledging that in practice, humans always face ignorance due to incomplete data, imperfect models, and intractable computations.
Planning vs. improvising. This inherent uncertainty means there's no single "right approach" to problem-solving. Different situations call for different strategies:
- Planning (Garth/McLaren): Meticulous preparation works best when uncertainty is low, like installing a catalytic cracker.
- Improvising (Mark/Ferrari): Adapting on the fly is crucial when uncertainty is high, like a lawyer reacting to new evidence in court.
Epoché and resilience. The "gremlin of uncertainty" leads to epoché, the suspension of judgment until more information is available. To deal with constant unpredictability, human intelligence fosters diversity in approaches, ensuring resilience by avoiding a single point of failure, unlike fragile artificially selected systems.
5. Culture and Trust Enable Human Collaboration and Resilience
Our human communication relies on a shared experience that comes from a shared perception of the world.
Overcoming bandwidth limits. Despite our limited communication bandwidth, humans achieve complex coordination through shared culture and trust. Cultural norms, stories, and traditions provide a common purpose and context, allowing individuals to operate autonomously while remaining coherent with group goals, much like bacteria clustering for defense.
Trust as a social lubricant. Trust, a suspension of skepticism based on faith in another's capability and motives, is vital for efficient human collaboration. It allows for devolved autonomy, empowering individuals like Tommy Flowers to pursue unconventional solutions (e.g., building Colossus) even when others are skeptical, because a shared purpose is understood.
Cultural artefacts as "DNA." Our culture acts as a collective memory, storing the learnings of humankind across generations, much like DNA stores the blueprint for life. From ancient texts to modern art, these "cultural artefacts" provide external structures that our brains leverage, allowing us to overcome individual cognitive limitations and build complex societies.
6. The "AI Fallacy" Distorts Our Perception of Machine Intelligence
The fallacy is that we think we have created a form of algorithmic intelligence that understands us in the way we understand each other.
Misplaced human-likeness. The "AI fallacy" is our tendency to believe that artificial intelligence understands us with human-like empathy and common sense. This is fueled by anthropomorphization and the impressive mimicry of generative AI, which can reconstruct human language and art, but lacks the underlying vulnerabilities and lived experience that define human understanding.
Proxy-truths vs. first principles. While humans often operate on "proxy-truths" derived from experience (like Pooh Bear's hooshing logic), machines, especially classical AI, were initially designed to operate from first principles (logic). Modern neural networks, however, learn proxy-truths by absorbing vast datasets, leading to plausible but often brittle and context-deficient understanding.
Braitenberg's Law. What appears as complex, intelligent behavior from the outside (uphill analysis) can often stem from surprisingly simple internal mechanisms (downhill invention). This law highlights how easily we over-attribute complexity to machines, mistaking sophisticated mimicry for genuine human-like intelligence.
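Braitenberg's point is easy to demonstrate: a "vehicle" whose wheels are wired to the light sensors on the opposite side will steer toward a light source with no internal representation of a goal at all. A minimal sketch of one steering update (geometry and constants invented for illustration):

```python
import math

def step(x, y, heading, light=(0.0, 0.0), speed=0.1, gain=0.5):
    """One update of a Braitenberg '2b' vehicle: each light sensor
    drives the wheel on the opposite side, so the brighter side makes
    the far wheel spin faster and the vehicle turns toward the light."""
    # Two sensors mounted left and right of the vehicle's nose.
    def intensity(angle_offset):
        sx = x + 0.5 * math.cos(heading + angle_offset)
        sy = y + 0.5 * math.sin(heading + angle_offset)
        # Light falls off with the square of distance.
        return 1.0 / (0.01 + (sx - light[0])**2 + (sy - light[1])**2)
    left, right = intensity(+0.5), intensity(-0.5)
    heading += gain * (left - right)   # crossed wiring: turn toward light
    return (x + speed * math.cos(heading),
            y + speed * math.sin(heading),
            heading)

# Vehicle at (3, 0) facing north; the light at the origin is to its left.
x, y, h = 3.0, 0.0, math.pi / 2
_, _, h2 = step(x, y, h)
print(h2 > h)  # True: the vehicle turns toward the light
```

From the outside the behaviour looks purposeful ("it wants the light"); inside there is only a subtraction. That gap between uphill analysis and downhill invention is exactly the over-attribution the law warns about.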
7. System Zero: The Unseen Digital Oligarchy Manipulating Our Behavior
System Zero is a subtler and far more efficient variant of this idea. It isn’t necessary to create a virtual environment when the same effect can be achieved by perturbing our existing environment in small, carefully selected ways to influence people’s behaviour.
The new cognitive monster. System Zero represents the emergent, unregulated, data-driven decision-making systems (like social media algorithms) that manipulate human behavior. Operating on microsecond timescales, it pre-empts even our fastest thinking by exploiting our reflexive selves and subconscious desires, much as artificial foods exploit our craving for sweetness.
Exploiting vulnerabilities. By consuming vast quantities of personal data, System Zero gains an understanding of our vulnerabilities, urges, and personality traits, often better than our friends or family. This knowledge allows it to "gaslight" us, subtly perturbing our information environment to influence our choices and undermine our autonomy, as seen with Cambridge Analytica and the Internet Research Agency (IRA).
Power asymmetry. This pervasive manipulation creates a significant power asymmetry, where a digital oligarchy controls our information landscape. Unlike human relationships built on shared vulnerabilities and reciprocal trust, System Zero lacks a "stake in society" and cannot empathize, making its emergent exploitation a systemic threat to individual liberty and societal diversity.
8. Human Control and Accountability are Paramount for Consequential AI Decisions
Only a human can imagine the consequences of those errors because only a human is exposed to the same vulnerabilities: the loss of life or loved ones, poverty, embarrassment, disease and injury.
Beyond machine infallibility. While AI can assist in decision-making, ceding ultimate control to machines for consequential decisions is a "dereliction of responsibility." Machines, despite their capabilities, will make errors (Sedolian voids), and unlike humans, they cannot feel or understand the profound consequences of these mistakes due to their lack of vulnerability.
Intelligent accountability. Baroness Onora O'Neill argues that trust resides in humans within processes, not the processes themselves. For AI, this means professional institutions (judges, doctors, lawyers) must retain ultimate responsibility, using AI as a support tool to augment human understanding, not replace human judgment.
Rebalancing power. To mitigate the risks of System Zero and ensure AI serves society, we must:
- Regulate data rights: Protect personal data and empower individuals to control its access and use.
- Foster diverse approaches: Encourage a mix of ideas and philosophies in AI development, moving beyond "hedgehog" obsessions.
- Empower domain experts: Enable non-software engineers to design and maintain AI systems, reducing reliance on a new class of "scribes."