Algorithms Are Not Enough

Creating General Artificial Intelligence
by Herbert L. Roitblat, 2020, 336 pages

Key Takeaways

1. Artificial General Intelligence Remains Elusive Despite Specialized AI Success

The tools that let us build specialized intelligence are not up to the task of general intelligence.

Specialized triumphs. For decades, artificial intelligence has achieved remarkable feats in narrow domains, from beating world champions in chess and Go to diagnosing diseases and powering self-driving cars. These successes, however, stem from highly specialized algorithms and human-designed problem structures, not from a generalized understanding or adaptability. Each breakthrough, while impressive, is a "hedgehog" – excelling at one important thing – rather than a "fox" that knows many things.

Limited scope. Current AI systems are essentially sophisticated "path-finders" within predefined "state spaces." Whether it's navigating chess moves or identifying patterns in medical images, the system's intelligence is confined to the specific problem and the representation its human designers have provided. This means that a Go-playing AI cannot suddenly write poetry, nor can a medical diagnostic system drive a car, highlighting the profound gap between specialized and general intelligence.
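
To make the "path-finder" picture concrete, here is a minimal sketch (not from the book) of search over a designer-supplied state space in Python. The toy graph and goal are invented for illustration; the point is that the program can only traverse the states its designers encoded, never redefine the space itself.

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search over a designer-supplied state space.

    The system only 'knows' the states and moves encoded in `neighbors`;
    it cannot invent new states or redefine the problem.
    """
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in neighbors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # the goal is unreachable in this representation

# Hypothetical toy state space: states are labels, moves are edges.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}
print(bfs("A", "E", lambda s: graph[s]))  # ['A', 'B', 'D', 'E']
```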

Over-reliance on computation. Early AI pioneers, like Herbert Simon, optimistically predicted general AI within decades, believing that increased computing power and memory would bridge the gap. While computational capacity has indeed grown exponentially (Moore's Law), it has primarily made existing specialized methods faster and more practical, rather than enabling true general intelligence. The fundamental limitation isn't speed or memory, but the lack of mechanisms for autonomous problem definition and representation.

2. Human Intelligence Blends Fast Intuition with Deliberate Algorithms

Human intelligence, including Einstein’s, requires both a logical kind of systematic thinking and a nonlogical kind of thinking of the sort that allows insight.

Two systems of thought. Human intelligence operates through a dynamic interplay of two distinct, yet complementary, cognitive systems. Daniel Kahneman describes these as System 1, which is fast, automatic, intuitive, and often emotional, and System 2, which is slow, deliberate, logical, and effortful. While System 2 is associated with higher intellectual functions like complex problem-solving and formal reasoning, System 1 underpins rapid learning, pattern recognition, and common sense.

Heuristics and biases. System 1 frequently employs "heuristics" – mental shortcuts that are generally effective but can lead to predictable biases or "errors" in formal logic. Examples include:

  • Availability heuristic: Judging likelihood based on how easily examples come to mind.
  • Representativeness heuristic: Judging probability based on similarity to a prototype.
  • Framing effect: Different decisions based on how information is presented (e.g., "90% survival" vs. "10% mortality").

These "quirks" are not mere flaws but essential mechanisms allowing humans to navigate a complex, uncertain world without getting "lost in thought."

Beyond pure logic. Unlike early AI models that sought to replicate human thought as purely logical, systematic processes, real human intelligence is inherently fuzzy and non-monotonic. We learn from small examples, make defeasible inferences (beliefs subject to revision), and often prioritize plausible outcomes over strictly logical ones. This blend of intuitive, often "irrational" processes with deliberate, "rational" ones is crucial for adaptability and real-world problem-solving.

3. The Right Problem Representation (TRICS) is the Unsung Hero of AI Progress

The inventiveness on which they depend is provided by humans.

Human-designed frameworks. Every successful computational intelligence system, from chess programs to deep learning networks, owes its capabilities to the "representations it crucially supposes" (TRICS) – the specific ways its human designers structure the problem, its inputs, and its potential solutions. These representations transform complex, intractable problems into simpler, computable ones. For instance, representing chess as a tree of moves or Go as a pattern-recognition challenge made these games solvable for AI.

The bottleneck of innovation. The ability to create novel and effective representations is the primary bottleneck in achieving artificial general intelligence. Current AI can optimize parameters within a given representation, but it cannot autonomously invent new conceptual frameworks or problem-solving paradigms. This creative leap, exemplified by Kekulé's benzene ring or Mendeleev's periodic table, remains a uniquely human capacity that AI has yet to replicate.

Implicit knowledge in design. Even seemingly "self-learning" deep neural networks are heavily influenced by their architectural design, which implicitly encodes assumptions about the data and the learning task. For example, an autoencoder's bottleneck layer is designed to perform a specific statistical reduction, not to invent a new form of data compression. The "intelligence" often lies in the human engineer's clever design of these underlying structures, not in the machine's ability to transcend them.
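
As a hedged illustration of how architecture encodes assumptions, here is a minimal autoencoder sketch in PyTorch. The layer sizes and the 32-unit bottleneck are hypothetical choices made by the human designer before any learning begins; training optimizes reconstruction within that structure but never questions the structure itself.

```python
import torch
from torch import nn

# Minimal autoencoder sketch; the 32-unit bottleneck is a designer's
# assumption, not something the network discovers for itself.
class AutoEncoder(nn.Module):
    def __init__(self, n_inputs=784, n_bottleneck=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_inputs, 128), nn.ReLU(),
            nn.Linear(128, n_bottleneck),        # human-chosen reduction
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_bottleneck, 128), nn.ReLU(),
            nn.Linear(128, n_inputs),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
x = torch.rand(64, 784)                          # a toy batch
loss = nn.functional.mse_loss(model(x), x)
loss.backward()   # learning happens *within* the representation, never about it
```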

4. Machine Learning Excels at Optimization, But Lacks True Creativity and Common Sense

The evidence we have suggests that invention—for example, the design of new unforeseen structures, the formulation of new scientific paradigms, or the creation of new forms of representation—requires a different set of skills than optimization over a known space.

Optimization's limits. Machine learning, at its core, is a process of optimization: adjusting parameters within a predefined model to maximize a desired outcome (e.g., accuracy, reward) or minimize error. This is powerful for tasks like classification, prediction, and strategic games where the problem space is well-defined. However, optimization cannot generate entirely new parameters, redefine the problem space, or invent novel solutions outside its given framework.
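
A minimal sketch of this optimization loop, using plain NumPy gradient descent on invented linear-regression data: the loop tunes the given weights `w` to reduce error, but nothing in it can add a parameter, swap the model family, or question the error function.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # toy inputs (hypothetical data)
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

w = np.zeros(3)                               # the predefined parameter space
lr = 0.1
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)     # gradient of mean squared error
    w -= lr * grad                            # adjust parameters; the model
                                              # family itself never changes
print(w)  # ≈ true_w: optimization succeeds inside the given representation
```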

Absence of common sense. A critical missing component in current AI is common sense – the vast, implicit, and often non-monotonic knowledge humans use to navigate everyday life. Common sense allows us to:

  • Infer unstated facts (e.g., if John has a job, he earns money).
  • Resolve ambiguity (e.g., "tube" in London means the subway).
  • Understand context-dependent meanings.
  • Reason about cause and effect.

Without this, AI struggles with ill-formed problems, unexpected situations, and even basic language comprehension, often making "stupid" errors that no human would make.
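
By way of contrast, here is a toy sketch of the defeasible, non-monotonic inference humans perform constantly, in the spirit of the classic "birds fly, but penguins don't" example; the facts and rules are invented for illustration.

```python
# Toy defeasible reasoning: defaults hold until an exception defeats them.
def can_fly(facts):
    """Default: birds fly -- unless we also know of an exception."""
    if "penguin" in facts or "broken_wing" in facts:
        return False                      # exception defeats the default
    return "bird" in facts

beliefs = {"bird"}
print(can_fly(beliefs))                   # True: default inference

beliefs.add("penguin")                    # new evidence arrives...
print(can_fly(beliefs))                   # False: the earlier conclusion is
                                          # retracted, not merely appended to
```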

Creativity as reconceptualization. True creativity, as seen in human genius, involves "reconceptualization" – creating new sets of parameters or entirely new problem representations. AlphaGo's "creative" move was an unexpected path within a known game space, not a redefinition of the game itself. AI can generate novel combinations within existing frameworks (e.g., new music in a learned style), but it cannot autonomously invent new artistic styles or scientific paradigms.

5. Human Expertise Develops Through Deliberate Practice and Abstract Knowledge

Superior memory comes from being an expert rather than expertise coming from having good memory.

Knowledge, not just capacity. Human expertise is not merely about having a better memory or faster processing speed; it's about acquiring and organizing deep, abstract knowledge within a specific domain. Chess masters, for example, don't just remember more individual piece positions; they recognize complex "chunks" or patterns of pieces and their strategic implications, allowing them to quickly identify high-quality moves.

Abstract representations. Experts differ from novices not just in the quantity of their knowledge, but in how they represent problems. Physics experts categorize problems by underlying principles (e.g., conservation of energy), while novices focus on surface features (e.g., springs, inclined planes). This ability to abstract from concrete details to fundamental principles is a hallmark of expertise, enabling more effective problem-solving and transfer of learning.

The "10-year rule" and deliberate practice. Achieving elite expertise across diverse fields—from music and chess to sports and science—typically requires around 10,000 hours or 10 years of "deliberate practice." This isn't just any practice, but focused, measurable effort with immediate feedback, aimed at improving specific aspects of performance. This suggests that expertise is largely cultivated, not innate, and implies that even for AI, there may be no shortcut to acquiring deep, domain-specific knowledge.

6. Fears of Superintelligence and a "Robopocalypse" Are Fundamentally Misguided

If a machine is expected to be infallible, it cannot also be intelligent.

Misconceptions of AI growth. The fear of a "technological singularity" or "robopocalypse," where a superintelligent AI rapidly self-improves and takes over, stems from several fundamental misunderstandings. It often confuses computational capacity with true intelligence, assuming that faster processors automatically lead to exponential intellectual growth. However, intelligence requires more than just speed; it demands knowledge, experience, and the ability to interact with an uncertain world.

Real-world limitations. The speed at which AI can "learn" and "improve" is often constrained by the rate of real-world interaction and feedback, not just processor speed. For example:

  • Weather prediction requires waiting for actual weather outcomes.
  • Self-driving cars learn from miles driven, encountering rare events slowly.
  • Complex problems like the "sum of three cubes" can take years of computation despite involving only a few variables (see the sketch after this list).

These physical and informational bottlenecks inherently limit the pace of intelligence expansion, preventing any sudden, uncontrollable "explosion."
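To illustrate that last point, here is a hypothetical brute-force sketch of the sum-of-three-cubes search: find integers x, y, z with x³ + y³ + z³ = k. The search space grows with the cube of the bound, and the solution for the long-open case k = 42 (found in 2019) involves 17-digit values, hopelessly beyond this toy loop.

```python
def three_cubes(k, bound):
    """Brute-force search for x**3 + y**3 + z**3 == k with |x|,|y|,|z| <= bound."""
    cubes = {z**3: z for z in range(-bound, bound + 1)}   # cube -> root lookup
    for x in range(-bound, bound + 1):
        for y in range(x, bound + 1):        # y >= x skips symmetric duplicates
            remainder = k - x**3 - y**3
            if remainder in cubes:
                return x, y, cubes[remainder]
    return None

print(three_cubes(29, 10))    # (-3, -2, 4): easy k values fall out instantly
print(three_cubes(42, 100))   # None: k = 42 needed values near 8 * 10**16
```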

The "genie problem" fallacy. Thought experiments like Bostrom's "paper-clip maximizer" assume an AI will single-mindedly pursue a poorly defined goal to humanity's detriment. However, current AI cannot:

  • Autonomously set or redefine its own goals.
  • "Understand" its mission beyond its programmed parameters.
  • Generate truly novel solutions outside its given representational space.

Such scenarios ignore the profound limitations of current AI and the complex, often contradictory, nature of human values and goals. The "unexpected solutions" AI finds are constrained by its design, not arbitrarily novel.

7. Achieving General AI Requires New Paradigms for Self-Representation and Learning

An artificial general intelligence will need to be able to create its own novel representations.

Beyond optimization. The path to artificial general intelligence (AGI) cannot be achieved by simply scaling up current machine learning techniques – a "stack of hedgehogs." AGI requires a fundamental shift in approach, moving beyond mere parameter optimization within human-designed frameworks. It needs mechanisms for:

  • Autonomous problem identification and goal setting: Recognizing problems and defining objectives without explicit human input.
  • Creative representation generation: Inventing new ways to conceptualize problems, akin to human insight.
  • Robust transfer learning: Applying knowledge from one domain to entirely new, dissimilar problems without catastrophic forgetting.

Learning from human development. Insights from human cognitive development, such as Piaget's stages or Vygotsky's emphasis on language as a tool for thought, suggest that AGI might benefit from a "child machine" approach. This involves starting with simpler systems and allowing them to learn through interaction and experience, gradually building complexity and abstract understanding. Realizing it, however, means solving open problems such as learning rapidly from only a few examples and forming abstract analogies.

The challenge of "understanding." While whole-brain emulation is a theoretical possibility, our current understanding of the brain's intricate, dynamic, and often stochastic processes is far too limited. We lack knowledge of how neurons store memories, change roles, or contribute to consciousness. AGI will likely emerge not from a perfect biological replica, but from new computational architectures that can autonomously:

  • Reason non-monotonically: Adjust beliefs in light of new, contradictory evidence.
  • Exploit analogy and metaphor: Identify unexpected relationships between disparate concepts.
  • Navigate chaotic systems: Cope with real-world phenomena where small changes have unpredictable, large-scale effects.

These capabilities represent the frontier of AGI research, demanding a paradigm shift beyond current algorithmic and optimization-focused approaches.
