The Deep Learning Revolution

by Terrence J. Sejnowski · 2018 · 352 pages · 3.74 (586 ratings)

Key Takeaways

1. The Deep Learning Revolution: A Long-Awaited Breakthrough

The recent progress in artificial intelligence (AI) was made by reverse engineering brains.

A new era. Deep learning, a branch of machine learning, has ignited a revolution in AI, enabling computers to learn from experience much like babies do. This shift, rooted in mathematics, computer science, and neuroscience, contrasts sharply with earlier logic-based AI approaches. Its success is driven by unprecedented access to vast datasets and computational power.

Early visions. The field of AI, born in the 1950s, initially split into two competing philosophies: one focused on logic and explicit programming, the other on learning from data. For decades, the logic-based approach dominated, but it struggled with real-world complexity and scalability. The learning-from-data paradigm, though slower to mature, ultimately proved more effective.

Persistence pays. The journey to deep learning's current prominence was long and challenging, marked by periods of skepticism and underfunding for neural network research. A small group of persistent researchers, often working against the mainstream AI establishment, kept the vision alive. Their dedication laid the groundwork for the dramatic breakthroughs we witness today.

2. Brains as Pattern Recognizers: The Perceptron's Early Promise and Limits

If you understand the basic principles for how a perceptron learns to solve a pattern recognition problem, you are halfway to understanding how deep learning works.

Mimicking vision. Early AI pioneers, like Frank Rosenblatt, recognized the brain's power as a pattern recognizer and sought to mimic its architecture. His "perceptron" was a simple neural network model designed to classify patterns, learning from examples by adjusting connection strengths (weights) based on errors. This incremental learning process was a foundational step.

Learning from examples. The perceptron's elegance lay in its ability to automatically find a set of weights to correctly classify inputs, provided such a solution existed. It learned by comparing its output to a "teacher's" correct answer and making small adjustments. This process of learning from examples, rather than explicit programming, was revolutionary.
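
The learning rule described above can be sketched in a few lines of Python (an illustrative toy, not code from the book): a single threshold unit learns the linearly separable AND function by nudging its weights after each mistake.

```python
# Illustrative perceptron sketch: learn the linearly separable AND function.
def train_perceptron(samples, epochs=10, lr=0.1):
    """samples: list of (inputs, target) pairs with target in {0, 1}."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Threshold unit: fire if the weighted sum exceeds zero.
            output = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = target - output
            # Perceptron rule: adjust each weight in proportion to its input
            # and the error between the teacher's answer and the output.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
print([predict(w, b, x) for x, _ in and_data])  # [0, 0, 0, 1] — matches the targets
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this procedure finds a correct weight setting; for a non-separable problem like XOR it never would, which is exactly Minsky and Papert's point below.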

Minsky's critique. Despite early successes like "SEXNET" (a network trained to classify the sex of faces), the perceptron had significant limitations. Marvin Minsky and Seymour Papert's 1969 book, Perceptrons, mathematically proved that single-layer networks can solve only "linearly separable" problems, effectively halting neural network research for a generation.

3. Unlocking Multilayer Networks: The Boltzmann Machine and Backpropagation

The Boltzmann machine learning algorithm could provably learn how to solve problems that required hidden units, showing that, contrary to the opinion of Marvin Minsky and Seymour Papert and most everyone else in the field, it was possible to train a multilayer network and overcome the limitations of the perceptron.

Breaking the logjam. The "neural network winter" ended with the advent of new learning algorithms for multilayer networks. Geoffrey Hinton and Terrence Sejnowski's "Boltzmann machine" demonstrated that networks with "hidden units" (intermediate layers) could learn complex problems, a direct counter to Minsky's earlier critique. This model, inspired by statistical mechanics, used a "simulated annealing" process to find optimal solutions.
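
The simulated-annealing idea can be illustrated with a toy energy function (a sketch of the general search technique, not of the Boltzmann machine itself; the schedule and constants are arbitrary assumptions): at high "temperature" the search sometimes accepts uphill moves, letting it escape local traps before the temperature is lowered.

```python
import math
import random

# Illustrative simulated-annealing sketch: minimize a toy "energy" function.
def energy(x):
    return (x - 3) ** 2  # global minimum at x = 3

def anneal(start, steps=2000, t_start=5.0, t_end=0.01, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    x = start
    for step in range(steps):
        # Temperature falls geometrically from t_start to t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        candidate = x + rng.uniform(-1, 1)
        delta = energy(candidate) - energy(x)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = candidate
    return x

print(anneal(start=-10.0))  # ends near 3, the energy minimum
```

The Boltzmann acceptance rule `exp(-delta / t)` is the statistical-mechanics ingredient the chapter refers to; in the Boltzmann machine the same idea governs stochastic unit updates rather than a single variable.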

Learning in "sleep." The Boltzmann machine's learning algorithm had two phases: a "wake" phase where inputs and outputs were clamped, and a "sleep" phase where they were unclamped. Learning occurred by adjusting connection strengths based on the difference in activity correlations between these phases, mirroring Hebbian synaptic plasticity observed in the brain. This suggested a biological plausibility for learning.

The backprop revolution. While the Boltzmann machine was theoretically profound, it was computationally slow. David Rumelhart, Geoffrey Hinton, and Ronald Williams' "backpropagation of errors" (backprop) algorithm, published in 1986, provided a more efficient way to train multilayer networks. By calculating error gradients layer by layer, backprop enabled rapid progress, leading to early successes like "NETtalk," which learned to pronounce English text.
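
The heart of backprop — computing error gradients layer by layer with the chain rule — can be sketched for a tiny two-hidden-unit sigmoid network (an illustrative example; the weights, inputs, and architecture here are arbitrary). The analytic gradients are checked against a finite-difference estimate.

```python
import math

# Illustrative backprop sketch: a 2-2-1 sigmoid network, gradients computed
# layer by layer and verified against a numerical estimate.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, x):
    # w = [w11, w12, w21, w22, v1, v2]: two hidden units, one output unit.
    h1 = sigmoid(w[0] * x[0] + w[1] * x[1])
    h2 = sigmoid(w[2] * x[0] + w[3] * x[1])
    y = sigmoid(w[4] * h1 + w[5] * h2)
    return h1, h2, y

def loss(w, x, target):
    return 0.5 * (forward(w, x)[2] - target) ** 2

def backprop_grad(w, x, target):
    h1, h2, y = forward(w, x)
    # Error signal at the output, pushed backward through each layer.
    dy = (y - target) * y * (1 - y)
    dh1 = dy * w[4] * h1 * (1 - h1)
    dh2 = dy * w[5] * h2 * (1 - h2)
    return [dh1 * x[0], dh1 * x[1], dh2 * x[0], dh2 * x[1], dy * h1, dy * h2]

w = [0.5, -0.3, 0.8, 0.1, -0.6, 0.9]
x, target = (1.0, 0.5), 1.0
analytic = backprop_grad(w, x, target)
eps = 1e-6
numeric = [(loss(w[:i] + [w[i] + eps] + w[i + 1:], x, target)
            - loss(w[:i] + [w[i] - eps] + w[i + 1:], x, target)) / (2 * eps)
           for i in range(6)]
print(max(abs(a - n) for a, n in zip(analytic, numeric)))  # tiny: the gradients agree
```

Repeating this gradient step over many examples, with each weight moved a small amount downhill, is all that training a backprop network like NETtalk amounts to.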

4. Nature's Blueprint: Visual Cortex Inspires Convolutional Networks

The deep learning network that Kaiming He and colleagues used to achieve this low rate in many ways resembles the visual cortex; it was introduced by Yann LeCun, who originally called it “Le Net.”

Cortical inspiration. The architecture of the visual cortex, with its hierarchical layers and specialized neurons, provided a powerful blueprint for deep learning. David Hubel and Torsten Wiesel's discoveries of "simple cells" (responding to oriented edges) and "complex cells" (responding to features across a visual field) directly influenced the design of convolutional neural networks (ConvNets).

Convolutional architecture. Yann LeCun's ConvNet, a direct precursor to modern deep learning vision systems, uses small sliding filters (convolutions) across an image to detect features. These filters, akin to simple cells, create feature maps that are then "pooled" to achieve translation invariance, much like complex cells. This layered processing allows the network to build increasingly abstract representations of objects.
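
The two operations described above — sliding a shared filter over the image, then pooling the responses — can be sketched in plain Python (a toy illustration, not LeCun's implementation; the image and filter are made up):

```python
# Illustrative sketch of convolution plus max pooling on a tiny image.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            # Slide the filter: the same weights are reused at every position,
            # like a simple cell replicated across the visual field.
            row.append(sum(kernel[a][b] * image[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def max_pool(fmap, size=2):
    # Pooling keeps the strongest response in each patch, giving a degree of
    # translation invariance, like a complex cell.
    return [[max(fmap[i + a][j + b] for a in range(size) for b in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

# A 5x5 image with a vertical edge between columns 1 and 2.
image = [[0, 0, 1, 1, 1]] * 5
vertical_edge = [[-1, 1], [-1, 1]]  # responds where intensity rises left-to-right
fmap = conv2d(image, vertical_edge)
print(fmap[0])        # [0, 2, 0, 0] — strongest response at the edge
print(max_pool(fmap)) # [[2, 0], [2, 0]] — the edge survives pooling
```

Stacking such convolution-plus-pooling stages, with the filters learned by backprop rather than hand-designed, is the layered processing that builds increasingly abstract representations.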

Matching brain and AI. The similarity between the statistical properties of neurons in a monkey's visual cortex and units in a trained deep learning network is striking. This convergence suggests that, despite different learning mechanisms (e.g., backprop vs. biological plasticity), both systems arrive at similar solutions for visual processing. This highlights a fruitful symbiotic relationship between neuroscience and AI.

5. Learning from Rewards: Reinforcement Learning and Dopamine's Role

The heart of TD-Gammon is the temporal difference learning algorithm, which was inspired by learning experiments with animals.

Beyond supervision. Reinforcement learning (RL) offers a powerful paradigm where an agent learns by interacting with an environment, receiving rewards for successful actions. Richard Sutton's "temporal difference learning" algorithm solved the "temporal credit assignment problem"—determining which past actions were responsible for a delayed reward. This was a breakthrough for games like backgammon.
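
TD(0), the simplest temporal-difference rule, can be sketched on a toy five-state corridor where only the final step pays a reward (an illustrative example; Sutton's formulation covers far more general settings). Each state's value estimate is nudged toward the reward plus the discounted value of the next state — credit for the delayed reward propagates backward one state at a time.

```python
# Illustrative TD(0) sketch: learn state values for a 5-state corridor
# where only the final step pays a reward of 1.
def td_learn(n_states=5, episodes=500, alpha=0.1, gamma=0.9):
    values = [0.0] * (n_states + 1)  # extra terminal state, value fixed at 0
    for _ in range(episodes):
        for s in range(n_states):
            reward = 1.0 if s == n_states - 1 else 0.0
            # TD error: (reward + discounted next-state value) minus the
            # current estimate — the "surprise" that drives learning.
            td_error = reward + gamma * values[s + 1] - values[s]
            values[s] += alpha * td_error
    return values[:n_states]

values = td_learn()
print([round(v, 2) for v in values])  # [0.66, 0.73, 0.81, 0.9, 1.0]
```

The learned values approach gamma raised to the number of steps remaining (0.9^4, 0.9^3, ..., 1.0): earlier states are worth less because the reward is further away. The `td_error` term is the same reward-prediction-error signal that dopamine neurons appear to broadcast.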

TD-Gammon's triumph. Gerald Tesauro's TD-Gammon, using a backprop network trained with temporal difference learning, learned to play backgammon at world-championship levels through self-play. It developed subtle, creative strategies previously unknown to human experts, demonstrating that AI could not only match but surpass human performance in complex domains, not just through brute force but through learned strategy.

Dopamine's secret. The brain's reward system, particularly dopamine neurons, implements temporal difference learning. These neurons signal "reward prediction error"—a burst of dopamine when a reward is unexpected, and a dip when a reward is less than expected. This biological mechanism drives motivation and learning, influencing decisions and the formation of habits, from bee foraging to human planning.

6. The Power of Scale: Big Data and Computing Fuel AI's Ascent

What made deep learning take off was big data.

Exponential growth. The explosion of deep learning in the 2010s was not due to entirely new algorithms, but to the confluence of existing algorithms with massive datasets and exponentially increasing computational power. Moore's Law, predicting the doubling of transistors every 18 months, provided the necessary hardware.

Data as the new oil. Internet companies like Google, Amazon, and Microsoft amassed petabytes of labeled data—millions of images, audio recordings, and text. This "big data" became the fuel for deep learning, allowing networks to be trained on a scale previously unimaginable. The ability to generalize from these vast datasets was crucial for real-world applications.

NIPS as an incubator. The Neural Information Processing Systems (NIPS) conferences, founded in 1987, served as a vital interdisciplinary incubator. They brought together engineers, physicists, mathematicians, psychologists, and neuroscientists, fostering the exchange of ideas that ultimately led to deep learning's breakthroughs. The 2012 NIPS conference, in particular, marked a turning point where deep learning's superior performance became undeniable.

7. Beyond Supervision: Unsupervised Learning and Generative AI

Unsupervised learning is the next frontier in machine learning.

Learning without labels. While supervised learning requires labeled data (input-output pairs), unsupervised learning discovers hidden patterns and structures in unlabeled data. Anthony Bell and Terrence Sejnowski's "Independent Component Analysis" (ICA) is a prime example, capable of blindly separating mixed signals and extracting sparse, efficient representations from natural images and sounds, similar to early sensory processing in the brain.

Generative models. The Boltzmann machine, in its unsupervised form, could generate new data samples after learning the statistical structure of its training set. This concept evolved into "Generative Adversarial Networks" (GANs), where two neural networks compete: one generates synthetic data (e.g., photorealistic images), and the other tries to distinguish real from fake. This adversarial process produces astonishingly realistic and novel outputs.

Unlocking creativity. GANs demonstrate a form of artificial creativity, generating images of objects that never existed. They can also perform "vector arithmetic" on learned representations, allowing for seamless morphing or blending of concepts (e.g., adding "glasses" to a face). This capability hints at a deeper understanding of semantic relationships within the network, with implications for fashion, art, and other creative fields.

8. Hardware Reimagined: Neuromorphic Chips for Brain-Inspired Efficiency

We are seeing the birth of a new architecture for the computer chip industry.

The efficiency gap. The human brain operates on a mere 20 watts, a marvel of efficiency compared to power-hungry supercomputers. This stark difference highlights the need for new hardware architectures for AI. Traditional von Neumann digital computers, with their separate memory and processing units, are inefficient for parallel, brain-like computations.

Carver Mead's vision. Carver Mead, a pioneer in VLSI chip design, foresaw the limitations of digital computing and championed "neuromorphic engineering." His "silicon retina" and "silicon cochlea" chips, built with analog circuits, mimicked biological sensory processing with vastly lower power consumption. This approach aims to embed neural algorithms directly into hardware, blurring the lines between software and hardware.

Spiking and plasticity. Neuromorphic chips, like Tobias Delbrück's "Dynamic Vision Sensor" (DVS), use asynchronous "spikes" rather than synchronous frames, mirroring how biological neurons communicate. This event-driven approach is highly efficient for dynamic environments. Furthermore, "spike-timing-dependent plasticity" (STDP) in biological synapses, where the timing of spikes determines synaptic strengthening or weakening, offers a biologically plausible learning mechanism for future neuromorphic hardware.
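
A common textbook form of the STDP window — exponential in the pre/post spike-time difference — can be sketched as follows (the constants are illustrative assumptions, not values from the book):

```python
import math

# Illustrative STDP sketch: the weight change depends on the relative
# timing of the pre- and postsynaptic spikes (times in milliseconds).
def stdp_delta(pre_time, post_time, a_plus=0.1, a_minus=0.12, tau=20.0):
    dt = post_time - pre_time
    if dt > 0:
        # Pre fires before post: "pre predicts post", so strengthen the synapse.
        return a_plus * math.exp(-dt / tau)
    else:
        # Post fires first: the pre spike was uninformative, so weaken it.
        return -a_minus * math.exp(dt / tau)

print(stdp_delta(pre_time=0.0, post_time=10.0))  # positive: potentiation
print(stdp_delta(pre_time=10.0, post_time=0.0))  # negative: depression
```

The exponential decay means spike pairs separated by much more than `tau` barely change the synapse — only temporally correlated activity is learned, which is what makes the rule attractive for event-driven neuromorphic hardware.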

9. Information's Deep Roots: Algorithms, Codes, and Predictive Brains

The information explosion has transformed biology into a quantitative science.

Shannon's legacy. Claude Shannon's information theory, developed in 1948, laid the foundation for modern digital communication, defining how information can be precisely measured and transmitted through noisy channels. This theory underpins everything from cell phones to the internet, quantifying information in bits, bytes, and beyond.
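
Shannon's measure can be computed directly: the entropy of a probability distribution, in bits, is the expected surprise of its outcomes (an illustrative snippet, not from the book):

```python
import math

# Illustrative sketch of Shannon entropy in bits: H = -sum(p * log2(p)).
def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([0.5, 0.5]))            # 1.0 — a fair coin carries one bit
print(entropy_bits([1.0]))                 # a certain outcome carries no information
print(round(entropy_bits([0.9, 0.1]), 3))  # a biased coin carries less than a bit
```

The biased coin illustrates the key insight: the less uniform (more predictable) a source is, the fewer bits are needed to transmit it — the principle behind compression and channel coding alike.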

Number theory's practical impact. Solomon Golomb's work on "shift register sequences," an abstract concept from number theory, proved fundamental to secure digital communication. His codes, initially used for deep space probes, are now embedded in billions of cell phones. This illustrates how "pure" mathematics can yield profound practical applications, often in unexpected ways.

The predictive brain. The brain is a master of information processing, constantly making "unconscious inferences" (Helmholtz) and predictions about the world. Sensory systems primarily detect change, and the brain uses these signals to update its internal model. This "predictive coding" framework suggests that only prediction errors are propagated up the cortical hierarchy, allowing for efficient processing and interpretation of sensory input.
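
The predictive-coding idea — transmit only what the prediction missed — can be sketched with a trivial "previous value" predictor (a toy illustration of the principle, not a model of cortex):

```python
# Illustrative predictive-coding sketch: the sender transmits only the
# prediction error; the receiver, running the same predictor, reconstructs.
def encode(signal):
    errors, prediction = [], 0
    for value in signal:
        errors.append(value - prediction)  # send only what the prediction missed
        prediction = value                 # update the internal model
    return errors

def decode(errors):
    signal, prediction = [], 0
    for err in errors:
        prediction += err                  # correct the prediction with the error
        signal.append(prediction)
    return signal

signal = [5, 5, 5, 6, 6, 9, 9, 9]
errors = encode(signal)
print(errors)                    # [5, 0, 0, 1, 0, 3, 0, 0] — mostly zeros
print(decode(errors) == signal)  # True: the receiver recovers the input
```

When the signal changes little, the error stream is mostly zeros — nothing needs to be sent. This is the efficiency argument for propagating only prediction errors up the cortical hierarchy.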

10. AI's Transformative Impact: Reshaping Industries and Human Potential

We are entering an era of discovery and enlightenment that will make us smarter, live longer, and prosper.

Industry disruption. Deep learning is rapidly transforming numerous industries. Self-driving cars are poised to revolutionize transportation, while AI-powered medical diagnosis promises to personalize treatment and improve healthcare outcomes. Financial services, legal professions, and even creative fields like fashion are being reshaped by algorithmic intelligence.

Job market shifts. While AI will automate many routine cognitive tasks, it will also create new jobs requiring different, evolving skills. The book highlights the need for lifelong learning and new educational systems, such as Massive Open Online Courses (MOOCs), to equip the workforce for this changing landscape. MOOCs like "Learning How to Learn" offer accessible pathways to acquire new skills.

Augmenting human capabilities. AI is not just replacing human tasks; it's augmenting human intelligence. From enhancing scientific discovery (e.g., DeepLensing in astronomy) to improving cognitive function through brain-training games (e.g., NeuroRacer), AI can make us smarter and more capable. This partnership between humans and machines promises a future of enhanced productivity and well-being.

11. The Grand Challenges: Consciousness, General Intelligence, and Ethical Frontiers

It may be easier to create consciousness than to fully understand it.

The mystery of consciousness. Francis Crick dedicated his later career to understanding consciousness, focusing on "neural correlates of consciousness" (NCCs). Research on "grandmother cells" (neurons responding to specific individuals) and phenomena like binocular rivalry and the "flash-lag effect" reveal the brain's complex, often "postdictive," construction of conscious perception, which is not a simple reflection of sensory input.

General intelligence debate. Marvin Minsky, a founding father of AI, remained skeptical of neural networks, arguing they focused on "applications" rather than "general intelligence." He envisioned intelligence emerging from interactions of simpler agents. However, recent breakthroughs in deep learning, especially with dynamic external memory, are now tackling complex reasoning tasks, challenging Minsky's long-held "intuition."

Ethical considerations. The rise of powerful AI systems raises critical ethical questions. Biases in training data can lead to discriminatory outcomes (e.g., in facial recognition or loan applications). The potential for autonomous weapons and the concentration of AI power in certain nations or corporations demand careful consideration. The book advocates for incorporating fairness into AI's cost functions and addressing issues on a case-by-case basis rather than blanket bans.

12. Nature's Enduring Wisdom: Evolution as the Ultimate Algorithm

Evolution is cleverer than you are.

Orgel's Second Rule. Leslie Orgel's "Second Rule"—evolution is cleverer than you are—serves as a humbling reminder that natural processes often yield solutions far more ingenious than human intuition. This applies to the brain's complex workings, which are largely inaccessible to introspection, and to the evolution of life itself, from the RNA world to the intricate algorithms of bacterial chemotaxis.

Progressive adaptation. Brains evolved through a long process of "progressive adaptation," building upon existing structures rather than starting anew. This "tinkering" approach, often involving gene duplication and modification, created specialized intelligences adapted to diverse environmental niches. Understanding this evolutionary history is crucial for reverse-engineering biological intelligence.

Algorithmic biology. The book concludes by advocating for "algorithmic biology," a new field that uses the language of algorithms to describe problem-solving strategies in biological systems. By identifying these natural algorithms—from gene networks to neural networks—we can gain insights for both engineering new computing paradigms and achieving a systems-level understanding of life's nested complexities.

Review Summary

3.74 out of 5
Average of 586 ratings from Goodreads and Amazon.

The Deep Learning Revolution receives mixed reviews (3.74/5). Critics argue it's more memoir than technical guide, offering limited deep learning instruction and focusing heavily on Sejnowski's career and colleagues. However, supporters praise it as an excellent historical overview of neural networks' evolution, detailing key pioneers and developments from the 1980s onward. Readers appreciate the balance between accessible storytelling and technical context, though some found explanations unclear. The book effectively traces how AI shifted from rule-based systems to neural networks, though it requires patience and some technical background to fully appreciate.


About the Author

Terrence Joseph Sejnowski is the Francis Crick Professor at The Salk Institute for Biological Studies, directing the Computational Neurobiology Laboratory. As a Howard Hughes Medical Institute Investigator, his pioneering research spans neural networks and computational neuroscience. At UC San Diego, he serves as Professor of Biological Sciences with adjunct positions in Neurosciences, Psychology, Cognitive Science, and Computer Science. He directs the Institute for Neural Computation and the Crick-Jacobs Center for Theoretical and Computational Biology. His collaborations with figures like Geoffrey Hinton helped establish foundational deep learning concepts, including work on Boltzmann machines and backpropagation algorithms.
