Why AI Undermines Democracy and What To Do About It

by Mark Coeckelbergh, 2024, 144 pages. Rated 3.3/5 (44 ratings).

Key Takeaways

1. AI is inherently political and currently undermines democracy's core principles.

AI is not politically neutral but currently shapes our political systems in ways that threaten democracy and support anti-democratic tendencies by undermining democratic principles, by eroding the knowledge and trust needed for democracy, and by fostering the good of the few at the expense of that of the many.

Beyond neutrality. AI is more than a mere tool; it actively shapes our actions, goals, and societal structures, carrying profound political consequences that extend far beyond its intended uses. This inherent political nature means AI is not a passive instrument but an active force influencing the direction of our governance.

Erosion of principles. Current AI applications directly threaten foundational democratic principles like liberty, equality, and fraternity. Examples include:

  • Liberty: Voter manipulation (Cambridge Analytica), automated surveillance, predictive policing, and biased judicial decisions.
  • Equality: Algorithmic unfairness leading to job losses, discrimination, and increased power asymmetries favoring big tech.
  • Fraternity: Polarization, filter bubbles, and echo chambers that divide society and undermine shared understanding.

Digital authoritarianism. These erosions contribute to the rise of "digital authoritarianism" and "digital totalitarianism," where technology enables regimes to control citizens, suppress dissent, and create climates of fear and mistrust. This shift is not just about misuse but about how AI's capabilities inherently lend themselves to such control, especially in already vulnerable democracies.

2. History reveals technology's consistent role in centralizing power, a trend AI amplifies.

The point is not just that technology and power have always been friends, but that technologies have consistently led to specific forms of social and political order and organization: forms that involve centralized, non-democratic control.

Ancient roots of control. From Plato's kybernetes (steersman) metaphor for governing the state to the Roman Empire's engineered roads, technology has historically facilitated centralized power. Writing and numbers, for instance, were tools for ancient accountants and managers, enabling control over economies and populations.

Industrial to digital. The Industrial Revolution further centralized power in industrial capitalism, leading to exploitation and class struggle. The digital revolution, initially promising decentralization, has paradoxically concentrated immense power in big tech companies and autocratic governments, creating new forms of control.

Techno-authoritarianism. This historical pattern suggests that AI, as a powerful digital technology, naturally pushes towards centralized, non-democratic control. The author argues that this is not inevitable technological determinism but a strong historical tendency, one that must be actively resisted through conscious political and technological choices.

3. AI erodes the fundamental knowledge and trust essential for a resilient democracy.

If we can no longer distinguish truth from falsehood, and if trust between citizens is destroyed, then democracy does not work.

Truth under attack. Totalitarian regimes historically aimed to destroy truth, creating "fictitious worlds through consistent lying." AI, through deepfakes, misinformation, and authoritative-sounding but false text generation, contributes to an epistemic environment where distinguishing truth from untruth becomes nearly impossible, a disaster for democracy.

Knowledge and power asymmetry. AI and data science create significant knowledge imbalances between governments, big tech, and ordinary citizens. This asymmetry translates directly into power imbalances, where experts and corporations gain control, leading to:

  • Loss of epistemic and political agency for citizens.
  • Decisions about data and AI use made by a few, not democratically.
  • "Automating the government" leading to technocratic rule and increased surveillance.

Erosion of trust and community. The spread of misinformation, epistemic bubbles, and non-transparent algorithmic decisions fosters deep mistrust: between citizens and government, and among citizens themselves. This social disconnection undermines the "common sense" and shared world necessary for democratic deliberation and can create conditions ripe for authoritarianism.

4. A richer, more participative definition of democracy is crucial to counter AI's challenges.

I argue that deliberative, participative, and republican ideals of democracy should guide us in discussing the problem of AI and democracy.

Beyond mere voting. A "thin" definition of democracy as simply majority voting is insufficient to address the complex challenges posed by AI. A richer conception, rooted in republican, Enlightenment, and deliberative traditions, emphasizes active citizen participation, public deliberation, and a commitment to the common good.

Core democratic ideals: This robust vision of democracy requires:

  • Self-rule: Citizens actively govern themselves, not just elect representatives.
  • Civic virtue: Citizens care about and contribute to the common good, transcending private interests.
  • Communication: Fostering a public sphere where ideas are freely discussed, tested, and revised, leading to shared understanding.
  • Non-domination: Protecting citizens from arbitrary, uncontrolled power, whether from the state or powerful corporations.

Navigating complexity. Modern societies face "knowledge asymmetries" where expert knowledge is crucial, yet democratic consent is also needed. This creates a tension between technocracy (rule by experts) and the potential for ignorance in direct democracy. A richer democratic ideal seeks to balance these, ensuring that all citizens can participate meaningfully, informed by, but not subservient to, expertise.

5. Strengthening democratic institutions and robust regulation are vital first defenses against AI.

The law must preserve democratic institutions and curb "the unaccountable power of those who design and control digital technology" and their market individualism, which wrongly believes that everything can be left to individuals and their companies pursuing their own interests.

Legal safeguards. The law is a primary instrument for preserving democracy against AI's threats. Constitutions, human rights, and legal accountability must be leveraged to limit the power of tech companies and ensure democratic oversight. This includes establishing clear legal frameworks for AI use and development.

Institutional reform. Beyond existing laws, democratic institutions need fundamental transformation. This involves:

  • Constitutional adjustments: Rebalancing power towards democratically elected bodies over judicial powers and big tech.
  • New deliberative bodies: Creating "open mini-publics" or citizen juries to involve ordinary citizens in law-making, potentially aided by AI.
  • Quality journalism: Supporting independent media and public-owned platforms to fact-check, inform, and moderate public discourse, countering misinformation and polarization.

Regulation and ownership. Effective AI regulation, like the EU AI Act, must be robust, ethically grounded, and globally coordinated to prevent "ethics washing" and corporate lobbying from undermining democratic control. Furthermore, reconsidering the entanglement of AI with capitalism, through ideas like "digital commons" or public utility models for big tech, can redistribute power and ensure technology serves the many, not just the few.

6. "Democratic AI" demands embedding political values directly into technology's design and development.

If we really want democracy, we had better create more democratic technologies.

Proactive intervention. Instead of merely reacting to AI's negative impacts, a proactive approach involves shaping technology development itself to be ethically and politically responsible. This means integrating democratic values and concerns directly into the design process, moving beyond simply regulating existing AI.

Democracy by Design (DAD). This concept advocates for "democratic AI development" and "democracy by design," aiming to create AI that is inherently totalitarianism-proof and democracy-enhancing. Key aspects include:

  • Value-sensitive design: Incorporating political values like freedom, equality, and justice into AI's core architecture.
  • Stakeholder involvement: Systematically involving politically relevant stakeholders, including citizens, in the development process.
  • Institutional bridges: Creating permanent, institutionalized mechanisms to mediate between political bodies and tech developers, ensuring democratic input is continuously integrated.

Education for democratic tech. Empowering citizens and developers through education is crucial. This means:

  • Civic and critical thinking skills: Equipping all citizens to understand AI's societal role and engage critically with digital media.
  • Ethics for AI developers: Integrating ethical and political dimensions into computer science and engineering curricula, fostering a broader notion of quality that includes societal well-being and democratic goals.

7. AI can actively foster democracy by enhancing genuine communication and civic engagement.

The idea of AI for democracy is to have AI foster and strengthen, rather than erode, democracy: to develop AI as a true communication technology in the republican and Enlightenment sense of the word.

Beyond information exchange. AI for democracy envisions technology as a tool for building a common world, developing common sense, and working towards the common good, not just for transmitting data. This means AI should facilitate high-quality communication that fosters community and shared understanding.

AI as a democratic enabler. Specific applications of AI can be designed to strengthen the epistemic basis of democracy:

  • Civic education: AI programs assisting with knowledge provision and deliberation processes.
  • Misinformation detection: AI designed to identify and explain misinformation, enhancing trust in information.
  • Diverse information access: Recommender systems that ensure exposure to diverse viewpoints, countering filter bubbles.
  • AI-aided direct democracy: Platforms for online participation, discussion management, and citizen assemblies, as seen in Taiwan's vTaiwan or Zarkadakis's "Cyber Republic."

Structured participation. While grassroots initiatives are valuable, top-down institutional changes are also necessary to guide AI's development. This involves creating structured, inclusive processes where AI can help citizens meaningfully participate in complex decision-making, bridging knowledge asymmetries without succumbing to technocracy.

8. A "New Renaissance" of digital humanism is imperative to guide AI towards the common good.

In order to develop a vision for AI (and indeed for democracy), we need a broader vision about the future of our societies and the planet, which can then guide the above-mentioned institutional and policy changes.

Vision beyond technology. Guiding AI towards democracy requires more than technical fixes or regulations; it demands a profound cultural and educational transformation—a "New Renaissance" and "New Enlightenment." This involves developing new narratives and imaginations for a future where technology serves humanity and the planet.

Digital humanism. This movement advocates for developing technologies aligned with humanistic values, the good life, and democracy, while fostering interdisciplinary study of human-technology relations. It moves beyond merely criticizing technology to constructively shaping it, promoting:

  • Inclusive humanism: Addressing criticisms of anthropocentrism by considering non-human interests and a holistic ecological perspective.
  • Interdisciplinary education: Radically restructuring universities to integrate humanities and technical sciences, ensuring AI developers understand the ethical and political dimensions of their work.

Cultivating common ground. This cultural shift aims to build new communities of knowledge and communication, leveraging digital technologies to create a better public sphere and strengthen trust in knowledge and society. It's about fostering communicative skills and epistemic virtues—listening, open discussion, and working towards shared goals—across all citizens.

9. The common good, defined through deliberation and shared experience, must be AI's ultimate aim.

If we care about sustaining and creating a better and richer kind of democracy, however, one that is less vulnerable to anti-democratic forms of populism and does not lead to totalitarian nightmares, we can and must do better: we can and must imagine and develop AI as a communication technology, understood in the richer, political and democratic sense of communication developed here.

Beyond individual interests. The common good is not merely an aggregation of individual desires but what is shared and beneficial for the flourishing of the entire political community. In a democracy, this means transcending private interests to collectively define and pursue shared societal goals.

Deliberative and experimental approach. Given diverse values and interests, the common good should be understood as an outcome of participative and deliberative democratic procedures, rather than a fixed, pre-defined ideal. This involves:

  • Collective learning: Experimenting with AI policy and development to discover what truly serves the common good in specific contexts.
  • Inclusive deliberation: Ensuring all voices are heard and respected in defining shared goals, balancing individual liberties with collective well-being.
  • Protecting the commons: Creating and maintaining "digital commons" (e.g., data commons, open-source AI) to ensure resources and benefits are accessible and shared, rather than privatized.

AI for a common world. Ultimately, AI must become a "communication technology" that actively builds community and a "common world" through shared knowledge and experience. This means designing AI to foster understanding, bridge divides, and encourage collective action, rather than promoting isolation, mistrust, or the narrow interests of a few tech oligarchs.


Review Summary

3.3 out of 5
Average of 44 ratings from Goodreads and Amazon.

Why AI Undermines Democracy and What To Do About It receives mixed 3-4 star reviews, averaging 3.3/5. Readers appreciate the book's practical policy recommendations and focus on solutions, particularly the executive summary and final chapters. However, critics note poor language quality, suggesting inadequate editing, and overly academic writing that's sometimes hard to follow. The book examines AI's impact on democracy through the principles of liberty, equality, and fraternity, arguing that big tech companies undermine these values. While readers find it necessary and thought-provoking, some felt it lacked depth despite its accessible size.

About the Author

Mark Coeckelbergh is Professor of Philosophy of Media and Technology at the University of Vienna, where he specializes in the philosophical implications of emerging technologies. His extensive bibliography includes notable works published by MIT Press, such as "AI Ethics" and "New Romantic Cyborgs: Romanticism, Information Technology, and the End of the Machine." He has also authored "Introduction to Philosophy of Technology" among other books. Coeckelbergh's work focuses on applied political philosophy in the context of technology, examining how artificial intelligence and digital technologies intersect with democratic principles and social structures in contemporary society.
