Key Takeaways
1. AI Hype: A Con Game Distracting from Real Harms
Artificial intelligence, if we’re being frank, is a con: a bill of goods you are being sold to line someone’s pockets.
A calculated distraction. The pervasive narrative of "AI" as a revolutionary, potentially world-ending technology is a deliberate smokescreen. This "AI hype" diverts attention from the immediate, tangible harms inflicted on real people by existing automated systems, focusing instead on fantastical future scenarios. It allows powerful figures to imagine themselves as heroes saving humanity, while ignoring daily injustices.
Real-world consequences. While lawmakers debate "p(doom)"—the probability of AI-induced existential threats—actual individuals face severe consequences from current AI applications.
- Facial Recognition: Black individuals like Robert Williams and Porcha Woodruff have been wrongfully arrested due to flawed facial recognition systems.
- Deepfakes: Teenagers, especially girls, are targeted by apps that automate nonconsensual deepfake porn, trained on indiscriminately collected internet data.
- Automated Warfare: Systems marketed as AI have been leveraged by forces like the Israel Defense Forces to rapidly scale target selection, leading to mass civilian casualties.
Marketing over substance. The term "AI" itself is primarily a marketing label, not a coherent technological definition. It's deployed to make technologies seem human-like, intelligent, or magical, obscuring their diverse and often mundane underlying functions. This framing encourages uncritical acceptance of automation in critical domains, from social benefits allocation to criminal classification.
2. "Thinking Machines" Are an Illusion, Not Sentient Beings
But to be clear: neither large language models nor anything else being sold as “AI” is conscious, sentient, or able to function as an independent, thinking entity.
Sophisticated autocomplete. Large Language Models (LLMs) like ChatGPT are essentially advanced statistical tools, building on decades of "n-gram" models and neural networks. They predict the most likely next word in a sequence based on patterns in vast training data, a process refined by human feedback to produce plausible-sounding text. This is a far cry from consciousness or understanding.
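To make the "sophisticated autocomplete" point concrete, here is a minimal toy sketch of the n-gram idea the authors invoke: a bigram model that counts, for each word, which words followed it in training text and then "predicts" the most frequent continuation. Everything here (function names, the corpus) is illustrative; real LLMs replace the counting with neural networks over billions of parameters, but the underlying task, guessing the next token from preceding context, is the same.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """For each word, count how often every other word follows it."""
    words = corpus.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, word: str) -> str | None:
    """Return the most frequent continuation seen in training, if any."""
    following = counts.get(word.lower())
    return following.most_common(1)[0][0] if following else None

corpus = ("the cat sat on the mat and the cat slept on the mat "
          "while the dog sat on the rug")
model = train_bigram(corpus)
print(predict_next(model, "the"))  # -> 'cat' (the most common continuation)
print(predict_next(model, "on"))   # -> 'the'
```

The model has no idea what a cat is; it reproduces statistical regularities in its training text. Scaling this up does not change its nature, which is precisely the authors' point.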
The "ELIZA effect." Humans are wired to interpret language by imagining a mind behind it, a phenomenon observed with Joseph Weizenbaum's 1960s chatbot, ELIZA. When LLMs extrude coherent text, we reflexively project communicative intent and understanding onto them, even though they possess no subjectivity or genuine grasp of meaning. This anthropomorphization is a dangerous illusion.
Devaluing humanity. AI boosters often resort to devaluing human intelligence to elevate machines. OpenAI CEO Sam Altman's tweet, "i am a stochastic parrot, and so r u," exemplifies this, suggesting humans are merely complex string manipulators. This perspective reduces the human condition to computability and rationality, implicitly endorsing a long history of dehumanization.
3. The Racist and Eugenic Roots of "General Intelligence"
Discussions of intelligence, pertaining to people or machines, are race science all the way down.
A flawed concept. The pursuit of "artificial general intelligence" (AGI) is built upon a concept of "general intelligence" that lacks an accepted definition and is deeply rooted in racist and ableist pseudoscience. Early IQ tests, developed by eugenicists like Lewis Terman and Robert Yerkes, were used to justify racial hierarchies, placing white people at the top and Black and Indigenous people at the bottom.
Historical baggage. The 1994 Wall Street Journal editorial, "Mainstream Science on Intelligence," cited in an early version of Microsoft's GPT-4 "Sparks" paper, explicitly endorsed claims of inherent racial differences in IQ, echoing eugenicist arguments about "admixtures" of white blood. While the citation was removed, it highlights the problematic intellectual lineage of the AGI project.
Modern eugenics. Today's AGI true believers, including tech billionaires like Elon Musk and Marc Andreessen, often promote a modern form of eugenics. Their "techno-optimist" and natalist fantasies advocate for "developed societies" (coded language for white, Western populations) to have more children, while dismissing concerns about climate change and social equity. This ideology, part of the "TESCREAL" bundle, prioritizes a hypothetical future for a select group over the present suffering of marginalized communities.
4. AI at Work: Degrading Jobs, Not Replacing Them
In the vast majority of cases, AI is not going to replace your job. But it will make your job a lot shittier.
The automation threat. Corporations and venture capitalists are drawn to AI by the promise of making vast swathes of labor redundant, aiming to "increase productivity" by replacing workers with technology. This threat is used to devalue labor and impose grueling conditions, rather than genuinely saving time or improving work quality.
Luddite lessons. The Luddites of the Industrial Revolution were not anti-technology; they resisted technologies that displaced skilled artisans, flooded markets with inferior products, and imposed punishing working conditions. Today, workers face similar struggles:
- Hollywood Strikes: Writers and actors struck against studios' demands to use AI for script generation and digital likenesses, threatening their livelihoods and creative control.
- Amazon Warehouses: Robots force untenable speeds, leading to injuries, while AI-enabled cameras track delivery drivers.
- Robotaxis: Despite safety issues and public backlash, robotaxi services like Cruise and Waymo deploy self-driving cars to undercut human drivers' wages.
Hidden labor. The illusion of fully automated AI is maintained by a massive, underpaid global workforce. These "ghost workers" perform crucial tasks like:
- Labeling images for self-driving cars.
- Rating language model outputs for coherence and offensiveness.
- Filtering traumatic content (gore, hate speech, child sexual abuse material) for companies like OpenAI, often for less than $2 an hour, leading to PTSD.
5. Social Services: Automation as a Band-Aid for Austerity
All the while, venture capitalists and others seeking to cash in will run the AI con to disconnect the rest of us from social services, promoting a drive for scale that renders humane and connected services impossible.
Austerity's allure. Cash-strapped governments, driven by neoliberal austerity, increasingly turn to automated systems as cheap replacements for public services. This approach, however, exacerbates inequality and harms the most vulnerable.
- Child Welfare: Allegheny County's predictive algorithm (the Allegheny Family Screening Tool, or AFST) assigns risk scores to children, often justifying family separation for poor Black and Indigenous families, automating systemic racism.
- Criminal Justice: Pretrial risk assessment algorithms, like COMPAS, are racially biased, mistakenly labeling Black defendants as repeat offenders at higher rates.
Government abdication. National and local leaders are enthusiastically adopting generative AI, offloading governmental responsibilities with disastrous results.
- NYC Chatbot: Mayor Eric Adams's chatbot provided illegal advice, telling landlords they could refuse tenants who receive rental assistance and telling businesses they could take workers' tips.
- NHS Chatbots: The UK's National Health Service plans to use LLMs for doctor's notes, scheduling, and patient referrals, risking chaos and privacy breaches.
- Legal System: Judges have used ChatGPT to summarize legal theories or draft rulings, and lawmakers have used it to draft legislation, despite its propensity to "make shit up."
False promises. AI boosters promise increased accessibility and efficiency in critical sectors like healthcare and education. However, these "solutions" are often poor facsimiles that widen the gap between quality human-provided services for the wealthy and cheap, unreliable automated knockoffs for everyone else. The problem isn't a lack of technology, but a lack of resources and political will to address systemic issues.
6. Art, Science, and Journalism: Creativity Undermined by Synthetic Media
To those selling the illusion of artificial intelligence and to those who think they are actually building humanlike entities, creativity stands as the ultimate goal and proof of success.
Artifice over art. AI art generators, like Stable Diffusion and Sora, are marketed as "democratizing image generation," but their proliferation is built on blatant data theft from working artists. Artists like Karla Ortiz and Greg Rutkowski have lost income as studios use AI systems trained on their work without consent or compensation.
- "Three Cs": Artists demand credit, consent, and compensation for their work used in AI training.
- Content Flooding: AI-generated books, including mushroom-foraging guides with dangerously wrong advice, are flooding platforms like Amazon, harming authors and consumers.
- Aesthetic Narrowing: AI art tends to replicate a narrow, often white, middle-class aesthetic, reflecting the biases of its training data and fine-tuning processes.
Science by algorithm. The fantasy of "AI Scientists" accelerating discovery ignores that science is a fundamentally human, social activity.
- Galactica's Failure: Meta's LLM, Galactica, was ridiculed off the internet for generating racist and nonsensical "scientific papers" with fake citations.
- "In Silico" Samples: Researchers have proposed using chatbots as "human subjects" for surveys and experiments, replacing empirical foundations with quicksand.
- Peer Review Crisis: LLMs are increasingly used to draft peer reviews, abrogating scholarly duty and damaging the integrity of scientific publishing.
Journalism's decline. AI tools are exacerbating the crisis in journalism, driven by declining ad revenues and media consolidation.
- Fake Authors: Sports Illustrated published articles by AI-generated authors, leading to scandal and the publisher losing its license.
- Content Mills: Companies like AdVon Commerce use low-paid contractors to correct AI-generated product reviews for major news outlets, prioritizing clicks over quality.
- Google's Role: Google, a major contributor to journalism's revenue loss, now boosts AI-generated content in search results and pitches tools like "Genesis" to newsrooms, effectively turning them into content mills.
7. AI Doomers and Boosters: Two Sides of a Dangerous, Distracting Coin
These groups are, counterintuitively, two sides of the same coin: the substance of the coin is the belief that the development of AI is inevitable and that the resulting technology will be both autonomous and powerful, and ultimately beneficial, if we play our cards right.
Fantastical fears. AI Doomers warn of existential threats like the "paper clip maximizer" or sentient machines taking over, often citing "p(doom)" estimates. This alarmism, exemplified by the "AI Pause" letter and the Center for AI Safety, distracts from real, immediate harms.
- Misalignment: Doomers focus on "aligning" hypothetical superintelligent AIs with "human values," a concept that is ill-defined and ignores the diverse, often conflicting, values of humanity.
- Ignoring Present Violence: This focus on future, imagined threats allows Doomers to ignore the current violence and "doom" experienced by marginalized communities due to existing AI systems (surveillance, warfare, family separation).
Unfettered optimism. AI Boosters, like Marc Andreessen and Garry Tan, promote "accelerationism"—the rapid, unfettered development of technology, especially AI, as a universal problem solver. Their "Techno-Optimist Manifesto" dismisses "existential risk," "sustainability," and "tech ethics" as bogeymen, advocating for rampant capitalism and a eugenicist vision of population growth.
Shared origins and goals. Despite their apparent opposition, Doomers and Boosters share common intellectual roots in "TESCREAL" ideologies (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism), which often have eugenicist origins. Both camps believe AI development is inevitable and desirable, serving to centralize power and capital while distracting from real-world accountability.
8. The Climate Cost: AI's Accelerating Environmental Devastation
Humanity is, however, facing an actual existential risk in the form of the climate crisis.
A real existential threat. While AI Doomers fret over hypothetical machine uprisings, the actual existential threat of human-made climate change is accelerating. The relentless pursuit of ever-larger AI models and growing user bases demands increasing amounts of computation and energy, with significant environmental impacts.
The "invisible" materiality. "Cloud computing" masks the environmentally intensive reality of AI:
- Raw Materials: Mining metals and minerals for microchips.
- "Forever Chemicals": PFAS used in chip fabrication.
- Energy Consumption: Massive electricity demand for manufacturing hardware and running data centers.
- Water Usage: Data centers consume enormous amounts of water for cooling; the book cites a figure of roughly 500 ml per 5-50 ChatGPT prompts (see the back-of-envelope arithmetic after this list).
- E-waste: Rapid obsolescence of hardware generates vast electronic waste.
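Taking the book's water figure at face value, a quick back-of-envelope calculation (our illustrative arithmetic, not a number from the source) pins down what it implies per prompt:

```latex
% Implied per-prompt water use from "500 ml per 5-50 prompts":
\frac{500\ \text{ml}}{50\ \text{prompts}} = 10\ \text{ml/prompt}
\qquad
\frac{500\ \text{ml}}{5\ \text{prompts}} = 100\ \text{ml/prompt}
% So one million prompts would imply roughly 10,000-100,000 liters.
```

Small per query, but multiplied across an enormous user base the totals become a material strain on local water supplies.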
Scale over efficiency. Despite claims of renewable energy use and efficient processors, the sheer scale of AI development—more models, larger models, and expanding user bases—outpaces any efficiency gains. Data centers are in such high demand that coal-powered plants, slated for closure, are being kept online to meet energy needs.
Broken pledges. Tech giants like Microsoft and Google have admitted dramatically missing their climate pledges, directly attributing the failure to the explosion in AI. This waste of resources directly impedes climate crisis mitigation goals, with the costs disproportionately borne by climate refugees and vulnerable communities, not the insulated tech elite.
9. Resisting Hype: Ask Critical Questions and Demand Transparency
One of the best strategies to cut through the hype is to ask questions about the brass tacks of the system being promoted.
Questioning the claims. To resist AI hype, individuals and policymakers must ask pointed questions that cut through marketing rhetoric and expose the underlying realities of these systems.
- What is being automated? What are the concrete inputs and outputs? (e.g., Hippocratic AI's "healthcare agents" are just word-shaped noises, not skilled nursing).
- Can inputs determine outputs? Is there sufficient information in the input to logically determine the output? (e.g., facial images cannot predict criminality).
- Is it anthropomorphized? Why is it called an "AI [human role]"? What human qualities are being falsely ascribed? (e.g., "AI teaching assistant" lacks care, planning, or understanding).
- How is it evaluated? What was actually measured, and how does it relate to the intended use? (e.g., "ChatGPT Passes Bar Exam" doesn't mean it's a good lawyer).
- Who benefits/is harmed? What are the economic, social, and ethical consequences, and what recourse is available for harm? (e.g., ShotSpotter's false alarms lead to police violence).
- How was it developed? What labor and data practices were used? (Assume exploitation without clear documentation).
Transparency is key. Accountability requires transparency. Regulators and the public need to know what's inside these "black box" systems.
- Dataset Documentation: Companies must collect and publish information on how training data was generated, who is reflected in it, consent, and copyright.
- Disclosure of Automation: People have a right to know when they are interacting with a chatbot versus a human, or when their data is being processed by an automated system. AI registers (Helsinki, Amsterdam) and watermarking for synthetic media are crucial.
10. Empowering Action: Enforce Laws, Protect Labor, and Refuse Harmful Tech
Our tech futures are not prefigured, nor are they handed to us from on high. Tech futures should be ours to shape and to mold.
Leverage existing laws. The notion that AI is too new for existing laws is a delaying tactic. Regulators like the FTC have affirmed there's "no AI loophole," meaning companies must adhere to consumer protection, nondiscrimination, and labor laws regardless of their technology. A "zero trust" AI governance framework is essential, requiring companies to prove their products are not harmful.
Strengthen labor protections. Unions are at the forefront of resisting AI's encroachment, as seen with the WGA and SAG-AFTRA strikes. Stronger labor laws, like the PRO Act, are needed to:
- Prevent worker misclassification (e.g., gig workers as independent contractors).
- Protect against pervasive workplace surveillance ("bossware").
- Ensure workers have control and consent over AI deployment in their jobs.
Strategic refusal. Individuals can exert power by simply refusing to use AI tools, opting out of AI-enabled features, and ridiculing synthetic content. This collective "no" challenges the narrative of inevitability and highlights the tackiness of AI-generated media.
- Information Literacy: Prioritize authentic sources, support libraries, and resist the "frictionless" information access promised by unreliable chatbots.
- Socially Situated Technology: Advocate for narrowly scoped applications, respect for data rights, and technologies developed with and by affected communities, not just for them.
- Reject Dehumanization: Refuse technologies like facial recognition that inherently dehumanize and rank individuals, recognizing that some systems cannot be "fixed" to be equitable.
Review Summary
The AI Con receives mixed reviews, with ratings ranging from 1 to 5 stars. Many readers appreciate the authors' critical perspective on AI hype and their exploration of potential negative impacts. The book is praised for its informative content and accessible writing style. However, some criticize it for being overly skeptical, politically biased, or lacking in technical depth. Readers generally find the book thought-provoking, even if they don't agree with all of its arguments. Some reviewers note that the book's tone can be snarky or repetitive at times.