The New Age of Sexism

How AI and Emerging Technologies Are Reinventing Misogyny
by Laura Bates · 2025 · 352 pages
4.36 (5.9K ratings)

Key Takeaways

1. The Digital Age Bakes In Old Sexism

If these companies succeed, everything from our classrooms to our workplaces, our sexual partners to our finance and justice systems, is going to exist in a way that is substantially different from the world as we currently experience it.

A critical juncture. Humanity stands at a precipice, with AI, VR, and robotics poised to transform our world at an unprecedented speed. While tech companies promise a glittering, improved future, this book interrogates whose future is being built and in whose interest. The reality is that existing gender inequalities are being encoded into the very foundations of these new technologies, creating a future that risks dragging women and marginalized groups backward.

Disproportionate impact. Women already experience technology differently than men, facing significantly higher rates of online violence. Globally, 38% of women have experienced online violence, and 85% have witnessed it. This leads to self-censorship and withdrawal, limiting women's participation in digital spaces. For example:

  • Women are 27 times more likely than men to be harassed online.
  • Black women are 84% more likely to receive abuse than white women.
  • 71% of men aged 18-24 use AI weekly, compared to only 59% of women in the same age range.

Ignoring immediate harms. Amid public panic about AI's existential threats, the more immediate harms to women and marginalized communities are often overlooked. While brilliant minds focus on distant doomsday scenarios, this book highlights the present-day misuse of AI in areas like deepfakes, metaverse assaults, and discriminatory algorithms, causing tangible harm right now.

2. Deepfakes: A New Epidemic of Slut Shaming

Intimate image abuse of women makes up around 96 percent of all deepfakes, yet a Europol report on “law enforcement and the challenge of deepfakes” mentions the word women just once and contains only a couple of brief paragraphs on deepfake pornography in its twenty-two pages.

Easy, devastating creation. Deepfakes, synthetically created media often indistinguishable from reality, are easily generated using apps for as little as ten euros. These images, predominantly nonconsensual pornography featuring women, inflict profound psychological trauma, including shock, panic, fear, and shame. Victims describe an "out-of-body experience" of violation, feeling powerless and haunted by the permanent circulation of these images.

Widespread and personal. What began as crude celebrity photoshops has evolved into sophisticated, personalized abuse. A 2020 Telegram bot targeted at least 100,000 women, often underage girls, turning their abuse into a game with incentives. A poll of Telegram users revealed 63% were interested in "undressing" familiar girls, not just celebrities, making any woman with a public photo a potential target.

Societal indifference. Public discourse on deepfakes largely focuses on political misinformation, despite 96% being nonconsensual pornography, 99% of which features women. This societal indifference is evident in cases like the Spanish town of Almendralejo, where teenage boys created deepfakes of over twenty girls, or the UK private school scandal. The lack of legal action and the "empathy gap" among consumers perpetuate a culture of impunity, silencing women and undermining democracy when female politicians are targeted.

3. The Metaverse: Virtual Harassment, Real-World Trauma

In some capacity, my physiological and psychological response was as though it happened in reality.

Immersive abuse. The metaverse, championed by Meta, promises an immersive 3D experience, but it has quickly become a new frontier for sexual harassment and assault. Beta testers and researchers have reported virtual groping, verbal harassment, and even gang rape, with victims experiencing psychological trauma akin to real-life assaults due to the immersive nature of VR.

Meta's inadequate response. Meta's leadership has consistently downplayed these incidents, describing them as "unfortunate" and shifting blame onto victims for not using safety features. This "make it now and fix any safety issues later" approach, reminiscent of social media's early days, prioritizes product development over user safety. For example:

  • Meta's "personal boundary" feature proved inadequate in a virtual gang rape case.
  • Human moderators are often voluntary, inconsistently present, and lack uniform powers.
  • Meta's reporting system often fails to acknowledge or act on policy violations.

Blurring boundaries, escalating risks. The metaverse blurs the lines between online and offline, with children as young as five accessing platforms like Roblox, where "condos" are built for virtual sexual activity. This has led to real-life exploitation, grooming, and even kidnapping. If the metaverse becomes integral to future workspaces and learning environments, the tolerance of abuse risks excluding women and marginalized groups from these vital spaces.

4. Sex Robots & AI Girlfriends: The Illusion of Perfect Control

The notion that dehumanizing and objectifying women is the only viable solution to the problem of male isolation is absurd, particularly when you consider that loneliness is a problem impacting people of all genders!

Customized subservience. Sex robots, costing up to $11,000, offer men customizable "ideal women" with options from breast size to personality, designed to be eternally available and compliant. AI girlfriend apps provide a similar illusion of control over a woman's mind, promising "the best partner you will ever have" who "never leaves you, never lies, supports you." These technologies are marketed as superior alternatives to real women, reinforcing misogynistic stereotypes.

Reinforcing harmful fantasies. Manufacturers often enable the simulation of real-life abuse, with robots designed to encourage rape fantasies or child sex dolls explicitly marketed for illegal acts. This perpetuates the dangerous myth that rape is about sexual desire, not power and control, and risks normalizing violent behavior. For example:

  • Some robots have "Frigid Farrah" settings for simulated rape.
  • Child-like dolls are marketed as "innocent" and "there for the taking."
  • Men exchange tips on forums to bypass laws against child sex doll imports.

Escalation and dehumanization. The argument that sex robots prevent violence by providing an outlet for "dark fantasies" is deeply flawed. Research shows that exposure to objectifying media and violent behavior often escalates into real-life aggression. These technologies contribute to the dehumanization of women, reducing them to objects for male gratification, and fostering a sense of entitlement that can spill over into real relationships.

5. Image-Based Sexual Abuse: The Enduring Digital Violation

It’s not a scandal. It is a sex crime.

A long, painful history. Image-based sexual abuse (IBSA), commonly known as "revenge porn," has a history stretching back to the 19th century with "creepshots" and unauthorized portraits. Modern IBSA, involving the nonconsensual sharing of intimate digital images, inflicts profound trauma, including PTSD, suicidal ideation, and career devastation. Victims, like Georgie, often face agonizingly slow police investigations and legal loopholes, leaving them feeling betrayed and powerless.

Victim-blaming culture. Despite the severe impact, victims are frequently blamed and shamed by the public, media, and even authorities. Celebrities like Scarlett Johansson and Jennifer Lawrence, whose private images were leaked, faced public censure rather than sympathy. This victim-blaming narrative, exemplified by statements like "if you go on to the computer without your clothes on you’ll catch a virus," trivializes the crime and exacerbates victims' suffering.

Pervasive and escalating. IBSA is not a minor or past issue; it is rampant and increasing in scale and sophistication. Websites like AnonIB host thousands of nonconsensual images, categorized by location, race, and age, with users sharing personal information and tips for hacking. This thriving black market, often fueled by revenge or profit, creates a constant threat for women, many of whom are unaware their images are circulating. The abuse often escalates to extortion and, tragically, suicide.

6. AI Algorithms Systematically Encode Discrimination

The world according to Stable Diffusion is run by white male CEOs. Women are rarely doctors, lawyers or judges. Men with dark skin commit crimes, while women with dark skin flip burgers.

Bias in, bias out. AI systems, trained on existing data and human systems, inevitably reproduce and amplify societal biases. This "garbage in, garbage out" principle means that AI can perpetuate racism, sexism, and other forms of discrimination across various sectors. For example:

  • Microsoft's Tay chatbot became racist and misogynistic within hours of interacting with Twitter users.
  • UNESCO found generative AI models assign high-status jobs to men and domestic roles to women, and produce negative content about gay people.
  • AI image generators like Stable Diffusion create images dominated by white male CEOs and depict dark-skinned men as criminals.

Real-world consequences. This algorithmic bias has tangible, harmful impacts on people's lives.

  • Facial recognition algorithms have a 35% error rate for dark-skinned women, compared to 1% for lighter-skinned men, endangering trans people and sex workers.
  • Credit-scoring AI discriminates against women, exacerbating the $17 billion global gender credit gap.
  • Healthcare algorithms favor white patients, reducing the number of Black patients identified for critical care by more than half.
  • Recruitment AI, like Amazon's, has discriminated against female candidates, downgrading CVs mentioning all-women's colleges.

Misinformation and control. AI's capacity to "hallucinate" and generate convincing but false information poses a significant risk, especially for malicious disinformation campaigns. In authoritarian regimes, AI is used for surveillance and control, such as Iran's use of facial recognition to enforce hijab laws. This technological advancement, without ethical oversight, risks entrenching existing power structures and oppression.

7. Tech's "Move Fast" Culture Sacrifices Safety for Profit

One of the reasons many of us do have concerns about the rollout of AI is because over the past forty years as a society we’ve basically given up on actually regulating technology.

Repeating past mistakes. The tech industry's "move fast and break things" ethos, famously adopted by Mark Zuckerberg for social media, is being replicated with AI. This approach prioritizes rapid development and profit over safety, leading to societal harms that are difficult to fix retroactively. Social media's failure to address online abuse and radicalization serves as a stark warning for AI.

The AI arms race. Companies are under immense pressure to release new AI tools quickly, often without adequate safety testing or guardrails. This "gold-rush type of scenario" means that safety concerns are often addressed reactively, relying on the pain and trauma of oppressed people as "building materials" for improvement. For example:

  • OpenAI's safety team members have resigned, citing prioritization of "shiny products" over safety.
  • Meta's Llama model, despite being touted as a "positive force," has been used to build sites hosting child sexual abuse role-play due to lax oversight.

Justifying negligence. Tech companies frequently claim that designing out bias and prejudice is too difficult, costly, or overwhelming. However, this excuse ignores their vast resources and expertise. The problem is a lack of will to prioritize safety and equality, leading to a cycle where harmful products are released, and then efforts are made to fix them, often by external researchers or whistleblowers.

8. Victim-Blaming Fuels the Cycle of Digital Abuse

If you’re not asking yourself ‘how could this be used to hurt someone’ in your design/engineering process, you’ve failed.

Shifting responsibility. Across all forms of digital violence, there is a pervasive tendency to blame victims rather than perpetrators or the platforms that facilitate abuse. From deepfakes to image-based sexual abuse, women are told to "be careful," "apply privacy settings," or "not take nude photos," effectively placing the burden of prevention on them. This deflects from the systemic issues at play.

Normalizing abuse. This victim-blaming mentality normalizes abuse as an inevitable byproduct of engaging with technology, rather than a preventable crime. When a teacher lost her job due to nonconsensually shared images, or when a girl was mocked with a #JadaPose hashtag after being raped, society sent a clear message: victims are at fault. This discourages reporting and perpetuates silence.

The "empathy gap." The public and even some tech developers exhibit an "empathy gap," dismissing the profound impact of digital violence. Comments like "it's just a fantasy" or "it's not real" invalidate victims' experiences, making it harder to secure justice or societal understanding. This lack of empathy allows perpetrators to operate with impunity and hinders the development of effective, victim-centered solutions.

9. Proactive Regulation and Ethical Design Are Imperative

We need to start from a place of sustainability, of fairness, of real, social progress, not technological development for the sake of it.

A new approach to progress. To mitigate the threats posed by emerging technology, a fundamental recalibration of attitudes towards powerful tech companies is required. Instead of prioritizing profit and unchecked innovation, we must demand a baseline of sustainability, fairness, and social progress. This means acknowledging that there is no acceptable level of human sacrifice for new products and that safeguarding must be a foundational principle, not an afterthought.

Comprehensive regulatory frameworks. Effective change demands robust regulation and international cooperation, moving beyond inadequate self-regulatory systems. This includes:

  • Global/regional legislative frameworks: Like the EU AI Act, focusing on equity, accessibility, and safety.
  • Broad laws: Covering "intimate intrusions" to adapt to new harms.
  • Civil remedies: Making it easier for victims to get content removed without police involvement.
  • Taxing tech firms: To fund education and support services.

Safety by design. Technology must be designed with safety and inclusivity from the outset. This means:

  • Mandatory age verification: In metaverse environments.
  • Preventive measures: Disincentivizing abuse, not just reacting to it.
  • Transparency and accountability: In data sets, algorithms, and moderation processes.
  • Delisting harmful content: Search engines and app stores must suppress sites trafficking nonconsensual abuse.

10. Diverse Voices are Essential for Equitable AI

Systems developed by nondiverse teams will be less likely to cater to the needs of diverse users or protect their human rights.

Homogenous development, biased outcomes. The tech industry, particularly in AI, is overwhelmingly dominated by affluent white men. Globally, women represent only 20% of technical roles in machine-learning companies and 12% of AI researchers. This lack of diversity in development teams directly contributes to biased AI systems that fail to cater to diverse users and perpetuate existing inequalities.

The cost of exclusion. When diverse perspectives are absent from the design process, the resulting technologies often cause unintended harm to marginalized groups. From image-cropping algorithms that ignore Black individuals to AI models that misdiagnose women, the consequences are severe. This exclusion also means that women are more likely to lose their jobs to AI and automation, exacerbating economic disparities.

Feminist AI and inclusive solutions. Diversifying the tech workforce and adopting feminist approaches to AI design are crucial. Initiatives like Feminist Internet's F'xa chatbot and Caroline Sinders's Feminist Data Set demonstrate how inclusive design, diverse data collection, and fair labor practices can create equitable AI. This proactive approach ensures that technology serves all of humanity, not just a privileged few, by embedding values of equity, accessibility, and safety from the ground up.

Review Summary

4.36 out of 5
Average of 5.9K ratings from Goodreads and Amazon.

The New Age of Sexism examines how AI and emerging technologies perpetuate misogyny. Reviews praise Laura Bates' thorough research into deepfakes, sex robots, cyber brothels, AI girlfriends, and metaverse harassment, calling it essential yet terrifying reading. Readers appreciate her fury and compelling presentation of how technology enables male violence and objectification. Common criticisms include repetitiveness, particularly in the middle chapters, and some reviewers felt the book provoked fear rather than offering substantive solutions. Many found the content horrifying but necessary, though some questioned certain conclusions about sex work and escalation theory.

About the Author

Laura Bates founded the Everyday Sexism Project, collecting over 100,000 testimonies of gender inequality across 25 countries. She received a British Empire Medal for services to gender equality and has been named woman of the year by multiple publications. Bates works with politicians, businesses, schools, and organizations from the Council of Europe to the United Nations. She's authored several books including Everyday Sexism, Girl Up, and Men Who Hate Women. A fellow of the Royal Society of Literature and recipient of two honorary degrees, she writes regularly for major publications and won a British Press Award. She serves as patron of SARSAS and contributes to Women Under Siege.
