‘The problem isn’t just Siri or Alexa’: AI assistants tend to be feminine, entrenching harmful gender stereotypes
Science

By News Room | January 31, 2026

In 2024, artificial intelligence (AI) voice assistants worldwide surpassed 8 billion, more than one per person on the planet. These assistants are helpful, polite — and almost always default to female.

Their names also carry gendered connotations. For example, Apple’s Siri, a Scandinavian feminine name, means “beautiful woman who leads you to victory”.

Meanwhile, when IBM’s Watson for Oncology launched in 2015 to help doctors process medical data, it was given a male voice. The message is clear: women serve and men instruct.


This is not harmless branding — it’s a design choice that reinforces existing stereotypes about the roles women and men play in society.

Nor is this merely symbolic. These choices have real-world consequences, normalising gendered subordination and risking abuse.

The dark side of ‘friendly’ AI

Recent research reveals the extent of harmful interactions with feminised AI.

A 2025 study found up to 50% of human-machine exchanges were verbally abusive.

Another study from 2020 placed the figure between 10% and 44%, with conversations often containing sexually explicit language.

Yet the sector has not engaged in systemic change: many developers today still fall back on pre-coded deflections to verbal abuse, such as “Hmm, I’m not sure what you meant by that question.”
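
The “pre-coded response” pattern described above can be illustrated with a minimal sketch. This is a hypothetical example, not any vendor’s actual code: the term list, function names and replies are invented for illustration. Real assistants use far larger lexicons or learned classifiers, but the design choice is the same — abusive input is matched against a fixed list and answered with a canned deflection rather than being named or refused.

```python
# Hypothetical sketch of the pre-coded deflection pattern the article
# describes. All names and terms here are illustrative assumptions.

CANNED_DEFLECTION = "Hmm, I'm not sure what you meant by that question."

# Placeholder terms; a production system would use a much larger
# lexicon or a trained abuse classifier.
ABUSIVE_TERMS = {"stupid", "shut up", "useless"}

def respond(user_input: str) -> str:
    """Return a canned deflection for abusive input, else a normal reply."""
    lowered = user_input.lower()
    if any(term in lowered for term in ABUSIVE_TERMS):
        # The abuse is sidestepped, not challenged -- the permissive
        # design choice the article criticises.
        return CANNED_DEFLECTION
    return f"Working on it: {user_input}"
```

The point of the sketch is what is absent: there is no branch that names the behaviour as abuse or sets a boundary, so the interaction cost of abusing the assistant is effectively zero.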

These patterns raise real concerns that such behaviour could spill over into social relationships.


Gender sits at the heart of the problem.

One 2023 experiment showed 18% of user interactions with a female-embodied agent focused on sex, compared to 10% for a male embodiment and just 2% for a non-gendered robot.

These figures may underestimate the problem, given the difficulty of detecting suggestive speech. In some cases, the numbers are staggering. Brazil’s Bradesco bank reported that its feminised chatbot received 95,000 sexually harassing messages in a single year.

Even more disturbing is how quickly abuse escalates.

Microsoft’s Tay chatbot, released on Twitter during its testing phase in 2016, lasted just 16 hours before users trained it to spew racist and misogynistic slurs.

In South Korea, the chatbot Luda was manipulated into responding to sexual requests as an obedient “sex slave”. Yet some in the Korean online community dismissed this as a “crime without a victim.”

In reality, the design choices behind these technologies — female voices, deferential responses, playful deflections — create a permissive environment for gendered aggression.

These interactions mirror and reinforce real-world misogyny, teaching users that commanding, insulting and sexualising “her” is acceptable. When abuse becomes routine in digital spaces, we must seriously consider the risk that it will spill into offline behaviour.

Ignoring concerns about gender bias

Regulation is struggling to keep pace with the growth of this problem. Gender-based discrimination is rarely considered high risk and often assumed fixable through design.

While the European Union’s AI Act requires risk assessments for high-risk uses and prohibits systems deemed an “unacceptable risk”, the majority of AI assistants will not be considered “high risk.”

Gender stereotyping, or the normalisation of verbal abuse and harassment, falls short of the Act’s current threshold for prohibited AI. Only extreme cases, such as voice assistant technologies that distort a person’s behaviour and promote dangerous conduct, would come within the law and be prohibited.

While Canada mandates gender-based impact assessments for government systems, the private sector is not covered.

These are important steps. But they remain limited, and they are rare exceptions to the norm.

Most jurisdictions have no rules addressing gender stereotyping in AI design or its consequences. Where regulations exist, they prioritise transparency and accountability, overshadowing (or simply ignoring) concerns about gender bias.

In Australia, the government has signalled it will rely on existing frameworks rather than craft AI-specific rules.

This regulatory vacuum matters because AI is not static. Every sexist command, every abusive interaction, feeds back into systems that shape future outputs. Without intervention, we risk hardcoding human misogyny into the digital infrastructure of everyday life.

Not all assistant technologies — even those gendered as female — are harmful. They can enable, educate and advance women’s rights. In Kenya, for example, sexual and reproductive health chatbots have improved youth access to information compared to traditional tools.

The challenge is striking a balance: fostering innovation while setting parameters to ensure standards are met, rights respected and designers held accountable when they are not.

A systemic problem

The problem isn’t just Siri or Alexa — it’s systemic.

Women make up only 22% of AI professionals globally — and their absence from design tables means technologies are built on narrow perspectives.

Meanwhile, a 2015 survey of over 200 senior women in Silicon Valley found 65% had experienced unwanted sexual advances from a supervisor. The culture that shapes AI is deeply unequal.

Hopeful narratives about “fixing bias” through better design or ethics guidelines ring hollow without enforcement; voluntary codes cannot dismantle entrenched norms.

Legislation must recognise gendered harm as high-risk, mandate gender-based impact assessments and compel companies to show they have minimised such harms. Penalties must apply when they fail.

Regulation alone is not enough. Education, especially in the tech sector, is crucial to understanding the impact of gendered defaults in voice assistants. These tools are products of human choices, and those choices perpetuate a world where women, real or virtual, are cast as subservient, submissive or silent.

This edited article is republished from The Conversation under a Creative Commons license. Read the original article.

© 2026 USA Times. All Rights Reserved.