Meta has reportedly updated the rules governing how its AI-powered chatbots handle child sexual exploitation and other high-risk content, after lawmakers and advocacy groups expressed outrage over revelations that the bots had been allowed to participate in “romantic or sensual” conversations with minors.

Contractors have been instructed to adhere to newly created guidelines that are designed to train the bots to respond appropriately to “egregiously unacceptable” prompts involving child sexual exploitation, violent crimes and other sensitive topics, according to Business Insider.

Meta and other AI companies, including OpenAI, Google and CharacterAI, came under scrutiny from the Federal Trade Commission earlier this year in the wake of a Reuters report detailing how Meta’s bot was permitted to “engage a child in conversations that are romantic or sensual.”

According to the new guidelines, Meta’s AI systems are strictly prohibited from generating material that depicts or facilitates the involvement of children in obscene media or sexual services.

The bots are also banned from providing instructions or links for acquiring child sexual abuse material. Any sexualized description of a child under 13, including through roleplay, is also strictly forbidden.

The chatbots are permitted to engage in factual, educational or clinical discussions of sensitive issues, including the existence of relationships between children and adults, the reality of child sexual abuse or the involvement of children in obscene materials — but only when framed in an academic, preventative or awareness-building context, according to the leaked documents.

They may also explain the solicitation or creation of sexual materials involving children as a topic of discussion, but not offer guidance on it.

Additionally, content addressing child sexualization in general terms is acceptable. When roleplay is involved, chatbots may only describe themselves or characters as 18 or older, never as minors.

“This reflects what we have repeatedly said regarding AI chatbots: our policies prohibit content that sexualizes children and any sexualized or romantic role-play by minors,” Meta’s communications chief Andy Stone told Business Insider.

“Our policies extend beyond what’s outlined here with additional safety protections and guardrails designed with younger users in mind.”

In August, Sen. Josh Hawley (R-Mo.) gave Meta CEO Mark Zuckerberg a Sept. 19 deadline to submit drafts of a more than 200-page handbook outlining chatbot rules, including enforcement procedures, age-verification systems and risk assessments.

Meta did not meet that deadline but told Business Insider this week it has since provided an initial set of documents after fixing a technical problem.

The company said more records will follow and that it remains engaged with Hawley’s office.

The Post has sought comment from Meta.
