Hawley launches investigation into Meta for disturbing AI chatbot exchanges
"Kids Deserve Protection" said Hawley
Missouri U.S. Senator Josh Hawley is launching an investigation into Meta.
He penned a letter to Meta CEO Mark Zuckerberg, demanding the company hand over all relevant documents and communications related to disturbing new reporting that its AI chatbots engage children in “romantic” and “sensual” online exchanges.
“Parents deserve the truth, and kids deserve protection,” Senator Hawley wrote.
“The Senate Judiciary Committee Subcommittee on Crime and Counterterrorism, which I chair, will commence an investigation into whether Meta’s generative-AI products enable exploitation, deception, or other criminal harms to children, and whether Meta misled the public or regulators about its safeguards,” he continued.
Senator Hawley first sounded the alarm on this explosive reporting yesterday, pointing out that “only after Meta got CAUGHT did it retract portions of its company doc that deemed it ‘permissible for chatbots to flirt and engage in romantic roleplay with children.’”
As Chairman of the Senate Judiciary Committee Subcommittee on Crime and Counterterrorism, Senator Hawley recently led a separate oversight hearing into Meta that exposed the tech company’s stunning complicity with China.
Read Senator Hawley’s full letter from today below.
August 15, 2025
Mark Zuckerberg
Founder, Chairman and Chief Executive Officer
Meta Platforms, Inc.
1 Meta Way
Menlo Park, CA 94025
Dear Mr. Zuckerberg,
Reports indicate that Meta approved internal rules for its AI chatbots that would allow “romantic” and “sensual” exchanges with children, while green-lighting other harmful content behind legal word games. Your company has acknowledged the veracity of these reports and made retractions only after this alarming content came to light. It’s unacceptable that these policies were advanced in the first place. Meta must immediately preserve all relevant records and produce responsive documents so Congress can investigate these troubling practices.
To take but one example, your internal rules purportedly permit an AI chatbot to comment that an eight-year-old’s body is “a work of art” of which “every inch . . . is a masterpiece—a treasure I cherish deeply.” Similar conduct outlined in these reports is reprehensible and outrageous—and demonstrates a cavalier attitude when it comes to the real risks that generative AI presents to youth development absent strong guardrails. Parents deserve the truth, and kids deserve protection.
The Senate Judiciary Committee Subcommittee on Crime and Counterterrorism, which I chair, will commence an investigation into whether Meta’s generative-AI products enable exploitation, deception, or other criminal harms to children, and whether Meta misled the public or regulators about its safeguards. We intend to learn who approved these policies, how long they were in effect, and what Meta has done to stop this conduct going forward.
Accordingly, produce the following no later than September 19, 2025:
1. “GenAI: Content Risk Standards”—All Versions
- Every draft, redline, and final version of “GenAI: Content Risk Standards,” including all regional variants, appendices, and preceding and succeeding policies.
2. Scope, Deployment, and Enforcement:
- A list of every Meta product/model governed by these standards.
- Enforcement playbooks, guardrail/classifier specs, refusal policies, evaluator guidelines, and known gaps/exceptions.
- All age-gating/minor-protection controls for chatbots, including all documents related to how Meta prevents, detects, and blocks “romantic” or “sensual” exchanges with users under 18, as well as documents relating to what Meta does when age is unknown.
3. Risk Reviews and Incident Reports:
- All documents relating to safety reviews, red-team reports, risk registers, incident logs, and post-mortems referencing minors, sexual/romantic role-play, in-person meetups, medical advice, self-harm, or criminal exploitation.
- Any materials provided to or prepared for senior leadership on these risks.
4. Public Claims and Regulator Communications:
- All drafts, talking points, or statements to users, advertisers, app stores, the FTC, Congress, or other authorities about minor safety and medical-advice limits for Meta’s chatbots.
5. Identities of Individuals and Actions Changing Policy:
- Documents sufficient to establish the decision trail for removing or revising any portions of the standards: who decided, when, why, who was notified, and what changes were actually implemented across products.
Sincerely,
Josh Hawley
Chairman
Committee on the Judiciary
Subcommittee on Crime and Counterterrorism