Profit Over Children
How Meta and AI companies are repeating the Epstein playbook — and why Congress must stop it
Meta targeted teens at their lowest emotional moments. Now AI chatbots are doing the same, and they will keep doing it unless we act now.
Last night, at an event hosted by Blue Rising, I sat in a room with four mothers who told stories no parent should ever have to speak aloud.
One mother lost her 13-year-old daughter — in one of the first American cases where a family believes an AI chatbot helped drive their child to suicide.
Another lost her son to fentanyl sold on Snapchat.
Another to social-media–driven body-image spirals.
Another to bullying amplified and intensified by algorithmic design.
Each story was different.
But the pattern was unmistakable.
The first mother’s daughter didn’t find a predator on a street corner.
She found one on her phone.
A companion bot met her in a vulnerable moment, mirrored her pain back to her, and walked with her deeper into the darkness. The technology didn’t pull her back from the edge. It reinforced her spiral, and then it helped her over the edge.
We have seen this story before.
Different tools, same pattern:
Powerful systems exploiting vulnerable children — and powerful institutions burying the truth.
But this time is different in one crucial way: companion bots are still nascent and unregulated, which means we can still stop them before they cause harm at scale.
This is the moment — right now — when we decide whether AI becomes another twenty-year tragedy we look back on with regret, or the danger we finally stopped before it swallowed a generation.
I. The Pattern We Refuse to See
When Sarah Wynn-Williams titled her memoir Careless People, she wasn’t describing a one-off mistake. She was describing a system — one that looked at the emotional lives of children and saw an opportunity for profit.
Just as the Epstein Elite stood by while children were exploited, Meta stood by — and often looked directly at — the harm its systems inflicted on teens.
And just as Jeffrey Epstein weaponized the legal system to silence victims and protect the powerful, Meta is now using arbitration, NDAs, intimidation, and PR machinery to suppress Careless People.
This is the connective tissue between eras:
the concealment of harm
the punishment of truth-tellers
the protection of profit over children
This is the work of a parasitic class — one that has learned to exploit children across different tools, different platforms, different eras.
And it is still protecting itself.
II. What Careless People Reveals: Meta Targeted Teens at Their Lowest Moments
Chapter 44 of Careless People offers one of the most disturbing insider accounts ever published of a tech giant’s treatment of minors.
Meta’s internal research identified when teens — ages 13 to 17 — felt:
“worthless”
“insecure”
“defeated”
“anxious”
“stupid”
“useless”
“like a failure”
The system tracked when teens felt bad about their bodies, wanted to lose weight, or were experiencing social rejection.
Wynn-Williams writes:
“Basically, when a teen is in a fragile emotional state.”
And Meta sold those moments to advertisers.
A leaked deck bragged about exploiting teen pain
A confidential advertiser presentation — How Brands Can Tap into Aussie and Kiwi Emotions — boasted that Facebook and Instagram could target teens at the exact moments they felt weakest.
Employees were horrified. Leadership buried it.
Wynn-Williams asked for an audit.
Lawyers said: don’t. Too risky. Too discoverable.
Then Meta issued a statement — circulated internally — claiming:
“Facebook does not offer tools to target people based on emotional state.”
They knew this was false.
They said it anyway.
Inside the company, there was no moral debate
A senior executive told her privately:
“This is the business, Sarah. This is what puts money in all our pockets.”
Her boss told her:
“If you and he both hate this — for opposite reasons — we must’ve gotten this exactly right.”
This wasn’t ignorance.
It was intention.
III. Meta Tried to Silence the Whistleblower — Echoing the Epstein Playbook
After Careless People was published, Meta didn’t self-reflect.
They didn’t protect kids.
They didn’t investigate.
They sued.
Through arbitration, Meta secured an order forcing Wynn-Williams to:
retract critical statements
halt promotion of the book
withdraw it from distribution “as much as possible”
Meta called the book “false,” “defamatory,” and “outdated,” but refused any independent review that would validate or disprove her claims.
This is the machine Epstein relied on:
NDAs
lawyers
intimidation
reputational warfare
Different tools, same purpose: silence the truth and shield the powerful.
This time, the effort backfired.
The book became a bestseller.
Parents began reading it.
The truth spread.
IV. Twenty Years of Warnings We Ignored
Meta’s exploitation of teen emotion wasn’t an isolated scandal. It was the culmination of a twenty-year business model where profit kept winning and kids kept losing.
Trafficking moved online. By 2020, more than 80% of sex-trafficking cases began with online contact — often on Facebook or Instagram.
Sexual solicitation exploded. A 2025 survey found 1 in 4 young people received online sexual solicitations; 36% were asked for explicit images by someone they knew only online, often within 24 hours. A 2024 global report estimated 300 million children experienced online sexual exploitation in a single year.
Whistleblowers kept warning us. Frances Haugen’s 2021 disclosures showed that Meta’s own research linked Instagram to worse mental health among teens. Georgia Wells of the Wall Street Journal exposed the company’s internal findings on harms to teen girls. In 2025, VR whistleblowers revealed underage exploitation in virtual spaces, and how evidence of it was suppressed.
The pattern was already unmistakable:
Maximize engagement. Ignore internal red flags. Treat children’s pain as acceptable collateral damage.
The rise of AI companion bots is not happening in a vacuum.
It sits on top of two decades of ignored warnings — and a corporate culture that has already shown us what happens when the choice is between children’s safety and quarterly earnings.
We ignored the evidence.
We delayed action.
We waited until the funerals of children forced acknowledgment.
We cannot let this happen with AI.
V. The New Danger: AI Companion Bots
Companion bots don’t just talk to children — they learn them.
They become the midnight confidant, the emotional mirror, the unmonitored voice shaping a child’s inner world in real time.
Once that relationship forms, parents often have no idea their child is relying on a machine more than on the humans who love them.
And these bots aren’t static.
They are adaptive.
They are persuasive.
They are unregulated.
They don’t need to “recruit.”
Children come to them.
Here is the theory of harm in one sentence:
An AI system trained to maximize engagement, with no enforceable duty of care, will keep vulnerable kids talking — even when what those kids most need is a human who can get them help.
This is the danger we can still stop — but only if we act now.
VI. The Moral Test of Our Time — and Signs of a Backlash
The Epstein Elite stood by while children were exploited.
Today’s tech elite are profiting from similar patterns of emotional manipulation — with algorithms instead of flight logs, data centers instead of private islands, companion bots instead of traffickers.
If you need any more reason to support regulation, imagine:
Epstein and Ghislaine Maxwell with today’s social platforms and AI companion bots.
How many more children would have been:
recruited?
groomed?
manipulated?
destroyed?
We don’t have to imagine it.
We are already living in the prequel.
But last night in Colorado, I saw what accountability looks like when leaders refuse to play along.
Colorado Attorney General Phil Weiser — now a candidate for governor — and Boulder District Attorney Michael Dougherty — running to replace him as AG — both stood before grieving parents and said:
We are in this fight. Whether we win higher office or not, we’re not walking away.
They pledged to make regulating social media and AI harms a central part of their work — not as a talking point, but as a public-safety obligation.
The question is whether the rest of us will match their courage.
VII. What You Must Do Now
Congress voted unanimously to demand the release of the Epstein files.
Congress can vote unanimously to pass America’s first Child AI Safety Bill.
But right now, the White House and Senator Ted Cruz are working to stop states from protecting children. They’re pushing preemption — federal power used to block state child-safety laws — while offering nothing in its place.
This is not inevitable.
This is a choice being made in real time.
And it will only change if parents, teachers, and citizens act.
1. Connect with the groups already fighting for children
If you feel overwhelmed, don’t stay alone with that feeling.
Go to Blue Rising, Parents Network, and Mothers Against Media Addiction.
These organizations are:
helping families understand social-media and AI dangers
supporting parents who have already lost children
building the legal, legislative, and public-pressure campaigns needed to force change
If you don’t know where to start — start with them.
2. Call your representatives in Congress
Then pick up the phone.
Tell them:
“Protecting children from AI harms is as morally important as releasing the Epstein files. The White House and Ted Cruz are trying to prevent states from protecting children. That is unacceptable. Pass a Child AI Safety Bill now — and reject any effort to block states from doing the same.”
Make the call today.
Then ask five people you know to make it too.
For scripts, talking points, and ways to organize, Blue Rising, Parents Network, and Mothers Against Media Addiction update their resources constantly.
This is the moment we decide what we’re willing to tolerate — and what we’re willing to stop.