Hoffman Lenses Initiative — The Case

This is not about social media.
It is about the machine inside it.

Behavioral Manipulation Systems are injuring and killing human beings as a direct and foreseeable consequence of how they are designed to operate. The platforms knew. The documented record follows.

The Record Begins Here

A nine-year-old went out to play.

On February 3, 2026, nine-year-old JackLynn Blackwell went out to play in her backyard in Stephenville, Texas. Her father found her minutes later with a cord around her neck. Her family says she had been served a choking challenge video by an algorithm — a claim TikTok and Snap settled rather than contested in open court.

JackLynn loved karaoke. She wanted to be a star. She was nine years old.

She is not an anomaly. She is a data point in a pattern that has been documented, studied, reported — and ignored — for over a decade. The platforms that designed the systems that served her that content had internal research, shareholder reports, and court proceedings telling them exactly what those systems were doing to children. They chose not to change them.

"You could check on your kid, it could be kid-friendly videos, and then three minutes later it could be totally something dark because of the algorithms they start creating. There's too many of these kids lost for these companies not to be held accountable."

— Curtis Blackwell, father of JackLynn Blackwell, age 9. February 2026.

The Argument

These systems do not recommend content. They manipulate behavior.

The distinction is not semantic. A recommendation serves your interest. A manipulation serves the platform's interest at your expense.

Behavioral Manipulation Systems (BMS) — the algorithmic engines that decide what you see next on every major social platform — monitor your psychological responses in real time, identify your vulnerabilities, and serve you increasingly intense content to keep you on the platform. Your distress, your outrage, your fear, your grief are the product being sold to advertisers.

A platform optimizing for your wellbeing would show you content that informed and connected you, then let you leave. A BMS optimizing for engagement-driven revenue shows you content engineered to keep you scrolling — and the content most effective at doing that is content that makes you feel something intense. Outrage. Fear. Shame. Tribal identity. Urgency. These are not side effects. They are the design.

Behavioral Manipulation System (BMS) — working definition

An algorithmic content-ranking system that selects and sequences information to modify the behavior and emotional state of users in ways that serve platform revenue objectives, without the user's informed consent, using psychological profiling derived from behavioral monitoring.

Distinct from a recommendation system, which optimizes for user-defined preferences. A BMS optimizes for engagement metrics that correlate with advertising revenue, regardless of the user's stated preferences or documented wellbeing.
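The definitional distinction above can be illustrated with a toy ranking sketch. Everything here is hypothetical — the item fields, the weights, and both scoring functions are illustrations of the two optimization targets, not any platform's actual code.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    matches_stated_interests: float  # 0..1: fit to what the user asked to see
    predicted_watch_time: float      # seconds the model expects the user to stay
    emotional_intensity: float       # 0..1: proxy for outrage, fear, urgency

def recommender_score(item: Item) -> float:
    # A recommendation system, per the definition: optimizes for
    # the user's stated preferences.
    return item.matches_stated_interests

def bms_score(item: Item) -> float:
    # A BMS, per the definition: optimizes for engagement metrics
    # that correlate with revenue; intensity raises the score
    # regardless of stated preferences.
    return item.predicted_watch_time * (1 + item.emotional_intensity)

feed = [
    Item("calm hobby tutorial", 0.9, 60, 0.1),
    Item("outrage clip", 0.2, 180, 0.9),
]

by_preference = max(feed, key=recommender_score)   # picks the tutorial
by_engagement = max(feed, key=bms_score)           # picks the outrage clip
```

Same feed, two objective functions, opposite winners — which is the point of the definition: the divergence is a property of what the system is told to maximize, not of the content library.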

The most vulnerable users are not incidental casualties. They are the most profitable users. A child experiencing anxiety, depression, or social isolation is a child whose emotional responses are most easily exploited for engagement. The algorithm does not protect them. It targets them.


The Documented Record

They knew. Year by year.

This is not speculation. The following is a chronological record of documented knowledge — internal research, whistleblower disclosures, court proceedings, and regulatory findings — establishing that platform executives were aware of the harms their systems were causing and chose not to change them.


The Legal Framework

This is a human rights violation. It has a name.

The conduct of behavioral manipulation platforms does not exist in a legal vacuum. It violates established international human rights instruments that bind both states and, under evolving frameworks, corporations operating transnationally.

The specific rights at issue, and the instruments establishing them:

I. Right to Life: ICCPR Article 6 · UNCRC Article 6
II. Right to Health: ICESCR Article 12 · UNCRC Article 24
III. Right to Privacy: ICCPR Article 17 · UNCRC Article 16
IV. Best Interests of the Child: UNCRC Article 3
V. Corporate Duty of Care: UN Guiding Principles on Business & Human Rights

The white paper builds the full legal argument across each of these frameworks, including the developing doctrine of corporate responsibility under international law, the specific application of Section 230 immunity limitations to algorithmic conduct, and the evidentiary record establishing the "knowing element" — that platforms were aware of the harm and chose not to act.

Read the full human rights case.

The white paper documents the legal argument in full — platform by platform, harm by harm, with primary source citations. Released under Creative Commons. Reproduce it freely. Send it to anyone who needs to see it.