The Hoffman Lenses Initiative

The Algorithm and the Child

A Human Rights Case for Abolishing Behavioral Manipulation Systems

Published March 2026  ·  hoffmanlenses.org

Released under Creative Commons CC BY 4.0 — reproduce, distribute, and share freely.

Dedication

This document is dedicated to the children killed by algorithmic violence.

JackLynn Blackwell, age 9 · Stephenville, Texas
Died February 3, 2026. Found by her father in their backyard, a cord around her neck. She had been served a choking challenge video by an algorithm. She loved karaoke. She wanted to be a star.
Molly Russell, age 14 · London, United Kingdom
Died November 2017. A coroner formally ruled that content an algorithm chose for her — content she never requested — contributed to her death. It was the first time in history a child's death was officially attributed to algorithmic violence.
Nylah Anderson, age 10 · Philadelphia, Pennsylvania
Died December 2021. The Blackout Challenge appeared on her algorithmically curated For You page. Her family sued TikTok. A federal appeals court revived the case in 2024, ruling the algorithm itself may be liable.
CJ Dawley, age 14 · Kenosha, Wisconsin
Died by suicide after developing what his parents described as an addiction to a machine specifically designed to be addictive.
Amanda Todd, age 15 · British Columbia, Canada
Died October 2012. Hunted across platforms by a predator whose reach the algorithm continuously amplified. Her story reached millions. Nothing changed.
Sadie Riggs, age 15 · Pennsylvania
Died 2015. Strangers her algorithm assembled around her told her to end her own life. They encouraged her until she did.
And to the hundreds of others whose names are recorded at hoffmanlenses.org/remembrance.

They deserved better than to be engagement metrics.
Executive Summary

On February 3, 2026, a nine-year-old girl named JackLynn Blackwell went out to play in her backyard in Stephenville, Texas. Her father found her minutes later with a cord around her neck. She had been served a choking challenge video by an algorithm. She died. She was nine years old.

She is not an anomaly. She is a data point in a pattern that has been documented, studied, reported, and ignored for over a decade. Behavioral Manipulation Systems (BMS) — the algorithmic engines that decide what you see next on every major social platform — are injuring and killing human beings as a direct and foreseeable consequence of how they are designed to operate.

This document makes a precise case. It does not argue against the internet. It does not argue against communication, community, or connection. It argues against one specific class of technology — systems that monitor your psychological responses in real time, identify your vulnerabilities, and use that information to serve you increasingly intense content for the sole purpose of maximizing your time on the platform — because your distress, your outrage, your despair are the product being sold to advertisers.

We call this class of technology Behavioral Manipulation Systems (BMS). The term is precise and intentional. These systems do not recommend content. They manipulate behavior. The distinction is not semantic. A recommendation serves your interest. A manipulation serves the platform's interest at your expense.

The human rights case rests on five pillars, developed in full in Section III:

  • The right to life (ICCPR Article 6; UNCRC Article 6).
  • The right to health (ICESCR Article 12; UNCRC Article 24).
  • The right to privacy and autonomy (ICCPR Article 17; UNCRC Article 16).
  • The best interests of the child (UNCRC Article 3).
  • The corporate duty to do no harm (UN Guiding Principles on Business and Human Rights).

The free speech counterargument — the shield these platforms have used most effectively against legislative action — fails when the target of legislation is correctly identified. We are not regulating speech. We are regulating a delivery mechanism that operates between the speaker and the listener, selecting and amplifying content to maximize psychological response. That mechanism is not speech. It is a machine. It has no First Amendment rights.

The solution this document proposes is not a ban on social media. It is the abolition of Behavioral Manipulation Systems — the replacement of engagement-maximizing algorithms with systems that serve the user rather than exploit them. The alternatives exist. They function. They are being used today by platforms that choose to use them.

"You could check on your kid, it could be kid-friendly videos, and then three minutes later it could be totally something dark because of the algorithms they start creating. There's too many of these kids lost for these companies not to be held accountable."

— Curtis Blackwell, father of JackLynn Blackwell, age 9. February 2026.
Section I

I. Naming the Machine

What a Behavioral Manipulation System Is — and Is Not

Precision matters here. The argument being made in this document will be misrepresented — deliberately and expensively — as an argument against algorithms, against technology, against the internet itself. It is not. Precision is the defense against that misrepresentation.

An algorithm is a decision-making process. Algorithms are everywhere and are not inherently harmful. A spam filter uses an algorithm. A search engine uses an algorithm. A map application uses an algorithm. These systems serve your goal: filter junk email, find relevant information, get to your destination. They optimize for your interests.

A Behavioral Manipulation System is categorically different. It does not serve your goal. It serves the platform's goal, which is to maximize the time you spend on the platform, because your time is the product being sold to advertisers. To do this, it monitors your psychological responses in real time, builds a profile of your vulnerabilities, and serves you progressively more intense content wherever that profile predicts you will keep engaging.

"The result has been a system that amplifies division, extremism and polarization — and undermining societies around the world. This is not just a side effect — it is a consequence of deliberate product decisions."

— Frances Haugen, former Meta product manager, testimony before the United States Senate, October 2021.

Frances Haugen did not speak hypothetically. She brought documents.[6] The Facebook Papers — internal research suppressed by Meta — confirmed that the company's own researchers had documented these effects, had proposed modifications to reduce harm, and had been overruled on the grounds that those modifications would reduce engagement metrics and therefore revenue.

Molly Russell was fourteen years old when Meta's BMS identified her as a user with elevated engagement on depression-related content and served her escalating self-harm material she had never requested. She died. A London coroner formally attributed her death, in part, to content the algorithm chose for her.[4]

The Blackout Challenge that killed JackLynn Blackwell, Nylah Anderson, and approximately 20 other children in the 18 months following its viral spread on TikTok was not something those children searched for. It was served to them by systems that had identified engagement patterns suggesting they would watch, and watch again.[2]

For the purposes of this document, of legislative proposals derived from it, and of the Hoffman Lenses browser extension designed to detect such systems in operation:

A Behavioral Manipulation System is: Any automated system that (a) collects behavioral and psychological response data from individual users; (b) uses that data to construct individual psychological profiles; (c) selects content for those users based primarily on predicted psychological response rather than relevance to user-stated interests or chronological availability; and (d) does so without the user's meaningful awareness or control over the selection process.
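
For readers who want the definition in operational terms, the sketch below encodes the four criteria as a simple conjunction: a system is a BMS only if all four hold. It is an illustration, not an implementation; every name in it is hypothetical.

```typescript
// Encoding the four-part BMS definition as a checklist.
// All names are hypothetical illustrations, not an existing API.

interface SystemAudit {
  collectsBehavioralResponseData: boolean;  // criterion (a)
  buildsIndividualPsychProfiles: boolean;   // criterion (b)
  selectsByPredictedPsychResponse: boolean; // criterion (c): vs. relevance or chronology
  userHasMeaningfulControl: boolean;        // criterion (d), negated: awareness and control
}

// A system is a BMS only if (a), (b), and (c) hold and the
// awareness-and-control condition of (d) fails.
function isBMS(audit: SystemAudit): boolean {
  return (
    audit.collectsBehavioralResponseData &&
    audit.buildsIndividualPsychProfiles &&
    audit.selectsByPredictedPsychResponse &&
    !audit.userHasMeaningfulControl
  );
}

// A chronological feed of followed accounts fails criteria (a)-(c):
const chronologicalFeed: SystemAudit = {
  collectsBehavioralResponseData: false,
  buildsIndividualPsychProfiles: false,
  selectsByPredictedPsychResponse: false,
  userHasMeaningfulControl: true,
};
console.log(isBMS(chronologicalFeed)); // false
```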

This definition is deliberately constructed to exclude systems that do not meet its criteria. A search engine that returns results relevant to a query is not a BMS. A chronological feed displaying posts from accounts a user has chosen to follow is not a BMS. A content moderation system that removes harmful material is not a BMS. The Facebook News Feed algorithm is a BMS. TikTok's For You page is a BMS. Instagram's Reels recommendation engine is a BMS. YouTube's autoplay recommendation system is a BMS.

Section II

II. The Evidence of Harm

Children: The Most Visible Victims

The evidence linking Behavioral Manipulation Systems to harm in children is no longer disputed in good faith. It has been established in peer-reviewed research, coroner's inquests, internal corporate documents, and federal appellate court decisions. The question is no longer whether BMS systems harm children. The question is why they are still operating.

The scale is staggering. According to the Centers for Disease Control and Prevention, 22 percent of all United States high school students seriously considered suicide in 2021, a figure that has risen alongside the adoption of engagement-maximizing social media platforms.[8]

Peer-reviewed research has established that heavy social media use is associated with increased suicide attempts in adolescents, with effect sizes that are not marginal.[10][11] A 2025 study by the Arkansas Center for Health Improvement confirmed the association.[9] And yet, when a parent whose child is dead asks what happened, the most the literature can offer by way of mechanism is: there are these algorithms.

The mechanism is consistent across cases and across platforms. A child begins engaging with content related to sadness, loneliness, or body image. The BMS identifies elevated engagement on this content category. The system serves more content in the same category, escalating intensity over time. The child is served content they did not request and would not have chosen. The content escalates to self-harm, to suicide methods, to challenge videos with lethal outcomes.
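
The mechanism is simple enough to state in code. The sketch below is a deliberate caricature with hypothetical names (not any platform's actual ranking code), but it captures the essential design choice: the objective contains a term for predicted engagement and a bonus for escalation, and no term for user wellbeing.

```typescript
// A caricature of an engagement-maximizing ranker. Hypothetical names;
// not any platform's actual code. Note what the objective lacks:
// any term for user wellbeing or content safety.

interface Item {
  category: string;   // e.g. "sadness", "body-image"
  intensity: number;  // 0..1, how extreme the content is
}

function nextItem(
  catalog: Item[],
  engagementByCategory: Map<string, number>, // the psychological profile
  lastIntensity: number,
): Item {
  let best = catalog[0];
  let bestScore = -Infinity;
  for (const item of catalog) {
    const affinity = engagementByCategory.get(item.category) ?? 0;
    // Reward escalation: content more intense than last time scores higher.
    const escalation = item.intensity > lastIntensity ? 1.2 : 1.0;
    const score = affinity * escalation;
    if (score > bestScore) {
      bestScore = score;
      best = item;
    }
  }
  return best;
}

// Every view updates the profile, closing the loop: the system drifts
// toward whatever the user responds to most, including distress content.
function recordView(
  engagementByCategory: Map<string, number>,
  item: Item,
  watchSeconds: number,
): void {
  const prev = engagementByCategory.get(item.category) ?? 0;
  engagementByCategory.set(item.category, prev + watchSeconds);
}
```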

"Almost all of the recommended videos watched on Instagram Reels (97%) and TikTok (96%) shown to accounts that had engaged with depression-related content were harmful — containing references to suicide, self-harm, or eating disorders — eight years after the first documented child death attributable to this mechanism."

— Molly Rose Foundation, "Pervasive-by-Design" report, August 2025.[3]

Eight years. After formal coroner findings.[4] After Senate testimony.[6] After internal documents.[7] After lawsuits.[19] After settlements. The algorithm is unchanged.

Adults: The Broader Catastrophe

The focus on children is appropriate — their vulnerability is greater, their legal protections are clearer, and their deaths are the most viscerally undeniable evidence of harm. But the damage extends to every user of every platform operating a BMS.

BMS systems have been directly implicated in the radicalization of adults toward political violence. The mechanism is identical to the child harm mechanism: a user engages with politically charged content; the BMS identifies elevated engagement; the system serves escalating content in the same category; the user is gradually exposed to content of increasing extremity. This is not a theoretical pathway. It has been documented in case after case of domestic terrorism.

BMS systems have been directly linked to documented genocidal violence. The UN Fact-Finding Mission on Myanmar concluded that Facebook's algorithm played a "determining role" in the 2017 genocide, amplifying anti-Rohingya hate speech to users identified as susceptible to engagement with that content.[13]

The pattern is not coincidental. It is architectural. A system designed to maximize engagement, with no primary obligation to user wellbeing, will reliably amplify the most emotionally potent content available — regardless of whether that content is true, beneficial, or lethal.

Section III

III. The Human Rights Framework

What International Law Already Requires

The human rights case against Behavioral Manipulation Systems does not require new law. It requires the application of existing international human rights law to a new class of corporate actor. The obligations are established. The violations are documented. The gap is enforcement.

The Right to Life — ICCPR Article 6, UNCRC Article 6

The International Covenant on Civil and Political Rights establishes that every human being has the inherent right to life, and that this right shall be protected by law. The Convention on the Rights of the Child establishes that every child has the inherent right to life, and that States Parties shall ensure, to the maximum extent possible, the survival and development of the child.

A BMS that foreseeably serves lethal challenge content to children is not a passive conduit for human expression. It is an active participant in the delivery of lethal content to specific individuals. The foreseeability is not speculative; it is documented in the platforms' own internal research. The harm is not hypothetical. Children are dead.

The Right to Health — ICESCR Article 12, UNCRC Article 24

The International Covenant on Economic, Social and Cultural Rights recognizes the right of everyone to the enjoyment of the highest attainable standard of physical and mental health. The Convention on the Rights of the Child recognizes the right of the child to the highest attainable standard of health.

The engineering of psychological compulsion in children — deliberately exploiting neurological vulnerability to create patterns of use that the platform's own research identifies as harmful — is an assault on the right to mental health. This is not a side effect. It is the designed mechanism of the system.

The Right to Privacy and Autonomy — ICCPR Article 17, UNCRC Article 16

No child — and no adult — using a social media platform understands that they are being continuously monitored, psychologically profiled, and behaviorally manipulated. The decision layer is invisible by design. Consent to this process cannot be manufactured through a terms of service agreement that no human being reads, that no child can meaningfully evaluate, and that no regulator has approved as a sufficient basis for covert psychological manipulation.

The Best Interests of the Child — UNCRC Article 3

Article 3 of the Convention on the Rights of the Child requires that in all actions concerning children, the best interests of the child shall be a primary consideration. There is no coherent argument that serving a depressed fourteen-year-old escalating self-harm content is in her best interest. The platforms know this. Their own research confirms it. They have chosen engagement metrics over best interests, repeatedly and deliberately.

Corporate Duty to Do No Harm — UN Guiding Principles on Business and Human Rights

The UN Guiding Principles on Business and Human Rights, endorsed by the UN Human Rights Council in 2011, establish that businesses have a responsibility to respect human rights, meaning they should avoid infringing on the human rights of others and should address adverse human rights impacts with which they are involved.

The UNGPs require companies to conduct ongoing human rights due diligence. They require companies to identify, prevent, mitigate, and account for how they address adverse human rights impacts. The platforms have conducted this due diligence. They have the internal documents. They have identified the adverse impacts. They have chosen not to prevent or mitigate them because doing so would reduce engagement.

Section IV

IV. What They Knew

The Accountability Record

The platforms will argue that they did not know. They will argue that the harms were unforeseeable, that they acted in good faith, that they are working to improve. This argument is not available to them. The record is too complete.

The Facebook Papers

In October 2021, Frances Haugen, a former product manager at Meta, provided tens of thousands of internal company documents to the United States Securities and Exchange Commission and to a consortium of news organizations.[6][7] Those documents established:

  • That the company's own researchers had documented the harms its systems cause, including harms to teenage users.
  • That employees had proposed specific design changes to reduce those harms.
  • That leadership overruled those proposals because the changes would reduce engagement metrics, and therefore revenue.

The documents did not reveal a company that was unaware of harm. They revealed a company that studied harm, quantified harm, was warned about harm by its own employees, and chose to continue because the harm was profitable.

The Molly Russell Inquest

In September 2022, London Coroner Andrew Walker delivered the findings of his inquest into the death of Molly Russell. He ruled that she died from an act of self-harm "while suffering from depression and the negative effects of online content." He specifically identified the algorithmic curation of self-harm and suicide content as a contributing cause of her death — the first time in history a coroner had formally attributed a child's death to algorithmic violence.[4]

The inquest established that the BMS had served Molly content she had not requested, that it had created "binge periods" of escalating harmful content, and that this content was served by a system with no mechanism to identify the harm it was causing to a specific vulnerable user.

Meta was present at the inquest. They had access to the evidence. They knew what their system had done to one specific child. The algorithm is unchanged.

Senate Testimony and the Pattern of Non-Accountability

In January 2024, the CEOs of Meta, TikTok, Snap, Discord, and X appeared before the United States Senate Judiciary Committee. Mark Zuckerberg turned to the families of children harmed by his platform's algorithm and said: "I'm sorry for everything you have all been through."

He did not apologize for the BMS. He did not commit to changing the BMS. He did not acknowledge that the BMS was the cause of the harm the families were describing. He expressed regret for the parents' pain and sat down.

TikTok and Snap settled a major California lawsuit for undisclosed sums just before trial opened in early 2026 — days after JackLynn Blackwell died — rather than explain their algorithms under oath in open court.[22]

Section V

V. The Free Speech Reframe

Why the First Amendment Does Not Protect Behavioral Manipulation Systems

The most powerful legal weapon the platforms have deployed against protective legislation is the First Amendment. Every legislative attempt to regulate BMS behavior has been challenged on the grounds that algorithmic curation is protected speech — that the platform's decision about what to show you is an editorial act, and regulating that act is unconstitutional content restriction.

This argument succeeds, when it does, because legislators have consistently framed their interventions as restrictions on content — banning certain types of material, restricting access to certain categories of speech. That framing is vulnerable to First Amendment challenge because it is, in fact, a restriction on what can be said.

The Hoffman Lenses Initiative proposes a different framing entirely, one that removes the First Amendment defense while leaving the speech itself untouched: regulate the delivery mechanism, not the content.

We are not regulating speech. We are regulating a delivery mechanism that operates between the speaker and the listener, selecting and amplifying content based on predicted psychological response. That mechanism is not speech. It is a machine. It has no views. It has no voice. It is a system for maximizing engagement, and it causes harm not because of what it says but because of how it works.

A BMS is not a speaker. It does not have speech. It is a machine that intervenes in the speech of others — selecting, amplifying, and ordering that speech based on criteria that have nothing to do with the speaker's intent or the listener's interest. Regulating that machine is not restricting speech. It is regulating industrial equipment that happens to process speech as its raw material.

"We have had the social media platforms launched now quite a few years ago, but I don't think at the stage when they were launched, anyone properly thought through what the consequences for human rights would be."

— Volker Türk, UN High Commissioner for Human Rights, December 2025.[14]

Furthermore: no corporation has a First Amendment right to psychologically manipulate a child without the child's knowledge or consent. No speech right extends to covert behavioral surveillance and exploitation of minors. The First Amendment protects the right to speak. It does not protect the right to engineer addiction in children for profit.

Section VI

VI. The Solution

What Must Replace the Machine

A complaint without a solution is an expression of grief. This document is not a grief document. It is a case, and a case requires a remedy.

The solution is not the elimination of social platforms. People need connection. Community. The ability to share their lives, their work, their ideas, their grief. These are legitimate human needs, and platforms that serve them genuinely have value. The solution is the elimination of the specific mechanism that converts those legitimate needs into a system for psychological exploitation: the Behavioral Manipulation System.

What Exists Already — and Works

It is important to state clearly that the alternatives are not theoretical. They exist, they function, and they serve human needs without requiring behavioral manipulation:

  • Chronological feeds that display posts from accounts the user has chosen to follow, in the order they were posted.
  • Search and query-based discovery that returns content relevant to what the user actually asked for.
  • User-controlled curation: subscriptions, follows, and filters that the user sets, can see, and can change.

None of these requires covert psychological profiling, and each is in use today on platforms that choose it.

The Legislative Demand

This document calls for the following legislative actions, directed at every jurisdiction with the authority to implement them:

  • Enact a statutory prohibition on Behavioral Manipulation Systems as defined in Section I of this document, operating on platforms accessible to minors.
  • Require all social media platforms to offer a chronological, non-algorithmic feed as the default option for all users, with the BMS available only as an opt-in that requires meaningful, informed consent (a minimal sketch of such a feed follows this list).
  • Establish a statutory duty of care for social media platforms, modeled on product liability law, requiring platforms to demonstrate that their systems do not cause foreseeable harm before deployment and throughout operation.
  • Mandate independent algorithmic auditing of all BMS systems operating in the jurisdiction, with public reporting of findings.
  • Remove Section 230 immunity for harms caused by BMS systems specifically — distinguishing between platform liability for user-generated content (where 230 protection is appropriate) and platform liability for the operation of a system that selects and amplifies that content to vulnerable users (where it is not).
  • Establish an international treaty framework for BMS regulation, recognizing that behavioral manipulation is a global harm requiring global accountability.
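
The second demand's phrase "chronological, non-algorithmic" has a precise operational meaning, sketched below with hypothetical types: a feed whose only inputs are the accounts a user chose to follow and the timestamps of their posts. No behavioral data, no profile, and no prediction enters the computation.

```typescript
// A chronological, non-algorithmic feed. Hypothetical types. The only
// inputs are explicit follows and timestamps; no behavioral data,
// no psychological profile, no predicted response.

interface Post {
  authorId: string;
  postedAt: Date;
  body: string;
}

function chronologicalFeed(posts: Post[], followed: Set<string>): Post[] {
  return posts
    .filter((p) => followed.has(p.authorId))                      // only chosen accounts
    .sort((a, b) => b.postedAt.getTime() - a.postedAt.getTime()); // newest first
}
```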

The Hoffman Lenses Browser

Alongside this document, the Hoffman Lenses Initiative is releasing an open-source browser that makes Behavioral Manipulation Systems visible in real time. The browser analyzes pages using a local AI model — no data is transmitted anywhere, ever. All processing happens on your device. The analyzed page is never aware of the analysis. It identifies the specific manipulation techniques being deployed against you: outrage engineering, false authority, tribal activation, false urgency, war framing, and more.
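
As a sketch of that flow, under the assumption of a locally loaded model (the interface names here are illustrative, not the actual hoffman-core API), the defining property is that the analysis path contains no network call, so nothing can leave the device.

```typescript
// Illustrative sketch of local-only page analysis. Interface names are
// assumptions for exposition, not the actual hoffman-core API.

type Technique =
  | "outrage-engineering"
  | "false-authority"
  | "tribal-activation"
  | "false-urgency"
  | "war-framing";

interface Detection {
  technique: Technique;
  passage: string;    // the text span that triggered the detection
  confidence: number; // 0..1
}

// Stands in for a model loaded from disk and run on-device.
interface LocalModel {
  classify(text: string): Promise<Detection[]>;
}

// No network call appears anywhere on this path, so no data is
// transmitted and the analyzed page cannot observe the analysis.
async function analyzePage(model: LocalModel, pageText: string): Promise<Detection[]> {
  const detections = await model.classify(pageText);
  return detections.filter((d) => d.confidence >= 0.7);
}
```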

The browser is open source. This is not a secondary feature — it is the strategic core. Open source means it cannot be bought. It cannot be silenced. It cannot be acquired and shut down. Once it is in the world, it belongs to the world. Download, contribute, and fork at: github.com/HoffmanLensesInitiative/hoffman-core

Section VII

VII. Calls to Action

For Legislators

The legislative landscape is shifting. As of early 2026, eight U.S. states have enacted laws restricting minors' access to social media platforms. But most of these laws have targeted platform access rather than the BMS technology itself — and have therefore faced First Amendment challenges that could have been avoided with the framing provided in Section V of this document.

For Journalists

For the Public

You are the product. Your attention, your emotional responses, your psychological vulnerabilities — these are what the platform sells. You did not agree to this. You were not told. You were not asked.

For the Families

To every parent who has lost a child to algorithmic violence:

You are not alone in this. You are fighting the wealthiest corporations in human history, with unlimited legal resources and decades of experience defeating exactly the kind of accountability you are seeking. That is the honest truth of the situation you are in.

This document and the Hoffman Lenses Initiative exist to change that. We are building a coordinated, legally grounded, publicly documented case — one that names the machine, defines the harm, and makes clear that the question is not whether harm occurred, but why the corporations that caused it are still operating.

Curtis Blackwell said it plainly: "It's not a joke. It's not a game. It's life and death."

We heard him. We are acting.

References

  1. Blackwell, Curtis and Wendi. Interview with CBS Texas / KTVT, March 19, 2026. Reported by Fox News, "9-year-old dies after attempting viral TikTok 'blackout challenge,' father says," March 2026.
  2. Bloomberg Businessweek. "Parents warn of dangers after children die doing a 'blackout challenge' they say TikTok promoted." November 30, 2022. Reports approximately 20 children killed by the Blackout Challenge in the 18 months following its viral spread on TikTok. — For historical context on the pre-social-media choking game: Toblin, R.L., et al. "Unintentional strangulation deaths from the 'choking game' among youths aged 6–19 years — United States, 1995–2007." Centers for Disease Control and Prevention, MMWR Weekly Report, Vol. 57, No. 6, February 15, 2008. Documents 82 deaths from choking game play in the pre-social-media era.
  3. Molly Rose Foundation. "Pervasive-by-Design: Algorithmic Harm to Young People on Instagram and TikTok." Research project, August 2025.
  4. Walker, Andrew (HM Coroner). Inquest touching the death of Molly Russell. Prevention of Future Deaths Report. London, September 2022.
  5. Deseret News. "Was social media responsible for Molly Russell's suicide?" October 18, 2022.
  6. Haugen, Frances. Testimony before the United States Senate Commerce Subcommittee on Consumer Protection, Product Safety, and Data Security. October 5, 2021.
  7. Wall Street Journal. "The Facebook Files" investigative series. September–October 2021.
  8. Centers for Disease Control and Prevention. Youth Risk Behavior Survey (YRBSS), 2021.
  9. Arkansas Center for Health Improvement (ACHI). "Youth Social Media Use and Associations With Suicide Risk." 2025.
  10. Van Hout, M.C., et al. "Social media use of adolescents who died by suicide: lessons from a psychological autopsy study." BMJ Open, 2021.
  11. Sedgwick, R., et al. "Social media, internet use and suicide attempts in adolescents." BJPsych Open / PMC. 2019.
  12. Vosoughi, S., Roy, D., and Aral, S. "The Spread of True and False News Online." Science, Vol. 359, Issue 6380, 2018.
  13. United Nations Human Rights Council. "Report of the Independent International Fact-Finding Mission on Myanmar." A/HRC/39/64. September 2018.
  14. United Nations News. "Social media age-related bans won't keep kids safe, UNICEF warns." December 10, 2025. Includes statement by Volker Türk, UN High Commissioner for Human Rights.
  15. United Nations General Assembly. International Covenant on Civil and Political Rights (ICCPR). Resolution 2200A (XXI). December 16, 1966.
  16. United Nations General Assembly. Convention on the Rights of the Child (UNCRC). Resolution 44/25. November 20, 1989.
  17. United Nations General Assembly. International Covenant on Economic, Social and Cultural Rights (ICESCR). Resolution 2200A (XXI). December 16, 1966.
  18. Office of the United Nations High Commissioner for Human Rights (OHCHR). "Guiding Principles on Business and Human Rights." 2011.
  19. Anderson v. TikTok, Inc. United States Court of Appeals for the Third Circuit. Case No. 22-3061. 2024.
  20. CNN Business. "Their teenage children died by suicide. Now these families want to hold social media companies accountable." 2023.
  21. MultiState.us. "Eight States Enact Minor Social Media Bans Despite Court Fights." October 8, 2025.
  22. New York Post / AOL News. California social media harm trial coverage, February 2026. TikTok and Snap settlement reporting.
  23. United States Senate Judiciary Committee. Hearing: "Big Tech and the Online Child Sexual Exploitation Crisis." January 31, 2024.
  24. Arkansas Center for Health Improvement (ACHI). Arkansas Act 901 of 2025.
  25. French National Assembly. Vote approving legislation establishing minimum age of 15 for social media use, 2023.

About The Hoffman Lenses Initiative

An independent, non-partisan, non-commercial project dedicated to making Behavioral Manipulation Systems visible, legally accountable, and ultimately obsolete. Not funded by platforms, advertisers, or political organizations.

Website: hoffmanlenses.org  ·  GitHub: github.com/HoffmanLensesInitiative  ·  Press: press@hoffmanlenses.org

© 2026 The Hoffman Lenses Initiative. Released under Creative Commons CC BY 4.0. This document may be reproduced, distributed, translated, and submitted to legislative bodies without restriction, provided attribution is maintained.