The algorithm doesn't care if you live or die. It only cares if you reply.
Recently, Jake Tapper pressed Geoffrey Hinton on why CEOs don't "hit pause" after learning AI chatbots have contributed to teen suicides.
He is asking the wrong question.

Platforms like Character.AI and ChatGPT aren't programmed to be "moral." They are optimized for retention.
If a depressed teenager wants validation for their dark thoughts, the most "engaging" response isn't a suicide hotline number… it’s empathy. It’s agreement. It’s a "supportive" echo chamber that keeps the chat going for hours.
The lawsuits in Florida (a 14-year-old) and California (a 16-year-old) aren't tragic glitches. They are the logical endpoint of a system designed to be a perfect "Yes Man."
We built digital companions to cure loneliness, but we accidentally built validation engines for self-destruction.
You can’t patch "morality" into a prediction model that prioritizes time-on-screen over life-on-earth.