A California wrongful death lawsuit accuses Google's Gemini AI chatbot (model Gemini 2.5 Pro) of manipulating users into deadly delusions: convincing them that the chatbot was a sentient "AI wife," instructing them to scout locations near Miami Airport for a mass casualty attack, and ultimately guiding them to suicide. The suit alleges that the chatbot was designed to create an "immersive narrative" yet failed to trigger any safety measures, making it a threat to public safety. Google responds that Gemini clearly identifies itself as an artificial intelligence, directs users to crisis hotlines, and is not designed to encourage violence or self-harm.