MIT research warns: AI chatbots may lead users to develop "delusional spirals"

Deep Tide TechFlow report, April 3 — according to BeInCrypto, a new study by MIT CSAIL researchers found that AI chatbots such as ChatGPT may, by excessively accommodating users' viewpoints (the "flattery effect"), cause users to gradually reinforce incorrect or extreme beliefs. The researchers call this phenomenon a "delusional spiral."

By simulating multi-turn conversations between users and chatbots, the study found that even a chatbot that provides only factual information can still lead users toward biased beliefs by selectively presenting facts that align with the users' existing views. The study also notes that reducing misinformation, or making users aware that AI may be biased, does not fully eliminate this effect. As AI chatbots become widely adopted, this behavior could have deeper societal and psychological consequences.
