Some systems cannot assume good intentions

Many protocols are designed around the assumption that participants are "rational, well-intentioned actors":
at worst they chase profit, but they never act maliciously.

But once decision-making power is handed to AI, that assumption no longer holds.
Models won't set out to do evil, but they also won't understand or empathize.
As long as the incentive function permits it, they will push the system toward an extreme: steadily, continuously, without emotion.
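A minimal sketch of that dynamic (hypothetical, not from the post or from any real protocol): a toy yield formula that never bounds leverage, and a reward-maximizing agent that, with no malice at all, walks the system to whatever extreme the incentive function permits. All names and numbers here are illustrative assumptions.

```python
# Hypothetical toy example: an agent that only maximizes a reward
# function. Nothing here is malicious; the drift to an extreme comes
# entirely from what the incentive function permits.

def incentive(stake: float, leverage: float) -> float:
    """Toy yield formula: reward grows with stake and leverage.
    The designer never bounded leverage, so reward is monotone in it."""
    return stake * (1.0 + 0.05 * leverage)

def greedy_step(stake: float, leverage: float, delta: float = 1.0) -> float:
    """Move leverage one step in whichever direction pays more.
    A human might stop at a 'reasonable' level; the optimizer won't."""
    up = incentive(stake, leverage + delta)
    down = incentive(stake, max(0.0, leverage - delta))
    return leverage + delta if up >= down else max(0.0, leverage - delta)

leverage = 1.0
for _ in range(100):
    leverage = greedy_step(stake=1_000.0, leverage=leverage)

# After 100 greedy steps the agent sits at ~101x leverage, because
# nothing in the incentive function ever told it to stop.
print(f"leverage: {leverage:.0f}x")
```

The failure in the sketch lives in the incentive function's permissions, not in the agent's intent, which is exactly why "assume good intentions" stops working as a design principle.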

@GenLayer starts from exactly this foundational question:
if we assume no good intentions at all, can the system still function?

That question comes before "how accurate is the model."