Cambridge University philosopher: We may never know if AI has consciousness.

As funding pours into AGI research, a Cambridge philosopher points out that humanity still cannot verify whether AI possesses consciousness, and urges a stance of agnosticism amid deregulation.

Table of Contents

  • Black Box Dilemma: Consciousness Research Has Yet to Make a Breakthrough
  • Consciousness and Sentience: Two Conflated Concepts
  • Emotional Projection and Resource Misallocation
  • Regulatory Vacuum and Responsibility Game

Global capital is flowing into AGI research at an unprecedented pace, with tech giants and venture capitalists racing to pour money into computing power, models, and talent in an all-out arms race. The market is betting that artificial general intelligence will reshape productivity and the structure of capital returns.

However, in a paper published earlier this month in the journal “Mind & Language,” University of Cambridge philosopher Tom McClelland argued that science currently offers almost no evidence that AI possesses consciousness, and may not for a long time to come. That, he suggests, should make people rethink how resources are allocated.

If we inadvertently create an AI that is conscious or sentient, we should be cautious and avoid causing it harm.

But treating what is essentially a toaster as a conscious being, while we inflict immense harm on genuinely conscious life in the real world, also seems like a huge mistake.

Black Box Dilemma: Consciousness Research Has Yet to Make a Breakthrough

McClelland points out that humanity has not even unraveled how the human brain turns neural activity into subjective experience, let alone how to analyze large language models built from trillions of parameters.

Functionalists hold that once computational complexity is sufficient, consciousness will naturally emerge; biological essentialists counter that consciousness is a product of carbon-based life. Both sides lack evidence, and the debate amounts to competing leaps of faith.

Consciousness and Sentience: Two Conflated Concepts

In marketing, companies often conflate “consciousness” with “sentience.” McClelland notes that consciousness refers only to processing and reacting to external information, whereas sentience involves the capacity for pleasure and pain, and it is sentience that confers moral standing.

He cautions that if AI is merely a computing system, the ethical risks are limited; but if future models turn out to be sentient, humanity will have to redraw the boundaries of responsibility.

The real key is sentience. If machines cannot feel pain, we need not actually worry about their welfare.

Emotional Projection and Resource Misallocation

To boost user engagement, many technology companies now give chatbots a human-like tone designed to invite emotional projection.

McClelland criticizes this practice as toxic, warning that society may misallocate resources because of it: hype about AI consciousness has ethical consequences for how research resources are allocated.

Increasing evidence suggests that shrimp may be able to feel pain, yet we kill approximately 500 billion shrimp each year.

Testing whether shrimp have consciousness is certainly difficult, but it's nowhere near as challenging as testing the consciousness of artificial intelligence…

Regulatory Vacuum and Responsibility Game

Amid deregulation, the question of whether “AI has a soul” can easily be spun by companies. When marketing demands it, a business can claim its model possesses self-awareness; when the system malfunctions and causes damage, it can insist the product is merely a tool in an attempt to dodge liability. McClelland calls on lawmakers to establish a unified testing framework that draws a clear line between risk and innovation.

Capital markets may be rolling out the red carpet for an “AGI awakening,” but until science can verify whether AI is sentient, openly admitting our ignorance and keeping a cautious distance may be the rational choice.
