Cambridge University philosopher: We may never know if AI has consciousness.
As funding pours into AGI research, a Cambridge scholar points out that humanity is still unable to verify whether AI possesses consciousness, and urges a stance of agnosticism before regulators relax oversight.
Global capital is flowing into AGI research at an unprecedented pace, with tech giants and venture capitalists racing to invest in computing power, models, and talent in an all-out arms race. The market is betting that artificial general intelligence will reshape productivity and the structure of capital returns.
However, in a paper published earlier this month in the journal “Mind & Language,” University of Cambridge philosopher Tom McClelland cautioned that science currently has almost no evidence that AI possesses consciousness, and may not for a long time to come; society should weigh how it allocates resources accordingly.
Black Box Dilemma: Consciousness Research Has Not Yet Broken Ground
McClelland pointed out that humanity has not even unraveled how the human brain transforms neural activity into subjective experience, let alone how to analyze large language models composed of trillions of parameters.
Functionalists believe that once computational complexity is sufficient, consciousness will naturally emerge; biological essentialists counter that consciousness is a product of carbon-based life. Both camps lack evidence, and the debate amounts to a leap of faith built on hypotheses.
Consciousness and Sentience: Two Conflated Concepts
In commercial promotion, companies often conflate “consciousness” with “sentience.” McClelland notes that consciousness refers only to the processing of and response to external information, while sentience involves the capacity for pleasure and pain, which bears directly on moral status.
He cautioned that if AI is merely a computational system, the ethical risks are limited; but if future models turn out to be sentient, humanity must reassess the boundaries of responsibility.
Emotional Projection and Resource Misallocation
To boost user engagement, many technology companies are giving chatbots a humanized tone designed to invite emotional projection.
McClelland warns that this fosters a toxic illusion, and that society may misallocate resources as a result: hype around AI consciousness carries ethical consequences for how research resources are distributed.
Regulatory Vacuum and Responsibility Game
Against a backdrop of deregulation, the question of whether “AI has a soul” is easily spun by companies. When it suits their marketing, firms can claim a model is self-aware; when the system malfunctions and causes harm, they can insist the product is merely a tool and try to dodge liability. McClelland calls on lawmakers to establish a unified testing framework that draws a clear line between risk and innovation.
The capital markets may be rolling out the red carpet for an “AGI awakening,” but until science can verify whether AI is sentient, openly admitting our ignorance and keeping a cautious distance may be the rational choice.