Superintelligent AI could arrive within seven years; OpenAI plans to invest heavily to keep it from going out of control
**Source:** Financial Association
Edited by Huang Junzhi
ChatGPT developer OpenAI said on Wednesday (the 5th) that it plans to invest significant resources and create a new research team to ensure its artificial intelligence (AI) remains safe for humans, with the ultimate goal of having AI supervise itself.
“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” they wrote.
20% of computing power to be devoted to the AI control problem
They predict that superintelligent AI (that is, systems smarter than humans) could arrive within this decade (by 2030), and that controlling it will require better techniques than those available today. That is why breakthroughs are needed in so-called alignment research, which focuses on ensuring that artificial intelligence remains beneficial to humans.
According to the company, with Microsoft’s backing, **OpenAI will devote 20% of its computing power over the next four years to solving the AI control problem.** In addition, it is forming a new team to organize this work, called the Superalignment team.
Experts question the plan
However, the move drew skepticism from experts as soon as it was announced. Connor Leahy, an AI safety advocate, said OpenAI’s plan is fundamentally flawed because a rudimentary version of AI that reaches “human level” could spin out of control and wreak havoc before it could be used to solve AI safety problems.
“You have to solve the alignment problem before you build human-level intelligence; otherwise, by default, you won’t be able to control it. I personally don’t think this is a particularly good or safe plan,” he said in an interview.
The potential dangers of AI have long been a top concern for AI researchers and the public. In April, a group of AI industry leaders and experts signed an open letter calling for a moratorium on training AI systems more powerful than OpenAI’s new GPT-4 model for at least six months, citing their potential risks to society and humanity.
A recent poll found that more than two-thirds of Americans are concerned about the possible negative effects of AI, and 61% believe it could threaten human civilization.