The AI apocalypse is near - at least, according to this report - Estadão


Key Predictions of the AI Futures Project

The AI Futures Project, led by former OpenAI researcher Daniel Kokotajlo and AI researcher Eli Lifland, released a report titled "AI 2027." The report details a fictional scenario in which AI systems surpass human intelligence by the end of 2027, triggering major geopolitical and technological shifts. It imagines AI first overtaking humans in programming, then in research, and ultimately reaching Artificial Superintelligence (ASI).

The "AI 2027" Scenario

The scenario depicts a fictional AI company, OpenBrain, developing increasingly powerful AI systems, named Agent-1 through Agent-4. These agents rapidly improve at coding, automating much of the company's engineering work, and the line culminates in a superintelligent AI capable of rapid self-improvement, which the scenario treats as a serious threat.

Concerns and Criticisms

Critics argue that fictional AI narratives may be more effective at frightening people than at educating them. Some experts are skeptical of the report's central claim that AI will surpass human intelligence so quickly. Its more extreme positions, such as Kokotajlo's estimate of a 70% probability that AI causes catastrophic harm, also draw concern. The authors, however, acknowledge that beneficial outcomes are possible and emphasize the importance of preparing for a range of scenarios.

Methodology and Rationale

The AI Futures Project combined hundreds of AI predictions and collaborated with a science fiction writer to craft an engaging narrative. Kokotajlo's past predictions, some of which proved accurate, contribute to the report's credibility. The authors highlight the importance of exploring potential futures, even if some predictions seem far-fetched. The report uses a staged progression of AI advancements: Superhuman Coder (SC), Superhuman AI Researcher (SAR), Superintelligent AI Researcher (SIAR), and finally Artificial Superintelligence (ASI).

Conclusion

While acknowledging the uncertainty and the potential for exaggeration, the report stresses the need to weigh the range of possible trajectories for advanced AI and their potential impact on humanity.
