International report calls for collaboration on AI, highlighting both benefits and risks

A landmark report claims to shed light on the double-edged sword of advanced artificial intelligence (AI). Backed by over 30 nations, the International Scientific Report on the Safety of Advanced AI paints a picture of a technology brimming with potential benefits, but also fraught with risks if safety isn't prioritised. This first iteration of the report fulfils a key commitment made at the AI Safety Summit during the historic Bletchley Park discussions and in the subsequent Bletchley Declaration.

Building on an initial “State of Science” report launched last November, it brings together the expertise of a diverse global team of specialists, including an Expert Advisory Panel drawn from 30 leading nations as well as representatives from the UN and EU. Their aim: to provide policymakers worldwide with a unified source of information for shaping their approaches to AI safety.

The report acknowledges the immense potential of advanced AI to drive positive change, highlighting benefits already witnessed in healthcare, drug discovery, and climate change solutions. However, it cautions that, like any powerful technology, advanced AI could lead to unintended consequences as it develops. Malicious actors could exploit the technology for large-scale disinformation campaigns, fraud, and scams. Even more concerning are potential future risks such as widespread job market disruption, economic imbalances, and power inequalities.

At the same time, the report reveals a lack of complete consensus among experts on several key issues, including the current capabilities of AI and its likely trajectory. Opinions diverge on the likelihood of extreme risks such as mass unemployment, AI-enabled terrorism, or a complete loss of control over the technology. This uncertainty highlights the need for further research and underscores the report’s central message: international collaboration on AI safety is paramount.

“AI is the defining technology challenge of our time,” said Secretary of State for Science, Innovation and Technology Michelle Donelan. She emphasised the international nature of the challenge, acknowledging the importance of a shared, evidence-based approach to understanding the risks. The report is seen as a vital tool for achieving this, building on the momentum of the Bletchley Park talks. Donelan further highlighted the role the report will play in upcoming discussions at the AI Seoul Summit, signifying its potential to shape the global conversation around AI safety.

This interim publication focuses specifically on advanced “general-purpose” AI, encompassing cutting-edge systems capable of generating text and images and of making automated decisions. A final report, anticipated for release in time for the AI Action Summit hosted by France, will incorporate additional perspectives from industry, civil society, and the broader AI community. This feedback loop is intended to keep the report relevant as the technology evolves, reflecting the latest research and expanding on critical areas to provide a comprehensive view of the risks.

Professor Yoshua Bengio, Chair of the International Scientific Report on the Safety of Advanced AI, emphasised the collaborative nature of the project. He highlighted the work undertaken over the past six months by a vast network of scientists and panel members from 30 nations, the EU, and the UN, which will now inform policy discussions at the AI Seoul Summit and beyond. Professor Bengio stressed the dual nature of AI: its potential for positive change and the need for responsible development and regulation. He called for continued collaboration among governments, academia, and society at large, emphasising the importance of working together to harness the benefits of AI safely and responsibly.

The report garnered praise from leading figures in the AI safety field. Professor Andrew Yao of Tsinghua University commended the report’s timeliness and authority, while Marietje Schaake, International Policy Director at the Stanford University Cyber Policy Center, emphasised the need for democratic governance of AI based on independent research. Nuria Oliver, Director of ELLIS Alicante, the Institute of Humanity-centric AI, lauded the report’s comprehensiveness and collaborative nature, highlighting its potential to shape the development of secure and beneficial AI for all.

The report also delves into the rapid pace of development, acknowledging significant progress while highlighting ongoing disagreements about current capabilities and uncertainty over whether this rapid growth can be sustained. The UK, a self-proclaimed leader in AI safety through its establishment of the world’s first state-backed AI Safety Institute, is positioned to play a key role in shaping the technology’s future. The upcoming AI Seoul Summit presents a crucial opportunity to advance the discussions sparked by the report and cement AI safety’s place on the international agenda. With the final report expected ahead of the next round of discussions in France, the signatories believe the International Scientific Report on the Safety of Advanced AI stands as a vital first step in fostering global collaboration for a safer and more prosperous future.