2026 International AI Safety Report Charts Rapid Changes and Emerging Risks
MONTREAL, Feb. 3, 2026 /PRNewswire/ -- The 2026 International AI Safety Report was released today, providing an up-to-date, internationally shared, science-based assessment of general-purpose AI capabilities, emerging risks, and the current state of risk management and safeguards.
Chaired by Turing Award-winner Yoshua Bengio, this second edition of the International AI Safety Report brings together over 100 international experts and is backed by an Expert Advisory Panel with nominees from more than 30 countries and international organisations, including the EU, OECD and UN. The Report's findings will inform discussions at the AI Impact Summit hosted by India later this month.
Key highlights of the Report include:
- General-purpose AI capabilities have continued to improve rapidly, especially in mathematics, coding, and autonomous operation. In 2025, leading AI systems achieved gold-medal performance on International Mathematical Olympiad questions, exceeded PhD-level expert performance on science benchmarks, and became able to autonomously complete some software engineering tasks that would take a human programmer multiple hours. Performance nevertheless remains "jagged," with systems still failing at some seemingly simple tasks.
- AI adoption has been swift, though uneven globally. AI has been adopted faster than previous technologies like the personal computer, with at least 700 million people now using leading AI systems weekly. In some countries, over half of the population uses AI, though across much of Africa, Asia, and Latin America, estimated adoption rates remain below 10%.
- Incidents related to deepfakes are on the rise. AI deepfakes are increasingly used for fraud and scams. AI-generated non-consensual intimate imagery, which disproportionately affects women and girls, is also increasingly common. For example, one study found that 19 out of 20 popular "nudify" apps specialise in the simulated undressing of women.
- Biological misuse concerns have prompted stronger safeguards for some leading models. In 2025, multiple AI companies released new models with heightened safeguards after pre-deployment testing could not rule out the possibility that systems could meaningfully help novices develop biological weapons.
- Criminals and other malicious actors are actively using general-purpose AI in cyberattacks. AI systems can generate harmful code and discover vulnerabilities in software that criminals can exploit. In 2025, an AI agent placed in the top 5% of teams in a major cybersecurity competition. Underground marketplaces now sell pre-packaged AI tools that lower the skill threshold for attacks.
- Many safeguards are improving, but current risk management techniques remain fallible. While certain types of failures, like "hallucinations," have become less common, some models are now capable of distinguishing between evaluation and deployment contexts and can alter their behaviour accordingly, creating new challenges around evaluation and safety testing.
Chair of the Report Yoshua Bengio, Full Professor at Université de Montréal, Scientific Director of LawZero and Scientific Advisor of Mila - Quebec AI Institute, said:
"Since the release of the inaugural International AI Safety Report a year ago, we have seen significant leaps in model capabilities, but also in their potential risks, and the gap between the pace of technological advancement and our ability to implement effective safeguards remains a critical challenge. The Report is intended to provide decision-makers with the rigorous evidence needed to steer AI toward a future that is safe, secure, and beneficial for all. With its second edition, we are updating and strengthening our shared, science-based understanding of the rapid evolution of frontier AI on a global scale. "
UK Minister for AI, Kanishka Narayan, said:
"Trust and confidence in AI are crucial to unlocking its full potential. It's a technology which will deliver better public services, new job opportunities and innovations that will change lives. But we are also determined to keep people safe as this technology develops. Responsible AI development is a shared priority, and we can only shape its future to deliver positive change if we work together. This report helps us do exactly that – bringing experts from around the world to ensure we have a strong scientific evidence-base to take the right decisions today which will lay the foundation for a brighter and safer future."
About The International AI Safety Report
The International AI Safety Report is a synthesis of the evidence on the capabilities and risks of advanced AI systems. It is designed to support informed policymaking globally by providing an evidence base for decision-makers. Chaired by Prof. Yoshua Bengio and authored by a diverse group of over 100 independent experts, the Report is backed by an Expert Advisory Panel composed of nominated representatives from over 30 countries and international organisations including the EU, OECD and UN. While acknowledging AI's immense potential benefits, the Report's focus is on identifying risks and evaluating mitigation strategies to ensure AI is developed and used safely for the benefit of all. The Report was commissioned by the UK Government, with a Secretariat based in the UK AI Security Institute providing operational support.
SOURCE Office of the Chair of the International AI Safety Report