Envisioning a Global Regime Complex to Govern Artificial Intelligence by Emma Klein and Stewart Patrick (2024)
Read:
Introduction (p. 1-3)
A Regime Complex for AI (p. 5-8)
The growing risks posed by AI have led to calls for stronger global governance. The authors argue that strengthening the existing weak 'regime complex' by improving coordination and procedural legitimacy is the most desirable and realistic pathway to effective global AI governance.
Understanding the First Wave of AI Safety Institutes: Characteristics, Functions, and Challenges by Araujo et al. (2024)
This policy brief analyses the "first wave" of AI Safety Institutes (AISIs) established by Japan, the UK, and the US since November 2023, identifying their three core functions: research, standards development, and cooperation.
Bias Baked In by Corporate Europe Observatory (2025)
This investigative report exposes how Big Tech corporations dominate the EU's AI standards-setting process and how this ultimately allows industry to write the technical rules that will determine compliance with fundamental rights protections under the EU AI Act.
The Role of AI Safety Institutes in Contributing to International Standards for Frontier AI Safety by Kristina Fort (2024)
Kristina is a Talos alumna!
AI Safety Institutes (AISIs) are uniquely positioned to contribute to international AI safety standards due to their technical expertise, government backing, and convening power.
It's Too Hard for Small and Medium-Sized Businesses to Comply With the EU AI Act: Here's What to Do by Gideon Abako (2025)
The EU AI Act creates disproportionate compliance burdens for small and medium-sized businesses compared to large enterprises, and the author proposes policy solutions to ease these burdens.
Governments Need to Protect AI Industry Whistleblowers: Here's How by Michelle Nie (2025)
AI industry whistleblowers face unique barriers, and the article proposes ways governments could expand whistleblower protections.
A Framework for Information-Sharing Among AI Safety and Security Institutes by Lara Thurnherr (2025)
This article proposes a framework for information-sharing among AI Safety Institutes (AISIs), arguing that the UK and US AISIs should strategically share institutional guidance and safety research while avoiding sharing sensitive information.
AI Companies' Safety Research Leaves Important Gaps. Governments and Philanthropists Should Fill Them. by Oscar Delaney and Oliver Guest (2025)
This article analyses AI safety papers from OpenAI, Google DeepMind, and Anthropic, finding that they neglect crucial safety approaches and leave gaps that governments and philanthropists could fill.