IN THIS LESSON

  • Link

    Examines the range of expert opinions on the timeline and implications of achieving artificial general intelligence (AGI), and provides an overview of scaling laws.

    If you’d like to read more about scaling laws, check out these two pieces from the optional reading list: Can AI Scaling Continue Through 2030? by Epoch AI and Scaling up: how increasing inputs has made artificial intelligence more capable by Our World in Data.
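
    To make "scaling law" concrete, here is a minimal Python sketch (our illustration, not taken from either reading) of the power-law form popularised by Hoffmann et al. (2022): predicted pretraining loss falls smoothly, with diminishing returns, as parameter count and training tokens grow. The constants are that paper's commonly cited fitted values and should be treated as illustrative assumptions, not forecasts.

    ```python
    # Illustrative power-law scaling curve in the style of Hoffmann et al. (2022).
    # The constants below are the commonly cited fitted values from that paper;
    # treat them as assumptions for illustration only.

    def predicted_loss(n_params: float, n_tokens: float) -> float:
        """Predicted pretraining loss for a model with n_params parameters
        trained on n_tokens tokens, under the assumed power-law form."""
        E = 1.69                  # irreducible loss
        A, alpha = 406.4, 0.34    # penalty for limited model size
        B, beta = 410.7, 0.28     # penalty for limited training data
        return E + A / n_params**alpha + B / n_tokens**beta

    # Each doubling of both model and data buys a smaller absolute gain:
    for scale in (1, 2, 4, 8):
        n, d = scale * 70e9, scale * 1.4e12   # baseline: 70B params, 1.4T tokens
        print(f"{scale}x scale -> predicted loss {predicted_loss(n, d):.3f}")
    ```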

  • Link

    Watch (or read the transcript of) Helen Toner’s talk at the Technical Innovations for AI Policy Conference, which covers how far the current paradigm can go, AI improving AI, and whether thinking of AI as a tool will keep making sense.

  • Link

    From each of the following sections, select and read one risk that you think is particularly important:

    • Executive Summary, risks section (p. 17-21)

    • 2.1 Risks from malicious use (p. 62-88)

    • 2.2 Risks from malfunctions (p. 88-110)

    • 2.3 Systemic risks (p. 110-149)

    • 2.4 Impact of open-weight general-purpose AI models on AI risks (p. 149-156)

    • Key information (p. 149) and Table 2.5 (p. 152)

    When designing governance mechanisms to reduce risks from advanced AI, it is important to be cognisant of all the fields and risk types that AI could affect. The report summarises the scientific evidence on three core questions: What can general-purpose AI do? What are the risks associated with general-purpose AI? And what mitigation techniques exist against these risks?

    This is the first International AI Safety Report, authored by 96 independent experts who had full discretion over its content. A good supplement to this reading, listed in the optional readings below, is An Overview of Catastrophic AI Risks (2023).

  • Link

    Read: 

    • Executive Summary (p. 1-2)

    • Introduction (p. 2-3) 

    Then, select one of these three sections to read: 

    • Section 2: Misaligned Economy (p. 3-6) 

    • Section 3: Misaligned Culture (p. 6-10)

    • Section 4: Misaligned States (p. 10-13)

    The paper introduces the concept of "gradual disempowerment", arguing that incremental AI advances, even without sudden breakthroughs or a coordinated AI takeover, could erode human control over key societal systems such as the economy, culture, and governance.

  • Link

    Read only: 

    • Executive Summary (p. 3-8)

    • Table 1: Risk mitigation measures, including their descriptions and sources (p. 10-15)

    The systemic risks posed by general-purpose AI models are a growing concern, yet the effectiveness of mitigations remains underexplored. Previous research has proposed risk-mitigation frameworks, but has left gaps in our understanding of how effective these measures are perceived to be at mitigating systemic risks.

    This piece, incorporating qualitative contributions from experts, offers concrete policy solutions for systemic risks that could be implemented through EU regulatory frameworks.

  • Link

    The first article explains the EU AI Act's Code of Practice for general-purpose AI (GPAI) model providers, which serves as a non-binding but crucial interim compliance framework bridging the gap between GPAI obligations taking effect in August 2025 and formal European standards arriving in 2027 (or later).

  • Link

    Download the Safety and Security chapter and focus on reading Commitment 2 (Systemic risk identification) and Appendix 1.

    Since the Code’s text is quite technical, read only the required sections and skim the rest of the chapter for a general overview.