AI development timelines and scaling laws

  • Shrinking AGI timelines: a review of expert forecasts by Benjamin Todd - 80,000 Hours (2025)

    • The article reviews forecasts from five expert groups about when AGI might arrive, finding that all groups have shortened their estimates in recent years, with AGI before 2030 now considered a realistic possibility by many experts.

  • The case for AGI by 2030 - 80,000 Hours (2025)

    • This comprehensive article argues that AGI could arrive by 2030 due to four continuing drivers of AI progress: larger base models through increased computational power, teaching models to reason using reinforcement learning, increasing models' thinking time with test-time compute, and building better agent scaffolding for multi-step tasks.

  • AI 2025 Forecasts - May Update - AI Digest

    • This article reports on a survey of 421 AI experts and forecasters who predicted key AI progress indicators for 2025, finding that many of the forecasted levels had already been exceeded by mid-year, suggesting AI capabilities are advancing faster than many experts anticipated.

  • Machine Learning Trends | Epoch AI (last updated on Jan 13, 2025)

    • Epoch’s dashboard visualises key machine learning trends, such as growth in training compute, dataset sizes, and hardware performance, providing grounded data on how quickly the field is scaling and where current trends may lead.

  • Can AI Scaling Continue Through 2030? | Epoch AI (2024)

    • Epoch AI examines whether the current rapid scaling of AI training compute (roughly 4x per year) can continue through 2030 by investigating four potential bottlenecks: power constraints, chip manufacturing capacity, data scarcity, and latency limitations. A rough sense of what sustained 4x-per-year growth implies is sketched at the end of this list.

  • Scaling up: how increasing inputs has made artificial intelligence more capable - Our World in Data (2025) (10 minutes)

    • This article argues that recent advances in AI have come less from scientific breakthroughs and more from simply scaling up existing systems by using more computational power, larger datasets, and bigger models with more parameters.

  • Chapter 1, ‘From GPT-4 to AGI: Counting the OOMs’, in Situational Awareness by Leopold Aschenbrenner (2024)

    • The most detailed, up-to-date account of how we got from GPT-2 to GPT-4, and of how extrapolating those same trends points to AGI by 2027.

  • How fast is AI improving?, A new Moore's Law for AI agents, and Timeline of AI forecasts - AI Digest

    • Short explainers covering AI trends up to 2023 and visualising evidence of exponential gains in AI capabilities, along with projections of where these trends could take us.
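
As a rough illustration of the arithmetic behind the scaling discussions above (the sketch referenced in the Epoch AI entry), the snippet below extrapolates training compute growing at roughly 4x per year and converts the result into orders of magnitude (OOMs), the framing used in Aschenbrenner's chapter. The 4x rate is taken from the Epoch AI article; the start year, end year, and baseline are illustrative assumptions rather than figures from the sources.

```python
import math

# Illustrative assumptions (not figures from the cited sources):
GROWTH_PER_YEAR = 4.0   # ~4x/year growth in frontier training compute, per the Epoch AI article
START_YEAR = 2024       # assumed baseline year
END_YEAR = 2030         # horizon discussed in the Epoch AI article

years = END_YEAR - START_YEAR
total_multiplier = GROWTH_PER_YEAR ** years          # cumulative growth factor
ooms = math.log10(total_multiplier)                  # orders of magnitude, as in "Counting the OOMs"

print(f"{years} years at {GROWTH_PER_YEAR:.0f}x/year -> ~{total_multiplier:,.0f}x more training compute")
print(f"That is roughly {ooms:.1f} orders of magnitude (OOMs).")
```

At that assumed rate, six years of scaling corresponds to roughly 3.6 OOMs of additional training compute; whether the bottlenecks Epoch identifies (power, chips, data, latency) allow such growth is exactly the question their article examines.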

Risks of advanced AI

  • Compare how the specific risks you chose to read about are framed in the international scientific report and in this overview of catastrophic AI risks (2023) – are there any differences?

    • This paper provides an overview of the main sources of catastrophic AI risks, organized into four categories: malicious use, AI race dynamics, organizational risks, and rogue AIs. For each category of risk, the authors describe specific hazards, present illustrative stories, envision ideal scenarios, and propose practical suggestions for mitigating these dangers.

  • Examining Popular Arguments Against AI Existential Risk: A Philosophical Analysis (2025)

    • This paper reconstructs and evaluates three common arguments against the existential risk perspective: the Distraction Argument, the Argument from Human Frailty, and the Checkpoints for Intervention Argument. In doing so, it aims to provide a foundation for more balanced academic discourse and further research on AI.

  • Two types of AI existential risk: decisive and accumulative by Atoosa Kasirzadeh | Philosophical Studies

    • This philosophical paper contrasts two pathways to AI existential risk: the conventional "decisive" view, in which superintelligent AI directly causes a sudden catastrophic event, and a proposed "accumulative" view, in which multiple smaller AI-induced social risks compound over time, gradually weakening societal resilience until a triggering event causes irreversible collapse.

  • A Taxonomy of Systemic Risks from General-Purpose AI (2024)

    • Starting from the EU AI Act's definition of systemic risk, this paper develops a more detailed taxonomy of systemic risks from general-purpose AI, such as rapid labor market disruption and inequality, concentration of power in private companies, and mass surveillance contrary to the public interest, among others.

Risk mitigation approaches

  • AI Security Institute Research Agenda

    • The UK AI Security Institute (AISI) outlines its research priorities, its approach to developing technical solutions to the most pressing AI risks, and the key problems that must be addressed as AI capabilities advance.

  • The Singapore Consensus on Global AI Safety Research Priorities

    • Inspired by the 2025 International AI Safety Report, this document groups technical AI safety research topics into three broad areas: risk assessment to inform subsequent development and deployment decisions, technical methods applied during the system development phase, and tools for control after a system has been deployed.