Question 1 (~40 mins): The readings present two framings of risk: the International Scientific Report focuses mainly on discrete risk categories (e.g., malicious use, malfunctions), while Kulveit et al. introduce "gradual disempowerment", the idea that incremental AI development could erode human control without any sudden breakthrough. Compare these risk frameworks and assess their policy implications. In your analysis, address:
In your opinion, which framework better captures the realistic pathways through which AI could pose serious risks to society?
How do mitigation strategies differ depending on whether you're addressing "discrete" risks versus gradual disempowerment?
Question 2 (~40 mins): The EU AI Act's Code of Practice identifies four key systemic risks from advanced AI: CBRN (chemical, biological, radiological, nuclear) risks, loss of control, cyber offence, and harmful manipulation. Choose one of these risks and explain why it poses a systemic threat to society and how it might be effectively mitigated.
Define your chosen risk using examples from the Code of Practice (Appendix 1.4) and explain why it qualifies as "systemic".
Evaluate potential solutions by discussing which mitigation approaches experts consider most effective (see the Uuk et al. reading), such as pre-deployment assessments, third-party audits, or incident reporting systems.