BEGIN:VCALENDAR
METHOD:REQUEST
PRODID:Microsoft Exchange Server 2010
VERSION:2.0
BEGIN:VTIMEZONE
TZID:GMT Standard Time
BEGIN:STANDARD
DTSTART:16010101T020000
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
RRULE:FREQ=YEARLY;INTERVAL=1;BYDAY=-1SU;BYMONTH=10
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:16010101T010000
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
RRULE:FREQ=YEARLY;INTERVAL=1;BYDAY=-1SU;BYMONTH=3
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
ORGANIZER;CN=Daniele Quercia:mailto:[email protected]
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN=smartdater
 [email protected]:mailto:[email protected]
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN=members@sm
 artdata.polito.it:mailto:[email protected]
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN=MINDS:mail
 to:[email protected]
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN=HCI:mailto
 :[email protected]
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN=nexa@serve
 r-nexa.polito.it:mailto:[email protected]
DESCRIPTION;LANGUAGE=en-US:Systemic Risks from General-Purpose AI\n\nRisto 
 Uuk\, Future of Life Institute<https://futureoflife.org/>\n\n\nJoin the m
 eeting<https://teams.microsoft.com/l/meetup-join/19%3ameeting_OGIxYWZjY2Qt
 YTFlYi00MTk0LThlMzUtMjU1ZGE4NDJiYmFj%40thread.v2/0?context=%7b%22Tid%22%3a
 %225d471751-9675-428d-917b-70f44f9630b0%22%2c%22Oid%22%3a%221e405340-2229-
 4554-b37f-b193c118d70e%22%7d>\n\n\nFormat: 35 min talk + 25 min Q&A\n\n\n\
 n\n\nThis talk will first give an overview of systemic risks from general-
 purpose AI under the EU AI Act\, then present early findings from research
  on a taxonomy of systemic risks\, and finish with ideas on how to mitigat
 e such risks. Using a systematic review of 1\,781 scholarly works\, we sel
 ected 86 papers that yielded 13 risk categories and 50 contributing source
 s. Systemic risks – defined by the EU AI Act as large‑scale threats to s
 ocieties or economies – span environmental damage\, structural discrimi
 nation\, governance failures\, and loss of control. Prominent drivers incl
 ude knowledge gaps\, difficulty recognizing harm\, and the unpredictable e
 volution of AI systems. To assess mitigation measures\, we surveyed 76 exp
 erts from AI safety\, critical infrastructure\, democratic governance\, CB
 RN\, and bias disciplines. From 27 literature‑derived mitigation measure
 s\, three emerged as both highly effective (expert agreement > 60%) and t
 echnically feasible: (1) safety incident reporting and security info
 rmation sharing\, (2) third‑party pre‑deployment model audits\, and (3
 ) pre‑deployment risk assessments. Experts emphasized external scrutiny\
 , proactive evaluation\, and transparency as core principles.\n\n\n\nRisto
 Uuk is the Head of EU Policy and Research at the Future of Life Institut
 e<https://futureoflife.org/> in Brussels. He contributed to the collaborat
 ive\, evidence‑driven process that led to the inclusion of general‑pur
 pose AI and systemic‑risk provisions in the EU AI Act – the world
 ’s first comprehensive AI‑governance framework\, designed to foster re
 sponsible innovation in Europe. He’s currently co-authoring a book\, The
  AI Safety Endgame\, with Professor Lode Lauwaert (Wiley\, forthcoming) 
 – a strategic guide to long-term AI safety. As a PhD Researcher at KU Le
 uven\, Risto focuses on systemic risk assessment and mitigation for genera
 l-purpose AI systems. He recently served as a Visiting Researcher at Stanf
 ord’s Digital Economy Lab\, examining the economic and societal implicat
 ions of advanced AI. He founded and leads the biweekly EU AI Act Newslette
 r<https://artificialintelligenceact.substack.com/> with nearly 50\,000 sub
 scribers\, and created a website<https://artificialintelligenceact.eu/> th
 at ranks among the top search results for the AI Act globally. Previously\
 , Risto worked for the World Economic Forum on a project about positive AI
  economic futures<https://www3.weforum.org/docs/WEF_Positive_AI_Economic_F
 utures_2021.pdf> together with Stuart Russell\, Daniel Susskind and others
 \, and did research for the European Commission on trustworthy AI<https://
 ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-a
 i>. He holds a master’s degree in Philosophy and Public Policy from the 
 London School of Economics and a bachelor’s degree from Tallinn Universi
 ty. Risto was selected as a Fellow of the International Strategy Forum\, a
 n initiative by former Google CEO Eric Schmidt (in partnership with the Europe
 an Council on Foreign Relations)\, and received the Global Priorities Fell
 owship from the Forethought Foundation for Global Priorities Research.\n\n
 \n\nSubscribe to future talk announcements: Anyone outside Bell Labs can r
 eceive talk announcements by subscribing to the mailing list. To subscribe
 \, send an empty email with the subject line "Subscribe RAI" to daniele.
 [email protected]\n\n\n\n\n\n\n\n
UID:040000008200E00074C5B7101A82E00800000000E9402E204C64DC01000000000000000
 01000000099171A426D0DDE41979E481909A08651
SUMMARY;LANGUAGE=en-US:[Responsible AI] Systemic Risks from General-Purpose
  AI\, Risto Uuk\, Future of Life Institute
DTSTART;TZID=GMT Standard Time:20251208T153000
DTEND;TZID=GMT Standard Time:20251208T163000
CLASS:PUBLIC
PRIORITY:5
DTSTAMP:20251203T120115Z
TRANSP:OPAQUE
STATUS:CONFIRMED
SEQUENCE:0
X-MICROSOFT-CDO-APPT-SEQUENCE:0
X-MICROSOFT-CDO-OWNERAPPTID:2124160489
X-MICROSOFT-CDO-BUSYSTATUS:TENTATIVE
X-MICROSOFT-CDO-INTENDEDSTATUS:BUSY
X-MICROSOFT-CDO-ALLDAYEVENT:FALSE
X-MICROSOFT-CDO-IMPORTANCE:1
X-MICROSOFT-CDO-INSTTYPE:0
X-MICROSOFT-DONOTFORWARDMEETING:FALSE
X-MICROSOFT-DISALLOW-COUNTER:FALSE
X-MICROSOFT-REQUESTEDATTENDANCEMODE:DEFAULT
X-MICROSOFT-ISRESPONSEREQUESTED:TRUE
BEGIN:VALARM
DESCRIPTION:REMINDER
TRIGGER;RELATED=START:-PT15M
ACTION:DISPLAY
END:VALARM
END:VEVENT
END:VCALENDAR
