AI Seoul Summit Rundown

Whilst the AI Seoul Summit (21-22 May) was overshadowed by the General Election announcement, a number of major international commitments were made. This blog examines the key points from the second international AI safety summit. Whilst we patiently await the release of Labour’s hotly anticipated AI plan, it is important to understand how the UK fits into the international landscape of AI governance. The Conservatives look set to continue to champion their “pro-innovation approach to regulation”, and appear equally wedded to their ambition for the UK to be a “world leader” in AI policy, as reflected in the commitments made in Seoul. This raises the question of how a possible Labour Government would reconcile its proposed AI policy with the international commitments made in the final days of the outgoing Government.

Monday 20 May  

First Overseas AI Safety Institute

Ahead of the first day of the AI Seoul Summit, the Department for Science, Innovation and Technology (DSIT) announced the first overseas AI Safety Institute, which is due to open this summer in San Francisco.

It is hoped that this “pivotal step” will:

● Allow the UK to tap into the tech talent available in the Bay Area.  

● Engage with the world’s largest AI labs in London and San Francisco.  

● Cement relationships with the US to advance AI safety for the public interest.

This will be a complementary branch of the London HQ, which will continue to “scale and acquire the necessary expertise to assess the risks of frontier AI systems”.  

The Institute will both further the strategic partnership with the US and facilitate the sharing of research and the joint evaluation of AI models, informing AI safety policy across the world.

AI Safety Institute’s Fourth Progress Report  

The announcement of the San Francisco AI Safety Institute coincided with the publication of the AI Safety Institute’s Fourth Progress Report, which provided the results from safety testing five publicly available advanced AI models.  

The Institute assessed AI models against four key risk areas, and found that:

● Several models completed cyber security challenges but struggled with more advanced ones.

● Several models demonstrated knowledge of chemistry and biology comparable to PhD level.

● All tested models remain highly vulnerable to basic “jailbreaks”, and some would produce harmful outputs even without dedicated attempts to circumvent safeguards.  

● Tested models were unable to complete more complex, time-consuming tasks without humans overseeing them.

Tuesday 21 May – Day One of the Summit

International Network of AI Safety Institutes

During the leaders’ session of the AI Seoul Summit, 10 countries and the EU agreed to work together to launch an international network of AI Safety Institutes.

The ‘Seoul Statement of Intent toward International Cooperation on AI Safety Science’, annexed to the Seoul Declaration, committed the nations to working together to launch an international network to accelerate the advancement of the science of AI safety. The network will bring together the publicly backed institutions that have been created since the inaugural Bletchley Park AI Safety Summit in November 2023.

Frontier AI Safety Commitments

With the commencement of the AI Seoul Summit, the Department for Science, Innovation and Technology announced that 16 international tech companies had signed up to the new ‘Frontier AI Safety Commitments’.

The signatories, who represent “the most significant AI technology companies around the world”, are: Amazon, Anthropic, Cohere, Google / Google DeepMind, G42, IBM, Inflection AI, Meta, Microsoft, Mistral AI, Naver, OpenAI, Samsung Electronics, Technology Innovation Institute, xAI, and Zhipu.ai.

Each signatory will publish a safety framework setting out how it will measure the risks of its frontier AI models. These frameworks will:

● Establish how risks will be measured.

● Set out when severe risks, unless adequately mitigated, would be “deemed intolerable”.

● Describe what companies will do to ensure thresholds are not surpassed.

In severe circumstances, companies have committed to “not develop or deploy a model or system at all” if mitigations cannot keep risks below the thresholds. To define these thresholds, companies will take input from “trusted actors”, including home governments.

Wednesday 22 May – Day Two of the Summit

£8.5m Funding for AI Safety Testing Research

On day two of the Summit, the Science, Innovation and Technology Secretary, Michelle Donelan, announced the introduction of Government grants for AI safety research. These grants will be offered to researchers exploring how to protect society from AI risks and harness AI’s benefits. DSIT confirmed the “most promising” proposals will be developed into longer-term projects and could receive further funding.

The programme of grants will be conducted through the AI Safety Institute, led by AI safety researcher Shahar Avin and the Institute’s Research Director, Christopher Summerfield. It will be delivered in partnership with UK Research and Innovation and The Alan Turing Institute, with the aim of collaborating with other international AI Safety Institutes.

Whilst applicants must be based in the UK, they are encouraged to collaborate internationally.

It is intended that these Systemic AI Safety Fast Grants will expand the remit of the AI Safety Institute to include ‘systemic AI safety’, which considers how to mitigate the impacts of AI at a societal level and how institutions, systems and infrastructure can adapt to the transformations this technology has brought about.

Seoul Ministerial Statement

The Seoul Ministerial Statement for Advancing AI Safety, Innovation and Inclusivity was agreed at the AI Seoul Summit Ministers’ Session, and was signed by Ministers from 27 countries and a representative of the European Union.

Michelle Donelan at AI Seoul Summit

Secretary of State for Science, Innovation and Technology, Michelle Donelan, delivered a speech on day two of the AI Seoul Summit.  

Recalling the comparison she drew at the UK AI Safety Summit (November 2023) with the swift international collaboration on the Montreal Protocol, Donelan praised the “even faster” action on AI safety. She pointed to the “remarkable strides” made by the UK AI Safety Institute, and dubbed the previous day’s agreement to bring these Institutes into a global network the “Bletchley effect” in action. She also praised the historic ‘Frontier AI Safety Commitments’ and the international AI safety report published the previous week.

Donelan’s key message was: “we should not rest on our laurels”. As the pace of AI development accelerates, she urged delegates to match it with speed of action, to “grip the risks and seize the opportunities”.

She assured delegates that in Phase 2, from Seoul to France, they would push the boundaries of the “nascent science of frontier AI testing and evaluation”, as well as focusing on risk mitigation beyond the models themselves.

Systemic safety involves embedding AI safety mechanisms into society’s systems, not just into AI systems themselves. She stated the UK had already begun investing in systemic safety research and was “eager” to deepen global collaboration in this area.

Thursday 23 May

Saqib Bhatti Statement to Parliament

The day after the conclusion of the Summit, Saqib Bhatti, Parliamentary Under-Secretary of State for Science, Innovation and Technology, delivered a statement to Parliament.  

Bhatti began by highlighting the achievements since the UK AI Safety Summit, arguing that the UK has led by example with impressive progress on AI safety, both domestically and bilaterally. The AI Safety Institute has built up its capabilities for state-of-the-art safety testing: it has conducted its first pre-deployment testing for potentially harmful capabilities on advanced AI systems, set out its approach to evaluations, and published its first full results.

Earlier in the week, the Secretary of State had announced the launch of an AI Safety Institute office in San Francisco, whilst DSIT announced partnerships with France, Singapore, and Canada. On Tuesday, 16 leading companies signed the Frontier AI Safety Commitments, “pledging to improve AI safety and to refrain from releasing new models if the risks are too high”. Subsequently, on Wednesday, Ministers from 28 nations, the EU and the UN came together for discussions on AI safety, which culminated in the signing of the Seoul Ministerial Statement.

On 17 May, the week before the Summit, the interim ‘International Scientific Report on the Safety of Advanced AI’ was published. Bhatti outlined the report’s significance and findings:

● It unites “a diverse global team of AI experts” to bring together the best scientific research on AI capabilities and risks.

● It provides policymakers around the world with a single source of information to inform their approaches to AI safety.

● It recognises that advanced AI can improve wellbeing and prosperity and enable new scientific breakthroughs, whilst acknowledging that current and future developments could cause harm. Future advances in advanced AI could pose wider risks, such as labour market disruption, economic power imbalances and inequalities.

● All present methods for assessing the risk of advanced AI models have limitations.

● There is a lack of universal agreement amongst AI experts on a range of topics, including “the state of current AI capabilities and how these could evolve over time”.

At the AI Seoul Summit, countries discussed the importance of supporting AI innovation and inclusivity. The delegates recognised the transformative benefits of AI for the public sector, and committed to supporting an environment which nurtures easy access to AI-related resources for SMEs, start-ups, and academia. They also welcomed the potential of AI to deliver “significant advances” on the world’s challenges; Bhatti cited the examples of climate change, global health, and food and energy security.

Whilst the AI Seoul Summit was “an important step forward”, he reminded the House that “we are only just getting started” as the “rapid pace of AI development leaves us no time to rest on our laurels”.  

The UK is ready to work with France on the next summit to continue “the legacy that we began in Bletchley Park, and continued in Seoul”. Bhatti concluded by quoting the Secretary of State in Seoul: “it is our responsibility to ensure that human wisdom keeps pace with human knowledge”.

A matter of hours after Bhatti delivered his statement, the General Election was called, stalling policy progress until after the July election. The Artificial Intelligence (Regulation) Bill did not pass in the wash-up period, meaning stakeholders will be focused on the next King’s Speech, scheduled for 17 July, to see whether the next Government makes any headline commitments to advancing AI governance.

It feels apt to conclude this blog post with the words of the outgoing Chair of the Science, Innovation and Technology Committee, Greg Clark, on the publication of the Committee’s report on AI governance:

“The current Government has been active and forward-looking on AI and has amassed a talented group of expert advisers in Whitehall. Important challenges await the next administration … to attain the transformational benefits of AI while safeguarding hard-won public protections.”

Please download the full rundown here:

Download PDF