The agreement will see the UK's new AI Safety Institute and its US counterpart collaborate on a framework to test the safety of large language models.

The US and the UK have signed an agreement to test the safety of large language models (LLMs) that underpin AI systems. The agreement, a memorandum of understanding (MoU) signed in Washington by US Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan on Monday, will see both countries align their scientific approaches and work closely to develop suites of evaluations for AI models, systems, and agents.

The work of developing frameworks to test the safety of LLMs, such as those developed by OpenAI and Google, will be taken up immediately by the UK's new AI Safety Institute (AISI) and its US counterpart, Raimondo said in a statement.

The agreement comes just months after the UK government hosted the global AI Safety Summit in November last year, at which several countries, including China, the US, the EU, India, Germany, and France, agreed to work together on AI safety. Those countries signed an agreement, dubbed the Bletchley Declaration, to form a common line of thinking on overseeing the evolution of AI and ensuring that the technology advances safely. That declaration followed an open letter, signed in May of the previous year by hundreds of tech industry leaders, academics, and other public figures, warning that the evolution of AI could lead to an extinction event.
Full report: United States and United Kingdom sign an agreement to develop a framework and guardrails for generative artificial intelligence.