US and UK Sign Agreement on AI Safety Testing

The United States and the United Kingdom have signed an agreement to work together on developing tests for the most advanced artificial intelligence (AI) models. The agreement follows through on commitments made at the Bletchley Park AI Safety Summit last year. It comes as governments around the world try to set guardrails around the rapid proliferation of AI systems, which offer opportunities but also pose significant societal risks, from misinformation to threats to election integrity.

Key Points of the Agreement

Under the partnership, both countries will:

  • Share vital information about the capabilities and risks associated with AI models and systems
  • Share fundamental technical research on AI safety and security with each other
  • Work on aligning their approach towards safely deploying AI systems
  • Work to align their scientific approaches and work closely to accelerate and rapidly iterate robust suites of evaluations for AI models, systems, and agents

The US and UK AI Safety Institutes have also laid out plans to build a common approach to AI safety testing and to share their capabilities to ensure these risks can be tackled effectively.

Implementation and Future Plans

The agreement takes effect immediately, and the US and the UK intend to perform at least one joint testing exercise on a publicly accessible model. They also plan to tap into a collective pool of expertise by exploring personnel exchanges between the two Institutes.

As the US and the UK strengthen their partnership on AI safety, they have also committed to developing similar partnerships with other countries to promote AI safety across the globe, according to a press release from the US Department of Commerce.

US Seeks Inputs on Open-Source AI Models

Separately, since last year the National Telecommunications and Information Administration (NTIA) in the US has been consulting on the risks, benefits, and potential policy options related to dual-use foundation models with widely available weights (the parameters an AI model learns during training, which determine how it makes its decisions).
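
To make the term concrete, here is a minimal, illustrative sketch (in Python with NumPy, not drawn from the NTIA consultation or any real foundation model) of what "weights" are: numbers a model adjusts during training, which can then be saved to a file and published so others can run or fine-tune the model without retraining it.

```python
# Illustrative sketch only: a toy linear model whose learned "weights" are the
# kind of numeric parameters the NTIA consultation refers to. All names and
# values here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(3,))   # parameters learned during training
X = rng.normal(size=(8, 3))
y = X @ np.array([1.0, -2.0, 0.5])           # the rule the model tries to learn

# A simple training loop: each step nudges the weights to reduce error
for _ in range(100):
    pred = X @ weights
    grad = X.T @ (pred - y) / len(y)         # gradient of mean squared error
    weights -= 0.1 * grad                    # updating weights = "learning"

# "Widely available weights" means publishing these learned numbers so anyone
# can load and run (or adapt) the model without access to the training process.
np.save("model_weights.npy", weights)
reloaded = np.load("model_weights.npy")
print(reloaded)                              # the shareable artifact
```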

The consultation came after President Joe Biden's administration issued an executive order on the safe deployment of AI systems in 2023. The agency is seeking inputs on:

  • The varying levels of openness of AI models
  • The benefits and risks of making model weights widely available compared to the benefits and risks associated with closed models
  • Innovation, competition, safety, security, trustworthiness, equity, and national security concerns with making AI model weights more or less open
  • The role of the US government in guiding, supporting, or restricting the availability of AI model weights

Meta, which has open-sourced its Llama model, called open source the “foundation” of US innovation in its submission to the NTIA consultation. OpenAI, the maker of ChatGPT, took a middle path in its comments, stating that both open-weights releases and API- and product-based releases are tools for achieving beneficial AI, and that the best American AI ecosystem will include both.

Global Efforts to Regulate AI

As private industry innovates rapidly, lawmakers around the world are grappling with how to set legislative guardrails around AI to curb some of its downsides. For example, India’s IT Ministry issued an advisory asking generative AI companies deploying “untested” systems in India to seek the government’s permission before doing so. After criticism, however, the government scrapped the advisory and issued a new one that dropped the requirement for government approval. The EU reached a deal with member states on its AI Act in 2022, which includes safeguards on the use of AI within the EU, including clear guardrails on its adoption by law enforcement agencies, and empowers consumers to file complaints against perceived violations. Similarly, the US White House issued an Executive Order on AI in 2023, which is being presented as a template for other countries looking to regulate AI.
