Downing Street ponders creating ‘AI Safety Institute’



As the AI Safety Summit approaches, Downing Street officials have been touring the world seeking agreement on a warning statement, while a draft agenda points towards possible international cooperation on cutting-edge AI to address looming threats to human life.

The draft refers to establishing an “AI Safety Institute”, designed to enable national security-related scrutiny of frontier AI models and to give global leaders a forum to collaborate. The proposal is likely to be discussed on the final day of the summit.

Last week, the prime minister’s representative did not appear confident that such an organisation would be established. He nevertheless championed the need for collaboration, calling it “key” to managing frontier AI risks and ensuring the safe development of AI.

Companies expected to participate in the summit include ChatGPT developer OpenAI, Google and Microsoft. They are expected to publish details of the AI safety commitments agreed with the White House in July, with updates likely to cover safety, cybersecurity and how AI systems could be used for national security purposes.

With the UK seeking to lead the way in the development of safe AI, it is hoped that the recently announced Frontier AI Taskforce will evolve into a permanent institution with an international presence, although most countries are expected to want to develop their own capabilities in this space.

Oseloka Obiora, CTO at RiverSafe, commented: “AI can offer huge business benefits, however the flip side of the coin must not be ignored, and the potential detrimental consequences must be considered carefully. Businesses must act with caution or put themselves at risk of backlash. As the AI Safety Summit approaches, leaders must consider the regulations needed to ensure the safe use of this emerging technology.”

“Establishing an AI Safety Institute will play a key role in tackling the cyber threat posed by AI, allowing frontier AI models to be scrutinised. This will support businesses as they consider the implementation of AI and help them to ensure robust cybersecurity measures are in place to protect themselves from risk.”

A government spokesperson said: “We have been very clear that these discussions will involve exploring areas for potential collaboration on AI safety research, including evaluation and standards.

“International discussions on this work are already under way and are making good progress, including discussing how we can collaborate across countries and firms and with technical experts to evaluate frontier models. There are many different ways to do this and we look forward to convening this conversation in November at the summit.”


