The AI safety conference being hosted in Britain will try to establish a “smoke alarm” type system to stop the technology going rogue, the technology secretary has said.
Michelle Donelan was laying out the objectives for the two-day gathering of world leaders at Bletchley Park on November 1-2, which will try to draw up a new framework for keeping advanced AI safe.
The government has highlighted the risks of “frontier AI” being developed by companies such as OpenAI, Google and Anthropic. This AI could be used to develop lethal bioweapons and mount cyberattacks, and humans could lose control of it, the Department for Science, Innovation and Technology (DSIT) said in a blog post.
“AI technologies are evolving with unprecedented speed. Soon, models many times more powerful than what is currently available may be released. The capabilities of these models are very difficult to predict — sometimes even to those building them — and by default they could be made available to a wide range of actors, including those who might wish us harm,” the department said.
“This pace of change means that urgent action on AI safety is needed. We are at a crossroads in human history and to turn the other way would be a monumental missed opportunity for mankind.”
DSIT added that “loss of control risks” would be one of two key categories being focused on at the conference.
Yoshua Bengio, an AI pioneer who is now a government adviser, has given an example of such “rogue” AI: “A military AI that is supposed to destroy the IT infrastructure of the enemy may figure out that in order to better achieve that goal it needs to acquire more experience and data and it may see the enemy humans to be obstacles to the original goal.”
Rishi Sunak, the prime minister, believes there is a small window of opportunity to act on these threats, given that the companies are understood to be ready to spend a hundred times what they have already spent on the next versions of the AI models powering ChatGPT.
Donelan suggested there would be a twin track at the event: “I think what we will see coming out of the summit is definitely some sort of agreement across nations and some agreement with companies as well in terms of their approach to what we see as something that we call responsible capability scaling.
“And what that means is basically that we need almost a ‘smoke alarm’ established so that not only are companies searching for the risks, but they have a response to the risk, and we know at what level they respond and how they respond. That’s the type of system that we need to be seeing across the board.”
Responsible capability scaling refers to companies committing to pause or modify an AI system should certain events occur.
Fifteen of the biggest AI companies have already made voluntary commitments over the issue, pledging to increase the inspection and transparency of their models.
Donelan said the conference would “build” on those commitments and start “a global conversation”, but officials downplayed the prospect of any mandatory measures on companies being agreed. They stressed that governments need to understand what the AI is doing before they regulate it.
Sam Altman, chief executive of OpenAI, which developed ChatGPT, called on the tech industry to stop attacking regulation, telling an event in Taipei: “People in our industry bash regulation a lot. We’ve been calling for regulation, but only of the most powerful systems.
“Models that are like 10,000 times the power of GPT-4, models that are like as smart as human civilization, whatever, those probably deserve some regulation.”
Critics of the event’s approach said too many resources were being put into “speculative” risks.
Huw Roberts, AI policy expert at the Oxford Internet Institute, said: “The lack of policy attention on these topics up until the recent AI hype cycle isn’t that surprising as the two risks are still almost entirely speculative.”
He said it was hard to find evidence of more than two deaths caused by cyberattacks or by AI generally, adding: “It’s actually pretty surprising that the UK has been convinced to put all this resource into addressing risks that aren’t currently materialising.”
However, officials were keen to stress that the threats being addressed are already present. They pointed to evidence given to the US Senate by Dario Amodei, chief executive of the AI company Anthropic, that the technology could be used to create dangerous viruses and other bioweapons within as little as two years.
The conference will not address issues such as bias and discrimination in AI, which the government believes are being addressed in other forums such as the G7, the UN and the EU.
Some in the sector believe this to be a mistake. Sandra Wachter, professor of technology and regulation at Oxford University, said: “Misinformation, bias and discrimination and mass automation are crucial topics that also deserve a global stage, yet these are not part of the scope.
“But these issues are also safety issues and also pose existential risks. Tech does not care about geography and if we want to solve these urgent societal problems, we need international dialogue and collaboration.”
Outside the conference
The meeting will have only 100 participants, and the government has laid out a series of pre-summit events that will attempt to bring wider voices into the discussion.
October 11 The Alan Turing Institute: Exploring existing UK strengths on AI safety and opportunities for international collaboration.
October 12 British Academy: Possibilities of AI for the public good: the summit and beyond.
October 17 techUK: Opportunities from AI; potential risks from AI; solutions that exist in the tech sector.
October 25 Royal Society: Horizon scanning AI safety risks across scientific disciplines.