The UK government’s AI Safety Summit, which claims to be “bringing together the world to discuss frontier AI”, has come under fire for a lack of diverse perspectives among its delegates, with critics dismissing it as little more than a photo opportunity.
Prime Minister Rishi Sunak is hosting a group of 100 representatives from business, politics and academia at Bletchley Park on 1 and 2 November to discuss the risks of “frontier” AI models, defined as “highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models”. But the government hasn’t publicly released the list of delegates, and campaigners have criticised the narrow range of voices in attendance.
Mark Lee at the University of Birmingham, UK, says the summit is a stage-managed “photo opportunity” rather than a chance for open discourse, and that it is focusing on the wrong problems.
“We need an open debate,” he says. “We want an interdisciplinary view of AI with people who are informed from a legal perspective, from an ethical perspective, from a technological perspective, rather than really quite powerful people from companies. I mean, they need to be in the room, but it can’t just be them.”
Lee says the summit seems to be focused on hypothetical existential risks like “robots with guns” rather than the real and pressing risks of AIs making biased decisions in medical diagnosis, criminal justice, finance and job applications. A wider variety of voices, with more diverse backgrounds, could point lawmakers in a more practical direction when discussing regulation, he says.
More than 100 trade unions, charities and other groups signed an open letter to Sunak this week expressing their concerns about the lack of diversity at the event. They warn that the “communities and workers most affected by AI have been marginalised by the Summit”.
The campaign group PauseAI, which was set up to highlight what it sees as potential existential risks from future AI models, staged a protest outside Bletchley Park. Joep Meindertsma of PauseAI says the companies attending the summit are locked in a race to produce powerful AI even though their own employees fear that rapid progress could be dangerous, and that there isn’t a variety of voices at the event.
“Polls are showing that the public wants to slow down AI development,” says Meindertsma. “AI company CEOs and government representatives want to accelerate, they want their company or country to lead. There are probably only a handful of individuals at the summit who will be pushing for concrete measures for slowing down. No company should be allowed to build a super-intelligence.”
“We should have been invited to represent all the people who believe we should just stop this dangerous race,” says Meindertsma.
Inside the summit, New Scientist’s reporter was ushered into the high-security media centre and told that they couldn’t leave unless they had an interview arranged with a delegate. When they asked for a list of delegates, they were told this wasn’t being publicly released.
One reporter, who asked to remain anonymous, works for another outlet that covers AI but was refused entry to the summit.
“We have more than 15 people globally reporting on digital and AI regulation, and were told there was no space for us, because it would be mostly lobby, UK nationals and broadcasters,” they say. “They just think they’ll put a couple of journalists in front of Enigma (the second world war encryption device) and get the ‘oh my God, this is groundbreaking’ coverage. Getting any interviews, access, updates has been horrendous, with most emails not even getting acknowledged or responded to.”
A government spokesperson said: “The AI Safety Summit has brought together a wide array of attendees, including international governments, academia, industry and civil society, as part of a collaborative approach to drive targeted, rapid international action on the safe and responsible development of AI. These attendees are the right mix of expertise and willingness to be part of the discussions.”