This week, UK prime minister Rishi Sunak is hosting a group of more than 100 representatives from the worlds of business and politics to discuss the potential and pitfalls of artificial intelligence.
The AI Safety Summit, held at Bletchley Park, UK, began on 1 November and aims to come up with a set of global principles with which to develop and deploy “frontier AI models” – the terminology favoured by Sunak and key figures in the AI industry for powerful models that don’t yet exist, but may be built very soon.
While the Bletchley Park event is the focal point, there is a wider week of fringe events being held in the UK, alongside a raft of UK government announcements on AI. Here are the latest developments.
Participants sign agreement
The key outcome of the first day of the AI Safety Summit yesterday was the Bletchley Declaration, which saw 28 countries and the European Union agree to meet more in the future to discuss the risks of AI. The UK government was keen to tout the agreement as a massive success, while impartial observers were more muted about the scale of its achievement.
While the politicians on stage wanted to highlight the successes, a good proportion of those at the summit felt more needed to be done. At 4pm yesterday, just before the closing plenary summing up the first day's panel discussions was due to begin, nearly a dozen civil society groups present at the conference released a communiqué of their own.
The letter urged those in attendance to consider a broader range of risks to humanity beyond the fear that AI might become sentient or be misused by terrorists or criminals. “The call for regulation comes from those who believe AI’s harm to democracy, civil rights, safety and consumer rights is urgent now, not in a distant future,” says Marietje Schaake at Stanford University in California, who was one of the signatories. Schaake was also keen to point out that the discussion “process should be independent and not an opportunity for capture by companies”.
US flexes muscles further
While attention has been devoted to Bletchley Park, much of the progress on AI this week has been happening outside the conference – and we aren't just saying that because reporters attending are locked in a media room, and only allowed out if they have a prearranged interview.
One case in point: at the US Embassy in London on 1 November, US vice president Kamala Harris unveiled a package of actions on AI that includes a political declaration signed by 30 other countries – notably more than those who signed up to the Bletchley Declaration trumpeted by the UK.
Harris carefully chose her words in her speech, saying that the US package would focus on the “full spectrum” of risks from AI. “Let us be clear, there are additional threats that also demand our action. Threats that are currently causing harm, and which to many people also feel existential,” Harris said in her speech – which could be taken as a suggestion the UK’s focus on AI gaining sentience was too myopic.
Four in 10 people say AI is moving too fast
As politicians and experts try to thrash out some form of agreement to conclude the summit, the public is having its own say – in the form of polling released to coincide with the event.
Four in 10 people in the UK surveyed by polling company Survation believe that AI is being developed and unleashed at an unsafe pace. Respondents largely supported slowing down how the technology is rolled out to the public to prioritise safety, with 71 per cent in favour, while just 17 per cent say that the current pace of development is safe.
The polling also highlighted the challenges of making the public aware of what AI is and how it works (we have a definition, and some guidance, that you can read here). Of those surveyed, 41 per cent admitted they don't know much about AI – or don't know anything about it at all. Speaking of which: Elon Musk took time on the sidelines of the conference to warn that AI will outsmart humans.
Who is attending the AI summit at Bletchley Park and why do they matter?
Yoshua Bengio, a computer science professor at the University of Montreal, Canada, is often called one of the “godfathers of AI” alongside Geoffrey Hinton and Yann LeCun (see below). Unlike Hinton, who used to work for Google, and LeCun, who still works at Meta, Bengio has tended to steer clear of big tech’s grasp.
Elon Musk runs his own AI company, xAI, as well as owning the social media platform X. He is set to play a pivotal role in this summit – not least because he has got the ear of Sunak, who will be appearing in a livestreamed conversation on X on 2 November. That appears to be a quid pro quo for Musk being a major guest at social events the UK government is planning around the conference.
Nick Clegg was once deputy prime minister of the United Kingdom, but has since become a senior figure at Meta, the company formerly known as Facebook. He will be offering a dual perspective at the summit, drawn from his time in politics and his current role in tech.
Michelle Donelan is the UK’s technology secretary and her pre-politics career involved working in public relations for World Wrestling Entertainment. Donelan has said she doesn’t use ChatGPT and has made no bones about disagreeing with Musk, but has been praised for quietly meeting targets in her department.
Sam Altman is the CEO of OpenAI, the developer of ChatGPT and AI image generator DALL-E. Altman is a mercurial figure with a reputation for being something of a prepper (someone who worries about the end of the world). As early as 2016, he had drawn up plans to escape to a remote island owned by billionaire tech entrepreneur Peter Thiel in the event of a pandemic. It is believed he never made it due to border closures when the covid-19 pandemic arrived. Altman is perhaps the most powerful man in AI at present, thanks to ChatGPT’s central role in the generative AI revolution.
Ursula von der Leyen is president of the European Commission and was a welcome confirmed attendee after some uncertainty about whether she would turn up to Bletchley Park. Von der Leyen’s presence is likely to further her goal of developing a supranational group like the Intergovernmental Panel on Climate Change to focus on regulating AI across borders.
Yann LeCun is chief AI scientist at Meta and a professor at New York University. He is a proponent of open-source development in AI, which brings him into conflict with some of those at the summit who, on 1 November, said open-source development was too risky for AI. Today, he has praised the UK AI Safety Institute, which he hopes will bring hard data “to a field currently rife with wild speculations and methodologically dubious studies”.
Coming next
The final session of yesterday’s discussions felt oddly like the closing of the entire event. Donelan, the UK’s technology secretary, waxed lyrical about how the ink was still wet on a new page of history, among other images. But there is still a whole other day of discussions.
Today, Sunak wades in, convening a small group of governments, companies and experts “to further the discussion on what steps can be taken to address the risks in emerging AI technology and ensure it is used as a force for good”, at the same time as Donelan converses with her counterparts internationally to agree on next steps.
Once the summit is over, the prime minister will take part in a 45-minute conversation with Musk, which is likely to provide some fireworks. However, in an unusual step, the conversation will be streamed on X, Musk’s social network – but not live. The UK government has assured reporters nothing will be edited before transmission.