This week, UK prime minister Rishi Sunak is hosting a group of 100 representatives from the worlds of business and politics to discuss the potential and pitfalls of artificial intelligence.
The AI Safety Summit, held at Bletchley Park, UK, begins on 1 November and aims to come up with a set of global principles with which to develop and deploy “frontier AI models” – the terminology favoured by Sunak and key figures in the AI industry for powerful models that don’t yet exist, but may be built very soon.
While the Bletchley Park event is the focal point, there is a wider week of fringe events being held in the UK, alongside a raft of UK government announcements on AI. Here are the latest developments.
Bletchley Declaration
The summit got off with a bang: 28 countries and the European Union have agreed a declaration saying global action is needed to tamp down the risks of AI. The Bletchley Declaration includes an agreement that substantial risks may arise from the intentional misuse of frontier AI or from unintended issues of control, with particular concern around cybersecurity, biotechnology and disinformation, according to the UK government, which oversaw the international consensus.
One of the clauses within the declaration includes provisions for South Korea to host a mini-summit in the next six months, with France hosting an in-person summit next year. “This is a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI – helping ensure the long-term future of our children and grandchildren,” Sunak said in a statement.
The prime minister was keen to tout this as a British achievement. “The UK is once again leading the world at the forefront of this new technological frontier by kickstarting this conversation, which will see us work together to make AI safe and realise all its benefits for generations to come,” he said.
Attendee list announced
Around 10.40am, UK technology secretary Michelle Donelan walked on stage and said for all its boons, AI “could further concentrate unaccountable power into the hands of a few, or be maliciously used to undermine societal trust, erode public safety or threaten international security”.
Plenty of cameras focused on Elon Musk, who arrived in Luton via private jet on 31 October. His presence will be a balm to Sunak, who has tied part of the success of this week’s event to Musk’s reputation and who has agreed to participate in a discussion livestreamed on X, formerly Twitter, on 2 November. (Sunak’s Conservative party colleagues and former Twitter staff fired by Musk have called the move “idiotic” and “mad”.)
Alongside Musk’s arrival, the UK government unveiled the list of governments and organisations in attendance – but not the names of all the guests. There are 120 organisations and businesses on the list. Around a third are bunched under the “academia and civil society” banner – although some, like the RAND Corporation, which was set up in 1948 to aid the US Air Force, seem out of place on the list. China is among the 28 governments represented, while 40 companies will be present at the conclave.
Invitees start arguments
The topics up for discussion after Donelan opened proceedings today remain contentious. Companies appear to have won the initial skirmish, successfully setting a framework for debate that favours them, including what some argue is an unfounded focus on existential risk and danger.
Not all attendees are happy with what is being discussed. On 31 October, at an AI Fringe event, Fran Bennett, interim director of the Ada Lovelace Institute, delivered a keynote speech that highlighted her concerns. “While the programme has expanded somewhat over time, its focus is still a way away from the very real and present harms of today’s ‘non-frontier’ AI systems, many of which arise when an AI system is deployed in a specific context,” she said.
Bennett’s criticism was supported by Nick Clegg, head of global affairs at Meta, who said at a fringe event that governments were “spending a huge amount of time on what remains a pretty speculative risk”.
US grabs control of AI safety
The first plenary session was livestreamed online, but attracted fewer than 100 viewers at a time on X and only around 30 on YouTube. Donelan gave her opening speech, then handed over to Gina Raimondo, the US secretary of commerce, who thanked the UK for hosting the summit.
However, Raimondo made it clear that, despite Sunak’s initial hope that the AI Safety Summit would be a launchpad for a Global AI Safety Institute, the US wouldn’t be joining it. Instead, she announced that the US will launch its own AI safety institute, housed within the Department of Commerce and run by the US National Institute of Standards and Technology.
Raimondo did commit to establishing a formal partnership between the two bodies, but the situation highlights the jostling going on to control oversight of AI. “We will compete as nations,” Raimondo said in her speech. “Competition is a good thing. It brings out the best of us and allows us to innovate.”
“One thing the UK could be doing is taking a leadership position on AI regulation, particularly in light of the relative lack of regulation in the US,” says Mike Katell at the Alan Turing Institute in London. “It’s interesting that the US is essentially telling the prime minister they’re going to steal some of that fire.”
It was left to the next speaker at the summit, Wu Zhaohui, China’s vice minister of science and technology, to smooth things over. “We encourage collaborative governance,” he said at the plenary.
Who is attending the AI summit at Bletchley Park and why do they matter?
Despite qualms about the guest list at the summit, there are some highly notable attendees from the world of tech. Some of them include:
Yoshua Bengio, a computer scientist and professor at the University of Montreal, Canada, who is often called one of the “godfathers of AI” alongside Geoffrey Hinton and Yann LeCun. Unlike Hinton, who used to work for Google, and LeCun, who works for Meta, Bengio has tended to steer clear of big tech’s grasp.
Elon Musk runs his own AI company, xAI, as well as owning the social media platform X. He is set to play a pivotal role in this summit – not least because he has got the ear of Sunak, who will be appearing in a livestreamed conversation on X on 2 November. That appears to be a quid pro quo for Musk being a major guest at social events the UK government is planning around the conference.
Nick Clegg was once deputy prime minister of the United Kingdom, but has since become a senior figure at Meta, the company formerly known as Facebook. He will be offering a twinned perspective at the summit from his time in politics and his new employment in tech.
Michelle Donelan is the UK’s tech secretary and her pre-politics career involved working in public relations for World Wrestling Entertainment. Donelan has said she doesn’t use ChatGPT, and has made no bones about disagreeing with Musk, but has been praised for quietly meeting targets in her department.
Sam Altman is the CEO of OpenAI, the developer of ChatGPT and AI image generator DALL-E. Altman is a mercurial figure with a reputation for being something of a prepper (someone who worries about the end of the world). As early as 2016, he had drawn up plans to escape to a remote island owned by billionaire tech entrepreneur Peter Thiel in the event of a pandemic; it is believed he never made it due to border closures when the covid-19 pandemic arrived. Altman is perhaps the most powerful man in AI at present, thanks to ChatGPT’s central role in the generative AI revolution.
Coming next
Much of the news is likely to emerge in the end-of-day closing speeches, which begin at 4.15pm, as well as any rumblings and grumblings from the sidelines of the conference. Reporters on the ground say they have been confined to a press room, with their ability to move around the site restricted.
Tomorrow, we can expect the prime minister himself to attend, convening a group of government and business leaders to discuss the existential risks the pre-meeting agenda highlighted.
Previous update: 31 October
G7 agrees AI code of conduct and principles
Members of the global community have decided that the week of the UK summit is a ripe time to announce their own AI developments. Alongside US president Joe Biden’s executive order on AI, announced yesterday, the G7 group of industrial nations has published a joint statement agreeing to a code of conduct and set of guiding principles for the development of generative AI models.
This agreed text isn’t drastically different to what is expected from the UK summit, nor from the US executive order. It calls on organisations to “take appropriate measures” when developing AI tools to “identify, evaluate, and mitigate risks across the AI lifecycle” – such as bias and discrimination.
Anyone who knows anything about diplomacy knows that global consensus isn’t an accident. And Deb Raji at the Mozilla Foundation is glad to see action being taken in step. “This has become mainstream in a way that has caught the eye of policy-makers and alerted them to the reality of the fact that this is technology that needs to be regulated in some comprehensive way,” she says. She would rather it focused on all AI, rather than specifically on generative AI tools, but is happy something is happening.
UK names 12 AI training centres
Overnight, the UK government announced what it calls “a £118 million boost to skills funding” in the field of AI. That isn’t quite true: £117 million of it, earmarked for 12 Centres for Doctoral Training in AI, was announced last month. There is £1 million of new funding for an AI Futures Grants scheme that will help ease the cost of moving to the UK for leading AI researchers who want to migrate to the country.
But what is new are the locations and specialisms of the 12 doctoral training centres. To list a few, the University of Oxford will focus on the environment, the University of Edinburgh is working on responsible and trustworthy natural language processing, and Northumbria University will look at AI through a citizen-centred lens.
It is worth noting that the UK isn’t the only country seeking to attract AI talent: yesterday’s US executive order included similar provisions to ease tough US immigration laws for AI specialists.
Fringe events kick off with criticism for summit
Yesterday, we highlighted the paucity of civil society representatives invited to the Bletchley Park summit. But just because they weren’t on the guest list doesn’t mean they aren’t making their voices heard.
A number of unofficial fringe events are taking place to capitalise on the press attention. On 30 October, campaign group The Citizens held what it calls the People’s AI Summit, where Safiya Umoja Noble at the University of California, Los Angeles, spoke about her worries over what is to come. Noble is the author of Algorithms of Oppression, and said she fears that AI will only amplify discrimination and oppression.
“There’s just an overwhelming mountain of evidence [of bias] here,” she said at the event. “I think the bigger question now before us is: why do we have public officials who lack the moral character and courage to confront these companies and their leadership and hold them accountable? We’re talking about maybe a thousand people on planet Earth who are making decisions that will affect billions of people.”
Coming next
We have had two days of anticipation and endless column inches devoted to the summit’s goals and aims, but tomorrow the waiting is over.
Michelle Donelan, the UK’s technology secretary, will kick off proceedings, with round-table discussions among the attendees focusing on the potential risks – from bioterrorism and cybersecurity, from losing control of AI and from integrating it into society. There is still no plan to discuss AI’s environmental impact, which we have highlighted previously.
We are expecting to be drip-fed gossip from the discussion, and New Scientist’s Matthew Sparkes will be filing dispatches from the ground.
Previous update: 30 October
UK testing government chatbot
The UK government is testing a large language model chatbot called Gov.uk Chat that can answer questions citizens may have about tax, student loans and benefits, according to The Telegraph.
Gov.uk Chat will be trained on millions of pages hosted on the Gov.uk website, which includes advice on housing, immigration and taxation. The privacy notice for the chatbot says: “GOV.UK Chat is designed to help users to navigate information on GOV.UK, similar to a search function, so in order to provide answers to users it needs all the data it has to provide the most accurate answer.”
However, the newspaper reported that the chatbot wouldn’t be trained on citizens’ private data, and users would be prompted not to share such information with it for data privacy reasons. The pilot project is already being tested with businesses and, if successful, could be in the public’s hands shortly.
£100m fund announced for AI healthcare
Sunak has announced a £100 million fund that will aim to promote the development of AI tools in healthcare. The AI Life Sciences Accelerator Mission will focus on efforts to treat cancer and slow the onset of dementia by using AI to pore over potential treatments and novel drugs, rather than spending years on laboratory tests.
Members of academia, industry and front-line clinicians will soon be invited to propose projects for funding under the scheme. “Safe, responsible AI will change the game for what it’s possible to do in healthcare, closing the gap between the discovery and application of innovative new therapies, diagnostic tools, and ways of working that will give clinicians more time with their patients,” said Michelle Donelan, the UK’s science and technology secretary, in a statement.
Warnings of industry capture at summit
With the guest list for the Bletchley Park summit limited, those left out have raised their concerns about industry capture of the event. In response to the 100 people gathering at the summit, an equal number have signed an open letter to Sunak warning that “communities and workers most affected by AI have been marginalised by the summit”.
The letter echoed the concerns of many academics ahead of the summit that the agenda and discussion would be too dominated by industry interests. “What I fear and suspect is that it will be a meeting dominated by men, many of whom have financial interests that disqualify them from defending the public good, and that it will focus on long-term risks that don’t make big tech uncomfortable, rather than present harms that would force companies to change the way they design and implement AI,” says Carissa Véliz at the University of Oxford, who wasn’t one of the signatories of the letter.
Coming next
Observers will be watching to see who makes the final list of attendees at the summit. Reuters reports that China is sending along its vice minister of science and technology, despite some calls from people in Sunak’s own Conservative party to ban the country from participating.
We are expecting plenty more announcements from the UK government, although they will have to compete with an executive order, announced today by US president Joe Biden’s White House, focused on AI. US vice president Kamala Harris will also attend the Bletchley Park summit.
Politico, which saw a draft copy of the executive order, reports that the document’s scope is broad, with every federal agency compelled to appoint a chief AI officer, whose job will be to ensure that AI discrimination isn’t encoded into the parts of government they oversee.