Peter Kyle, the UK’s secretary of state for science, innovation and technology, has said he uses ChatGPT to understand difficult concepts
The UK’s technology secretary, Peter Kyle, has asked ChatGPT for advice on why the adoption of artificial intelligence is so slow in the UK business community – and which podcasts he should appear on.
This week, Prime Minister Keir Starmer said that the UK government should be making far more use of AI in an effort to increase efficiency. “No person’s substantive time should be spent on a task where digital or AI can do it better, quicker and to the same high quality and standard,” he said.
Now, New Scientist has obtained records of Kyle’s ChatGPT use under the Freedom of Information (FOI) Act, in what is believed to be a world-first test of whether chatbot interactions are subject to such laws.
These records show that Kyle asked ChatGPT to explain why the UK’s small and medium business (SMB) community has been so slow to adopt AI. ChatGPT returned a 10-point list of problems hindering adoption, including sections on “Limited Awareness and Understanding”, “Regulatory and Ethical Concerns” and “Lack of Government or Institutional Support”.
The chatbot advised Kyle: “While the UK government has launched initiatives to encourage AI adoption, many SMBs are unaware of these programs or find them difficult to navigate. Limited access to funding or incentives to de-risk AI investment can also deter adoption.” It also said, concerning regulatory and ethical concerns: “Compliance with data protection laws, such as GDPR (a data privacy law), can be a significant hurdle. SMBs may worry about legal and ethical issues associated with using AI.”
“As the Cabinet Minister responsible for AI, the Secretary of State does make use of this technology. This does not substitute comprehensive advice he routinely receives from officials,” says a spokesperson for the Department for Science, Innovation and Technology (DSIT), which Kyle leads. “The Government is using AI as a labour-saving tool – supported by clear guidance on how to quickly and safely make use of the technology.”
Kyle also used the chatbot to canvass ideas for media appearances, asking: “I’m Secretary of State for science, innovation and technology in the United Kingdom. What would be the best podcasts for me to appear on to reach a wide audience that’s appropriate for my ministerial responsibilities?” ChatGPT suggested The Infinite Monkey Cage and The Naked Scientists, based on their number of listeners.
As well as seeking this advice, Kyle asked ChatGPT to define various terms relevant to his department: antimatter, quantum and digital inclusion. Two experts New Scientist spoke to said they were surprised by the quality of ChatGPT’s definition of quantum. “This is surprisingly good, in my opinion,” says Peter Knight at Imperial College London. “I think it’s not bad at all,” says Cristian Bonato at Heriot-Watt University in Edinburgh, UK.
New Scientist made the request for Kyle’s data following his recent interview with PoliticsHome, in which the politician was described as “often” using ChatGPT. He said that he used it “to try and understand the broader context where an innovation came from, the people who developed it, the organisations behind them” and that “ChatGPT is fantastically good, and where there are things that you really struggle to understand in depth, ChatGPT can be a very good tutor for it”.
DSIT initially refused New Scientist’s FOI request, stating: “Peter Kyle’s ChatGPT history includes prompts and responses made in both a personal capacity, and in an official capacity”. A refined request, for only the prompts and responses made in an official capacity, was granted.
The fact the data was provided at all is a shock, says Tim Turner, a data protection expert based in Manchester, UK, who thinks it may be the first case of chatbot interactions being released under FOI. “I’m surprised that you got them,” he says. “I would have thought they’d be keen to avoid a precedent.”
This, in turn, poses questions for governments with similar FOI laws, such as the US. For example, is ChatGPT more like an email or WhatsApp conversation – both of which have historically been covered by FOI – or the results of a search engine query, which organisations have traditionally found easier to reject? Experts disagree on the answer.
“In principle, provided they could be extracted from the department’s systems, a minister’s Google search history would also be covered,” says Jon Baines at UK law firm Mishcon de Reya.
“Personally, I wouldn’t see ChatGPT as being the same as a Google search,” says John Slater, an FOI expert. That is because Google searches don’t create new information, he says. “ChatGPT, on the other hand, does ‘create’ something based on the input from the user.”
With this uncertainty, politicians might want to avoid using privately developed commercial AI tools like ChatGPT, says Turner. “It’s a real can of worms,” he says. “To cover their own backs, politicians should definitely use public tools, provided by their own departments, as if the public might end up being the audience.”