
The Trump administration wants to streamline the US government, using AI to boost efficiency
What is artificial intelligence? It is a question that scientists have been wrestling with since the dawn of computing in the 1950s, when Alan Turing asked: “Can machines think?” Now that large language models (LLMs) like ChatGPT have been unleashed on the world, finding an answer has never been more pressing.
While their use has already become widespread, the social norms around these new AI tools are still rapidly evolving. Should students use them to write essays? Will they replace your therapist? And can they turbocharge government?
That last question is being asked in both the US and UK. Under the new Trump administration, Elon Musk’s Department of Government Efficiency (DOGE) taskforce is firing federal workers and rolling out a chatbot, GSAi, to those who remain. Meanwhile, the UK prime minister, Keir Starmer, has called AI a “golden opportunity” that could help reshape the state.
Certainly, there is government work that could benefit from automation, but are LLMs the right tool for the job? Part of the problem is that we still can’t agree on what they actually are. This was aptly demonstrated this week, when New Scientist used freedom of information (FOI) laws to obtain the ChatGPT interactions of Peter Kyle, the UK’s secretary of state for science, innovation and technology. Politicians, data privacy experts and journalists – not least us – were stunned that the request was granted, given that similar requests for, say, a minister’s Google search history would generally be rejected.
That the records were released suggests the UK government sees using ChatGPT as more akin to a ministerial conversation with civil servants via email or WhatsApp, both of which are subject to FOI laws. Kyle’s interactions with ChatGPT don’t indicate any strong reliance on the AI for forming serious policy – one of his questions was about which podcasts he should appear on. Yet the fact that the request was granted suggests that some in government believe the AI can be conversed with like a human, which is concerning.
As New Scientist has extensively reported, current LLMs aren’t intelligent in any meaningful sense and are just as liable to spew convincing-sounding inaccuracies as to offer useful advice. What’s more, their answers reflect the inherent biases of the information they have ingested.
Indeed, many AI scientists are increasingly of the view that LLMs aren’t a route to the lofty goal of artificial general intelligence (AGI), capable of matching or exceeding anything a human can do – a machine that can think, as Turing would have put it. For example, in a recent survey of AI researchers, about 76 per cent of respondents said it was “unlikely” or “very unlikely” that current approaches will succeed in achieving AGI.
Instead, perhaps we need to think of these AIs in a new way. Writing in the journal Science this week, a team of AI researchers says they “should not be viewed primarily as intelligent agents but as a new kind of cultural and social technology, allowing humans to take advantage of information other humans have accumulated”. The researchers compare LLMs to “such past technologies as writing, print, markets, bureaucracies, and representative democracies” that have transformed the way we access and process information.
Framed in this way, the answers to many questions become clearer. Can governments use LLMs to increase efficiency? Almost certainly, but only when used by people who understand their strengths and limitations. Should interactions with chatbots be subject to freedom of information laws? Possibly, but existing carve-outs designed to give ministers a “safe space” for internal deliberation should apply. And can, as Turing asked, machines think? No. Not yet.