Amid the many debates about the potential dangers of artificial intelligence, some researchers argue that an important concern is being overlooked: the energy used by computers to train and run large AI models.
Alex de Vries at the VU Amsterdam School of Business and Economics warns that AI’s growth is poised to make it a significant contributor to global carbon emissions. He estimates that if Google switched its whole search business to AI, it would end up using 29.3 terawatt hours of electricity per year – equivalent to the annual electricity consumption of Ireland, and almost double the company’s total energy consumption of 15.4 terawatt hours in 2020. Google didn’t respond to a request for comment.
On one hand, there is good reason not to panic. Making that sort of switch is, for now, impossible in practice: it would require more than 4 million of the powerful computer chips known as graphics processing units (GPUs), which are in huge demand and short supply. The hardware alone would cost around $100 billion, a sum even Google’s deep pockets would struggle to fund.
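Those headline figures follow from a back-of-envelope calculation. The sketch below is a rough reconstruction, not Google’s or de Vries’s own workings: it assumes around 512,821 eight-GPU A100 servers, each drawing roughly 6.5 kilowatts around the clock – numbers in line with the third-party estimate the analysis draws on.

```python
# Rough reconstruction of the Google-search-on-AI estimate.
# Assumptions (not official figures): ~512,821 A100 servers with
# 8 GPUs apiece, each drawing ~6.5 kW continuously, year-round.

SERVERS = 512_821
GPUS_PER_SERVER = 8
POWER_KW = 6.5
HOURS_PER_YEAR = 24 * 365  # 8760

gpus = SERVERS * GPUS_PER_SERVER
energy_twh = SERVERS * POWER_KW * HOURS_PER_YEAR / 1e9  # kWh -> TWh

print(f"GPUs required: {gpus:,}")               # ~4.1 million
print(f"Electricity:   {energy_twh:.1f} TWh/yr")  # ~29.2 TWh
```

On the same assumptions, the price tag also checks out: at a list price in the region of $200,000 per server, half a million machines comes to roughly $100 billion.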
On the other hand, AI’s energy consumption will, in time, present a genuine problem. Nvidia, which sells 95 per cent of the GPUs used for AI, is expected to ship 100,000 of its A100 servers this year – machines that, running at full capacity, could collectively consume 5.7 terawatt hours a year.
Things could, and probably will, get much worse as new manufacturing plants come online and dramatically increase production capacity. Chip maker TSMC, which supplies Nvidia, is investing in new factories that could provide 1.5 million servers a year by 2027 – hardware that could consume 85.4 terawatt hours of electricity a year, says de Vries.
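A single per-server assumption ties both shipment figures together. As a sketch, take each AI server to draw about 6.5 kilowatts – roughly the rated power of an Nvidia DGX A100, an assumption rather than a reported figure – and to run continuously:

```python
# Shipment arithmetic under an assumed ~6.5 kW continuous draw per server
# (roughly the rated power of an Nvidia DGX A100; not an Nvidia figure).

POWER_KW = 6.5
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_twh(servers: int) -> float:
    """TWh per year if `servers` machines run flat out all year."""
    return servers * POWER_KW * HOURS_PER_YEAR / 1e9  # kWh -> TWh

print(f"  100,000 servers: {annual_twh(100_000):.1f} TWh/yr")    # ~5.7
print(f"1,500,000 servers: {annual_twh(1_500_000):.1f} TWh/yr")  # ~85.4
```

In practice, servers rarely run at full rated power all the time, so these are upper-bound figures.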
With businesses rushing to integrate AI into all sorts of products, Nvidia will probably have no problem clearing its stock. But de Vries says it is important for AI to be used sparingly, given its high environmental cost.
“People have this new tool and they’re like, ‘OK, that’s great, we’re gonna use it’, without regard for whether they actually need it,” he says. “They forget to ask or wonder if the end user even has a need for this in some way or will it make their lives better. And I think that disconnect is ultimately the real problem.”
Sandra Wachter at the University of Oxford says consumers should be aware that playing with these models has a cost. “It’s one of the topics that really keeps me up at night,” says Wachter. “We just interact with the technology and we’re not actually aware of how much resources – electricity, water, space – it takes.” Legislation to force transparency about the models’ environmental impact would push companies to act more responsibly, she says.
A spokesperson for OpenAI, the developer of ChatGPT, tells New Scientist: “We recognise training large models can be energy-intensive and is one of the reasons we are constantly working to improve efficiencies. We give considerable thought to the best use of our computing power.”
There are signs that smaller AI models are now approaching the capabilities of larger ones, which could bring significant energy savings, says Thomas Wolf, co-founder of AI company Hugging Face. Mistral 7B and Meta’s Llama 2 are 10 to 100 times smaller than GPT-4, the model behind ChatGPT, and can do many of the same things, he says. “Not everyone needs GPT-4 for everything, just like you don’t need a Ferrari to go to work.”
An Nvidia spokesperson says running AI on its GPUs is more energy efficient than on CPUs, an alternative type of chip. “Accelerated computing on Nvidia technology is the most energy-efficient computing model for AI and other data centre workloads,” they say. “Our products are more performant and energy efficient with each new generation.”
Journal reference: Joule, DOI: 10.1016/j.joule.2023.09.004