In recent years, ChatGPT has exploded in popularity, with nearly 200 million users pumping more than a billion prompts into the app every day. To users, the chatbot may seem to answer those requests out of thin air.

But behind the scenes, artificial intelligence (AI) chatbots are using a massive amount of energy. In 2023, data centers, which are used to train and run AI models, were responsible for 4.4% of electricity use in the United States. Worldwide, these centers account for around 1.5% of electricity consumption. Those numbers are expected to skyrocket, at least doubling by 2030 as demand for AI grows.

“Just three years ago, we didn’t even have ChatGPT yet,” said Alex de Vries-Gao, an emerging technology sustainability researcher at Vrije Universiteit Amsterdam and founder of Digiconomist, a platform dedicated to exposing the unintended consequences of digital trends. “And now we’re talking about a technology that’s going to be responsible for almost half of the electricity consumption by data centers globally.”

But what makes AI chatbots so energy intensive? The answer lies in their massive scale. In particular, two phases of an AI model's life consume the most energy: training and inference, said Mosharaf Chowdhury, a computer scientist at the University of Michigan.


To train an AI chatbot, its underlying large language model (LLM) is fed enormous datasets so it can learn, recognize patterns and make predictions. In general, there is a "bigger is better" belief in AI training, de Vries-Gao said, where larger models that take in more data are thought to make better predictions.

“So what happens when you are trying to do a training is that the models nowadays have gotten so large, they don’t fit in a single GPU [graphics processing unit]; they don’t fit in a single server,” Chowdhury told Live Science.

To give a sense of scale, 2023 research by de Vries-Gao estimated that a single Nvidia DGX A100 server demands up to 6.5 kilowatts of power. Training an LLM typically requires many such servers, each with an average of eight GPUs, running for weeks or months at a time. Altogether, this consumes mountains of energy: It's estimated that training OpenAI's GPT-4 used 50 gigawatt-hours of energy, equivalent to powering San Francisco for three days.
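To put those figures together, here is a rough back-of-envelope sketch in Python. The 6.5-kilowatt server draw comes from the de Vries-Gao estimate above, but the cluster size and training duration are purely illustrative assumptions, since OpenAI has not disclosed the real numbers behind GPT-4's training run.

```python
# Back-of-envelope training-energy estimate using the server figure cited above.
# The 6.5 kW draw per Nvidia DGX A100 server is from de Vries-Gao's 2023 analysis;
# the cluster size and run length below are illustrative assumptions only.

server_power_kw = 6.5      # peak draw of one eight-GPU DGX A100 server
num_servers = 3_000        # hypothetical cluster size (assumption)
training_days = 95         # hypothetical training run length (assumption)

energy_kwh = server_power_kw * num_servers * training_days * 24  # kW * hours
energy_gwh = energy_kwh / 1_000_000                              # kWh -> GWh

print(f"Estimated training energy: {energy_gwh:.0f} GWh")
# With these made-up inputs, the total comes out to roughly 44 GWh, the same
# order of magnitude as the ~50 GWh estimate cited for GPT-4's training.
```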

Inference also consumes a lot of energy. This is the step in which an AI chatbot draws on what it has learned to generate a response to a request. Although running a trained LLM takes considerably fewer computational resources than training it, inference is energy intensive because of the sheer number of requests made to AI chatbots.

As of July 2025, OpenAI reports that ChatGPT users send more than 2.5 billion prompts every day, meaning multiple servers must run constantly to return near-instant responses. And that doesn't account for other widely used chatbots, including Google's Gemini, which company representatives say will soon become the default option when users access Google Search.
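A similar sketch shows how a tiny per-prompt cost adds up at that scale. The 2.5 billion figure is OpenAI's; the energy used per prompt is a placeholder assumption, since no official per-query number has been published.

```python
# Illustration of how per-prompt inference energy scales with request volume.
# The daily prompt count is the OpenAI figure cited above; the energy per prompt
# is a placeholder assumption, not a published measurement.

prompts_per_day = 2.5e9    # ChatGPT prompts per day, per OpenAI (July 2025)
wh_per_prompt = 0.3        # hypothetical watt-hours per prompt (assumption)

daily_mwh = prompts_per_day * wh_per_prompt / 1_000_000   # Wh -> MWh
annual_gwh = daily_mwh * 365 / 1_000                      # MWh/day -> GWh/year

print(f"Daily inference energy:  {daily_mwh:,.0f} MWh")
print(f"Annual inference energy: {annual_gwh:,.0f} GWh")
# Even a fraction of a watt-hour per prompt adds up to hundreds of megawatt-hours
# per day once billions of prompts are involved.
```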

“So even in inference, you can’t really save any energy,” Chowdhury said. “It’s not really massive data. I mean, the model is already massive, but we have a massive number of people using it.”

Researchers like Chowdhury and de Vries-Gao are now working to better quantify these energy demands to understand how to reduce them. For example, Chowdhury maintains the ML Energy Leaderboard, which tracks the inference energy consumption of open-source models.

However, the specific energy demands of most other generative AI platforms remain largely unknown: big companies such as Google, Microsoft and Meta either keep these numbers private or publish statistics that offer little insight into the applications' actual environmental impact, de Vries-Gao said. That makes it difficult to determine how much energy AI really uses, how much it will demand in the coming years, and whether the world can keep up.

People who use these chatbots, however, can push for better transparency. Doing so can not only help users make more energy-responsible choices with their own AI use, but also support more robust policies that hold companies accountable.

“One very fundamental problem with digital applications is that the impact is never transparent,” de Vries-Gao said. “The ball is with policymakers to encourage disclosure so that the users can start doing something.”
