Environment · Featured · Technology

Is Artificial Intelligence Sustainable?

By Daniel Davis
11 Aug 2025

A recent advertisement for Google’s “Gemini” artificial intelligence (AI) model shows users engaged in frivolous, long-form conversations with their AI personal assistant. “We can have a conversation about anything you like,” Gemini cheerfully informs one user, who is unsure how to approach this new technology. Another user asks Gemini, “how do you tell if something is spicy without tasting it?”, to which Gemini responds (without any hint of the stating-the-obvious sarcasm with which a human might be expected to answer such an inane question), “have you tried smelling it?” What is clear from this advert, and from similar adverts produced by companies such as Meta, is that the companies designing and selling AI intend for its adoption to be ubiquitous. The hope of “big tech” is that AI will be used liberally, for “anything,” as the advert says, becoming part of the background technological hum of society in just the same way as the internet.

Awkwardly for these companies, this push for the pervasive adoption of AI into all realms of life is coinciding with a climate and ecological crisis that these technologies threaten to worsen. “Data centers,” the physical infrastructure upon which AI systems depend, are predicted by the IEA to double their energy consumption from 2022 levels by 2026, and to consume around 4.5% of total electricity generated globally by 2030 – which would place them fifth in a ranking of electricity use by country, just behind Russia and ahead of Japan. This of course comes with a significant carbon footprint, driving up global energy demand at precisely the moment that frugality is required if countries are to meet their net-zero goals. Such a significant increase in electricity usage is likely to extend our dependency on fossil fuels, as efforts to decarbonize supply fail to keep up with demand.

Beyond electricity usage, data centers also require both vast amounts of water for cooling and rare-earth and other critical minerals to produce the hardware components out of which they are built. Google’s data centers consumed (that is, evaporated) approximately 31 billion liters of water in 2024 alone. This comes at a time when water scarcity is already a serious problem throughout much of the world, with two-thirds of the global population experiencing severe water scarcity during at least one month of the year. Similarly, the mining of critical minerals such as antimony, gallium, indium, silicon, and tellurium is another aspect of the AI supply chain known to wreak both ecological and social havoc. China, by far the world’s largest processor of rare-earth minerals, having realized the heavy environmental toll of rare-earth mines, has now mostly outsourced mining to countries such as Myanmar, where the mining process has poisoned waterways and destroyed communities.

Given the vast resources required to build, train, and maintain AI models, it is fair to question the wisdom of asking them “anything.” Do we really need power-hungry state-of-the-art algorithms to tell us that we can smell an ingredient to check whether it’s spicy?

In response to such sustainability concerns, Google has pointed out that alongside the more mundane uses of AI displayed in its advertisement, the implementation of AI throughout industry promises a raft of efficiency savings that could result in a net reduction in global emissions. In its 2025 environmental report, Google describes what it calls an “optimal scenario,” based on IEA research, in which the widespread adoption of existing AI applications could lead to emissions reductions that are “far larger than emissions from data centers.” However, some of the IEA’s claims rest on the somewhat spurious assumption that efficiency savings will be converted into reduced emissions rather than simply lowering prices and increasing consumption (for example, some of the emissions reductions predicted by the IEA’s report come from the application of AI to the oil and gas sector itself, including helping to “assess where oil and gas may be present in sufficiently large accumulations”).

Even granting a level of skepticism here, the potential of AI to produce positive outcomes for both the environment and humanity shouldn’t be overlooked. Initiatives such as “AI for Good,” which seeks to use AI to measure and advance the UN’s Sustainable Development Goals, and “AI for the Planet,” an alliance that explores the potential of AI “as a tool in the fight against climate change,” illustrate the optimism around AI as a tool for building a more sustainable future. In fact, a 2022 report produced by “AI for the Planet” claims the technology could be applied in three key areas of the fight against climate change: mitigation, through measuring and reducing emissions; adaptation, through predicting extreme weather and sea-level rise; and finally, research and education.

There is also potential to use AI as a tool for biodiversity conservation. Research carried out by the University of Cambridge identified several applications for AI in conservation science, including: using visual and audio recognition to monitor population sizes and identify new species; monitoring the online wildlife trade; using digital twins to model ecosystems; and predicting and mitigating human–wildlife conflicts. However, the authors also point to the significant risk of eroding support and funding for smaller-scale participatory research in favor of the larger and wealthier institutions able to carry out AI-based research. Additionally, they highlight the risk of creating a colonial system whereby data is extracted from lower-income countries to train models in data centers in North America and Europe, with AI-driven mandates for the use of resources and land then exported back to those lower-income countries.

Such risks indicate the need to consider an important distinction that has been made in the field of AI ethics. Philosophers such as Aimee van Wynsberghe and Henrik Skaug Sætra have argued for the need to move from an “isolationist” to a “structural” analysis of the sustainability of AI technologies. Instead of thinking of AI models as “isolated entities to be optimized by technical professionals,” they must be considered “as a part of a socio-technical system consisting of various structures and economic and political systems.” This means that the sustainability of AI doesn’t come down to a simple cost-benefit analysis of energy and resources used versus those saved through greater efficiency and sustainability applications. In order to fully understand the indirect and systemic effects of AI on environmental sustainability, these philosophers argue, we need to consider AI models in their social and political context.

A structural analysis must begin by pointing out that we live in a system characterized by immense inequalities of both wealth and power. As it stands, most AI models are owned and operated by tech companies whose billionaire CEOs have been described as oligarchs. These companies are the principal beneficiaries of a political system driven by economic growth and fueled through resource extraction. We should expect the AI models they produce to propagate this system, further concentrating power and capital to serve the narrow set of interests represented by these companies and their owners. A purely “isolationist” focus suits these interests as AI’s positive applications can be emphasized, while any negative effects, such as vast levels of resource usage, can be presented as technical problems to be ironed out, rather than systemic issues requiring political reform.

To take some examples already touched upon in this article, an isolationist approach can highlight the efficiency savings made possible by using AI models to streamline industry, while a structural approach will point out the economic reality that efficiency savings tend to be harnessed only to ramp up production, lowering prices, increasing consumption, and thereby raising profits. An isolationist approach can view the dependence of AI on large quantities of rare-earth minerals as a technical problem to be solved through more efficient design, whereas a structural approach will point to the need to address the immense injustices that are intrinsic to the rare-earth supply chain. An isolationist approach will tout the potential for AI models to guide ecological restoration in lower-income countries, while a structural approach will point out how this echoes the colonial history of conservation science.

Once we start to consider AI within its political and socio-economic context rather than as an isolated technological artifact, we can look beyond its direct applications for sustainability so that its many troubling indirect and systemic implications come into sharper focus. It becomes apparent that, rather than promoting sustainability, there is a far greater propensity for AI to enable further resource extraction, evade environmental regulations, and manipulate public debate and opinion on environmental issues.

A striking example of this is the way that AI is being used to undermine public trust in climate science. A report authored by the Stockholm Resilience Centre argues that the ability to generate synthetic text, images, and video at scale could fuel a “perfect storm” of climate misinformation, whereby AI models produce vast amounts of climate denial content that is then disseminated through social media algorithms already geared towards bolstering controversial and polarizing content. Consider this faux-academic paper recently written by Elon Musk’s Grok 3 model, which casts doubt on the science of anthropogenic global warming. The paper was widely circulated on social media as an example of the first “peer-reviewed” research led by AI. Of course, the claims of “peer review” are unfounded. Neither the publisher nor the journal is a member of the Committee on Publication Ethics, and the paper was submitted and published within just twelve days, with no indication of whether it underwent open, single-blind, or double-blind review. It should come as no surprise that one of the co-authors, astrophysicist Willie Soon, is a climate denier known to have received millions in funding from the fossil fuel industry, and whose contested research was cited by the AI-generated paper. Despite such an obvious conflict of interest, a blog post by the COVID-19 conspiracy theorist Robert Malone, claiming that the use of AI meant the paper was free from the biases of what he calls “the debacle of man-made climate change,” gathered more than a million views.

From a “structural” perspective, then, ensuring that AI models are sustainable is not merely a technical issue but a political one of confronting the systems and power structures within which AI technologies are built and utilized. One step in the right direction is to democratize AI governance, such that ultimate control over AI’s direction and implementation is wrested from the hands of Silicon Valley oligarchs and given to democratically elected governments, so that regulation can be imposed to promote AI’s sustainability, both in terms of its physical infrastructure and its applications. However, so long as AI remains enmeshed within the power structures responsible for creating the environmental crisis, it will never truly be a force for advancing sustainability.

Daniel has a PhD in Philosophy from the University of Leeds. His research focuses on the role of values in biodiversity conservation and the implications of value-ladenness for objectivity in conservation science. Daniel’s writing also engages with the importance of procedure and deliberation for establishing more legitimate forms of scientific inquiry, the use and abuse of scientific findings in devising social policy, and the ways in which technology both shapes and is shaped by ideology. He has a strong belief that the truth emerges through the process of debate and deliberation.