AI ethics is not only about bias, safety, and privacy. It’s also about watts, water, wires and waste. If we ignore the physical footprint of “intelligent” systems, we risk building a smarter digital world on an unsustainable foundation.
Over the last two years, generative and agentic AI have leapt from labs into daily life. That surge has a material cost: electricity to train and serve models, water to cool data centers, specialized chips to run them, and eventually electronic waste when hardware turns over. MIT researchers recently summed it up bluntly: we are improving AI faster than we are measuring the trade-offs, and our governance is struggling to catch up.
At the same time, major tech firms have disclosed that AI is complicating their climate pledges. Google’s emissions rose 13% in 2023 and are roughly 48% higher than in 2019, largely because AI drove more data-center energy use, exactly the opposite direction from its 2030 net-zero ambition.
The ethical question is not only “what did the model predict?” It’s also “what did the model consume to predict it?”
Training frontier models requires vast compute clusters running for weeks. While exact numbers vary by setup, studies and disclosures converge on the same story: large models consume large amounts of energy and produce non-trivial emissions, especially when grids are fossil-intensive. MIT’s explainer highlights rising electricity demand from both training and deployment, with significant uncertainty because measurement is still maturing (MIT News).
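Teams don’t need to wait for industry-wide standards to start instrumenting their own runs. One open-source option is the CodeCarbon Python package, which estimates energy use and emissions from hardware counters and regional grid data. A minimal sketch, assuming a placeholder training function and project name:

```python
# Minimal sketch: tracking training emissions with the open-source
# CodeCarbon package (pip install codecarbon).
# "train_model" and the project name are placeholders.
from codecarbon import EmissionsTracker

def train_model():
    ...  # your actual training loop goes here

tracker = EmissionsTracker(project_name="frontier-finetune")  # hypothetical name
tracker.start()
try:
    train_model()
finally:
    emissions_kg = tracker.stop()  # returns estimated kg of CO2-equivalent

print(f"Estimated training emissions: {emissions_kg:.2f} kg CO2eq")
```

Even rough per-run numbers like these make trade-offs visible at design time, long before any regulator asks for them.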
Once a model is public, the real footprint begins: billions of queries mean billions of inference runs, 24/7, on fleets of accelerators. Even modest per-query energy can scale to enormous totals at global usage. Again, MIT notes that the operational phase (serving users) is a major share of generative AI’s overall impact, and one that organizations often underestimate.
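To make the scale argument concrete, here’s a back-of-envelope calculation. Every input below is an illustrative assumption, not a measured value; the point is how the arithmetic compounds:

```python
# Back-of-envelope: how small per-query energy compounds at global scale.
# ALL inputs are illustrative assumptions, not measured values.
WH_PER_QUERY = 0.3                # assumed energy per inference, watt-hours
QUERIES_PER_DAY = 1_000_000_000   # assumed global daily query volume
GRID_KG_CO2_PER_KWH = 0.4         # assumed average grid carbon intensity

kwh_per_year = WH_PER_QUERY * QUERIES_PER_DAY * 365 / 1000
tonnes_co2_per_year = kwh_per_year * GRID_KG_CO2_PER_KWH / 1000

print(f"Energy:    {kwh_per_year / 1e6:,.0f} GWh per year")
print(f"Emissions: {tonnes_co2_per_year:,.0f} tonnes CO2eq per year")
```

Under these invented inputs, a fraction of a watt-hour per query still lands around 110 GWh and tens of thousands of tonnes of CO2eq per year, which is why the serving phase dominates.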
Data-center electricity use is climbing with AI. Reports and analyses around Google’s 2024 environmental report link the 13% year-over-year emissions rise and the 48% five-year increase to AI-driven compute expansion. This aligns with wider concerns that data-center energy demand could double by mid-decade (Data Center Dynamics).
Cooling high-density AI clusters takes water: directly, at sites that use evaporative cooling, and indirectly, through the water consumed in power generation.
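A rough way to reason about both pathways is water usage effectiveness (WUE, liters per kWh of IT energy) for on-site cooling, plus a water-intensity factor for the electricity itself. A sketch with invented inputs:

```python
# Illustrative water-footprint estimate for an AI cluster.
# ALL inputs are assumptions chosen only to show the two pathways.
IT_ENERGY_KWH = 5_000_000    # assumed monthly IT energy of one cluster
WUE_L_PER_KWH = 1.8          # assumed on-site water usage effectiveness
GRID_WATER_L_PER_KWH = 3.0   # assumed water intensity of generation

direct_liters = IT_ENERGY_KWH * WUE_L_PER_KWH           # evaporative cooling
indirect_liters = IT_ENERGY_KWH * GRID_WATER_L_PER_KWH  # power generation

print(f"Direct (cooling):  {direct_liters / 1e6:.1f} million liters/month")
print(f"Indirect (power):  {indirect_liters / 1e6:.1f} million liters/month")
```

Notice that the indirect share can exceed the on-site share, which is why siting and grid mix matter as much as cooling design.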
One widely cited case: training GPT-4 on Microsoft’s Iowa supercomputing cluster coincided with a 34% jump in Microsoft’s global water consumption (2021→2022), and reporting described millions of gallons used for cooling during peak summer training. Local stories and the AP’s coverage made the “hidden water cost” legible to the public (Iowa Public Radio).
For ethics teams, that reframes “responsible AI.” A system that treats users fairly but draws substantial water from stressed watersheds raises a different kind of harm, one felt by surrounding communities and ecosystems. MIT’s two-part series explicitly calls out water as a key impact vector that needs better measurement.
AI’s footprint begins before the first line of code runs. Manufacturing advanced GPUs and accelerators is energy- and water-intensive and depends on minerals whose extraction can be environmentally damaging. Then, because AI evolves rapidly, expensive hardware turns over quickly, feeding a rising e-waste stream.
Analyses from IEEE Spectrum and others warn that generative AI’s pace could add millions of tons of additional e-waste annually by the end of the decade if current refresh cycles persist. The waste includes not just chips but memory, boards, power systems, and batteries, components that often contain hazardous substances (IEEE Spectrum).
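The mechanics behind such projections are simple: fleet size, times hardware mass, divided by refresh interval. A toy version, with every number an assumption, and deliberately conservative, since published estimates count whole servers, racks, and power infrastructure:

```python
# Toy model: how refresh cycles turn fleet size into annual e-waste.
# ALL inputs are illustrative assumptions, and deliberately conservative:
# published projections count whole servers, racks, and power systems.
ACCELERATORS_DEPLOYED = 5_000_000  # assumed global AI accelerator fleet
KG_PER_ACCELERATOR = 15.0          # assumed mass per unit (chip, board,
                                   # memory, share of chassis and cooling)
REFRESH_YEARS = 3                  # assumed replacement cycle

tonnes_per_year = ACCELERATORS_DEPLOYED * KG_PER_ACCELERATOR / REFRESH_YEARS / 1000
print(f"~{tonnes_per_year:,.0f} tonnes of retired AI hardware per year")
```

Shorten the refresh cycle or grow the fleet and the tonnage scales linearly, which is exactly the dynamic the projections warn about.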
This is the part of “AI ethics” that almost never makes the slide deck, but it should.
Ethics teams have matured on topics like fairness, explainability, and human oversight. The environmental dimension adds three more pillars to your governance stack:
OECD’s recent work on the “AI footprint” urges governments and companies to standardize measurement, improve transparency, and look beyond operational electricity to lifecycle impacts, from manufacturing through disposal. That’s the blueprint to turn good intentions into comparable numbers and, eventually, accountability (OECD).
The EU AI Act is the first comprehensive AI law. Its core focus is risk to people, but the final text and subsequent guidance are beginning to pull in sustainability, especially for foundation models and general-purpose AI, where transparency expectations around resource use are emerging. Observers still call the Act a missed opportunity on environment, but the door is open via codes of conduct and delegated acts to strengthen energy and transparency provisions (Clifford Chance).
UNESCO’s 2021 Recommendation on the Ethics of AI, adopted by 193 member states, explicitly elevates environmental and ecosystem well-being as a core value alongside human rights. While non-binding, it gives countries a common language to integrate sustainability into national AI strategies and procurement (UNESCO).
The OECD’s 2025 work program on the AI footprint pushes for standardized metrics, broader data collection, and AI-specific impact tracking across energy, water, and materials, so policies can target AI as AI, not just as generic “ICT” (OECD AI).
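What could standardized, lifecycle-aware disclosure look like in code? One hypothetical starting point is a structured record per system and lifecycle phase; the fields below are illustrative and emphatically not an official OECD schema:

```python
# Hypothetical disclosure record for AI-footprint reporting.
# Field names are illustrative only; this is NOT an official OECD schema.
from dataclasses import dataclass

@dataclass
class AIFootprintReport:
    system_name: str
    lifecycle_phase: str     # "manufacturing" | "training" | "inference" | "disposal"
    energy_kwh: float        # metered or modeled electricity use
    water_liters: float      # direct plus indirect water consumption
    co2e_tonnes: float       # location-based emissions
    hardware_mass_kg: float  # embodied hardware attributed to this phase
    method: str              # e.g. "metered", "modeled", "vendor-reported"

example = AIFootprintReport(
    system_name="example-model-v1", lifecycle_phase="training",
    energy_kwh=1.2e6, water_liters=3.4e6, co2e_tonnes=480.0,
    hardware_mass_kg=2500.0, method="modeled",
)
print(example)
```

The exact fields matter less than the discipline: one record per phase, with the measurement method stated, so numbers from different organizations can actually be compared.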
Bottom line: policy is moving, but measurement and disclosure are prerequisites. Without them, legislating effectively is guesswork.
As scrutiny grows, advocates are flagging PFAS (“forever chemicals”) in cooling systems and electronics, and F-gases used in HVAC and chipmaking. These persistent substances pose health and environmental risks if leaked or poorly handled, adding another layer to the AI-infrastructure footprint. Expect transparency and phase-down debates to accelerate with the AI data-center boom (The Guardian).
You don’t need to run a hyperscaler to act. Here’s a pragmatic checklist you can adopt (and signal publicly):
If your responsible-AI program ends at model cards and bias audits, it’s incomplete. The environmental dimension is now table stakes:
AI can help solve climate problems, from grid optimization to materials discovery. But the means should match the ends. When we make how AI lives on the planet as important as what AI does for people, we move from “responsible AI” in theory to responsible AI infrastructure in practice.
What would you add to this playbook? If your org is measuring (or struggling to measure) AI’s footprint, I’d love to hear what’s worked and what hasn’t.