By: Ashima Shukla, Peak Associate
Content warning: brief mentions of death, suicide, torture, and sexual assault.
Recently, Billie Eilish shared a post on Instagram about artificial intelligence that has haunted me ever since: “AI consumed more water this year than the global bottled water industry.” The post was based on a study that drew on average metrics from datacentres worldwide. Although these Big-Tech-run datacentres provide only vague estimates, we know the rate at which AI is depleting our environmental resources is massive, and growing. The post shook me because AI is so often normalized in the public consciousness. We are encouraged to experience it as wonder. A magic trick, a new game.
I hear of people using generative AI to write books, design lesson plans, build training modules for workplaces, plan meals, and set budgets. As a teaching assistant, I hear of my students using it to summarize lectures and explain concepts quickly. Each click results in an ontological rupture. We’re rapidly losing our ability to differentiate between real and AI-generated visuals, and with it, our grasp on reality.
It isn’t just being used by young people. Visiting home for the holidays, I noticed my grandmother watching hours of YouTube videos about Hinduism, where every image and animation is generated by AI. Blue-skinned Krishna and a surprisingly blue-eyed Lakshmi stared back at me from the screen. Gods are rendered by machines. Mythologies and cultures are filtered through code. Devotion is now a humming server somewhere far away.
But for all its pervasiveness, do we actually know how it works? And more importantly, what it costs?
When you type a query into ChatGPT, the system doesn’t “think.” Each word is broken down into tokens, and each token is mapped in relation to others learned from vast datasets. This process happens somewhere physical: inside datacentres. These are industrial-scale buildings that house servers, storage devices, and networking equipment. Hyperscale facilities, like those owned by Amazon and Google, can span over 30,000 square feet. This isn’t happening in some faraway nation. Bell Canada is building six datacentres in BC. Northern Virginia, the most densely concentrated datacentre hub in the world, hosts more than 250 of them.
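For readers curious what “mapped in relation to others” means in practice, here is a minimal, purely illustrative sketch in Python. Every token and vector below is invented for demonstration; real models learn embeddings with thousands of dimensions from web-scale data.

```python
# Toy illustration (not any real model) of the two steps described above:
# text is split into tokens, and each token is looked up as a vector whose
# position relative to other vectors encodes statistical association.
import math

# Hypothetical, hand-picked 3-dimensional "embeddings" for a few tokens.
EMBEDDINGS = {
    "river":  (0.9, 0.1, 0.0),
    "water":  (0.8, 0.2, 0.1),
    "server": (0.0, 0.9, 0.3),
    "energy": (0.1, 0.8, 0.4),
}

def tokenize(text: str) -> list[str]:
    """Crude whitespace tokenizer; real systems use learned subword vocabularies."""
    return text.lower().split()

def cosine(a, b) -> float:
    """Similarity between two vectors: closer to 1.0 means more associated."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

for token in tokenize("river server"):
    vec = EMBEDDINGS[token]
    # Rank the other known tokens by how close they sit to this one.
    neighbours = sorted(
        (t for t in EMBEDDINGS if t != token),
        key=lambda t: cosine(vec, EMBEDDINGS[t]),
        reverse=True,
    )
    print(token, "->", neighbours)  # "river" lands nearest "water", and so on
```

The point is not the toy arithmetic but the materiality: at scale, every one of those lookups and multiplications runs on physical servers that draw power and shed heat.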
These datacentres require staggering amounts of electricity to work. Each ChatGPT query uses ten times more energy than a Google search, not to mention the enormous energy used to train these AI models in the first place. Most power grids still rely on fossil fuels, whose emissions degrade air quality and release toxic pollutants into the air, harming the health of nearby communities. That isn’t all: as datacentres disrupt local electric grids, one report estimates that residential electricity bills could more than double in Virginia by 2039. Innovation for some, once again, makes basic amenities unaffordable for others.
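To get a rough sense of what that multiplier means at scale, here is a hedged back-of-envelope sketch. Only the tenfold ratio comes from the article; the 0.3 watt-hour baseline for a Google search is a commonly cited estimate, and the query volume is purely hypothetical.

```python
# Back-of-envelope energy comparison. Only the 10x ratio is from the
# article; the 0.3 Wh Google-search baseline is a commonly cited
# estimate and should be treated as an assumption.

GOOGLE_SEARCH_WH = 0.3    # assumed baseline, watt-hours per search
CHATGPT_MULTIPLIER = 10   # ratio cited in the article
CHATGPT_QUERY_WH = GOOGLE_SEARCH_WH * CHATGPT_MULTIPLIER

def daily_kwh(queries: int, wh_per_query: float) -> float:
    """Total energy, in kilowatt-hours, for a given number of queries."""
    return queries * wh_per_query / 1000

# Hypothetical volume: 100 million queries in a day.
QUERIES = 100_000_000
print(f"Google-style searches: {daily_kwh(QUERIES, GOOGLE_SEARCH_WH):,.0f} kWh")
print(f"ChatGPT-style queries: {daily_kwh(QUERIES, CHATGPT_QUERY_WH):,.0f} kWh")
# ~30,000 kWh vs ~300,000 kWh: the AI case is roughly the daily
# electricity use of about 10,000 average homes.
```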
Then there is water. Powering AI models generates immense heat, and the servers are cooled using freshwater.
A 100-word email generated by ChatGPT comes at the expense of 16.9 ounces of water, roughly a standard 500 mL bottle, in a world where nearly a quarter of humanity lacks access to clean water.
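To make that figure concrete, here is a quick, hedged conversion. The 16.9-ounce figure is the one cited above; the one-million-emails scenario is purely hypothetical.

```python
# Back-of-envelope scale check using the figure cited above
# (16.9 fl oz of water per 100-word generated email). The email
# count below is a made-up input, not a measured statistic.

FL_OZ_PER_EMAIL = 16.9
ML_PER_FL_OZ = 29.5735  # US fluid ounce to millilitres
LITRES_PER_EMAIL = FL_OZ_PER_EMAIL * ML_PER_FL_OZ / 1000

def water_litres(emails: int) -> float:
    """Freshwater footprint, in litres, of `emails` 100-word generated emails."""
    return emails * LITRES_PER_EMAIL

print(f"{LITRES_PER_EMAIL:.2f} L per email")            # ~0.50 L
print(f"{water_litres(1_000_000):,.0f} L per million")  # ~500,000 L
# One million such emails would use about a fifth of an Olympic
# swimming pool's worth of freshwater.
```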
Meanwhile, the rare earth elements required to build AI hardware are mined through processes that are environmentally destructive and often exploitative.
The environmental footprint of GenAI is immense, yet the information about it remains strategically opaque. This is because Big Tech benefits from selling AI as efficient, creative, and inevitable, while obscuring the material and ecological violence that sustains it. We haven’t even seen the full story. Behind the clean, minimalist interfaces and cheerful, subservient AI chatbots lies another hidden cost: that of human labour.
Picture Mercy, a content moderator in Nairobi, working for Meta through an outsourced firm where her role is to process one piece of flagged content every 55 seconds over a 10-hour shift. Her life revolves around mundane, repetitive labour that helps train AI and pays just over a dollar an hour. Because of a lack of other opportunities, she persists. One day, while reviewing a video of a fatal car crash uploaded to Facebook, she recognizes the scenes flashing before her eyes. Her neighbourhood, and the victim? Her grandfather. The same footage floods her screen from different angles, reposted endlessly.
Think about Oskarina Fuentes, who joins one such outsourcing firm while finishing her master’s in engineering in Venezuela. Her country’s economic collapse forces her and her husband to cross over to Colombia. Hopes for good jobs disintegrate rapidly, as does her health, and soon, the task of labelling data is all that keeps her afloat. The erratic nature of this task-based work controls her life: she stops leaving the house during weekdays and begins sleeping with her computer nearby, its volume turned all the way up.
Then there is Sunita, sitting on the floor of her home in Jharkhand, India, drawing digital boxes around traffic lights and cars she has never seen in real life. Her clicks train Tesla AI systems to recognize objects in cities she will never visit. Do you notice a pattern?
Across the Global South, these stories converge. Moderators and data labellers perform mundane tasks for pennies, while others witness graphic violence, suicides, torture, and sexual assault for hours every day to make our feeds and our experiences with GenAI cleaner and more appropriate. Many have no idea how their work will be used, and no other options to put food on the table: some end up labelling data for military software used to target killings in Gaza, or helping Russian surveillance companies train facial recognition software.
The AI economy mirrors colonial hierarchies, as data and labour flow from the Global South to fuel “innovation” and profit in the Global North. The work is framed as an “opportunity” for these workers, yet it remains underpaid, precarious, and largely unregulated. In the absence of unions and protections, these workers become trapped in exploitative conditions. Anthropologist Mary Gray has called it “ghost work”: essential, yet systematically rendered invisible. If you knew that data labellers are dying by suicide because of the harrowing content they label to make our experiences with GenAI safe, would you reach for it as easily as you do now?
If AI feels inevitable, it is because we are trained to encounter it only at the level of convenience. Because magic works best when no one asks what is happening behind the curtain.
This is why education about AI cannot be reduced to technical literacy. We must confront its environmental costs, its labour supply chains, and its geopolitical consequences. We must ask where the data comes from, whose knowledge is erased, and whose labour, water, land, and time are hidden behind this “innovation.” Regulation should be a fundamental part of this reckoning. But without public understanding, regulation remains brittle, easily delayed and lobbied away.
Education, in contrast, builds the conditions for sustained resistance. It gives people language. It gives us context. It gives us the ability to choose otherwise. And the discomfort that knowledge brings might just be what is needed for imagining ethical technological futures.