Who is exploited under the development of AI?

Unregulated AI is harming the most vulnerable

PHOTO: Sanket Mishra / Unsplash

By: Kelly Chia, Editor-in-Chief

Content warning: mentions of sexual exploitation of women and children, revenge pornography, racism

What infuriates me about conversations revolving around artificial intelligence (AI) is how content we are with how much it costs us. The ethical risks associated with AI are treated far too casually. There are plenty of cases where artificial intelligence can expedite research and improve our society overall. Sure. For example, The Peak previously reported on AI projects which help protect wildlife habitats. However, the lack of legislation and regulation around this technology makes it incredibly easy for bad actors to exploit, and there are plenty of bad actors.

Too many AI enthusiasts treat these exploitations as an inevitability, and they're not. They're a consequence of loose regulations. Tech companies brush the costs under their carpets, hoping we'll simply be content with admiring what AI can do. So, before we fret about dismissing this new technology as though it's some Promethean miracle, we need to pay attention to the problems AI is currently causing with no legal barriers.

First, these machines aren't sentient; they do, however, have faces and corporations behind them. These corporations can be held accountable. They develop AI by exploiting millions of underpaid workers, who are often recruited from impoverished populations and paid as little as "$1.46/hour after tax," while AI researchers are paid up to six figures.

These workers are paid menial wages while undertaking tedious tasks, like combing through thousands of pages of data and labelling them. They have no job protections. Facebook content moderators contracted through Sama describe being surveilled and having to make decisions about graphic and disturbing content uploaded to the platform within 50 seconds, or risk being fired. Similar stories exist across big tech companies like Amazon, where data labellers reportedly make less than a dollar an hour. These corporations purposely hire "refugees, incarcerated people, and others with few job options." AI networks can grow at the rate they do because of this unimpeded exploitation.

There are also unaddressed environmental consequences to the rapid development and training of AI. MIT reports that training just one AI model produces the equivalent of "more than 626,000 pounds of carbon dioxide," and that the cloud computing industry storing this environmentally costly data now "has a larger carbon footprint than the entire airline industry." In addition, oil companies like Shell are working with tech companies to use AI to dramatically boost fossil fuel production and profits by extracting gas and oil at a higher rate, including deposits previously considered too dangerous.

In 2018, David Dao, a PhD candidate researching AI, alongside a cohort of contributors, started compiling a list of the ways AI is being used to exploit people. His list is expansive, and it documents companies using AI to surveil, discriminate, and spread disinformation.

Consider the program Lensa. You may have seen it go viral last year for creating fairy-like avatars based on users' submissions of their likenesses. While that seems innocuous, the app has been shown to steal art, and it has a tendency to sexualize and undress women, particularly racialized women. Lensa is trained and built using a large, openly accessible data set that scrapes images from the internet. This allows Lensa to indiscriminately steal art without permission, as even copyrighted images are legal to scrape in the UK and US. It also means that Lensa, and other AI models trained like it, inherit a dataset filled with descriptions and images of sexual assault, racist and ethnic slurs, and more.

Alarmingly, journalist Melissa Heikkilä, who is of Asian heritage, noted the app created far more sexualized avatars for her than her white counterparts. Further, it picked up on her racial features and hypersexualized them, even producing avatars that “appeared to be crying.” Heikkilä’s Chinese colleague also reported finding “reams and reams of pornified avatars.” This means anyone can generate non-consensual nude images of women and children with practically no obstacle. These explicit images can easily be weaponized and held against victims without their knowledge, harming their careers, personal lives, and welfare. 

These networks are both sophisticated and accessible enough that there are already thousands of examples and cases. In 2020, a cybersecurity company investigating manipulated media found that an app had targeted and stripped the clothing of "at least 100,000 women, the majority of whom likely had no idea," and reported that many of these girls were likely underage.

Let me reiterate: although some laws against revenge pornography and defamation could protect the victims, there's no clear legislation punishing anyone for creating deepfake pornography (blending your facial features with another body) without your consent.

Alongside this, there is a privacy risk that facial recognition apps will collect your stored data and sell it to third parties. ExpressVPN remarks that Prisma Labs, the developers of Lensa, don't specify what they do with user data. This means they could "share user data with advertisers, log file information, and register user information to gather data." We should be wary of empowering such exploitative technology, especially when it's marketed as a fun tool. Younger people are especially vulnerable to using, and being exploited by, facial recognition apps.

Dao notes that law enforcement makes frequent use of risk assessment AI technology, like one neural network that learns to infer criminality from facial images. Although predicting criminality from facial features has been proven ineffective and only aggravates racial discrimination, this technology still runs rampant.

While this particular network is a university project, it's not difficult to imagine how risk assessment technology could create a feedback loop where overpoliced people of colour are identified as criminals. In Wisconsin, courts use a risk assessment program called Correctional Offender Management Profiling for Alternative Sanctions to estimate the likelihood of recidivism based on a proprietary algorithm. Notably, Black people were "77% more likely to be pegged as criminals," even with no previous criminal history.

You may notice how I've drawn attention to the hidden costs of AI development that hurt marginalized peoples the most. This is no coincidence. The utopia of AI may seem promising, but it's currently being exploited by corporations, carried on the backs of millions of underpaid workers, to profit off of databases steeped in human biases. It's unconscionable to let these details be incidental, folded under the false flag of human progress.

It might feel hopeless to fight against this, but remember that our enemies aren't amorphous, sentient computers. They are corporations that can and should be held to legal scrutiny, and they are being.

In Europe, the EU Artificial Intelligence Act is being legislated into law after an open letter called for a pause in AI development. This comes after Italy banned ChatGPT for a month, requesting measures like more visible user data controls, opt-out options, a transparent privacy policy, and more, before lifting the ban. In the US, OpenAI CEO Sam Altman cautions that without regulation, the development of these technologies could easily impede on and compromise elections. Tri Ta (R-Westminster), a California State Assembly member, is pushing forward legislation to criminalize the use of AI to make and share pornography "using someone's likeness" without consent. Writers in the US are striking to protect their livelihoods, and one of their demands is regulation of AI to safeguard their careers. Artists also recently filed a class action lawsuit against artificial intelligence companies.

Here in Canada, a man was sentenced in April 2023 for creating child pornography using AI technology, a case which sets a precedent for punishing wrongdoers. We're currently waiting for Bill C-27 to pass. Introduced in June 2022, this bill promises to regulate AI systems by surveying the technology and enacting risk assessments. Although it has been criticised as vague because it doesn't provide specific guidelines on how to govern AI, this is not a bill that can wait until 2025 to be enacted into law. Still, we have the ability as citizens to pressure our senators and members of Parliament to bring the issue of artificial intelligence and Bill C-27 before Parliament.

This is an ongoing fight, and it underscores the ethics AI needs to be grounded in to proceed. While there's so much good AI could accomplish, we must be diligent in preventing the harm it can inflict, too. I also recommend going through lists like Dao's; this technology is unimpeded in the harm it can cause, partly because people don't understand the extent of what it can do. Education empowers more people to stand in solidarity against these corporations.
