Facial recognition technology has gone too far

Regulations should be implemented to avoid safety threats

A collage of different people’s facial features pasted together to form one face. COLLAGE: Gudrun Wai-Gunnarsson / The Peak

By: Hailey Miller, Staff Writer

Facial recognition technology (FRT) is a type of artificial intelligence that identifies people by their unique facial features. It’s being used for everything from unlocking your phone to making purchases and boarding flights. Our province uses the BC Services Card app, which requires you to submit a video of yourself to verify your identity when applying for certain financial assistance and other services. FRT is also used in seemingly harmless photo filters online, but those put our identities at risk, too.

Although FRT has only recently become popular, its development began in the 1960s. By the 2000s and 2010s, the technology had become more prominent and more precise. Advances in artificial intelligence further improved its accuracy and expanded its use for security purposes, by authorities, and for identification and verification more broadly.

With the widespread use of FRT come concerns about safety, privacy, bias, and theft. These include the unauthorized storage and sharing of personal facial data, access to verification information without consent, and, in extreme cases, the tracking and stalking of individuals, especially in military attacks. Another issue with FRT is that it’s often biased along lines of gender and race, as AI models are greatly influenced by human bias in both the data they’re trained on and the outputs they produce. For instance, a 2018 study concluded that FRT systems had higher error rates for women and people of colour.


The problem with the abundance of FRT is that it’s becoming unavoidable. Although it may be convenient in situations like unlocking your phone or verifying your identity in secure environments, this doesn’t change the fact that FRT still puts people at risk of identity theft, privacy violations, and fraud. What happens if your identity ends up in the hands of someone who isn’t authorized to use or share your information? Not only does this jeopardize people’s identities and create security and privacy threats, it also causes undue stress and harm in a world that’s already overreliant on technology.

My phone is over 10 years old, and I’ve never had to use facial recognition to unlock it. When Apple’s Face ID was introduced in 2017, I remember thinking it felt like we were living in a dystopian future; since then, FRT has only become more abundant. This isn’t necessarily a good thing. As useful as technology may be for some things, it can also be disruptive, unreliable, and a threat to our safety and information. In the case of FRT, the concerns are clear: our privacy is at risk, and with ever-evolving technology, who knows what could be next?

To alleviate these issues, regulations need to be implemented to hold FRT to a safer standard. Individuals should have the choice of whether or not to use facial recognition in any given instance, whether on their phones, in media and apps, or with governments, airports, police, and other authorities. This includes control over what happens to our FRT data when it’s shared or stored. There need to be more secure ways in which FRT is used and information is stored, so that it can’t be accessed or shared by unauthorized individuals without consent. So, if and when using FRT, be aware of how you’re using it, where the information is going, and what the consequences may be.
