
Clearview AI and 48 Canadian law enforcement agencies violate Canadian privacy laws

Report finds that agencies illegally used Clearview AI’s facial recognition technology

PHOTO: Scott Web / Unsplash

Written by: Harvin Bhathal, Features Editor

Clearview AI, a US technology company, recently came under scrutiny for its violation of Canadian privacy laws. A joint investigation by four Canadian privacy commissioners (BC, Alberta, Quebec, and federal) discovered the company had indiscriminately compiled a database of over three billion images, including those of Canadians. 

Illegally obtained photos, as well as biometric facial arrays, were collected without consent and then disclosed to law enforcement agencies, which could compare and cross-match them against a suspect’s face. 

In an interview with The Peak, Sun-ha Hong, assistant professor at SFU and expert in data and surveillance, shared his thoughts. 

“I wasn’t surprised, but I despaired,” he said. “There is a long track record with surveillance technologies where we break things fast and then worry about the consequences later. Clearview AI is so well known for its extraordinary privacy risks, as well as the company’s numerous questionable ties to white nationalists. The default should have been to approach this with extreme caution.”

Founded in 2017, the company operated in near secrecy until an exposé from the New York Times in January 2020 first made its practices public. The current investigation examined those practices in a Canadian context.

The findings of the privacy commissioners’ report show that at least 48 law enforcement agencies across the country used the facial recognition technology, in violation of provincial and federal privacy laws. Agencies such as the RCMP, VPD, and many more admitted to their use of the technology after hackers obtained the company’s client list and leaked it to BuzzFeed.

“The fact that 48 Canadian agencies have already used Clearview AI shows us how quickly [technology is] used in real life situations — certainly much faster than any rigorous, independent test of whether these tools actually work, and whether they cause more harm than good,” Hong said.  

Clearview AI avoided the existing frameworks of privacy law in Canada by claiming this technology was “being trialed or not fully implemented,” though its use was continuous in active operations. 

Doug Mitchell, lawyer for Clearview AI, said the company “only collects public information from the Internet which is explicitly permitted under PIPEDA [Personal Information Protection and Electronic Documents Act].”

While Clearview AI is no longer permitting the use of its technology in Canada, it refuses to delete existing photos of Canadians from its database unless individuals apply through a lengthy process. 

Hong said, “I think it’s also a reminder to us that surveillance is now a truly global market, and we Canadians aren’t immune [to] all this.”

The federal government is examining its privacy laws, including Bill C-11, a proposed piece of data privacy legislation. However, Clearview AI’s violations would not trigger the financial penalties outlined in C-11. 

Hong added, “The key here is that these technologies are currently incredibly under-regulated. 

“The industry has spent millions on the messaging that any and all regulation will destroy our technologies, but of course they would.”

Hong said the impacts of surveillance technologies are reaching across many facets of society. From being “used by police, or used in courts to recommend jail or parole, or to select candidates in job interviews,” surveillance technologies “are starting to have life changing impact[s] on ordinary people.”  

He added, “But it’s often very hard to actually see the algorithms and the data these companies use,” noting it is unknown whether Clearview AI properly undergoes independent audits. 

“There are many good things about C-11 and Canada has an opportunity to lead other nations on properly regulating harmful technologies. The key bottleneck is enforcement and accountability. It is about establishing clear precedents that when you build and sell unproven, harmful surveillance tech, there are strong consequences.”
