By: Mason Mattu, Section Editor
A document published by Elections Canada stated that various social media advertisements designed to lure Canadians into cryptocurrency scams were not directly related to electoral misinformation. Yet this description misses the mark regarding the political implications of these ads. It’s time to regulate social media giants to ensure that voters make their decisions based on accurate information.
These advertisement thumbnails are not uncommon. The ones I have come across often showcase a flashy headline alongside an artificial intelligence (AI) generated image of a political leader caught in a scandalous or illegal act. Viewers who click on these advertisements are directed to a webpage that mimics a credible news source.
These ads are not just financial scams; they defame political figures. Former federal New Democratic Party (NDP) leader Jagmeet Singh, for example, has been falsely labelled by social media advertisements since 2018 as a terrorist in 15 countries and as a mansion owner. Prime Minister Mark Carney has also fallen prey to these advertisements, with his face used in deepfake YouTube ads to promote cryptocurrency scams.
Even if we accept Elections Canada’s conclusion that these advertisements were not created to generate electoral misinformation, we must recognize the role they play in shaping public opinion about political figures. Many voters who see a clickbait AI-generated graphic on a site like YouTube might believe the information without ever clicking the ad or buying into the financial scam. Not everyone has the same digital literacy, especially in the age of AI. I have seen firsthand how these advertisements impact voter sentiment: in the recent federal election, I came across online comments rallying against Singh, disapproving of his policies simply because people believed he was a terrorist or criminal.

In the previous federal election, the Security and Intelligence Threats to Elections Task Force reached out to social media platforms when misinformation campaigns were spotted. Alongside this, Elections Canada made itself available for social media companies to flag concerns about misinformation (accidental) or disinformation (deliberate) campaigns.
We are relying on the good faith of these companies to report voluntarily, yet we know that their commitment isn’t enough.
Clearly, misinformation and disinformation are still running rampant on these sites. The government needs to ditch optional cooperation and get into the game of enforcement by regulating social media giants so that misinformation and potential disinformation aren’t amplified through paid advertising programs.
The framework for proper regulation and enforcement is already in place. Elections Canada, for example, recommends that the Canada Elections Act (CEA) be amended to bar the misrepresentation of individuals in images and voice. This would require social media companies to become more stringent about the advertisements they accept and to adopt filters that help detect AI-generated content. Furthermore, the body recommends that the CEA require tags on advertisements that contain AI-generated elements, which would ultimately help weed out misrepresentation in both paid and unpaid content. Finally, the government needs to create monetary penalties for big tech companies that fail to spot misinformation in paid content that is supposed to be reviewed before approval.
Ultimately, the distinction between different kinds of scams doesn’t matter when looking at how voters perceive them. Allowing social media platforms to profit from Canadian data while feeding misinformation to the masses is a gross error. The government must act to preserve our democratic institutions and the informed, fact-based choices voters make at the ballot box.