Mo Dhaliwal, CEO and Founder of Skyrocket Digital Inc., shared his expertise on the evolving technology of deepfakes and its implications in a recent interview with CanadianSME Small Business Magazine. Mo highlighted the significant advancements in deepfake technology, which now allow for high-resolution, real-time face swapping and voice synthesis that are almost indistinguishable from real videos. He discussed the dangerous potential of deepfakes to manipulate public opinion and undermine democratic processes, citing examples from the United States and Slovakia. Mo emphasized the importance of AI-powered detection tools and the need for a combination of technological solutions, regulatory measures, and public education to mitigate the risks associated with deepfakes, particularly in the context of elections.
Mo Dhaliwal, an entrepreneur and business strategist, has spent over two decades shaping movements that promote inclusivity and break cultural barriers. His tech career began in Silicon Valley as a software developer leveraging open source technologies for major brands.
After honing his skills in brand development and creative strategy for major Canadian brands, Mo founded Skyrocket, an agile creative agency, in 2011, where he serves as the Director of Strategy. Day-to-day, Mo continues to develop high-level business solutions for a global roster of clients and oversees the development team working to create cutting-edge apps and digital experiences.
Under Mo’s guidance, Skyrocket has successfully delivered over 70 brands, 100 web properties and dozens of technologically complex projects, including a sport recruiting web app for Scout Zoo, an online reading environment for Simbi, a cryptocurrency exchange brand strategy for IDEX and a digital home base for the medical VR startup Precision OS.
Most of all, Mo is passionate about making a real-world impact on fast-growing startups and established organizations alike.
How has the technology behind deepfakes evolved over the years, and what are the most significant advancements you’ve observed?
Deepfakes, a blend of "deep learning" from artificial intelligence and "fake," utilize AI to produce exceptionally realistic videos, audio, and images that mimic and distort real individuals. Deepfakes have followed the evolution of AI we've seen in general. Initially, deepfakes were lower resolution, difficult to generate, and easy to spot due to the idiosyncrasies of their models — regularly messing up human details such as eyes, lips or the number of fingers on each hand. Today, we are able to generate high-resolution deepfakes with real-time face swapping and voice synthesis that are mostly indiscernible from authentic videos of the same people.

Could you provide specific examples of how deepfakes have been used to target public figures or influence public opinion in recent times?
Recent events have demonstrated how deepfakes can target public figures and influence public opinion, posing serious threats to the integrity of democratic processes. For instance, in the United States, an AI-generated robocall impersonating President Biden discouraged voters from participating in the Democratic Primary in New Hampshire, directly attempting to manipulate election outcomes. Another stark example occurred in Slovakia, where a viral deepfake audio clip featured Michal Šimečka of Progressive Slovakia boasting about rigging the 2023 election, leading to significant political fallout and his eventual defeat.
In what ways do you foresee deepfakes impacting the upcoming provincial and federal elections in Canada, and how do they threaten the credibility of democratic institutions and processes?
The integrity of democratic institutions hinges on the trust and confidence of the population. Deepfakes erode this trust, creating a climate where genuine footage and audio can be dismissed as fabricated, a phenomenon known as the “liar’s dividend”. This skepticism extends beyond political figures to the institutions themselves, including the media, which traditionally plays a critical role in informing voters. As faith in these bodies diminishes, so does civic engagement and voter turnout, weakening the very foundation of the democratic process.

Deepfakes are particularly dangerous in a politically polarized environment. By exploiting existing divisions, they can deepen social and political rifts, leading to civil unrest. The tailored nature of these manipulations means they can be used to target specific groups, amplifying fears and prejudices to manipulate electoral outcomes. This capacity to feed and exploit divisions disrupts immediate electoral outcomes and has long-term implications for social cohesion.
What are some of the most effective tools currently available for detecting deepfakes, and how do they work?
AI-powered detection tools, which include AI algorithms, facial landmark analysis, temporal consistency checks, and multimodal detection, are key in identifying manipulated media. These tools analyze videos and images to uncover patterns that are present in generative media. Major firms such as Microsoft, Amazon, and Google, through initiatives like the AI Elections Accord, are at the forefront of developing these technologies. They focus on real-time detection and sharing information on emerging threats to combat deceptive AI use effectively. Unfortunately, these tools face limitations, such as difficulty in detecting new, unseen samples and vulnerability to adversarial attacks, which pose challenges to their effectiveness and widespread implementation.
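To give a flavour of how one of these approaches works, the sketch below illustrates the idea behind a temporal consistency check in a deliberately simplified form: compare consecutive video frames and flag transitions with abrupt, frame-wide pixel changes, which can hint at frame-level tampering such as a face swap. This is a hypothetical toy heuristic for illustration only; production detectors use trained neural networks over facial landmarks, audio, and many other signals. The function name and threshold are assumptions, not part of any real tool.

```python
import numpy as np

def temporal_consistency_score(frames, threshold=30.0):
    """Score a sequence of grayscale frames (H x W uint8 arrays).

    Returns the fraction of frame-to-frame transitions whose mean
    absolute pixel change exceeds `threshold`. Higher scores mean
    more abrupt jumps, which a detector might treat as suspicious.
    """
    jumps = 0
    for prev, cur in zip(frames, frames[1:]):
        diff = np.abs(cur.astype(np.int16) - prev.astype(np.int16))
        if diff.mean() > threshold:
            jumps += 1
    return jumps / max(len(frames) - 1, 1)

# Synthetic "video": smooth brightness drift across 10 frames.
smooth = [np.full((64, 64), v, dtype=np.uint8) for v in range(100, 110)]

# Same video with one abruptly altered frame, as tampering might leave.
glitched = list(smooth)
glitched[5] = np.full((64, 64), 200, dtype=np.uint8)

print(temporal_consistency_score(smooth))    # 0.0 -- no abrupt jumps
print(temporal_consistency_score(glitched))  # > 0 -- two abrupt transitions
```

Real-world footage has camera motion, cuts, and compression noise, which is exactly why simple heuristics like this produce false positives and why the interview notes that current tools remain vulnerable to unseen samples and adversarial attacks.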
What strategies and regulatory measures would you recommend for organizations and governments to mitigate the risks associated with deepfakes, especially in the context of elections?
To mitigate the risks associated with deepfakes, a combination of technological solutions, regulatory frameworks, and public education is essential. Governments need to enforce strict regulations on the creation and dissemination of AI-generated content, requiring disclosure when images or videos are altered. In addition, developing public awareness campaigns about the nature and dangers of deepfakes can help the electorate become more critical of the media they consume. Perhaps, in partnership with the tech giants, the government can work on the establishment of encryption-based certification systems to authenticate media — without that, millions are left to their own devices in deciding whether or not a message they have received is legitimate.
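The certification idea mentioned above can be sketched in miniature: a publisher signs a digest of the media bytes, and anyone can later verify that the file has not been altered since signing. The sketch below uses an HMAC from Python's standard library as a simplified stand-in; real provenance schemes (for example, C2PA content credentials) use public-key signatures so that verification does not require a shared secret. The key and function names here are illustrative assumptions, not any real system's API.

```python
import hashlib
import hmac

# Assumption: a shared publisher key, standing in for a real
# public/private key pair used by actual certification systems.
PUBLISHER_KEY = b"hypothetical-publisher-key"

def sign_media(media_bytes: bytes) -> str:
    """Produce an authentication tag over the raw media bytes."""
    return hmac.new(PUBLISHER_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, signature: str) -> bool:
    """Check the tag in constant time; any altered byte fails."""
    return hmac.compare_digest(sign_media(media_bytes), signature)

original = b"...raw video bytes..."
tag = sign_media(original)

print(verify_media(original, tag))          # True: untouched media verifies
print(verify_media(original + b"x", tag))   # False: any alteration is caught
```

The point of such a system is to shift the burden of proof: instead of every viewer guessing whether a clip is genuine, authentic media carries verifiable credentials, and anything without them is treated with appropriate skepticism.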

