
The Rise of Political Deepfakes in Election Campaigns: A New Threat to Democracy?

Deepfake technology, once just a fascinating gimmick for swapping faces in viral videos, has taken a darker turn. What started as a bit of fun online is now a powerful tool that could shake the very foundations of democracy. Imagine a video of a politician saying something shocking, only to find out it was all fake—but by then, the damage is done. As we head into major elections around the world, the potential for deepfakes to mislead voters and sow chaos is a threat we can’t afford to ignore.


The Recent Incident: A Glimpse into the Future

In the 2020 Delhi elections, the Bharatiya Janata Party (BJP) used deepfake technology to make their candidate, Manoj Tiwari, “speak” in multiple languages. The aim was to connect with more voters by breaking down language barriers, which on the surface seems harmless, even clever. But this incident showed just how easily deepfakes could be used in politics—and not always with good intentions.

Looking ahead to the 2024 U.S. presidential election, experts are increasingly concerned. “We’re on the brink of a new era in political misinformation,” says Dr. Hany Farid, a digital forensics expert at UC Berkeley. “The ability to create hyper-realistic deepfakes is advancing faster than our ability to detect them, and that’s a recipe for disaster in a politically charged environment.”

The concern is backed by data: a study by Deeptrace found that the number of deepfake videos online nearly doubled in just nine months, from 7,964 in December 2018 to 14,678 by July 2019, and researchers warned that politically motivated deepfakes were beginning to appear alongside them.


Impact on Public Perception

Deepfakes are like misinformation on steroids. They don’t just trick the mind—they trick the eyes and ears too. And in a world where social media can spread a video to millions within minutes, deepfakes have the power to sway public opinion faster than truth can catch up.

According to a survey conducted by Pew Research Center, 63% of Americans believe deepfake videos will create significant confusion about what is true and what is false during elections. The problem is that once a deepfake goes viral, the damage is done, even if it’s quickly debunked. People may be left questioning what’s real and what’s not, and trust in the political process can take a serious hit. In an age where voters are already skeptical of the media, deepfakes could push that distrust to dangerous levels.


The Technological Arms Race: Creation vs. Detection

The technology behind deepfakes is advancing rapidly, thanks largely to Generative Adversarial Networks (GANs). A GAN pits two AI models against each other: a generator that produces fake images or video frames, and a discriminator that tries to tell them apart from real ones. As the two compete, the fakes become steadily more realistic. The result? Deepfakes so convincing that even experts have a hard time telling them apart from real footage.
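To make the idea concrete, here is a minimal, heavily simplified sketch of that adversarial training loop, written in Python with PyTorch. It trains on random stand-in data rather than real faces, and the tiny networks, layer sizes, and learning rates are illustrative assumptions, not the architecture of any actual deepfake system.

    # A toy GAN: a generator learns to produce fakes that a discriminator
    # can no longer distinguish from "real" samples. Real deepfake models are
    # far larger and train on video of actual people; this only shows the loop.
    import torch
    import torch.nn as nn

    IMG_DIM, NOISE_DIM, BATCH = 28 * 28, 64, 32

    generator = nn.Sequential(
        nn.Linear(NOISE_DIM, 256), nn.ReLU(),
        nn.Linear(256, IMG_DIM), nn.Tanh(),          # emits a fake "image" vector
    )
    discriminator = nn.Sequential(
        nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1),                           # real-vs-fake logit
    )

    loss_fn = nn.BCEWithLogitsLoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    for step in range(200):
        real = torch.rand(BATCH, IMG_DIM) * 2 - 1    # stand-in for real frames
        fake = generator(torch.randn(BATCH, NOISE_DIM))

        # Discriminator step: learn to label real as 1 and fake as 0.
        d_loss = loss_fn(discriminator(real), torch.ones(BATCH, 1)) + \
                 loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator step: try to make the discriminator call its fakes "real".
        g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

Each side improves by exploiting the other’s weaknesses, which is exactly why the fakes keep getting harder to spot.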

But it’s not all bad news. Tech companies and researchers are working hard to stay one step ahead. Microsoft, for example, has developed a tool called Video Authenticator that can analyze videos and flag potential deepfakes. Facebook, in partnership with various organizations, has launched the Deepfake Detection Challenge, which aims to improve the accuracy and availability of detection tools.

Still, it’s a bit of an arms race. “For every advancement in detection, there’s a corresponding leap in deepfake creation,” says John Villasenor, a technology policy expert at UCLA. This constant game of cat and mouse is likely to continue, with the stakes only getting higher as we approach critical elections.
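For a rough sense of the shape these detection systems tend to take, here is a hedged sketch in the same spirit: score each frame of a video with a binary classifier, then average the scores into a per-video verdict. This is not the actual code behind Video Authenticator or any Deepfake Detection Challenge entry; the untrained stand-in model and the 64x64 frame size are purely illustrative.

    # Frame-level deepfake scoring, conceptually: classify frames, aggregate scores.
    import torch
    import torch.nn as nn

    # Stand-in for a trained detector; real tools use large CNNs trained on
    # labelled real and manipulated footage.
    detector = nn.Sequential(
        nn.Flatten(),
        nn.Linear(3 * 64 * 64, 1),
        nn.Sigmoid(),                    # probability a frame is manipulated
    )

    def score_video(frames: torch.Tensor) -> float:
        """frames: (num_frames, 3, 64, 64) RGB frames scaled to [0, 1]."""
        with torch.no_grad():
            per_frame = detector(frames).squeeze(1)  # one score per frame
        return per_frame.mean().item()               # average over the video

    # Toy usage with random frames standing in for a decoded video.
    video = torch.rand(16, 3, 64, 64)
    print(f"Estimated probability of manipulation: {score_video(video):.2f}")

The hard part, of course, is not the aggregation but training a classifier that keeps working as generation techniques improve, which is precisely the arms race described above.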


Legal and Ethical Concerns

When it comes to deepfakes, the law is struggling to keep up. In some places, like the U.S., states are starting to pass laws that specifically target deepfakes used to mislead voters. California, for instance, has a law that bans the distribution of malicious deepfakes in the 60 days leading up to an election.

But the challenge goes beyond just passing laws. There’s a fine line between protecting free speech and preventing harm. Deepfakes could be used as a form of political satire or commentary, which is protected under free speech laws. But what happens when a deepfake causes real damage—like ruining someone’s reputation or influencing an election? The ethical questions are tricky, and so far, there are more questions than answers.

“Legislation is crucial, but it’s only part of the solution,” argues Danielle Citron, a law professor and deepfake expert at the University of Virginia. “We need a combination of public awareness, technological solutions, and international cooperation to truly address the threat posed by deepfakes.”

In Europe, the General Data Protection Regulation (GDPR) offers some level of protection, especially against the unauthorized use of personal data in deepfakes. However, even there, experts agree that current legal frameworks are insufficient to tackle the unique challenges deepfakes present.


A Global Perspective: Lessons from Abroad

The threat of deepfakes isn’t limited to any one country. In Brazil, during the 2018 presidential election, manipulated videos were widely used to spread false information. While these weren’t deepfakes in the strictest sense, they were a warning sign of what could happen when deepfake technology is added to the mix.

In the Philippines, where political rivalries run deep, manipulated videos have been used to attack opponents and confuse voters. As deepfake technology becomes cheaper and easier to use, there’s a real fear that it could destabilize elections in countries where democracy is already fragile.

“These are not just isolated incidents,” says Sam Gregory, program director at Witness, a human rights organization that focuses on video and tech. “What we’re seeing is the early stages of a global phenomenon, where the misuse of deepfake technology could seriously undermine public trust in democratic institutions.”

These examples show that the threat of deepfakes is global, and so too must be our response. Democracies around the world need to learn from these cases and start preparing now, before it’s too late.


Conclusion: A Call to Action

Deepfakes are more than just a tech trend—they’re a clear and present danger to democracy. As we move forward, it’s crucial that governments, tech companies, and everyday people work together to address this threat.

We need better detection tools, yes, but we also need to educate the public. According to a study by the AI Foundation, 70% of people believe they could spot a deepfake, but in reality, the technology is advancing faster than most people realize. This highlights the need for increased media literacy and public awareness campaigns to ensure voters are better prepared to question what they see online.

We also need stronger laws that can keep up with the pace of technology without stifling free speech. This will require international cooperation because the challenges posed by deepfakes are too big for any one country to tackle alone.

If we act now, we can protect our democracies from the worst effects of this technology. But if we ignore the threat, we risk allowing deepfakes to undermine trust in our political systems and, ultimately, in each other.


Further Reading:

  1. Pew Research Center: Americans’ Views on Deepfakes and their Impact on Society
  2. Deeptrace: The Growing Threat of Deepfakes
  3. Microsoft’s Video Authenticator: Analyzing the Fight Against Deepfakes
  4. Witness.org: Understanding Deepfakes in a Global Context
