Oli and Oskar here, diving into a story that feels like it has leapt straight out of science fiction: the rapid rise of deepfake election ads. They are slick, convincing, and increasingly hard to spot – and they are already changing how campaigns are fought and how voters see the world.

What are deepfake election ads, really?
Deepfakes use artificial intelligence to copy a person’s face, voice, or mannerisms and then generate new video or audio that looks and sounds real. When you plug that into political messaging, you get deepfake election ads: clips that appear to show a candidate saying or doing things they never actually did.
Some are obvious satire, but the worrying ones are subtle: a slightly altered speech, a fabricated phone call, a short clip timed to drop just before a big vote. In an age where most of us scroll quickly and rarely double-check sources, a convincing 20-second video can do serious damage.
Why deepfake election ads are so dangerous
The real threat is not just that people might believe one fake clip. It is that repeated exposure to manipulated content undermines trust in everything. If any video could be fake, then every video becomes questionable. That is a gift to anyone who wants to dismiss genuine evidence as fabricated.
There are three big dangers that keep experts awake at night. First, targeted disinformation: tailored videos designed to inflame specific groups of voters. Second, last-minute smears: a fake scandal dropped hours before polls open, leaving no time for fact-checking. Third, plausible deniability: real recordings can be brushed off as deepfakes, giving politicians a handy escape hatch.
How easy is it to make a deepfake now?
Only a few years ago, you needed serious computing power and technical skills to generate a passable fake. Now, user-friendly tools can create eerily convincing results from a handful of photos and a few minutes of audio. Quality still varies, but the trend is clear: the barrier to entry is falling fast.
Campaigns do not even need to produce the videos themselves. Supporters, trolls, or foreign actors can do the dirty work, while official teams keep their hands technically clean. That creates a murky ecosystem where responsibility is hard to pin down and accountability is harder still.
Can we spot and stop deepfake election ads?
Governments and tech platforms are scrambling to respond, but they are playing catch-up. Some countries are pushing rules that require political ads to disclose when AI has been used. Others are considering outright bans on synthetic media in campaign material. The challenge is writing laws that are tough on deception without crushing satire, art, or legitimate commentary.
On the tech side, researchers are developing tools to detect manipulated content by looking for tiny inconsistencies in lighting, reflections, or audio patterns. Platforms are experimenting with labels and automated filters. But detection is an arms race: as the tools improve, so do the fakes.
What voters can do right now
While the law and technology catch up, ordinary voters are the last line of defence. A few simple habits can make a big difference. Be sceptical of emotionally explosive clips that appear from nowhere, especially close to an election. Check whether reputable outlets are covering the same story. Look for the original source of a video, not just a repost.
It also helps to slow down. Deepfake election ads thrive on speed and outrage. If you pause before sharing, you cut off a major route for misinformation to spread. Talk to friends and family about the existence of deepfakes too – awareness alone makes people less likely to be fooled.
The future of truth in politics
We are heading into a period where seeing is no longer believing by default. That sounds bleak, but it could also push us towards healthier habits: checking sources, valuing trustworthy journalism, and demanding transparency from platforms and politicians alike. The technology will keep improving; the question is whether our defences, legal, technical, and personal, improve alongside it.