Oli here, with Oskar peering over my shoulder, and today we are diving into something that suddenly feels everywhere: deepfake scams. From fake celebrity investment videos to cloned voices demanding urgent bank transfers, the line between real and fabricated has never been thinner.

What are deepfake scams and why are they exploding?
Deepfakes are synthetic audio or video clips created using artificial intelligence to mimic real people. In the early days, they needed serious computing power and technical skill. Now, off-the-shelf tools and apps can generate convincing fakes in minutes, which is why deepfake scams are spreading so quickly.
Scammers use cloned voices, faces and even mannerisms to trick people into sending money, sharing passwords, or revealing sensitive information. The tech has improved faster than most people’s ability to recognise it, and that gap is exactly where criminals thrive.
How deepfake scams work in real life
To understand the threat, it helps to look at how these cons actually unfold. Oskar has been tracking the most common patterns:
- CEO voice scams: An employee gets a call that sounds exactly like their boss, urgently asking for a confidential transfer. The number may even be spoofed to look genuine.
- Family emergency calls: A parent receives a panicked phone call from what sounds like their child, claiming to be in trouble and needing money immediately.
- Fake celebrity endorsements: A video appears on social media showing a familiar public figure praising a new investment, crypto platform or miracle product.
- Romance and dating cons: Scammers enhance or entirely fabricate video calls to appear more trustworthy or to pretend to be someone else.
In each case, the scam relies on emotion and urgency. The tech is impressive, but the psychology is classic: rush people so they do not stop to think.
Red flags that a deepfake might be targeting you
You do not need to become a digital forensics expert to protect yourself from deepfake scams. A handful of practical red flags can go a long way:
- Odd eye and face movement: Blinking that seems off, eyes not quite tracking naturally, or expressions that do not match the words.
- Strange audio quality: The voice may sound slightly robotic, with odd pauses or unnatural emphasis, especially on certain words or names.
- Low resolution or compression: Scammers often use slightly blurred or compressed video, which conveniently hides the glitches.
- Refusal to switch channel: If someone will not move from one app to a normal phone call, or refuses a quick video chat from another angle, be suspicious.
- High pressure and secrecy: Demands to act immediately, keep things confidential, or bypass normal procedures are classic warning signs.
Practical ways to protect yourself from deepfake scams
Oli’s rule of thumb: never rely on a single channel of communication when money or sensitive data is involved. Here are simple habits that make a huge difference:
- Use a second verification step: If your “boss” calls about a transfer, hang up and call them back on a known number. If a family member sounds in trouble, message them separately or contact another relative.
- Agree code words with loved ones: Families can set a simple phrase or question that only they know, to confirm identity in emergencies.
- Slow everything down: Say you need 10 minutes to check something. Genuine people will understand. Scammers will push harder.
- Follow official processes at work: Stick to documented approval chains for payments, even if a senior person appears to be insisting otherwise.
- Be sceptical of viral videos: Treat sensational clips of politicians, celebrities or business leaders as suspect until verified by trusted news outlets.
What governments and platforms are doing about deepfake scams
Policymakers are scrambling to catch up. Many countries are exploring rules that would require clear labelling of AI-generated media, especially in political advertising and financial promotions. Social platforms are rolling out tools to detect and flag suspected fakes, although the tech is still far from perfect.
There is also a growing push for companies to train employees to recognise these threats, particularly in finance, HR and customer support roles. The idea is to treat synthetic media as a standard security risk, just like phishing emails.
Deepfake scams FAQs
How common are deepfake scams now?
Deepfake scams are still less common than traditional phishing emails or text fraud, but they are growing quickly as the tools become cheaper and easier to use. Criminals are starting to combine voice cloning, video fakes and number spoofing, which makes the attacks feel very convincing. You are most likely to encounter them in high-value situations, such as business payments, investment pitches or urgent family money requests.
Can normal people realistically spot deepfake scams?
Yes, in many cases. While the technology is improving, most deepfake scams still have small giveaways: slightly off lip-sync, strange lighting, odd pauses in speech, or a refusal to switch to another communication channel. The strongest defence is not perfect detection, but process: always verify important requests through a second trusted route before acting.
What should I do if I think I have been targeted by a deepfake scam?
First, stop all communication with the suspected scammer and do not send any money or personal details. Take screenshots or recordings if possible, then contact your bank immediately if financial information was shared. Report the incident to the relevant fraud reporting service in your country and to the platform where the deepfake appeared. Sharing your experience can help others recognise similar deepfake scams in future.
