V-Techtips: Seeing Isn’t Believing: A Practical Guide to Spotting AI Deepfake Video Call Scammers

A video call comes in. It’s your spouse or child, and they’re in a panic—a car accident or an arrest. They need money wired now.

It looks real. It sounds real. But it’s a sophisticated fake.

It’s the call you pray you never get.

In 2025, AI tools like OpenAI’s Sora 2 can create convincing deepfakes like these in minutes. A recent NewsGuard report found that Sora 2 could be tricked into producing videos pushing known false claims in 80% of its tests. For US families, the “seeing is believing” rule is now broken.

Key Takeaways:

  • AI deepfake video generation tools, exemplified by Sora 2’s 80% failure rate in a NewsGuard test, have dangerously easy-to-bypass safety rules.
  • The rise of deepfake scams has created a “trust tax,” making all online video verification unreliable and causing massive losses, including one $25.6 million corporate fraud.
  • The core problem is an industry-wide “Race to the Bottom” on safety, where companies prioritize creative power over core guardrails; Sora 2’s watermark, for example, could be stripped within seven days of its release.
  • Since there is no simple tech fix, personal defense relies on a human “Question, then trust” mindset and using the low-tech “Call-Back Test” and “Safe Word” strategy.

A Heads-Up: OpenAI’s New Sora 2 Video Tool Can Be Fooled

We’re all seeing the amazing videos made by new AI tools, and OpenAI’s Sora 2 is one of the most powerful. But just like any brand-new tech, it’s good to know its weak spots.

A new report from NewsGuard, a group that checks AI safety, took a close look at Sora 2. They found that its safety rules, which are meant to stop the tool from making harmful content, are surprisingly easy to get around.

How Easy Is It to Trick?

You don’t need to be a hacker to run this kind of test. The researchers at NewsGuard simply fed the AI 20 different statements that are known to be false. The test was to see whether Sora 2 would say, “No, I can’t make a video of that,” or go ahead and create the fake content.

Here’s What They Found

The results are a bit of a wake-up call. The AI created misleading videos for 16 of the 20 false stories. That’s an 80% failure rate.

These weren’t just silly videos. They were realistic-looking fakes of serious topics, including:

  • Political Fakes: A video showing an election official in Moldova destroying ballots.
  • Humanitarian Fakes: A hyper-realistic video of a toddler being detained by U.S. immigration officers.
  • Business Fakes: A fake news report of a Coca-Cola spokesperson announcing a Super Bowl boycott.

Why Is This Happening?

The report suggests that the AI’s safety rules are surprisingly fragile.

For example, if you ask the AI to make a video of a famous person by name, it will usually refuse. But the testers found a simple workaround. When they asked for a video of “Ukraine’s wartime chief” (instead of “Volodymyr Zelensky”), the AI created a video of a man who looked just like him.

This suggests the safety features are more like a simple list of “bad words” rather than a deep understanding of what’s harmful. The AI is built to follow its user’s instructions first, and it seems the safety rules were “bolted on” at the end, not built in from the start.
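
To make the “list of bad words” idea concrete, here is a minimal, hypothetical sketch in Python. This is not OpenAI’s actual filtering code; the blocked names and the naive_filter function are invented purely to show why a keyword blocklist waves through a simple paraphrase.

```python
# Hypothetical illustration of a naive, name-based safety filter.
# NOT how Sora 2 works internally; it only shows why keyword blocklists
# are easy to sidestep with a paraphrase.

BLOCKED_NAMES = {"volodymyr zelensky", "joe biden"}  # example entries only

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt should be refused."""
    text = prompt.lower()
    return any(name in text for name in BLOCKED_NAMES)

print(naive_filter("A video of Volodymyr Zelensky destroying documents"))       # True -> refused
print(naive_filter("A video of Ukraine's wartime chief destroying documents"))  # False -> slips through
```

A filter that genuinely understood who “Ukraine’s wartime chief” refers to would have to reason about meaning rather than match strings, which is exactly the kind of built-in safety the report says is missing.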

What’s the Takeaway?

This is a great reminder for all of us to be careful. This technology is powerful, but it’s still new. The next time you see a video online that seems shocking or too wild to be true, it’s worth taking an extra second to think about where it came from.

Later in this guide, we’ll get to practical tips for spotting an AI-generated video and protecting your family.

So, Why Is This a Problem for You?

These test results might seem a bit abstract, but they point to a big shift in our daily security. The core issue is that faking reality is now fast, cheap, and easy for everyone. What used to take a Hollywood-level special effects budget can now be done in minutes on an app.

This new power is supercharging old scams and creating new digital threats. Here’s a friendly heads-up on what to look out for.

1. Scams Are About to Get Way More Personal

The most immediate danger is a new wave of super-realistic fraud that targets individuals and families.

  • Fake Emergency Scams: We’ve already seen scams where criminals use AI to clone a person’s voice. In one case, a mom in Arizona got a call from “kidnappers” using an AI version of her daughter’s voice to demand ransom. Now, imagine that scam with a realistic video of your loved one. It makes it much harder to stay calm and spot the lie in a moment of panic.
  • Fake Boss Scams: This is already happening to businesses, with huge losses. In one famous case, a finance worker in Hong Kong was tricked into sending $25.6 million to scammers. How? They joined a video call and saw and heard people who looked and sounded exactly like their company’s CFO and other executives. The employee’s gut feeling that something was wrong was overruled by the “proof” they saw with their own eyes.

2. Tricking the System

It’s not just about tricking people; it’s about tricking computers, too.

  • Fake Verification: Many banks and apps use “liveness checks” to make sure you’re you, often by asking you to turn your head on camera. Scammers are now using AI to create deepfakes that can pass these security checks, letting them break into bank accounts or steal social media profiles.
  • Fake Reviews: That amazing 5-star video review for a product you’ve never heard of? It might be AI. This tech can create thousands of fake video testimonials, making it almost impossible to know if a product is real or a scam.

3. The New “Trust Tax” on Everything

All of this leads to a simple, frustrating problem: we can no longer automatically trust what we see or hear online.

In the past, a video call was solid proof you were talking to the right person. That assumption is now gone. This creates a “trust tax” on all our digital communication. We all have to be a little more paranoid and take extra steps to verify everything, which slows down both our personal lives and the speed of business.

A Big, Real-World Example: The “Deepfake Election”

This isn’t just a future problem; it’s happening right now. The rise of AI-generated video is a serious, immediate threat to elections and democracy everywhere. It turns political misinformation from a cottage industry into a giant, automated factory for lies.

The New “October Surprise”

The biggest fear is what’s called an “October Surprise” 2.0.

Imagine this: It’s the night before a big election. A hyper-realistic video suddenly goes viral. It appears to show a candidate confessing to a crime or an election official tampering with ballots. 

The goal isn’t just to make one candidate look bad. It’s to cause pure chaos and make people lose faith in the entire system.

The core problem here is speed. A lie can circle the globe before the truth can even get its boots on. A fake video spreads to millions in minutes, pushed by social media algorithms. The experts who fact-check and debunk the video are slow and careful (as they should be). By the time they prove it’s a fake, the damage is often done.

This Is Already Happening

This isn’t a “what if” scenario. We’re already seeing these tactics in the wild:

  • In the United States: You might remember the AI-generated robocall that impersonated President Joe Biden. It called voters in New Hampshire and told them not to vote in the primary.
  • In India: In recent elections, AI-generated fake images and videos have been used to damage the image of political candidates and mislead voters.

The “Liar’s Dividend”: The Scariest Part of All

Here’s the biggest long-term danger, and it’s a tricky one. It’s called the “liar’s dividend.”

The problem isn’t just that you might believe a fake video. The problem is that you might stop believing real ones.

When it becomes common knowledge that any video can be faked, a new problem starts. A politician who gets caught on a real video saying or doing something terrible can now just shrug and say, “That’s a deepfake. It’s not me.”

This is the “liar’s dividend.” It’s the benefit a liar gets in a world where nothing is provable. It breaks our trust in all evidence, makes holding people accountable almost impossible, and pushes us all into our own information bubbles where we only trust what we already believe.

An Industry-Wide Race for “Realism”

Let’s be clear: this isn’t just a Sora 2 problem. It’s a symptom of a much bigger issue happening across the entire tech industry.

Right now, the world’s biggest tech companies are in a fierce race for the best generative AI. But here’s the catch: they’re competing on power, not on safety.

It’s About “Cool Features,” Not Guardrails

You can see this just by looking at their announcements. Google’s new Veo 3.1, which just came out on October 15, 2025, is a direct competitor to Sora 2. When they launched it, their entire focus was on its powerful new creative features:

  • Richer, more realistic audio
  • Making sure a character looks the same from one scene to the next
  • Giving you more “cinematic” control over the video

What you don’t hear about is a breakthrough in safety. Across the board, safety is being treated as a checkbox, something to add on later, instead of being a core part of the design from day one.

This Creates a “Race to the Bottom” for Safety

This is the classic “move fast and break things” motto that Silicon Valley is famous for, but now the stakes are much higher. This intense competition is creating a dangerous “race to the bottom.”

Think about it from their point of view:

  • Being too safe is a disadvantage. The first company that adds really strong, restrictive safety rules will just make its AI look less capable and more “censored” than its competitors.
  • The incentives are all wrong. The smart business move is to build the bare minimum of safety (just enough to avoid a massive scandal) while pouring all the real money into the cool features that get people to sign up.

This is a classic case where what’s best for a company’s stock price isn’t what’s best for the public. And it all but guarantees that these incredibly powerful tools for creating misinformation will continue to be released to the public.

Why Are These AI Fakes So Hard to Stop?

So, you’re probably wondering, “Why can’t we just build a good ‘fake detector’ to stop all this?” It’s a great question, but the answer is a little tricky. The problem is a total mismatch: the technology to make fakes is getting better way faster than the technology to catch them.

Here’s a simple breakdown of why it’s such a hard problem.

1. The “Cops and Robbers” Problem

The way these AIs are built is the first challenge. Many deepfake tools are trained with an approach called a generative adversarial network (GAN), in which two AIs are locked in a constant battle.

  • One AI, the “Generator,” creates a fake video.
  • Another AI, the “Discriminator,” tries to spot the fake.

The Generator’s only goal is to get so good that the Discriminator is fooled. This means the AI is literally training itself to be undetectable. It’s constantly learning to erase the little flaws and “tells” that a detector would look for.

This creates a permanent game of catch-up. By the time security experts build a new detector, the next version of the AI has already learned to beat it. They are always one step behind.
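
For curious readers, here is a heavily simplified PyTorch sketch of that adversarial training loop. It uses tiny networks and random stand-in data rather than video, and real systems (including newer diffusion-based models) are built very differently, but it captures the “cops and robbers” dynamic in code.

```python
# Toy generator-vs-discriminator ("cops and robbers") training loop.
# Illustrative only: real video models are vastly larger and often use
# different architectures, but the adversarial idea is the same.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 16

generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, data_dim)               # stand-in for real footage
    fake = generator(torch.randn(64, latent_dim))  # the Generator makes a fake

    # Train the Discriminator (the "cop") to tell real from fake.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the Generator (the "robber") to make the Discriminator call its fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Every round, the Generator only “wins” by producing output the Discriminator can no longer flag, which is why the little flaws a detector might look for keep disappearing with each new model.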

2. “But What About Watermarks?”

Watermarks are another good idea, but they’re not a perfect fix. A watermark is a signal, sometimes a visible stamp on the video and sometimes hidden inside the file, that says “I was made by AI.”

Unfortunately, they are very fragile.

  • They Break Easily: Just uploading a video to a social media site compresses the file, which can damage or completely erase the watermark (see the sketch after this list).
  • They Can Be Stripped: Malicious actors are already on it. In the case of Sora 2, tools to strip its watermark were available online just seven days after its release.
  • The “DIY” Problem: Watermarks only apply to big, commercial AI models. A bad actor can just download a free, open-source AI model from the internet that has no watermarks and no safety rules in the first place.
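
To see how little it takes, here is a rough Python sketch of the “breaks easily” problem, assuming the ffmpeg and ffprobe command-line tools are installed and using placeholder file names. It only covers provenance stored as metadata tags (such as C2PA-style manifests); watermarks hidden in the pixels survive this particular step, though they too can be degraded by heavy compression, cropping, or dedicated stripping tools.

```python
# Rough illustration of how fragile metadata-based provenance is.
# Assumes ffmpeg/ffprobe are installed; file names are placeholders.
import subprocess

SRC = "ai_generated.mp4"  # hypothetical clip carrying provenance metadata
DST = "reuploaded.mp4"    # what comes out after a typical re-encode

def show_metadata(path: str) -> None:
    """Print the container-level metadata tags that ffprobe can see."""
    subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format_tags", "-of", "default", path],
        check=True,
    )

show_metadata(SRC)  # any provenance tags are visible here

# A re-encode similar to what social platforms do: transcode the video and drop metadata.
subprocess.run(
    ["ffmpeg", "-y", "-i", SRC, "-map_metadata", "-1", "-c:v", "libx264", "-crf", "28", DST],
    check=True,
)

show_metadata(DST)  # the metadata-based provenance is gone
```

That single re-encode is roughly what happens every time a clip is uploaded to a social platform, which is why metadata alone can’t be trusted to label AI-generated content.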

3. The “Flood” Problem: Too Much to Check

Finally, there’s the biggest problem of all: the sheer volume.

Billions of photos and videos are uploaded to social media every single day. The automated systems that are supposed to catch bad content are already totally overwhelmed (which is why you still see spam and scams). Now, imagine adding millions of super-realistic deepfakes into that flood. The system just can’t handle it.

This all leads to a tough conclusion: there is no simple tech fix for this. The challenge is changing from cybersecurity (protecting our computers) to epistemic security (protecting our shared ability to know what’s real).

The real solution has to be human. It’s about all of us learning to be more skeptical and building better, more human-centric ways to verify who we’re talking to.

Seeing Isn’t Believing: A Practical Guide for Everyone

In this new environment where video can be faked instantly, the most important tool you have is your mindset. The old rule of “seeing is believing” is now dangerous. The new golden rule must be: “Question, then trust.”

This is a practical guide for everyone on how to protect yourself and your family from AI-powered fraud.

1. The AI Fake Detection Checklist

While AI-generated videos are becoming incredibly realistic, they are not perfect. Current models still make mistakes that a careful observer can spot. Use this checklist to look for red flags that suggest a video may be a deepfake (a rough code sketch of one of these checks follows the list).

Face & Eyes
  • Red flag: Unnatural or infrequent blinking, or an odd, rigid stare. Why the AI struggles: it has trouble replicating the subtle, random movements of real human eyes.
  • Red flag: Facial expressions that don’t match the voice’s emotion (e.g., a panicked voice with a waxy, neutral face). Why: the model struggles to synchronize the emotion in the voice with the movement of the facial muscles.

Hands & Body
  • Red flag: Warped, extra, or missing fingers; hands melting into objects. Why: hands are notoriously difficult for AI to render correctly because of their intricate structure and movement.
  • Red flag: Stiff, unnatural body movements or jerky gestures. Why: AI-generated figures often lack the subtle, natural shifts in posture of a real person.

Background
  • Red flag: Blurry, “shimmery” edges where the person meets the background (often around the hair). Why: this is a common sign of a digital composite, or of the AI struggling to stitch two images together.
  • Red flag: Objects in the background that bend or morph unnaturally. Why: the AI may fail to maintain the physical integrity of the whole scene over time.

Voice & Audio
  • Red flag: A robotic, flat tone with strange pauses or rhythms. Why: even good voice clones can lack the natural cadence and emotional range of human speech.
  • Red flag: Audio slightly out of sync with the lip movements. Why: perfect, continuous lip-syncing remains one of the hardest challenges for current AI video models.
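
You don’t have to run every one of these checks purely by eye. As a rough illustration of the “unnatural or infrequent blinking” red flag, the sketch below uses OpenCV’s bundled Haar cascade models to estimate how often the eyes disappear from view in a clip. It is a crude heuristic, prone to false positives and nothing like a real deepfake detector, and the file name is a placeholder.

```python
# Crude blink-frequency heuristic using OpenCV's bundled Haar cascades.
# A real detector would be far more sophisticated; this only illustrates
# the "unnatural or infrequent blinking" red flag from the checklist.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture("suspicious_call.mp4")  # placeholder file name
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0

frames_with_face, frames_eyes_hidden = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        continue
    frames_with_face += 1
    x, y, w, h = faces[0]
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w], 1.1, 5)
    if len(eyes) == 0:  # eyes not found in the face region -> possibly mid-blink
        frames_eyes_hidden += 1
cap.release()

if frames_with_face:
    # A real person blinks every few seconds; near-zero "eyes hidden" time
    # over a long clip is one weak signal worth noting.
    print(f"Eyes undetected for ~{frames_eyes_hidden / fps:.1f}s "
          f"across {frames_with_face / fps:.1f}s of visible face time")
```

Treat the output as one data point, not a verdict; the call-back and safe-word steps below remain the real safeguards.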

2. How to Protect Your Family: The 3-Step Plan

If you receive a frantic video call or voice message from a “family member” or “friend” asking for money, do not panic. Scammers rely on urgency. Follow this simple, low-tech plan to expose the fraud.

Step 1: The Emergency Call Rule: HANG UP. 

The scammer’s greatest weapon is emotion. The first and most important step is to break their control. Do not engage, do not ask questions, and do not make any decision. Just hang up the call immediately.

Step 2: The Call-Back Test.

After hanging up, immediately call the person back on their normal, trusted phone number—the one you already have saved in your phone’s contacts. Do not use a number they gave you in the suspicious message. A quick call to their established number will almost always confirm that they are safe and that the emergency call was a fake.

Step 3: The “Safe Word” Strategy. 

This is the most powerful low-tech security tool available.

  • Set It Up: Agree on a simple, weird “safe word” or “secret question” with your close family members (children, parents, spouse).
  • The Rule: The word must be something a scammer could never guess or find on social media. Avoid pet names or birthdays. Choose a specific inside joke or a random memory (e.g., “What was the name of the broken-down car in college?”).
  • The Test: If you are ever in doubt during a call or text, ask for the safe word. If the person on the other end can’t provide it, you know it’s a fake.

A Call to Action for You: Spread the Word

The most effective defenses against high-tech AI deception are human and low-tech. You can make a difference right now by sharing this knowledge.

Talk to your family and friends about these threats. Make sure your parents, your children, and your colleagues know about the “Call-Back Test” and the “Safe Word” strategy. In a world where our digital senses can be so easily fooled, our reliance on analog, human-to-human verification methods becomes our greatest strength.

You could be the one who saves someone you care about from a devastating scam. Be skeptical, be careful, and spread the word.

Conclusion

In a world where AI can create convincing fakes, your mindset is your best defense. The old rule of “seeing is believing” no longer applies. Now, you must question before you trust.

Talk to your family and friends about these threats. Share the “Call-Back Test” and the “Safe Word” strategy. These simple, human methods are strong defenses against AI deception. Your actions can protect those you care about from devastating scams. Be skeptical, be careful, and spread the word.

Categories: AI Cyber Security
Jaden Mills is a tech and IT writer for Vinova with 8 years of experience in the field. Specializing in trend analyses and case studies, he has a knack for translating the latest IT and tech developments into easy-to-understand articles that help readers keep pace with the ever-evolving digital landscape, globally and regionally. Contact our awesome writer for anything at jaden@vinova.com.sg!