Key takeaways: Deepfakes
- As AI tech evolves, relying on visual cues alone to spot deepfakes is a recipe for disaster.
- New deepfake detection tools can uncover the latest deceptions, but your critical analysis is the secret weapon scammers hope you won’t bring to the fight.
- The heart of the modern deepfake is a deep learning architecture called a GAN (generative adversarial network), which pits two neural networks against each other so that its forgeries keep getting more convincing.
- Fighting today’s deepfakes means a layered defense, such as out-of-band verification methods for sensitive transactions, behavioral checks during team calls, tabletop exercises, and simulation training.
- Layer on LastPass SaaS Monitoring and strong identity and access controls to ensure only the right people get through the gate.
Can you spot a deepfake? The usual advice is to check for facial quirks, lip-sync issues, and unnatural blinking. But thanks to advances in deepfake tech, gut instincts and visual cues are no longer enough. Your best defense is combining critical analysis with specialized detection tools.
So, how does that work?
This was the question I asked myself as I added a session on deepfakes to my itinerary at the recent ISC2 Security Congress (Oct. 28-30, 2025). And it looks like others had the same question.
Because the room was packed. Not “good turnout” packed. Standing room only.
I’d arrived “early” to the virtual session on deepfakes, expecting the usual: Data-heavy slides. Security jargon. Another expert warning about Y2Q (Years to Quantum) breaking protections like cryptographic watermarking.
You know: The kind you see in tools like Truepic, where cryptographic metadata is attached to images. This provides an easy way to check whether an image has been AI-generated or manipulated.
But here’s what I didn’t expect when I attended Dr. Felix Hernandez’s presentation on “Deepfakes and Corporate Cybersecurity.”
A published author, professor, and sought-after security consultant with top-tier cybersecurity certifications and 25+ years of IT/IS experience, Dr. Hernandez took one look at the crowd.
And ripped up his original presentation.
“So, today’s presentation is about understanding AI-powered threats, from voice cloning to video forgeries and implementing effective detection and mitigation strategies. That’s a mouthful. I was going to go through the motions. Talk to you about deepfakes the way I always do,” he said.
Then came the bombshell.
“But the conversations I had with you as you walked in? They were enlightening. I didn't understand how deepfakes were really affecting your companies until I spoke with you one-on-one. So, let’s rip that up and have a real conversation with each other, can we do that?"
Suddenly, I knew this wasn’t going to be another dry, technical session.
What followed wasn’t so much a presentation as a revelation. And an uncomfortable one at that.
Which brings us to the question.
How good are people at detecting deepfakes?
According to an iProov study of 2,000 UK and US consumers, only 0.1% of participants could accurately distinguish between real and fake content across various media such as audio, video, and images.
The key findings will shock you:
- Grandma is at greater risk than your teen: 30-39% of older adults (55 & up) have never even heard of deepfakes
- Deepfake videos are harder to detect than deepfake images: Only 36% could identify them
- Despite poor performance, over 60% trust their deepfake detection skills
- Only 11% said they actively verify the source and context of media content
Meanwhile, a systematic analysis of 56 papers involving 86,155 participants found overall deepfake detection accuracy at just 55.54%, with detection rates not significantly above random chance.
Breaking this down by media type:
- Audio deepfakes: 62.08% detection accuracy
- Video deepfakes: 57.31% detection accuracy
- Image deepfakes: 53.16% detection accuracy
- Text deepfakes: 52% detection accuracy
Research suggests the "seeing is believing" heuristic (a mental shortcut that helps us make quick, split-second decisions) is to blame.
When it comes to audiovisual content, people tend to trust it because the combination of sight and sound creates an emotional reality that’s hard to resist.
This means most people are vulnerable to deepfakes.
During his presentation, Dr. Hernandez demonstrated how easy it is to create deepfake video content.
In several stunning demos, he "became," in turn, George Clooney, Eddie Murphy, and Secretary of State Marco Rubio.
Dr. Hernandez even superimposed Eddie Murphy’s face onto President Barack Obama’s body in a video.
So, we saw Eddie Murphy but heard President Obama.
From where I sat, the deepfake video looked disturbingly real. I didn’t see any face glitches, unnatural shadows, or irregular blinking.
On screen, Eddie Murphy looked eerily normal as he wished supporters of the Obama Foundation “happy holidays” in President Obama’s voice.
This brings us to the question: How did Dr. Hernandez pull off this feat?
How does deepfake technology work?
Dr. Hernandez reveals that at the heart of every deepfake is a deep learning architecture called a GAN (generative adversarial network).
Here’s how it works.
GAN consists of two neural networks:
- A generator that creates fake images or videos based on learned data
- A discriminator that tries to distinguish whether the images are real or fake
The two networks stay in constant interaction, a competition of sorts:
- The generator learns how to create images that will deceive the discriminator.
- Meanwhile, the discriminator learns how NOT to be deceived.
- They "push" each other to get better, again and again, until the fakes become so convincing that even experts struggle to tell the difference.
This is where the "adversarial" in GAN comes from.
The two networks are "locked" in a battle to create deepfake content that’s virtually indistinguishable from the real thing.
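If you’re curious what that adversarial loop looks like in code, here’s a minimal, hypothetical sketch in PyTorch (my own toy illustration, not code from Dr. Hernandez’s session). Instead of faces, the generator learns to mimic a simple one-dimensional Gaussian, but the push-and-pull between generator and discriminator is the same dynamic that powers deepfake GANs.

```python
# Toy GAN: generator vs. discriminator on a 1-D Gaussian (illustration only)
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim = 8

# Generator: turns random noise into a "fake" sample
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: outputs the probability that a sample is real
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    # "Real" data: samples from a Gaussian centered at 4.0
    real = torch.randn(64, 1) * 0.5 + 4.0
    fake = G(torch.randn(64, latent_dim))

    # 1) Train the discriminator to label real samples 1 and fakes 0
    opt_D.zero_grad()
    loss_D = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    loss_D.backward()
    opt_D.step()

    # 2) Train the generator to make the discriminator say "real" for its fakes
    opt_G.zero_grad()
    loss_G = loss_fn(D(G(torch.randn(64, latent_dim))), torch.ones(64, 1))
    loss_G.backward()
    opt_G.step()

# After training, generated samples should cluster near the "real" mean of 4.0
print("mean of generated samples:", G(torch.randn(1000, latent_dim)).mean().item())
```

The key detail is that neither network ever stops training: every improvement in the discriminator forces the generator to produce more convincing fakes, and vice versa.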
Now let’s look at how deepfakes are made.
How are deepfake videos made?
GANs are combined with face swapping technology (like DeepFaceLab) and voice synthesis APIs to create modern deepfakes.
Dr. Hernandez explains that face swapping ML algorithms extract facial features from source and target videos.
Then, they reconstruct and blend the target face onto the source body, taking care to preserve expressions and movements.
This means if the original person smiles, blinks, or turns their head, these movements and expressions remain intact.
So, although the face looks like Eddie Murphy, the smiles, eye blinks, and other expressions come from President Obama’s original video. This makes the deepfake appear synchronized and natural.
Meanwhile, voice synthesis APIs generate synthetic speech trained on audio samples and sync it with the manipulated visuals. This is how you see Eddie Murphy but hear President Obama.
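To give a rough sense of just the blending step Dr. Hernandez describes, here’s a tiny, hypothetical sketch using OpenCV’s Poisson blending (cv2.seamlessClone) on synthetic images. This shows only the compositing idea, not a face-swap pipeline; real tools add landmark detection, alignment, expression transfer, and frame-by-frame tracking on top.

```python
# Illustrative only: blend one synthetic "face" region onto another frame
import numpy as np
import cv2

# Two synthetic 200x200 "frames" standing in for an extracted face and a target frame
face = np.full((200, 200, 3), 180, dtype=np.uint8)    # stand-in for the source face
cv2.circle(face, (100, 100), 60, (90, 120, 200), -1)  # crude "face" region
body = np.full((200, 200, 3), 60, dtype=np.uint8)     # stand-in for the target frame

# Mask marking which pixels of the source should be carried over
mask = np.zeros((200, 200), dtype=np.uint8)
cv2.circle(mask, (100, 100), 60, 255, -1)

center = (100, 100)  # where the face lands in the target frame
blended = cv2.seamlessClone(face, body, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("blended_frame.png", blended)
```

Poisson blending smooths the seam between the pasted region and its new background, which is one reason the "unnatural blending" artifacts older advice told you to look for are increasingly hard to spot.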
According to Dr. Hernandez, deepfakes first became viral in 2017, when an anonymous Reddit user (with the handle “deepfakes”) weaponized face swap technology to create porn videos featuring female celebrities like Gal Gadot and Scarlett Johansson (neither of whom consented to their images being used).
Deepfakes became commercialized at the height of the COVID-19 pandemic and entered the corporate environment in 2022.
And that’s not all. Dr. Hernandez says that commercial smartphone apps like ZAO and FaceApp are bringing deepfake creation to the masses.
This means anyone can create deepfakes today.
In fact, it’s now possible to create a convincing voice clone from a mere three seconds of recorded audio, using off-the-shelf, publicly available software.
Algorithms like StyleGAN and StyleGAN2 are increasingly used for image synthesis, and their output is very hard for the human eye to detect. Many parody and prank videos on platforms like YouTube, TikTok, and Instagram use them, fueling the spread of disinformation and social upheaval.
Why are cybercriminals using deepfakes?
According to Dr. Hernandez, cybercriminals are increasingly using deepfakes to enhance social engineering attacks, spread disinformation through executive impersonations, and commit financial fraud.
He shares how, in 2019, cybercriminals used deepfake phishing to trick the CEO of a UK energy company into transferring $243,000 to their account.
With AI-enabled voice spoofing, the scammers were able to duplicate the voice of the parent company's top executive, which led the CEO to think he was speaking to his colleague across the ocean.
And in January 2024, fraudsters used deepfake technology to impersonate a company's CFO on a video call, tricking an employee into transferring $25 million to their account.
Dr. Hernandez stresses that CEOs are in the crosshairs, due to their visibility and access privileges.
During his presentation, he demonstrated how easy it was to weaponize the likeness of a CEO to spread misinformation.
In a demo, he used deepfake tools, including voice cloning, to create a highly realistic video of the CEO of a Fortune 500 company advocating unsafe practices like:
- Clicking on unknown links or attachments in emails
- Accepting wire transfer requests without verifying their authenticity
As I caught up with him on LinkedIn after the session, Dr. Hernandez revealed that the deepfake was so convincing the company actually reached out to him.
They wanted him to present his findings to their C-suite staff and educate them about the impact deepfakes can have on corporate cybersecurity.
And for good reason: In a 2024 simulation campaign by a French penetration testing firm, 40% of C-suite targets engaged with simulated threats, highlighting how convincing these attacks have become and how much executives would benefit from enhanced phishing training.
Healthcare is another area where malicious actors are weaponizing deepfake videos.
Imagine waking up one day to find social media videos of yourself promoting “health” products you would never recommend in a million years.
That’s exactly what happened to a physician known as the "Medical Mythbuster."
In early 2025, a fan alerted him to a video featuring his likeness and promoting an unknown product.
The voice? Not his.
When interviewed by CBS News, he admitted that his immediate response was frustration and fear. Fear that his hard-earned reputation could be weaponized against the very people who trusted him. And fear that unproven claims could lead to a public health emergency.
But that’s not the worst of it.
Earlier this year, Dr. Hernandez warned about cybercriminals generating fake “evidence” to support insurance fraud. To illustrate the gravity of the situation, he used deepfake tech to generate:
- A lifelike bruised face of himself to “show” his injuries
- A convincing receipt to “prove” his location at the time of the “accident”
- A “car accident scene” (involving his own car), down to the damaged areas
The effects were so convincing it led one reader to proclaim, “The fact that fake evidence can now look more real than the real thing is a serious red flag. It’s not just a challenge for tech — it’s a challenge for trust, security, and even justice.”
Precisely: Deloitte predicts that generative AI could drive U.S. fraud losses from $12.3 billion in 2023 to $40 billion by 2027, a 32% compound annual growth rate.
Dr. Hernandez also shares these frightening statistics:
- The average cost of successful deepfake fraud against enterprises is $35 million.
- 43% of organizations hit by deepfake fraud saw their stock prices decline.
- Yet, 67% of organizations lack the legal frameworks or policies to effectively detect and respond to deepfake fraud.
According to UNESCO, we are fast approaching a “synthetic reality threshold,” a point beyond which humans can no longer distinguish between fake and real content without technological assistance.
This brings us to an important question.
How do I protect against deepfake phishing scams?
According to Dr. Hernandez, a layered defense is critical to protecting against deepfake phishing scams.
His 90-day implementation plan includes:
- Auditing existing verification procedures for payments, data access, and executive communications, and implementing out-of-band verification protocols (first 30 days); a minimal sketch of such a protocol follows this list.
- Beginning employee awareness campaigns, deploying AI-powered enterprise deepfake detection tools, and updating incident response playbooks (by day 60).
- Implementing digital signatures for official communications, conducting tabletop exercises simulating deepfake incidents, and preparing a communications strategy for potential reputation damage (by day 90).
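To make the out-of-band idea concrete, here’s a minimal, hypothetical sketch of my own (the helper functions are illustrative stubs, not part of any product or procedure Dr. Hernandez named). The point is that a high-risk request is only approved on a second, pre-enrolled channel, never on the call or email thread where it arrived.

```python
# Sketch of an out-of-band verification gate for high-risk requests (illustrative stubs)
import secrets

def send_push_challenge(user: str, challenge_id: str, message: str) -> None:
    # Stub: in a real deployment this would go to a pre-enrolled authenticator app
    # or a phone number on file -- never to the channel the request came from.
    print(f"[push -> {user}] ({challenge_id}) {message}")

def wait_for_approval(challenge_id: str, timeout_seconds: int) -> bool:
    # Stub: simulate the requester approving on their own, separate device.
    answer = input(f"Approve challenge {challenge_id}? [y/N] ")
    return answer.strip().lower() == "y"

def verify_out_of_band(requester_id: str, action: str, amount: float) -> bool:
    """Return True only if the requester confirms on a second, pre-enrolled channel."""
    challenge_id = secrets.token_hex(8)
    send_push_challenge(requester_id, challenge_id,
                        f"Approve {action} for ${amount:,.2f}?")
    return wait_for_approval(challenge_id, timeout_seconds=300)

if __name__ == "__main__":
    if verify_out_of_band("cfo@example.com", "wire transfer to Acme Ltd", 250_000):
        print("Approved out of band -- proceed.")
    else:
        print("Not approved -- hold the transfer and escalate.")
```

Whatever the implementation, the design choice is the same: the approval signal must travel on a channel an attacker on the video call or email thread cannot see or spoof.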
Dr. Hernandez’s 90-day rollout strategy is your blueprint for success, but what sets his approach apart is something far simpler and more powerful: the discerning eye of a human mind.
What is the role of human oversight in deepfake detection?
So, let’s say you're staring at your screen, watching your CFO on a video call asking you to wire a quarter-million dollars.
How do you know it's really them?
The answer may surprise you: bringing people into the process.
Dr. Hernandez warns that deepfake detection tools, although useful, aren’t always reliable.
He argues that the authentication of media content requires both human input and context review.
This is similar to human-in-the-loop controls when it comes to agentic systems, where a human actively participates in the decision-making process.
Dr. Hernandez recommends creating open channels, where employees can comfortably challenge suspicious requests without fear of repercussion.
He calls this fostering a culture of organizational skepticism, where raising doubts is encouraged. This includes:
- Using pre-shared secrets or personal verification questions during sensitive calls (a minimal sketch of one way to do this appears below).
- Implementing a "prove you're live" challenge, such as asking call participants to perform an action like waving their hands in front of their face. Such actions, although not foolproof, can still reveal tell-tale signs of AI manipulation.
This is what researchers call "epistemic agency in action,” where individuals are empowered to act when navigating uncertainty, instead of just relying on passive analysis of digital content.
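One way to put the pre-shared secret idea into practice (a sketch of my own, assuming both parties enrolled the same secret well before any sensitive call) is a time-based one-time password check using the open source pyotp library: on the call, the requester reads out their current code and the verifier checks it before acting.

```python
# Pre-shared secret check via TOTP (requires `pip install pyotp`)
import pyotp

# Done once, out of band, long before any sensitive call:
shared_secret = pyotp.random_base32()
print("Enroll this secret in both parties' authenticator apps:", shared_secret)

# During the call, the requester reads their current 6-digit code aloud...
spoken_code = pyotp.TOTP(shared_secret).now()  # stand-in for the code read out

# ...and the verifier checks it against the shared secret before acting.
# valid_window=1 tolerates a code that just rolled over (30-second steps).
if pyotp.TOTP(shared_secret).verify(spoken_code, valid_window=1):
    print("Code matches: proceed with your other verification steps.")
else:
    print("Code does not match: treat the request as suspect.")
```

A spoken passphrase works too; the advantage of a rotating code is that an attacker who overhears one call can’t reuse it on the next.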
But individual tactics are just the beginning.
If you’re doing business, you know how high the stakes are.
In April 2024, a deepfake video of Ashishkumar Chauhan, the CEO of India’s National Stock Exchange (NSE), appeared online recommending certain stocks. The NSE confirmed the video was fake, but not before it shook investor confidence to the core.
Here's the sobering truth: Human verification is critical. Dr. Hernandez reveals that 67% of deepfake attacks target companies without verification protocols. So, your business must integrate deepfake risk into your incident response plans.
Dr. Hernandez says the key worry, of course, is the investment needed for forensic algorithms, audit processes, computing capacity, and skillsets.
He recommends matching your budget to the level of threat your business faces.
In parallel, he also advises strengthening identity and access controls, like implementing MFA for all sensitive transactions.
#1 Strengthen your human firewall: Your team is your front-line defense. Dr. Hernandez recommends regular simulation exercises, executive impersonation scenarios, and interactive drills with feedback and increasing difficulty levels.
#2 Monitor what matters: Use response time metrics and post-training assessments to measure the effectiveness of your training program.
- Work with an industry-trusted deep-fake expert like Dr. Hernandez to get an expert risk assessment, strategic defense planning, and regulatory and compliance guidance.
- Implement secure access controls with LastPass SaaS Monitoring. Uncover critical risks like rogue logins, apps that bypass MFA, and a lack of overall app visibility, which leads to gaps in security audits and the potential exposure of sensitive data.
- Trust requires proof. LastPass provides a Secure Access Experience with FIDO2 MFA and continuous monitoring of SaaS logins.
- By tracking logins, you can catch attackers who attempt to exploit vulnerabilities in popular tools like Microsoft Teams to impersonate executives, manipulate messages, and forge identities during video and audio calls.
#3 Layer on the right technology, as recommended by Dr. Hernandez:
- Manual detection techniques like looking for audio-visual sync mismatches and micro-expression inconsistencies
- Digital watermarking in audio, video, or image data to establish copyright ownership
- Liveness detection software to confirm whether the person is real and physically present in front of the camera
But — and this is crucial — technology alone won't save you.
What are the limitations of current deepfake detection tools?
Deepfake detection tools that perform well under lab conditions often suffer 45-50% accuracy degradation when faced with real-world deepfakes, making them unreliable outside controlled environments.
Dr. Hernandez warns that we are in an ongoing arms race, where detection tools struggle to keep up with emerging deepfake generation tech.
Enterprise Deepfake Detection Tools Comparison 2025: The Reality Check
When evaluating the best deepfake detection tools, whether you're looking at open source or enterprise deepfake detection, it’s important to first understand their fundamental limitations:
| Detection approach | What it analyzes | The limitations |
| --- | --- | --- |
| Facial Landmark Analysis | Uses algorithms to review facial dimensions in media content to uncover inconsistencies or unnatural movements | Real-world variables like variations in facial expressions across ethnicities, ages, environmental conditions, and health conditions can affect detection accuracy |
| Lip Synchronization Detection | Examines the lip movements of call participants to see whether they match the spoken audio | Advanced deepfakes now sync mouth movements and audio in a way that looks completely real, without visual or timing errors |
| CNN-Based Tools | Employs a deep learning model called a convolutional neural network (CNN) to analyze images and videos for visual inconsistencies, such as unnatural blending, that humans may overlook | Fails against newer deepfake models it hasn't been trained on |
| Micro-Expression Analysis | Detects subtle facial expressions that reveal genuine emotions, which deepfakes fail to replicate | Requires high-quality video; more advanced deepfake models are starting to replicate micro-expressions more convincingly |
| Digital Watermarking | Acts as markers in audio, video, or image data to identify ownership | Only protects content that's watermarked at creation |
Now, let’s look at real-world AI tools for detecting deepfake social engineering:
| Tool | Who it’s for | What it analyzes | Its limitations |
| --- | --- | --- | --- |
| Microsoft Video Authenticator | Enterprise | Employs machine learning algorithms to assess videos for subtle visual artifacts like blending boundaries, fading, or grayscale elements the human eye may miss | Requires access to Microsoft’s platform; accuracy degrades with compressed or low-quality videos |
| Deepware Scanner | Free/consumer | Scans videos for facial artifacts and signs of AI manipulation | Limited to analyzing video, not voice deepfakes; struggles with the latest GAN-generated deepfakes |
| Sensity AI | Enterprise | AI-powered detection across multiple media types, such as video, images, and audio | Subscription-based; detection lags as new techniques emerge; struggles to identify psychological manipulation |
| Intel FakeCatcher | Enterprise | Analyzes blood flow in real time with 96% accuracy | Requires high-resolution video; may not always conclusively prove manipulation due to evolving deepfake tech |
| Reality Defender | Enterprise | Multi-model approach enabling the detection of deepfakes in audio, video, text, and images | Expensive; reduced accuracy in real-world, dynamic scenarios; detection quality impacted by noise, video compression, and poor lighting |
| Amber Video | Consumer | Adds cryptographic watermarking to videos you record | Useful for creators but may miss sophisticated deepfakes |
As can be seen, both consumer and enterprise deepfake detection tools often fail for three key reasons:
- Limited training data: When a detection system encounters a generation method it hasn't seen before, results become "no better than random guesses.”
- Lab testing versus real-world conditions: Detection systems are trained in pristine lab conditions with good lighting and consistent audio. When faced with compressed video, noisy audio, or poorly lit environments, detection accuracy often falls.
- Attackers who adapt faster than defenses: Deepfake attackers are now testing their work against known detection tools before launching attacks. When they do this, detection performance can fall by over 99%.
Dr. Hernandez stresses: Perfect detection doesn't exist.
The detection tools will increase in sophistication, but so will the deepfakes.
Your best defense isn't a single tool or technique. Not the latest enterprise solution, and not even the most sophisticated open source deepfake detection tools.
It's a system that assumes deception is a matter of “when, not if,” verifies everything that matters, and never, ever relies on “seeing-is-believing.”
Because in a world of deepfakes, believing what you see may be the most dangerous thing you do.
I came away from Dr. Hernandez’s presentation thoroughly impressed.
And I wasn’t the only one. As the session ended, I could hear loud applause on the floor.
Packed with actionable insights backed by real-world expertise, Dr. Hernandez’s presentation inspires us all to rethink old, tired approaches to corporate cybersecurity.
Don’t miss the chance to reach out to him on LinkedIn and his website for exclusive insights and ongoing expertise:
- https://www.linkedin.com/in/felixhern/
- https://felixhernspeaks.com/
- You can also find Dr. Hernandez’s newest books on Amazon.
Sources
Deepfakes and the crisis of knowing
Deepfake Technology in Corporate Cybersecurity: Emerging Threats and Defense Mechanisms (Dr. Felix Hernandez)
Fooled twice: People cannot detect deepfakes but think they can
Deepfakes are Hijacking Video Calls
Human performance in detecting deepfakes: A systematic review and meta-analysis of 56 papers
Real people in fake porn: How a federal right of publicity could assist in the regulation of deepfake pornography
Sophisticated Phishing Attacks Targeting Decision-Makers Including CEOs and CTOs
Phishing campaign: Are decision-makers vulnerable?
Why detecting dangerous AI is key to keeping trust alive in the deepfake era
Why Deepfake Detection Tools Fail in Real-World Deployment
FAQs: Deepfakes and corporate cybersecurity
How many deepfakes are expected in 2025?
In 2025, it is projected that about 8 million deepfake files will be created and shared, a massive increase from 500,000 in 2023. This growth shows how rapidly deepfake technology has become mainstream.
Which U.S. states have laws regulating deepfakes?
According to Ballotpedia, 47 U.S. states have enacted laws regulating deepfakes, covering areas like nonconsensual explicit content, political deepfakes, and tech entity obligations when hosting deepfake content. Only Ohio, Missouri, and Alaska lack such legislation.
How much does it cost to create a deepfake?
The cost of generating deepfakes has decreased dramatically. According to Dr. Hernandez, you can make deepfakes for as little as $29/month.
Meanwhile, other platforms provide free credits (or free trials), so everyday users, educators, and content creators can experiment with voice cloning or face swapping technology for fun projects.
Remember: While nearly anyone can learn how to make deepfake videos for free, it’s important to never use these tools to mislead, invade privacy, or spread harmful content.
Can deepfakes be detected in live video meetings?
Yes, tools like Pindrop Pulse for Meetings are designed to detect deepfakes in live video meetings.
Pindrop’s detection model is trained on more than 350 deepfake generation tools, 20 million unique utterances, and over 40 languages.
It also covers more than 90 percent of languages spoken online.
Pindrop can detect subtle acoustic and behavioral traits in audio that AI can’t readily replicate, such as frequency distortions, voice variances, unnatural pauses, and temporal deviations.
The system offers real-time detection in just two seconds, with over 99% accuracy on known deepfake engines and over 90% on new deepfake generation methods.
Can ChatGPT detect deepfakes?
While ChatGPT isn’t as accurate as state-of-the-art detection models, it holds some promise in detecting deepfakes.
In 2024, a University of Buffalo research team found that ChatGPT achieved about 79.5% accuracy in detecting AI-generated images and 77.2% on images generated by StyleGAN.
The major advantage of ChatGPT is its ability to explain in plain, human language why an image might be fake, such as an abrupt transition between a subject and background.


