The New Face of Social Engineering
Is your organization ready for AI impersonation in video conferencing?
One of my talents is being able to see how technology evolves from its infancy to broader adoption. But as I heard Garth Brooks once say, “For every blessing, there is a curse.” Having this talent while working in cybersecurity means I often find myself worrying about how to defend against attacks before the rest of the world even sees them coming.
Having seen how effective phishing can be using nothing more than text, one of my biggest concerns now is AI impersonation during live video calls. I found one major example: a finance worker in Hong Kong transferred $25 million during a video call with what he thought were co-workers and executives.
The most chilling part of the report:
"The elaborate scam saw the worker duped into attending a video call with what he thought were several other members of staff, but all of whom were in fact deepfake recreations, Hong Kong police said at a briefing on Friday."
Imagine getting a meeting invite from a spoofed email address that appears to come from your boss. You join the call and see your manager and several colleagues, all speaking normally. What would you do?
I would like to believe I could spot subtle cues: a lack of micro-expressions, maybe, or an off cadence in speech. But the truth is, this technology is getting dangerously good.
This Is Not Difficult to Do
So how easy is it to pull this off? Let me walk you through a simple demo I created using ChatGPT and HeyGen.
In this case, I asked ChatGPT:
"What is the single most important thing organizations can do to secure themselves in cybersecurity?"
ChatGPT responded with a pretty good answer.
I then used HeyGen to clone my voice and mimic my behavior based on a video I recorded months ago. I pasted the AI-generated script into the HeyGen tool and produced a polished, believable video without speaking a single word into a microphone.
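If you wanted to automate even the scripting step, a few lines of Python against the OpenAI SDK would do it. Here is a minimal sketch; the model name is my own assumption, and the HeyGen step stays manual since I drove that through the web interface:

```python
# Minimal sketch: generate the talking-head script programmatically.
# Assumes the openai Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model works here
    messages=[
        {
            "role": "user",
            "content": (
                "What is the single most important thing organizations "
                "can do to secure themselves in cybersecurity?"
            ),
        }
    ],
)

script = response.choices[0].message.content
print(script)  # paste this into HeyGen (or any avatar tool) as the script
```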
Now imagine this process happening in real time.
Real-Time AI Integration into Video Calls Is the Risk
While my example was built manually, we are quickly moving toward real-time integration. Tools like ChatGPT are already being embedded into voice assistants and live chat platforms. Soon, we will see AI agents join video meetings, responding and adapting on the fly.
In the Hong Kong attack, the threat actor built an elaborate multi-person deepfake simulation. But they did not have to. A simpler approach would be to use an AI agent to join the meeting silently, just to listen in and collect intelligence.
There are already many legitimate services that do this for note-taking. I use one. But it is not a stretch to imagine those same tools being used by attackers.
How Companies Can Protect Themselves
Here are four practical strategies organizations should adopt right now:
1. Implement Multi-Channel Verification for Sensitive Actions
If a request comes through video, confirm it via another medium. Require a phone call or secure messaging confirmation for tasks involving financial transfers or sensitive information.
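To make this concrete, here is a rough sketch of what an out-of-band gate can look like in Python. The two channel helpers are hypothetical placeholders for whatever SMS or secure-messaging provider you actually use; the point is that the transfer simply cannot proceed on the strength of the video call alone.

```python
# Sketch: require out-of-band confirmation before a high-risk action.
# send_sms_challenge() and wait_for_reply() are hypothetical stand-ins
# for a real SMS or secure-messaging integration.
import secrets

def send_sms_challenge(phone: str, message: str) -> None:
    # Hypothetical placeholder: call your messaging provider here.
    print(f"[sms -> {phone}] {message}")

def wait_for_reply(phone: str, timeout_seconds: int) -> str | None:
    # Hypothetical placeholder: poll your provider for the user's reply.
    return input(f"Reply from {phone}: ")

def confirm_out_of_band(phone: str, action: str) -> bool:
    """Send a one-time code over a second channel and verify the reply."""
    code = f"{secrets.randbelow(10**6):06d}"  # 6-digit one-time code
    send_sms_challenge(phone, f"Confirm '{action}' with code {code}")
    reply = wait_for_reply(phone, timeout_seconds=300)
    return reply is not None and secrets.compare_digest(reply.strip(), code)

if __name__ == "__main__":
    if confirm_out_of_band("+1-555-0100", "wire $25,000 to account 4471"):
        print("Confirmed; proceeding with the transfer.")
    else:
        print("Confirmation failed; transfer blocked.")
```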
2. Lock Down Calendar Access and Meeting Links
Ensure calendars are not publicly indexed or visible. Disable "join before host" and always verify who is joining.
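A simple audit along these lines is easy to sketch. The fetch_participants() helper below is a hypothetical stand-in for your conferencing platform's reporting API; the logic just flags anyone who joined without being on the invite list.

```python
# Sketch: flag meeting participants who were never invited.
def fetch_participants(meeting_id: str) -> set[str]:
    # Hypothetical placeholder: query your platform's reporting API
    # (Zoom, Teams, Meet) for the email addresses that actually joined.
    return {"ceo@example.com", "finance@example.com", "unknown@attacker.example"}

def audit_meeting(meeting_id: str, invited: set[str]) -> set[str]:
    """Return participants who joined but were never invited."""
    joined = {p.lower() for p in fetch_participants(meeting_id)}
    return joined - {i.lower() for i in invited}

uninvited = audit_meeting("demo-meeting", {"ceo@example.com", "finance@example.com"})
for email in sorted(uninvited):
    print(f"WARNING: uninvited participant {email}")
```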
3. Train Teams on Deepfake Awareness
Just like phishing simulations, run video-based social engineering drills. Teach employees how to question even familiar faces when high-stakes decisions are involved.
4. Use Code Words or Security Phrases
For executive-level communication, establish internal passphrases or behavioral cues that AI would struggle to replicate. I’m sure many of my former colleagues and I would have some fun with this one!
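For teams that want something stronger than a memorized phrase, a challenge-response scheme built on a shared secret is one option. Here is a minimal sketch using only Python's standard library; because the correct response depends on a fresh challenge each time, a deepfake cannot simply replay a recorded code word.

```python
# Sketch: challenge-response on a shared secret, so the "code word"
# changes every time and a recorded clip cannot be replayed.
# Standard library only; distributing the secret is up to you.
import hmac, hashlib, secrets

SHARED_SECRET = b"rotate-me-quarterly"  # assumption: shared out of band

def make_challenge() -> str:
    return secrets.token_hex(4)  # short random challenge to read aloud

def expected_response(challenge: str) -> str:
    digest = hmac.new(SHARED_SECRET, challenge.encode(), hashlib.sha256)
    return digest.hexdigest()[:6]  # short enough to speak or type

# The caller issues a challenge; the person on video (or their device)
# must produce the matching response before anything sensitive proceeds.
challenge = make_challenge()
print(f"Challenge: {challenge}")
print(f"Expected response: {expected_response(challenge)}")
```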
This is far from a complete list, but it’s a start. We’re entering a brave new world with AI becoming part of our lives and our jobs. It’s best to prepare rather than react!