
As Deepfakes Proliferate, Organizations Confront AI Social Engineering
Deepfakes are a growing problem for any organization jacked into the internet. They can be especially dangerous when weaponized by nation-states and cybercriminals.
“When people think about deepfakes, they often picture fake videos or voice-cloned calls,” noted Arif Mamedov, CEO of Regula Forensics, a global developer of forensic devices and identity verification solutions. “In reality, the bigger risk runs much deeper. Deepfakes are dangerous because they attack identity itself, which is the foundation of digital trust.”
“Unlike traditional fraud, which relies on stolen or leaked data, deepfakes allow criminals to recreate existing or create entirely new people — complete with faces, voices, documents, and believable behavior,” he told TechNewsWorld. “These identities can look legitimate from the very first interaction.”
He explained that deepfakes create three significant risks. First, authentication breaks down when facial recognition, voice authentication, or document scanning relies on static or replayable signals. Second, fraud scales fast. AI enables the generation of thousands of fake identities simultaneously, turning fraud into an industrial process. And third, deepfakes create false confidence. They often pass existing controls, so organizations think they’re protected while fraud quietly grows.
“Our 2025 research shows that deepfakes don’t replace traditional fraud — they amplify it, exposing old weaknesses and making them far more expensive,” he added.
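To make the first of those risks concrete, here is a minimal, hypothetical Python sketch (it does not represent Regula's products or any particular vendor's API) contrasting a verifier that accepts a static, replayable biometric signal with one that binds each attempt to a fresh, short-lived challenge, the kind of liveness-style binding that makes a recorded or synthesized artifact much harder to replay:

```python
import hashlib
import hmac
import secrets
import time

# Hypothetical illustration only: names and logic are assumptions for the sketch.

class StaticVerifier:
    """Compares a presented biometric template against the enrolled one.
    Anything that reproduces the template, including a deepfake or a
    replayed capture, passes."""

    def __init__(self, enrolled_template: bytes):
        self.enrolled = enrolled_template

    def verify(self, presented: bytes) -> bool:
        return hmac.compare_digest(self.enrolled, presented)


class ChallengeBoundVerifier:
    """Issues a one-time, short-lived nonce and requires proof that the client
    responded to this session's challenge, so a recorded artifact can't be replayed."""

    def __init__(self, enrolled_template: bytes, ttl_seconds: float = 30.0):
        self.enrolled = enrolled_template
        self.ttl = ttl_seconds
        self._challenges: dict[str, float] = {}

    def issue_challenge(self) -> str:
        nonce = secrets.token_hex(16)
        self._challenges[nonce] = time.monotonic()
        return nonce

    def verify(self, presented: bytes, nonce: str, proof: bytes) -> bool:
        issued_at = self._challenges.pop(nonce, None)  # one-time use
        if issued_at is None or time.monotonic() - issued_at > self.ttl:
            return False  # unknown, reused, or expired challenge: a replay fails here
        if not hmac.compare_digest(self.enrolled, presented):
            return False
        expected = hmac.new(presented, nonce.encode(), hashlib.sha256).digest()
        return hmac.compare_digest(expected, proof)
```

The point of the second verifier is not the specific cryptography; it is that a deepfake which merely reproduces what the enrolled person looks or sounds like cannot answer a challenge that did not exist when the fake was made.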
How Deepfakes Undermine Human Judgment
Mike Engle, chief strategy officer for 1Kosmos, a digital identity verification and passwordless authentication company headquartered in Iselin, N.J., explained that traditional security assumes that once someone is authenticated, they are legitimate. “Deepfakes break that assumption,” he told TechNewsWorld.
“AI can now convincingly impersonate executives, employees, job candidates, or customers using synthetic voices, faces, and documents, allowing attackers to bypass onboarding, help desk, and approval workflows that were never designed to detect manufactured identities,” he said. “Once a fake identity is enrolled, every downstream control — MFA, VPNs, SSO — ends up protecting the attacker instead of the organization.”
Deepfakes don’t break systems first — they break human judgment, maintained David Lee, Field CTO of Saviynt, an identity governance and access management company in El Segundo, Calif.
“When a voice or video sounds right, people move quickly, skip verification, and assume authority is legitimate,” he told TechNewsWorld. “That’s what makes deepfakes so effective. A believable executive voice can authorize payments, override processes, or create urgency that short-circuits rational decision-making before security controls ever come into play.”
“As with any fraud or scam, a deepfake-driven scam puts any business at risk, but especially smaller or thin-margined businesses, where financial impacts can have a disproportionate effect on the health and viability of the entity,” added James E. Lee, president of the Identity Theft Resource Center (ITRC), a nonprofit organization devoted to minimizing risk and mitigating the impact of identity compromise and crime, in San Diego.
“Deepfakes can lead to data breaches; loss of control of processes, systems, and equipment; and ultimately financial impacts in the form of actual losses, as well as unbudgeted expenses,” he told TechNewsWorld.
Deepfake Attacks Accelerating
The proliferation of AI appears to have increased adversary activity. “Cybersecurity reports and regulatory warnings all indicate an exponential rise,” observed Ruth Azar-Knupffer, co-founder of VerifyLabs, a developer of deepfake detection technology, in Bletchingley, England.
“Threat actors are increasingly leveraging accessible AI tools, such as open source deepfake generators, to create convincing fakes efficiently,” she told TechNewsWorld. “The proliferation of digital communication, such as video calls and social media, has expanded attack opportunities, making deepfakes a growing vector for scams and disinformation.”
Regula’s Mamedov added that the reason deepfake use is accelerating is simple. “The tools are cheap or free, the models are widely available, and the quality of output now exceeds what many verification systems were built to handle,” he explained.
“What used to be an individual effort to craft a convincing deepfake is now a plug-and-play ecosystem,” he continued. “Fraudsters can buy complete ‘persona kits’ on demand: synthetic faces, deepfake voices, digital backstories. This marks a shift from small-scale, manual fraud to industrial-scale identity fabrication.”
He cited Regula data showing that about one in three organizations has already experienced deepfake fraud. “That’s the same frequency as long-standing threats such as document fraud or social engineering,” he said. “Identity spoofing, biometric fraud, and deepfakes now sit firmly in the mainstream fraud playbook.”
New Tool, Old Deception
One way organizations are addressing the deepfake problem is through training. For example, KnowBe4, a well-known cybersecurity training company based in Clearwater, Fla., rolled out new training on Monday aimed at defending organizations from deepfakes.
KnowBe4 Chief Human Risk Management Strategist Perry Carpenter explained that the training focuses on employee interaction with deepfakes.
“The single best thing that anybody can do, if they feel like there’s an emotion being pulled in some way, some emotional lever being touched, whether that is fear or urgency or authority or hope or anything else, is to treat that as a signal to slow down, start to analyze the story and what’s being asked of them, and ask whether it raises any red flags,” he told TechNewsWorld.
“You’ll notice, I’m not talking about looking at the deepfake to say, does the mouth look right or does the voice sound right?” he continued. “Those are all things that we can do, but those are things that will go away within the next six months to a year, as the technology gets better.”
“So, the last thing I want somebody to do is to believe that there will always be a visual or audio tell that they can figure out,” he said. “The best thing is always going to be, am I feeling manipulated in some way? Is this asking me to do something out of the ordinary? Is it touching on an emotion in some way? Then how can I verify this through another channel?”
“Deepfakes are just the newest tech tool in the attacker’s toolbox,” he added. “The mode of deception and the narrative attack and the emotions are age-old.”
Never Trust, Always Verify
Rich Mogull, chief analyst with the Cloud Security Alliance, a not-for-profit organization dedicated to cloud best practices, agreed that employees shouldn’t rely on visual or audio artifacts to identify deepfakes. “Instead of relying on looking for visual or auditory signs, I recommend looking for behavioral signs and having process controls to prevent the kinds of fraud they are used for,” he told TechNewsWorld.
He recommended requiring multiple checks before issuing a bank transfer and implementing internal controls that block attempts to circumvent them. He also suggested training employees to validate CEO calls via an out-of-band channel, such as Slack/Teams, and to look for social engineering signals, such as “we don’t have time for that, just do it now.”
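As a rough illustration of the kind of process control Mogull describes, the sketch below (hypothetical Python, with illustrative names and thresholds rather than any vendor's actual workflow) refuses to execute a wire transfer until a second, independent approver has signed off and the requester has been confirmed over a separate channel:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of dual control plus out-of-band confirmation.
# Names, fields, and the threshold are assumptions for illustration.

@dataclass
class TransferRequest:
    requester: str
    amount: float
    approvals: set[str] = field(default_factory=set)
    out_of_band_confirmed: bool = False


class TransferPolicy:
    def __init__(self, dual_control_threshold: float = 10_000.0):
        self.threshold = dual_control_threshold

    def record_approval(self, request: TransferRequest, approver: str) -> None:
        if approver == request.requester:
            raise ValueError("Requester cannot approve their own transfer")
        request.approvals.add(approver)

    def record_out_of_band_confirmation(self, request: TransferRequest) -> None:
        # e.g., a callback to a known number or a message on a separate channel
        request.out_of_band_confirmed = True

    def can_execute(self, request: TransferRequest) -> bool:
        required = 2 if request.amount >= self.threshold else 1
        if len(request.approvals) < required:
            return False  # large transfers need two independent approvers
        if not request.out_of_band_confirmed:
            return False  # a convincing voice on a call is never enough on its own
        return True
```

The design choice that matters here is that a believable executive voice changes nothing by itself; the transfer stays blocked until the out-of-band confirmation and the required approvals both land.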
While acknowledging that employees can be trained to combat deepfakes, Saviynt’s Lee argued that training alone isn’t enough. “Awareness helps people pause, but it doesn’t replace verification,” he said. “The real shift is teaching employees to stop asking ‘Is this real?’ and start asking ‘What confirms this?’ That means callback procedures, secondary approval paths, and removing voice or video as standalone trust signals.”
“If your control depends on someone recognizing a fake, you don’t have control; you have a gamble,” he noted.
“Deepfakes aren’t the core problem. They’re a stress test,” Lee added. “They expose how many organizations still rely on recognition instead of verification.”
“The long-term solution isn’t better human detection,” he continued. “It’s treating identity as something that must be explicitly validated and continuously enforced by systems. When trust is no longer implicit, deepfakes lose their power.”




