SPARCS - Topic Of The Week

Is Anything Real at This Point?

How fake job seekers are scamming companies

Just a few weeks ago, we discussed the growing challenge job seekers face in determining whether a job posting is even legitimate. But the script has flipped: now employers are the ones asking whether the candidates they’re meeting are even real. Welcome to the unsettling new trend of deepfake job applicants.

So, yes, if identifying a real job posting wasn’t already hard enough, job seekers now also have to compete against AI impersonating somebody else. The phenomenon doesn’t have an official name just yet, but we can refer to these applicants as “deepfake candidates.”

Deepfake candidates are job applicants who are either completely generated by AI or who have significantly altered their appearance using deepfake technology, which produces highly realistic fake video, images, or audio. These candidates appear legitimate at first glance, with polished resumes, LinkedIn profiles, and even live video-interview presences convincing enough to fool recruiters and hiring managers.

Around 17% of hiring managers say they’ve encountered candidates using deepfake technology to alter their video interviews, a recent survey from career platform Resume Genius found. And one top executive, who decided to dig into the issue at his own company, found that out of 827 applications for a software developer job, about 100 were attached to fake identities.

So, what’s the big deal, you might ask? This isn’t just a clever prank or resume padding; it’s a serious security threat. Hiring a deepfake candidate goes far beyond simple deception, exposing organizations across industries to elevated financial, operational, security, and reputational risks. And the problem is only expected to grow: it’s predicted that by 2028, one in four candidate profiles worldwide will be fake.

The rise of deepfake candidates isn’t just about falsified identities; it’s a direct threat to cybersecurity and data protection, and an open door to corporate espionage. These are sophisticated attacks that leverage AI to bypass identity checks, deceive human resource departments, and gain access to sensitive corporate systems.

Consider this: if a company unknowingly hires “Ivan X,” a convincing deepfake with a fake identity, what might he access? Internal databases? Source code? Customer information? Critical infrastructure? You can read more about the “Ivan X” case online.

These incidents aren't isolated. The Justice Department has uncovered multiple networks of North Korean operatives who used fake identities, often built with the help of AI, to land remote U.S.-based IT jobs and funnel U.S. dollars back to their home country.

The Justice Department estimates these schemes generate hundreds of millions of dollars annually, with much of that money going directly to the North Korean Ministry of Defense and the country's nuclear missile program.

If you’re still thinking, “Well, what does that have to do with me?”, consider this: these frauds could shake the very foundation of the remote work revolution. If businesses can’t trust who’s on the other side of the screen, they may retreat to in-person hiring models, undermining the flexibility and accessibility that made remote work so popular post-2020.

So, what now? The rise of deepfake candidates signals a turning point in hiring, cybersecurity, and even trust in digital identity. It raises profound questions:

  • Can you trust who you’re interviewing?
  • What safeguards are in place to verify digital identities?
  • Are we prepared for a world where seeing, or hearing, is no longer believing?

In a hiring landscape increasingly shaped by AI, truth is becoming negotiable. I guess I’ll see you at the office!
