Fighting the rise of fake candidates
What happens when the “perfect candidate” turns out to be fake? From bad actors trying to steal company data to AI candidates successfully getting hired, the pressure is on for recruiting teams to adapt fast.
Fake candidates aren’t just a hypothetical risk anymore. In a recent episode of 10x Recruiting, Metaview CEO Siadhal Magos and host Nolan Church unpacked what they’re hearing across the hiring world: AI candidates are already showing up in interviews, and the problem is picking up speed.
These aren't just keyword-stuffed resumes. We're talking about sophisticated deepfakes and avatars that look, sound, and respond like real people—until they don't. And the industry isn’t ready for what happens next.
Nolan and Siadhal explore what’s working (and what’s not) when it comes to fighting back, and what it could mean for AI agents to become a legitimate part of interviewing.
Key takeaways:
- By 2028, an estimated 25% of candidates worldwide could be fake, requiring new detection methods and verification processes.
- The fake candidate problem exists at two levels: applicants using AI to create thousands of profiles, and AI avatars showing up to interviews.
- Simple tactics can help for now, but the long-term solution will require systemic change across the recruiting industry.
Related resources
- Article: Fake job seekers are flooding U.S. companies (CNBC)
What are fake candidates?
Fake candidates are applicants who intentionally misrepresent their identity, experience, or capabilities during the hiring process. This can include AI-generated resumes, automated applications, exaggerated or fabricated work histories, or candidates using tools to assist them during interviews.
In more extreme cases, fake candidates may involve impersonation or third parties standing in for the real applicant. The result is the same: hiring teams receive distorted signal and spend time evaluating candidates who were never truly qualified or authentic.
The fake candidate problem comes in two main flavors:
1. AI-generated applicants
The first issue is high-volume, AI-generated applications. Job seekers are increasingly using AI to mass-apply with optimized resumes and templated responses.
Some of these candidates are real. Some aren’t. In either case, they clog up valuable space in your recruitment pipeline.
"There are literally real applicants using AI to make thousands of applications as well,” Siadhal noted. “That’s very hard to discern."
This creates what Nolan calls a "polluted ATS." Recruiters are already overburdened. When bots and AI-optimized applications dominate inbound volume, it slows down real candidates, buries high-quality talent, and forces teams to rely more heavily on surface-level signals like LinkedIn polish or resume formatting.
And for high-volume recruiting teams, this problem scales fast. Nolan recalled being pinged by multiple recruiters, each describing eerily similar patterns: great-looking resumes, then strange energy on the call. Something was off. It wasn’t just the voice or vibe — it was an uncanny mismatch between behavior and expectations.
2. AI avatars in interviews
The more sophisticated (and concerning) problem involves AI avatars that can actually conduct interviews. These aren't just resume enhancements — they're fully digital imposters.
Nolan shared a real example he heard about from a former colleague at Transform. Katelyn Halbert noticed something strange during an interview at her current company, Pindrop: The candidate’s mouth movements didn’t match the audio. It turned out to be an AI avatar impersonating a real applicant.
"We’re hearing it more from our larger customers,” Siadhal said. “Someone turns up for the interview who isn’t the same person that ends up on the job."
That encounter wasn’t isolated. Companies like Pindrop are now building internal tools specifically to identify and reject deepfake candidates before they reach sensitive roles.
Siadhal noted that the frequency of these incidents appears higher in large, well-resourced organizations. It makes sense: Bigger brands have more to lose — and more appeal to attackers looking for sensitive data, insider access, or pure disruption.
Why fake candidates are on the rise
Several factors are converging to make this problem more common:
- AI tools are more accessible than ever, including generative video and voice models.
- Remote-first culture means most interviews now happen over Zoom.
- Digital-first hiring pipelines let fake applicants blend in more easily.
- High-value companies like banks or enterprise SaaS firms are especially attractive targets for bad actors.
According to one study cited in the episode, one in four candidates worldwide could be fake by 2028. “We thought this was just a pipeline problem,” Siadhal explained, “but now we're seeing it all the way through the funnel.”
There’s even a geopolitical angle. The CNBC article Nolan references in the episode ties some deepfake activity to North Korea and other state actors. And as both hosts point out, the goal isn't just to get a job — it’s to gain access to internal systems, customer data, or intellectual property (IP).
The impact on recruiting teams
This trend isn't just concerning from a fraud or risk standpoint. It's already adding friction and uncertainty to day-to-day recruiting workflows.
Recruiters are wasting time on conversations with fake people. And worse, these fake applicants push real candidates further back in the queue, making it less likely they ever get a human conversation.
Even when the fake candidates are caught, the manual effort required to detect and remove them isn’t scalable. And in rare cases, some fake candidates even get hired.
Nolan shared a story about a company that had unknowingly extended six offers to fake candidates. “They actually ended up hiring a couple of these six people,” he said. “That was wild.”
The situation was part of a broader deepfake scam. According to the company’s CHRO, the fraudsters weren’t just pretending to be applicants — they were impersonating the company itself. They sent fake offer letters to real job seekers in an effort to steal sensitive information like bank account details and Social Security numbers.
It’s a chilling reminder that the harm runs both ways. Employers can be defrauded. And candidates can be exploited.
What teams can do now
While there’s no universal solution yet, Nolan and Siadhal offered several ways teams can start protecting themselves immediately:
Add friction early—but explain why
Asking candidates to record a short video about why they’re interested in the role can surface fakes quickly. But teams have to frame this friction carefully.
"You need to clearly articulate to candidates why you're doing it,” Nolan explained. “It’s about making decisions faster."
Explain that your team receives thousands of applications and wants to move quickly—but only with candidates who are serious and real.
And done well, this actually improves the candidate experience. Candidates who feel they’ve earned their spot in the process are more likely to accept offers. They feel chosen. They feel seen.
Siadhal noted that candidates are more likely to accept the job when they’ve gone “through a process where they feel like they’ve been selected.”
Train recruiters to spot red flags
Siadhal pointed out that AI avatars still struggle with:
- Moving their hands or bodies naturally
- Looking left or right on command
- Responding authentically to unexpected questions
Recruiters can start incorporating basic, low-lift tests into video screens: Asking someone to gesture or look away from the camera can reveal limitations in current avatar technology.
“It might be a bit weird,” Siadhal explained, “but if you asked the candidate to look to their left and look to their right, the AI avatar would not do a good job of this.” So although it’s “not a panacea” because the AI will soon have these capabilities, it’s “definitely something you can do,” Siadhal said.
Consider in-person or hybrid interview stages
For companies with physical locations or hybrid setups, adding an in-person round for high-sensitivity roles can serve as a final layer of identity verification.
It won’t work for every business, especially remote-first orgs. But as Siadhal noted, “the final boss” of your interview process should be something AI can’t replicate — yet.
Screen LinkedIn profiles
Fake profiles tend to have:
- No posting history
- Very few connections
- Recently created accounts
While LinkedIn doesn’t show account creation dates publicly, most recruiters can spot suspicious patterns with a little manual review.
As Siadhal put it, once recruiters realize how often they’re manually reviewing LinkedIn profiles, automation becomes the next step: “They do that 10 times in a row, they’re like, you know what, I’m just going to build an agent to do this for me every single time someone gets on my calendar.”
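If you do end up automating this review, the check itself is simple. Here’s a minimal sketch of the kind of red-flag pass such an agent might run, assuming the profile details have already been gathered by a recruiter or an approved tool; the field names and thresholds below are illustrative assumptions, not anything from the episode.

```python
# Illustrative sketch only: assumes profile data was already collected elsewhere.
# Field names and thresholds are hypothetical examples, not from the episode.

from dataclasses import dataclass

@dataclass
class ProfileSignals:
    post_count: int            # lifetime posts/activity items visible on the profile
    connection_count: int      # "500+" can simply be stored as 500
    months_since_created: int  # estimated account age, if it can be inferred

def red_flags(p: ProfileSignals) -> list[str]:
    """Return which of the episode's three red flags this profile trips."""
    flags = []
    if p.post_count == 0:
        flags.append("no posting history")
    if p.connection_count < 50:
        flags.append("very few connections")
    if p.months_since_created < 6:
        flags.append("recently created account")
    return flags

# Example: a month-old profile with zero activity and 12 connections
suspect = ProfileSignals(post_count=0, connection_count=12, months_since_created=1)
flags = red_flags(suspect)
if len(flags) >= 2:
    print("Flag for manual review:", ", ".join(flags))
```

The specific thresholds matter less than the habit: the checklist a recruiter runs in their head can be written down once and applied consistently to every profile that hits the calendar.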
We’re still early in the arms race
Both hosts agree: Things are going to get worse before they get better. But regular team discussions can help recruiters stay sharp.
Nolan recommends carving out time every six to eight weeks to talk about:
- What red flags people are seeing
- Which tools or tactics are working
- How the tech is evolving
As he put it, "I think we just need to normalize that this is going to be a problem that we need to continually try and solve."
What long-term solutions might look like
Eventually, more centralized tools will emerge to manage this threat. Siadhal suggested several possibilities:
- Identity verification tech similar to what banks use (e.g., facial capture, ID validation)
- Job boards and ATS tools that verify candidate identities before surfacing them
- Candidate scoring models based on digital footprint, past behavior, and metadata
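None of these are standard in recruiting stacks yet, but the scoring idea is easy to picture. Below is a minimal sketch, assuming a handful of signals per candidate are already available; every signal name, weight, and threshold here is invented for illustration, not a product the hosts described.

```python
# Hypothetical candidate risk score: all signals, weights, and thresholds are
# assumptions for illustration only.

SIGNAL_WEIGHTS = {
    "identity_verified": -0.5,      # e.g. passed an ID or facial-capture check
    "thin_digital_footprint": 0.3,  # little or no history beyond the application
    "reused_resume_text": 0.2,      # resume text closely matches many other applicants
    "mismatched_metadata": 0.4,     # e.g. application metadata contradicts claimed location
}

def risk_score(signals: dict[str, bool]) -> float:
    """Sum the weights of the signals that fired; higher means riskier."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

candidate = {
    "identity_verified": False,
    "thin_digital_footprint": True,
    "reused_resume_text": True,
    "mismatched_metadata": False,
}

if risk_score(candidate) >= 0.5:
    print("Route to extra verification before a recruiter screen")
```

A real system would learn its weights from outcomes rather than hand-tuning them, but even a crude score like this lets teams route suspicious applications to extra verification instead of rejecting them outright.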
Some of these tools already exist in other sectors. It’s just a matter of time before they cross over into recruiting workflows.
But until these solutions are widely adopted, recruiters will have to work hard on the front lines.
What if candidates use AI agents for interviews?
Not every AI-driven application is malicious. And that’s where things get murky.
"We're probably not that far away from a world where a smart candidate literally thinks—for the recruiter screen—'I’m just going to send my AI agent to that. I think it can probably handle it.'"
This surfaces an interesting ethical question: If a candidate’s avatar is a verified likeness and is trained on their real experience, is this fraud? Or just smart time management?
Nolan and Siadhal both expect this line to blur. As AI gets better at mimicking behavior and language, teams will have to decide what counts as legitimate use versus deception.
“We may well get to the point where we are okay with [AI bots] as a way to get more information about you,” Siadhal explained. If the AI avatar is a verified representation of the candidate, Siadhal said he’s “happy to get some data from it.”
Final thoughts: Be proactive, not paranoid
There’s no perfect defense against fake candidates. But the worst approach is to pretend it’s not a problem.
Nolan and Siadhal recommend:
- Start small: Add simple checks and live asks to your process. Watch for red flags and trust your instincts.
- Create space: Talk about this regularly with your team. Normalize the conversation so people feel empowered to take action.
- Partner up: Loop in IT and security early. They can help monitor access issues, watch login behaviors, and support identity checks.
- Frame the friction: If you're adding steps like video asks or ID checks, explain why. Make sure candidates understand how it helps them move faster through a fair process.
- Look ahead: AI-generated applicants are a threat—but AI can also help recruiters work faster and smarter. The goal is balance.
This is a trust problem. And trust is something recruiters are uniquely equipped to build—one smart question, one sharp instinct, and one candidate interaction at a time.
Fake candidate FAQs
Why has the number of fake candidates increased so quickly?
Automated applications, generative AI, and remote hiring have lowered the cost and effort required to apply at scale, making fraud easier and harder to detect.
Are fake candidates always using deepfakes or advanced AI?
No. Many fake candidates rely on simpler tactics like AI-written resumes, scripted answers, or exaggerated experience rather than full video or voice deepfakes.
Which roles are most affected by fake candidates?
High-volume, remote, and well-paid roles — especially in tech, sales, and operations — tend to attract the most fraudulent applications.
How early should teams try to detect fake candidates?
As early as possible. The cost of detection rises sharply once candidates reach recruiter screens or hiring manager interviews.
Can eliminating fake candidates improve the candidate experience?
Yes. Reducing noise in the funnel helps real candidates move faster, receive better feedback, and interact with more engaged recruiters.