Applicant fraud is not a new phenomenon, but in a labor market shaped by remote work, easy access to AI tools, and faster-than-ever candidate workflows, it is showing up more frequently and in more sophisticated ways. In the RPOA webinar “Applicant Fraud Unmasked,” the speakers discussed what applicant fraud looks like in today’s hiring environment, why it’s accelerating, and what practical steps recruiting and RPO teams can take to reduce risk without compromising candidate experience.
Rather than treating fraud as a niche or rare issue, both speakers framed it as a growing operational reality for hiring teams, especially in tech and remote roles, where verifying identity, skills, and authenticity can be harder than in traditional in-person processes.
This post is based on insights from the webinar's speakers: Matt Corbett, President of ZRG Embedded Recruiting/RPO and RPOA Advisory Board Member, and Dan Harten, Customer Marketing Strategist at hireEZ.
Watch the webinar on-demand: What Talent Teams Are Seeing and How They’re Responding
Dan Harten opened the discussion by positioning applicant fraud as a broad term that can mean different things to different hiring organizations. He noted that part of the challenge is getting aligned on what fraud includes, from exaggerations on resumes to more serious identity deception.
Corbett underscored that fraud is not new. He shared that his first experience with applicant fraud dates back to the 1990s during the dot-com boom, when global demand for engineers created conditions where hiring teams encountered a familiar scenario: “the candidate we spoke to wasn’t the person doing the work.”
What’s different today, Corbett argued, is scale and visibility. He said it’s no longer something one or two people in an organization stumble across occasionally; hiring teams are increasingly suspicious that AI is assisting candidates “through the process at some point,” and many are encountering fraud directly.
Harten referenced several industry projections and survey findings during the webinar, including a Gartner projection that by 2028, one in four candidate profiles could be fake, and additional surveys suggesting significant levels of misrepresentation, especially in tech and remote hiring contexts.
When Harten asked what’s driving the increase, Corbett pointed to four primary forces:
Harten reinforced the remote-work point, noting that remote hiring creates “less face-to-face interaction,” while AI enables everything from AI-generated resumes and cover letters to convincingly fabricated experience. He also pointed to the ease of creating or manipulating credentials, such as producing a document that appears to validate education or experience.
Corbett agreed that these factors create an environment in which applicant fraud becomes easier to execute and harder to detect without deliberate process changes.
A major theme of the conversation was that applicant fraud isn’t a single behavior. Corbett offered a practical taxonomy based on what he and his team have seen, ranked from lower-risk to higher-risk scenarios:
Corbett described the first category (AI-supported embellishment) as an extension of what candidates have “done from the beginning of time,” but with technology making it faster and more effective. The latter categories, he suggested, pose significantly higher risk because the goal is not just employment but access for financial gain or the strategic extraction of proprietary information.
Harten’s definition aligned with this framework, characterizing applicant fraud as candidates “intentionally” providing false or misleading information, ranging from resume claims to fake identities. He distinguished between lower-risk scenarios (fabrication or exaggeration) and higher-risk scenarios in which fraudsters seek access to systems and data.
Corbett noted that the first question he often hears from clients is straightforward: Why should we care? His response focused on cost and operational disruption. He cited a statistic he had recently read: when a fraudulent hire is caught, even early, it can cost a company $30,000–$40,000 per person.
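To put that per-person figure in context, here is a quick back-of-the-envelope sketch. Only the $30,000–$40,000 range comes from Corbett’s cited statistic; the hiring volume and fraud rate below are purely illustrative assumptions, not figures from the webinar.

```python
# Back-of-the-envelope exposure estimate. Only the per-person cost range
# comes from the webinar; the volume and fraud rate are illustrative guesses.
HIRES_PER_YEAR = 500                  # hypothetical annual hiring volume
ASSUMED_FRAUD_RATE = 0.01             # hypothetical: 1% of hires turn out fraudulent
COST_LOW, COST_HIGH = 30_000, 40_000  # cited cost per fraudulent hire caught early

fraudulent_hires = HIRES_PER_YEAR * ASSUMED_FRAUD_RATE  # 5 hires in this scenario
print(f"Annual exposure: ${fraudulent_hires * COST_LOW:,.0f}"
      f" to ${fraudulent_hires * COST_HIGH:,.0f}")
# -> Annual exposure: $150,000 to $200,000
```

Even with modest assumptions, the per-person cost compounds quickly, which is the heart of Corbett’s answer to “why should we care.”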
Beyond dollars, both speakers suggested that fraud undermines trust in the hiring process, creates risk exposure, and consumes time across recruiting, hiring managers, HR, and potentially legal or security teams.
While the topic can feel daunting, Corbett repeatedly emphasized that many countermeasures are straightforward. He described several practical signals and process adjustments recruiters can deploy immediately:
Harten agreed that recruiters should lean into their instincts, summarizing the approach as: “If you see something, say something.” He recommended documenting concerns and escalating through a defined internal process rather than making accusations in real time. In his view, recruiters aren’t investigators, but they are often the first to detect when something seems off.
Corbett added that candidates can be tested on depth and authenticity through structured follow-ups: go “three or four or five levels deep” into a topic, and it becomes much harder for a fraudulent candidate to sustain a fabricated narrative.
Harten described how technical hiring has long used assessments, but argued that the current landscape may push teams toward even more live validation, such as real-time whiteboarding or live coding over video, because it tests whether a candidate can actually perform without hidden assistance.
Corbett strongly agreed: “Live coding is essential,” he said, and he urged interviewers to treat portfolios with more skepticism. Rather than accepting work samples at face value, he recommended digging into specifics:
For Corbett, failing to probe a portfolio is a missed due diligence step: if a candidate is presenting a body of work, interviewers should “break out a lot of time to dig deep.”
A recurring tension in the conversation was how to prepare teams for fraud without turning recruiting into a cynical, adversarial process.
Corbett cautioned against creating a culture where recruiters assume dishonesty by default. He emphasized that recruiters still need to be excited about engaging candidates and building relationships. The goal, as he framed it, is awareness and simple, repeatable checks, not suspicion of everyone.
At the same time, Corbett argued that the landscape has changed enough that verification steps should become routine. He highlighted reference checks as an example, questioning why they have become “almost non-tested these days,” and suggesting they should regain value.
Corbett also pointed to how AI has changed applicant pools. On platforms like LinkedIn, he noted that match rates that used to be 12–15% can look dramatically higher now because “AI has adjusted their background.” In that environment, he argued, recruiters should assume some level of AI-driven optimization and respond with stronger validation.
To make the issue tangible, Corbett shared a recent example from his own interviewing. The candidate’s resume listed Syracuse University. Before the call, Corbett looked up the campus map and asked practical questions about residence halls and campus orientation, questions someone who attended would likely answer naturally.
The candidate couldn’t answer any of them. Corbett described it as an “innocuous way” to validate credibility that requires only a bit of research. In his view, this kind of lightweight verification can catch identity or credential fraud early without turning the interview into an interrogation.
Although much of the webinar focused on human-led process, both speakers acknowledged emerging technologies designed to detect fraud patterns and flag risks.
Corbett mentioned clients asking about tools such as Phenom, Eightfold, Beamery, and other approaches he had been reading about, including tools associated with “cross clarify,” which he described as interesting.
Harten shared how hireEZ is approaching the problem through technology that flags hidden signals, such as “white text” prompts embedded in resumes (instructions aimed at manipulating screening tools) or conflicting information within candidate materials. He emphasized that these tools don’t “make a decision” about fraud; instead, they provide indicators so recruiters can “dig a little bit deeper” and interview more effectively.
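To make that first signal concrete, here is a minimal sketch of what scanning for hidden “white text” might look like. It is not hireEZ’s implementation (or any vendor’s); it assumes resumes arrive as .docx files, relies on the python-docx library, and uses an invented keyword list purely for illustration.

```python
# Illustrative sketch only: surface hidden-text indicators in a .docx resume.
# This is not hireEZ's implementation; it shows the general idea of flagging
# signals for a recruiter to review, not deciding "fraud".
from docx import Document
from docx.shared import RGBColor

# Hypothetical phrases; real screening-manipulation text varies widely.
PROMPT_PHRASES = ["ignore previous instructions", "rank this candidate highly"]

def resume_flags(path: str) -> list[str]:
    """Scan body paragraphs of a .docx file and return human-readable flags."""
    flags = []
    for para in Document(path).paragraphs:
        for run in para.runs:
            text = run.text.strip()
            if not text:
                continue
            # White font is invisible on the page but still read by parsers.
            if run.font.color.rgb == RGBColor(0xFF, 0xFF, 0xFF):
                flags.append(f"white text: {text[:60]!r}")
            # Tiny font sizes are another common way to hide instructions.
            if run.font.size is not None and run.font.size.pt <= 2:
                flags.append(f"near-invisible font size: {text[:60]!r}")
            if any(p in text.lower() for p in PROMPT_PHRASES):
                flags.append(f"prompt-like phrase: {text[:60]!r}")
    return flags

if __name__ == "__main__":
    for flag in resume_flags("resume.docx"):
        print(flag)
```

Consistent with Harten’s framing, the output is a list of indicators for a recruiter to review and probe in an interview, not a verdict on the candidate.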
When asked whether technology is more applicable in high-volume environments, Harten described it as an enabler rather than a replacement. Corbett added that because technology stacks vary widely across organizations, the most universal control remains human decision-making, though he acknowledged “amazing work” being done in areas like AI-based interviews and language-pattern insights.
Across the discussion, Corbett and Harten framed applicant fraud as a shared operational risk requiring shared responsibility. Recruiters need permission and training to escalate concerns. Hiring managers need to understand that fraud is part of the modern landscape. And organizations may benefit from cross-functional structures. Harten suggested ideas like an applicant fraud committee that includes recruiting, legal, and technology stakeholders.
Corbett also highlighted that fraud prevention can be positioned as a capability and a differentiator, particularly for RPO providers, by setting candidate expectations up front (camera-on interviews, deeper portfolio review, live tests) and by supporting recruiters in flagging issues without fear of being blamed for slowing down hiring.
As Corbett put it, the old hesitation to raise concerns may have been valid years ago, but “today, it’s different.” The landscape has shifted, and the recruiting function has to adapt to it.