Many professionals are growing cautious about online interactions, especially unsolicited meeting requests. Take Nicole Yelland, a public relations worker at a Detroit non-profit. After falling victim to a sophisticated job scam, she adopted a meticulous verification routine for any unsolicited outreach: she runs potential contacts through personal data aggregators, tests their claimed language abilities, and insists on camera-on video calls before engaging further.
This heightened vigilance reflects a broader trend: as AI-driven fraud grows, so does paranoia in professional settings. Reports of job-related scams nearly tripled between 2020 and 2024, and reported losses surged from $90 million to $500 million over the same period. With advanced AI tools, scammers can easily produce convincing fake personas, blurring the line between authenticity and impersonation.
The scammers who targeted Yelland had impersonated a legitimate company and even supplied credible documentation, which made the deception more effective. Red flags emerged during the interview process, however, when they refused to turn on their cameras and asked for sensitive personal information, and Yelland eventually realized she was being defrauded.
To combat this wave of deception, startups specializing in detecting AI-generated deepfakes are emerging. Meanwhile, some professionals are reverting to more traditional verification methods. Daniel Goldman, a blockchain engineer, now asks family and friends to confirm requests through a second channel, such as email or text, even when they appear to be speaking with him on a video call.
Ken Schumacher, who runs a recruitment verification service, says hiring managers often quiz candidates with rapid questions about their purported hometowns to check that they are who they claim to be. There is also a “phone camera trick”: a person on a video call is asked to hold their phone camera up to their laptop, giving the other side a direct view of the screen and exposing any deepfake software that might be in use.
Despite these verification techniques, concerns persist that such measures foster an atmosphere of distrust. Jessica Eise, an assistant professor whose research contends with online survey fraud, notes that her team spends so much time screening respondents that it often recruits participants it knows personally to protect data integrity.
Amid the distrust, a measure of common sense still goes a long way. Looking back on the fake job pitch, Yelland says the overly generous offer and benefits were a clear indicator of a scam.
As technology advances, the need to distinguish the real from the fake has never been more urgent, and professionals are reconsidering how they engage with one another online.