In a sluggish job market, artificial intelligence is reshaping both how people hunt for work and how companies fill positions, and it is making the process more frustrating for everyone involved. As the U.S. economy cools and job growth slows, tools like AI-powered interviews and automatically generated cover letters are becoming the norm, and not always for the better.
The numbers show how quickly the shift has happened: more than half of organizations surveyed by the Society for Human Resource Management relied on AI to recruit talent in 2025, and about a third of ChatGPT users turned to the chatbot to polish their job applications. Yet recent studies suggest that job seekers who lean on AI to craft their applications are actually less likely to land the job. Meanwhile, companies are drowning in a flood of submissions, making it tougher to sort through them all.
Research helps explain why. Anaïs Galdin, a researcher at Dartmouth, and her Princeton colleague Jesse Silbert analyzed cover letters from tens of thousands of applications on the job platform Freelancer.com. They found that after ChatGPT launched in 2022, the letters became noticeably longer and more polished. Employers responded by heavily discounting them, which made it far harder to distinguish truly qualified candidates from the crowd, leading to fewer hires and even lower starting salaries. Silbert warned that without better ways for information to flow between workers and companies, the result is a breakdown in the hiring system that hurts everyone.
The surge in applications has also pushed many employers to automate the interview stage itself. Some 54% of U.S. job seekers polled by recruiting firm Greenhouse in October reported experiencing an AI-led interview. Virtual interviews took off during the 2020 pandemic, and now AI often handles the questioning. That hasn't eliminated subjectivity. Djurre Holtrop, a researcher who studies asynchronous video interviews, algorithms, and large language models (the technology behind tools like ChatGPT, designed to understand and generate human-like text), points out that these systems can actually amplify existing human biases, a risk he says every AI developer should stay alert to.
Daniel Chait, CEO of Greenhouse, describes a 'doom loop' in which job applicants use AI to blast out hundreds of applications, prompting companies to automate more of their processes in response, leaving both sides exasperated. 'Both sides are saying, "This is impossible, it's not working, it's getting worse,"' Chait told CNN.
Yet companies keep adopting the technology. The recruiting tech market is projected to hit $3.1 billion by year's end, according to Fortune Business Insights. That enthusiasm is sparking pushback from state lawmakers, labor unions, and workers who fear AI could unfairly discriminate against candidates. Liz Shuler, head of the AFL-CIO, called AI hiring 'unacceptable,' arguing it denies qualified workers opportunities based on arbitrary factors like names, zip codes, or even how often they smile in an interview.
States like California, Colorado, and Illinois are stepping in with new laws to set standards for AI use in hiring and beyond. However, a recent executive order from President Donald Trump threatens to overshadow these state efforts. Employment lawyer Samuel Mitchell from Chicago notes that while the order can't fully override state laws, it adds confusion to an already uncertain regulatory landscape. Still, he emphasizes that current anti-discrimination laws remain in force, even for AI-assisted hiring—and lawsuits are already piling up.
Take, for instance, a case supported by the American Civil Liberties Union, where a deaf woman is suing HireVue, an AI recruiting company, claiming their automated interview failed to meet legal accessibility standards. HireVue pushes back, insisting their tech reduces bias through proven behavioral science. It's a classic debate: Does AI level the playing field or introduce new inequities?
Despite these hurdles, AI in hiring isn't going anywhere. Improvements in how resumes are analyzed could open doors for candidates who were previously overlooked, including those from underrepresented groups, if the algorithms are built to reduce bias rather than replicate it. But for those who cherish the personal connection in hiring, the shift feels like a loss.
Consider Jared Looper, an IT project manager in Salt Lake City, Utah, who started his career in recruiting. During his recent job hunt, he endured an AI-driven interview and found it utterly impersonal—he even hung up on his first automated call. Now, he frets about skilled individuals who might struggle to adapt to a system where mastering AI interactions is key. 'Some great people are going to be left behind,' he worries.
The question now is whether AI will make hiring more efficient and fair, or entrench discrimination and widen divides. Either way, the loss of human touch is a cost that both job seekers and employers are already feeling.