Artificial intelligence has reshaped nearly every industry, and hiring is no exception. As organizations continue to explore ways to integrate AI into their recruiting workflows, many candidates have followed suit, turning to tools like ChatGPT, Gemini, and Claude to craft resumes and cover letters. At first glance, this appears to be an efficient and clever approach. But my institution’s recent experiences tell a different story.
The Flood of Applications
We recently conducted three separate searches for Data Scientist positions. Each received more than 500 applications, a staggering jump from the 30-70 we typically see for similar roles. The volume alone made the review process daunting. But the problem was not just the number. It was how alike the applications felt.
It was immediately apparent that many candidates had used Large Language Models (LLMs) to generate their materials. You could almost line the applications up and swap names without noticing. Many resumes mirrored our own job description word for word. Cover letters echoed each other so closely that after the first dozen, you could predict the next paragraph before reading it. In some cases, the AI-generated content crossed into misleading territory. For example, candidates with no higher-education experience listed "collaboration with Admissions, Registrar, and Deans" as a regular responsibility in both their resumes and cover letters.
Even more surprising was that 85% of applicants didn’t submit a cover letter, and among those who did, the vast majority were clearly written by an AI tool. The results were bland, formulaic, and uninspired, precisely the opposite of what a cover letter is supposed to achieve.
What We Learned
These searches offered an unexpected lesson in how LLMs can erode, rather than enhance, human creativity and effort when misused. Instead of making people stand out, these tools flattened everyone into the same voice. Most applicants claimed to be "skilled in AI and machine learning," yet their materials showed little curiosity or personal insight. Ironically, those who claimed to understand AI best often seemed to rely on it the most blindly.
This overreliance reveals a worrying complacency. Instead of treating AI as an assistant to refine or elevate their message, many candidates used it as a substitute for personal effort. The result? Applications that were indistinguishable from one another and devoid of authenticity.
And here is the truth: experienced reviewers easily recognize content that appears AI-generated. The tone, structure, and excessive polish give it away. What candidates may see as efficiency, hiring committees experience as impersonality and, often, inaccuracy. Liberal bolding, excessive hyphens, and the beloved overuse of "leverage" all raise red flags for employers about AI use.
Where Do We Go from Here?
It is hard to say what the next phase will look like, but one thing is clear: coexistence, not replacement, is the key. LLMs can be powerful tools when used thoughtfully. They can help clarify ideas, check tone, and even structure content. But when they replace our own voice and judgment, we lose the very thing hiring committees are trying to find: a person.
If you want to stand out, resist the temptation to outsource your entire application to an LLM. Ironically, using AI to "sound professional" usually achieves the opposite: it makes you sound like everyone else. In a competitive job market, sameness is the last thing you want to project.
This lesson extends far beyond job searches. Whether in schoolwork, research, creative projects, or design, AI should enhance human originality, not erase it. The future belongs to those who can blend the efficiency of machines with the distinctiveness of human thought.
Ultimately, the difference between average and exceptional is not what the AI writes. It is what we add to it.