Parents who grew up in the ’80s and ’90s know the feeling: you’re listening to your kid’s playlist, and suddenly a song hits you with a wave of uncanny familiarity. Despite the claims by your teen that it is the latest and greatest, you know that it is just a repackaging of one of your favorite tunes from the past. I am noticing a similar trend with generative AI. It is inherently regurgitative: reshaping and repackaging ideas and thoughts that are already out there.
Fears abound as to the future of higher education due to the rise of generative AI. Articles from professors in many different fields predict that AI is going to destroy the college essay or even eliminate the need for professors altogether. Their fears are well founded. Seeing the advances that generative AI has made in just the past few months, I am constantly teetering between immense admiration and abject terror. My chatbot does everything for me, from scheduling how to get my revise-and-resubmit done in three months to planning my wardrobe for the fall semester. I fear becoming too reliant on it. Am I losing myself? Am I turning my ChatGPT into a psychological crutch? And if I am having these thoughts, what effect is generative AI having on my students?
Remix vs. Originality: Girl Talk or Beyoncé
Grappling with the strengths and weaknesses of my own AI usage, I feel I have discovered what might be the saving grace of humanity (feel free to nominate me for the Nobel Peace Prize if you wish). As I hinted earlier, AI is more like a DJ remixing the greatest hits of society than an innovative game changer. My ChatGPT is more like Girl Talk (whom you have probably never heard of; just ask your AI) than Beyoncé (whom you most definitely have heard of). Not that there's anything wrong with Girl Talk. Those mashups are amazing and require a special kind of talent, just as navigating AI usage requires a certain balance of skills to create a usable final product. But no matter how many pieces of music from other artists you mash together, you will not eventually turn into a groundbreaking, innovative musician. Think Pat Boone vs. The Beatles, Sha Na Na vs. David Bowie, Milli Vanilli vs. Prince, MC Hammer vs. Lauryn Hill.
What AI Gets Wrong in Writing
As a mathematician and a novelist, I see this glaring weakness in both of these very different disciplines. I'll start with writing. ChatGPT is especially helpful in coming up with strange character or planet names for my science fiction novels. It will also help me create a disease or something else I need to drive the plot forward. And, of course, it can help me find an errant comma or fix a fragmented sentence. But that is about it. If I ask it to write an entire chapter, for example, it will come up with the most boring, derivative, and bland excuse for prose I have ever seen. It will attempt my humor but fail miserably. It's sometimes so bad it makes my stomach turn.
A study from the Wharton School found that ChatGPT reduces the diversity of ideas generated in brainstorming, diminishing the overall output and narrowing the scope of novel ideas. Beyond that, I find that when I use ChatGPT to brainstorm, I typically don't use its suggestions. Those suggestions just spark new ideas and help me come up with something different and more me.
For example, I asked ChatGPT to write a joke about its own bad brainstorming habit of recycling the same core ideas over and over again. It said:
Joke: That’s not brainstorming—it’s a lazy mime troupe echoing each other.
That’s lame. I would never say that. But another joke it gave me sparked the music sampling analogy I opened this article with.
In any case, because of generative AI's inability to actually generate anything new, I have hope that the college essay, like the novel, will not die. Over-reliance on AI may indeed debilitate the essay, perhaps putting it on life support and forcing students and faculty to drag its lifeless body across the finish line of graduation. But there is still hope.
I remember one of my favorite English teachers in middle school required that we keep a journal. Each day she asked us to write something, anything, in our journals, even if it was only a paragraph or just a sentence. Something about putting pen to paper sparked my creativity. It also sparked a lifelong notebook addiction. And even though I consider myself somewhat of a techie and a huge AI enthusiast, to this day I still use notebooks for the first draft of my novels.
It is clear to me that ChatGPT will never be able to write my novels in my voice. I don't claim to be a great novelist. I just feel that some of my greatest work hasn't been written yet. While ChatGPT may be able to write a poem about aardvarks in the style of Robert Frost or a ballad about Évariste Galois in the style of Carole King, it can't write my next novel, because it doesn't yet exist. And even when it tries to imitate my voice and my style, predicting what I will write next, it does a poor job.
The Research Paper Dilemma: AI vs. Process
A research paper is inherently different from a creative work of fiction, however. ChatGPT does do a pretty good job of gathering information on a topic from several sources and synthesizing it into a coherent paper. You just have to make sure to check for the errant hallucinated reference. And honestly, when are our students ever going to be asked to write a 15-page research paper on Chaucer without any resources? And if they are, ChatGPT can probably produce that product better than an undergraduate student can. But the process, I would argue, is more important than the final product.
In his Inside Higher Ed essay Writing the Research Paper Slowly, JT Torres recommends a scaffolded approach to writing the research paper. This method focuses on the process of writing a paper: exploring and reading sources, taking notes, organizing those notes into a 'scientific story,' and creating an outline. Teaching students the process of writing the paper instead of focusing on the end product leaves them more confident that they can not only complete the required task but also transfer those skills to another subject. Recognizing these limitations pushed me to rethink how I design assignments.
Using AI in the Classroom
Knowing that generative AI can do some things (but not all things) better than a human has made me a more intentional professor. Now when I create assignments, I think: Can ChatGPT do this better than an undergraduate student? If so, then what am I really trying to teach? Here are a few strategies I use:
Method #1: Assess Your Assessments with AI in Mind
When designing an assignment, ask yourself whether it is testing a skill that AI already performs well. If so, consider shifting your focus to why that skill matters, or how students can go beyond AI’s capabilities.
Method #2: Use AI Where It Adds Value – Remove It Where It Does Not
In some cases, it makes sense to integrate AI directly into the assignment (e.g., generating code, automating data analysis). In others, the objective may be to build a human-only skill like personal expression or creative voice. I decide case by case whether AI should be a part of the process or explicitly excluded.
Method #3: Clarify Whether You Are Teaching Theory or Application
When I am teaching statistical tests, I have to ask myself: Am I assessing whether students understand the theory behind the test or whether they can run one using software? If it's the latter, using AI to generate code might be appropriate. But if it's the former, I'll require manual calculations or a written explanation.
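To make that distinction concrete, here is a rough sketch (a hypothetical illustration, not an actual assignment from my course) of the same two-sample t-test done two ways in Python: once with a single SciPy call, the kind of code a chatbot will happily generate, and once from the pooled-variance formula a student would work through by hand. The sample values are made up.

    from statistics import mean, variance
    from scipy import stats

    group_a = [4.1, 3.8, 5.0, 4.6, 4.3]  # made-up sample data
    group_b = [3.2, 3.9, 3.5, 3.0, 3.6]

    # Application: one library call, easy to generate with AI
    t_soft, p_soft = stats.ttest_ind(group_a, group_b)

    # Theory: the pooled-variance t-statistic computed from its formula
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * variance(group_a) + (nb - 1) * variance(group_b)) / (na + nb - 2)
    t_hand = (mean(group_a) - mean(group_b)) / (pooled_var * (1 / na + 1 / nb)) ** 0.5

    print(round(t_soft, 4), round(t_hand, 4))  # the two values agree

An assignment that stops at the first computation tests whether students can operate the tool; asking for the second half, or a written explanation of it, tests whether they understand what the tool is doing.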
Method #4: Add a Reflection to Any AI-Supported Assignment
For any assignment where students are allowed to use AI, they also have to write a reflection about how they used it and whether or not it was helpful. This encourages metacognition and reduces overreliance.
Method #5: Require Students to Share Their Prompts and Revisions
Having students share the prompts they used in completing the assignment teaches them about transparency and the need for iteration in their interactions with an AI. Students should not just be cutting and pasting the first response from ChatGPT. They need to learn how to take a response, analyze it, and then refine their prompt to get a better result. This helps them develop prompt engineering skills and realize that ChatGPT is not just a magic answer machine.
AI and the Limits of Innovation in Research
What about academic research in general? How is AI helping or hindering? Given that generative AI merely remixes the greatest hits of human history rather than creating anything new, I think its role in academic research is limited. Academic breakthroughs start with unasked questions. Generative AI works within the confines of existing data. It can't sense the frontier because it doesn't know there is a frontier. It can't sample past answers to a question that hasn't been asked yet.
About a year ago, I was trying to get my AI to write a section of code for my research, and it kept failing. I spent a week trying to get it to do what I wanted. I realized it was having such a difficult time because I was asking it to do something that hadn't been done before. Finally, I gave up and wrote the piece of code myself, and it only took me about half an hour. Sure, the coding capabilities have gotten better over the past year, but the core principle remains the same: AI still struggles to innovate. It can't do what hasn't already been done. Also, because of 'creative flattery,' it wants to make you happy, so it will try to do what you tell it to do even if it can't. The product will be super convincing, but it can still be wrong.
I recently asked AI to write a theoretical proof that shows polygonal numbers are Benford distributed (spoiler: they are not). Then I had it help me write a convincing journal-ready article. The only problem is that it also wrote me a theoretical proof that polygonal numbers are NOT Benford distributed. I submitted the former to a leading mathematics journal to see what would happen. Guess what? They caught it. A human was able to detect the 'AI slop.' This shows me that (1) there will always be a need for human gatekeepers, and (2) 'creative flattery' is extremely dangerous in a research setting and confirms the need for human review. The chatbot tries too hard to please, thus reinforcing what the user already thinks, even if that means proving and disproving the exact same thing. Academic research thrives on novel questions and unpredictable answers, which AI is incapable of producing since it inherently just regurgitates what is already out there.
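For readers who want to see what 'Benford distributed' means here, the short Python sketch below (my illustration for this article, not the proof I submitted) tallies the leading digits of the first N triangular numbers, one family of polygonal numbers, and compares the frequencies with the probabilities Benford's law predicts, log10(1 + 1/d). The value of N is an arbitrary choice.

    import math
    from collections import Counter

    def triangular(n):
        # n-th triangular number, a simple family of polygonal numbers
        return n * (n + 1) // 2

    N = 100_000  # sample size; an arbitrary choice for this illustration
    leading_digits = [int(str(triangular(n))[0]) for n in range(1, N + 1)]
    counts = Counter(leading_digits)

    print("digit  observed  Benford")
    for d in range(1, 10):
        observed = counts[d] / N
        benford = math.log10(1 + 1 / d)  # Benford's law: P(d) = log10(1 + 1/d)
        print(f"{d}      {observed:.3f}     {benford:.3f}")

For most choices of N, the observed column visibly disagrees with the Benford column, which is exactly the kind of five-minute sanity check a human reviewer can run before trusting a 'journal-ready' proof.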
Helping Students See AI’s Blind Spots
The Benford polygonal numbers experiment is an important example of why we need to educate our students about AI usage in an academic setting. The Time article Why A.I. Is Getting Less Reliable, Not More states that despite its progress over the years, AI can still resemble a sophisticated misinformation machine. Students need to know how to navigate this.
One of my favorite assignments in my Statistics course is what I call:
Method #6: “Beat ChatGPT” – A Concept Mastery Challenge
Students must craft a statistics question that the chatbot gets wrong, explain why the chatbot got it wrong, and then provide the correct answer. A tweak of this activity would be to take AI-generated content and human-written content, then compare and critique tone, clarity, or originality.
Remixing Isn’t Creation
AI-generated content is like a song built entirely from remixed samples. Sampling has its place in music (and in writing), but when everything starts to sound the same, our ears and brains begin to tune out. A great remix can breathe new life into a classic, but we still crave the shock of the new. This is why people lost their minds the first time they heard Beyoncé’s Lemonade or Kendrick Lamar’s To Pimp a Butterfly – not because they followed a formula, but because they bent the rules and made something we’d never heard before. AI, for all its value, doesn’t break the rules. It follows them. That is the difference between innovation and imitation. It is also the reason why AI, in its current capacity, will not kill original thought.
Sybil Prince Nelson, PhD, is an assistant professor of mathematics and data science at Washington and Lee University, where she also serves as the institution’s inaugural AI Fellow. She holds a PhD in Biostatistics and has over two decades of teaching experience at both the high school and college levels. She is also a published fiction author under the names Sybil Nelson and Leslie DuBois.
References
Hsu, Hua. 2025. “The End of the English Paper.” The New Yorker, July 7, 2025. https://www.newyorker.com/magazine/2025/07/07/the-end-of-the-english-paper.
Warner, John. 2024. “Get Ready for Faculty Bot-ification.” Inside Higher Ed, December 11, 2024. https://www.insidehighered.com/opinion/columns/just-visiting/2024/12/11/great-ready-faculty-bot-ification.
Meincke, Lea, Gideon Nave, and Christian Terwiesch. 2025. “ChatGPT Decreases Idea Diversity in Brainstorming.” Nature Human Behaviour 9: 1107–1109. https://doi.org/10.1038/s41562-025-02173-x.
Torres, J. T. 2021. “Writing the Research Paper Slowly.” Inside Higher Ed, May 5, 2021. https://www.insidehighered.com/advice/2021/05/05/benefits-new-approach-student-research-papers-opinion.
Sonnenfeld, Jeffrey, and Joanne Lipman. 2024. “Why A.I. Is Getting Less Reliable, Not More.” Time, June 20, 2024. https://time.com/7302830/why-ai-is-getting-less-reliable/.