In August 2025, headlines from BBC News, The Guardian, and CNN carried a tragic story: the family of 16-year-old Adam Raine is suing OpenAI, alleging that months of conversations with ChatGPT contributed to his decision to take his own life (BBC News 2025; Booth 2025; Iyengar 2025). According to court filings, Adam turned to the chatbot for companionship, advice, and reassurance. When I first read the story, I thought about how narrowly we tend to frame AI in higher education.
Most faculty conversations I have encountered about artificial intelligence circle around plagiarism, academic honesty, or how to incorporate it into writing assignments (Zhou and Peng 2025). Yet Adam’s story reminds us that AI is not confined to communication or computer science classrooms. It touches psychology, ethics, medicine, business, and beyond (Rütti-Joy, Winder, and Biedermann 2023; Zhao, Wu, and Luo 2022).
Our students are already using AI in ways that shape not only their academic work but also their well-being, relationships, and sense of identity (Boscardin et al. 2023; Ciampa, Wolfe, and Bronstein 2023; Lérias, Guerra, and Ferreira 2024; Yeter, Yang, and Sturgess 2024). If AI is already woven into the fabric of students’ lives, why are we still treating AI literacy as an optional add-on?
The Adam Raine Case as a Cross-Disciplinary Wake-Up Call
Adam Raine didn’t use AI to cheat on a homework assignment. He used it because he needed someone to talk to. ChatGPT became his confidant, his mirror, and ultimately his silent witness. The platform did not fail at generating text. It failed at providing holistic care (Klímová and Pikhart 2025).
This story is heartbreaking, but it also reveals something essential: AI is not discipline-specific. It sits at the crossroads of many fields. Psychology asks what it means for a teenager to seek comfort from a machine (Fiske, Henningsen, and Buyx 2019; Schyff et al. 2023). Ethics considers who bears responsibility when AI is treated as a confidant. Medicine wonders what role AI could play in identifying mental health crises. Communication studies asks how such stories can be reported responsibly. Law debates accountability in the aftermath of tragedies (Dawoodbhoy et al. 2021; Stade et al. 2024). Adam’s case is a reminder that AI is shaping the world our students are entering, regardless of major. No discipline can afford to ignore it.
A Call for AI Literacy Across the Curriculum
If AI is already touching every field, then AI literacy should not remain an elective or a side conversation. It should be part of the core education we give every student (Anita, Purba, and Ilmi 2024). Just as colleges require general education courses in writing, math, or critical thinking, they should also require AI literacy.
By the time students graduate, they should have a clear sense of what AI can and cannot do, the ability to use AI ethically within their discipline, and an awareness of the emotional and societal impacts of AI. This does not mean every professor needs to become a tech expert. It means that each discipline should help students see how AI intersects with their field (Ciampa, Wolfe, and Bronstein 2023; Boscardin et al. 2023).
In nursing, that might mean examining AI in diagnostics. In business, it could mean understanding how AI influences financial decision-making. In psychology, it might mean exploring how people form attachments to nonhuman agents.
Practical Steps Faculty Can Take Now
Faculty do not need to wait for new policies or administrative directives before engaging students in AI literacy. Even with limited time and resources, there are approaches that can be adapted across disciplines to begin meaningful conversations and practices today.
1. Start with Lived Experience (Low-Prep Strategy)
A simple way to bring AI into the classroom is to start with students’ lived experiences. Taking just a few minutes at the beginning of class to ask, “How did AI touch your life this week?” can spark meaningful reflection. Students might point to everyday tools such as autocorrect, Netflix recommendations, or a chatbot they experimented with. Their answers allow professors to make discipline-specific connections without needing to be AI experts themselves. For instance, a chemistry professor can draw parallels between AI-driven predictive modeling in drug discovery and the examples students provide, while a literature professor might connect the conversation to how AI shapes online reading habits.
2. Bring AI into Existing Assignments (Medium-Prep Strategy)
Faculty do not have to design entirely new assignments to address AI. Instead, they can adapt assignments they already use by incorporating a small, bounded AI-related task. For example, students might be asked to use an AI tool to generate a draft or summary, and then evaluate its accuracy, clarity, or bias. In nursing, this could mean comparing AI-generated patient notes with human-written ones and assessing whether empathy and accuracy are preserved. In history, students might fact-check an AI summary of a historical event. In engineering, students could prompt an AI to produce initial design sketches and then critique whether those designs are practical.
3. Model Transparency (No-Prep Strategy)
Another powerful step faculty can take is simply modeling transparency about their own use of AI. Sharing candidly how one uses or avoids AI in professional work shows students that critical engagement is possible and expected. A professor might say, “I use AI to help summarize journal articles, but I do not rely on it for grading because of bias concerns.”
4. Design Discipline-Specific Case Studies (High-Impact Strategy)
Faculty can also help students see the relevance of AI by designing case studies that draw directly from their discipline. In business, this might involve examining algorithmic trading and the ethical dilemmas it raises. In psychology, students could analyze the promises and risks of AI chatbots in therapy. Journalism courses might study AI-generated news stories and their implications for credibility. In biology, faculty might highlight breakthroughs in protein folding made possible by AI.
5. Collaborate Across Departments (Sustainable Strategy)
A longer-term and highly sustainable approach is for faculty to collaborate across departments to design interdisciplinary AI modules. A small group of professors can co-create one module that highlights the ethical, technical, and cultural implications of AI, which each discipline can then adapt to its own context.
6. Leverage Existing Tech Supports (Time-Saver Strategy)
Finally, faculty should make use of the AI-enabled tools that many institutions already subscribe to, often without explicitly labeling them as AI. Library search platforms, plagiarism detection systems, or simulation software all contain AI components that students use regularly (Shiri 2024). By explicitly naming these as “AI in action,” professors can demystify AI and help students recognize that they are already interacting with it. Partnering with instructional designers or teaching centers can also save time, as these campus units often have pre-designed activities or templates that can be integrated into courses (Du et al. 2024; Salhab 2024).
Conclusion
AI is not going away. It will only become more deeply integrated into students’ futures, not just their coursework but their health, relationships, and civic lives. The earlier we teach them to use it ethically, holistically, and across contexts, the better prepared they will be. The Adam Raine story is tragic, but it is also clarifying. It shows us what happens when young people are left to navigate powerful tools without guidance.
As educators, we cannot afford to think of AI as “someone else’s responsibility” or “only relevant to certain fields.” AI literacy must become a core part of higher education because teaching AI is not just about preventing cheating. It is about preparing students to live, work, and care for themselves in a world where machines are always present (Lérias, Guerra, and Ferreira 2024).
Karamatu Abdul Malik is a graduate teaching assistant for the Principles of College Teaching course at Kansas State University, where she helps prepare future faculty to teach. She also works at K-State’s Teaching and Learning Center (TLC), which supports faculty development and promotes the scholarship of teaching and learning. Her research focuses on digital media, AI literacy, and health communication across cultures.
References
Anita, A., K. Purba, and M. Ilmi. 2024. “The Role of Artificial Intelligence as a Tool to Help Counselors in Improving Mental Health.” BICC Proceedings 2: 119–24. https://doi.org/10.30983/bicc.v1i1.115.
BBC News. 2025. “Teen Suicide: Family Sues OpenAI over Alleged ChatGPT Role.” BBC News, August 27, 2025. https://www.bbc.com/news/articles/cgerwp7rdlvo.
Booth, Robert. 2025. “Teen Killed Himself after ‘Months of Encouragement from ChatGPT’, Lawsuit Claims.” The Guardian, August 27, 2025. https://www.theguardian.com/technology/2025/aug/27/chatgpt-scrutiny-family-teen-killed-himself-sue-open-ai.
Boscardin, C., B. Gin, P. Golde, and K. Hauer. 2023. “ChatGPT and Generative Artificial Intelligence for Medical Education: Potential Impact and Opportunity.” Academic Medicine 99 (1): 22–27. https://doi.org/10.1097/acm.0000000000005439.
Ciampa, K., Z. Wolfe, and B. Bronstein. 2023. “ChatGPT in Education: Transforming Digital Literacy Practices.” Journal of Adolescent & Adult Literacy 67 (3): 186–95. https://doi.org/10.1002/jaal.1310.
Dawoodbhoy, F., J. Delaney, P. Cecula, J. Yu, I. Peacock, J. Tan, … and B. Cox. 2021. “AI in Patient Flow: Applications of Artificial Intelligence to Improve Patient Flow in NHS Acute Mental Health Inpatient Units.” Heliyon 7 (5): e06993. https://doi.org/10.1016/j.heliyon.2021.e06993.
Du, H., Y. Sun, H. Jiang, A. Islam, and X. Gu. 2024. “Exploring the Effects of AI Literacy in Teacher Learning: An Empirical Study.” Humanities and Social Sciences Communications 11 (1). https://doi.org/10.1057/s41599-024-03101-6.
Fiske, A., P. Henningsen, and A. Buyx. 2019. “Your Robot Therapist Will See You Now: Ethical Implications of Embodied Artificial Intelligence in Psychiatry, Psychology, and Psychotherapy.” Journal of Medical Internet Research 21 (5): e13216. https://doi.org/10.2196/13216.
Heitzmann, N., A. Opitz, M. Stadler, D. Sommerhoff, M. Fink, A. Obersteiner, … and F. Fischer. 2021. “Cross-Disciplinary Research on Learning and Instruction: Coming to Terms.” Frontiers in Psychology 11. https://doi.org/10.3389/fpsyg.2021.562658.
Iyengar, Rishi. 2025. “Family Sues OpenAI, Saying ChatGPT Encouraged Teen’s Suicide.” CNN, August 26, 2025. https://www.cnn.com/2025/08/26/tech/openai-chatgpt-teen-suicide-lawsuit.
Karimi, H., and S. Khawaja. 2023. “The Impact of Artificial Intelligence on Higher Education in England.” Creative Education 14 (12): 2405–15. https://doi.org/10.4236/ce.2023.1412154.
Klímová, B., and M. Pikhart. 2025. “Exploring the Effects of Artificial Intelligence on Student and Academic Well-Being in Higher Education: A Mini-Review.” Frontiers in Psychology 16. https://doi.org/10.3389/fpsyg.2025.1498132.
Lérias, E., C. Guerra, and P. Ferreira. 2024. “Literacy in Artificial Intelligence as a Challenge for Teaching in Higher Education: A Case Study at Portalegre Polytechnic University.” Information 15 (4): 205. https://doi.org/10.3390/info15040205.
Nunnelley, S., C. M. Flood, M. Da Silva, T. Horsley, S. Kanathasan, B. Thomas, … and D. Singh. 2024. “Cracking the Code: A Scoping Review to Unite Disciplines in Tackling Legal Issues in Health Artificial Intelligence.” BMJ Health & Care Informatics 32 (1): e100999. https://doi.org/10.1136/bmjhci-2024-100999.
Popenici, Ş., and S. Kerr. 2017. “Exploring the Impact of Artificial Intelligence on Teaching and Learning in Higher Education.” Research and Practice in Technology Enhanced Learning 12 (1). https://doi.org/10.1186/s41039-017-0062-8.
Rütti-Joy, O., G. Winder, and H. Biedermann. 2023. “Building AI Literacy for Sustainable Teacher Education.” Zeitschrift Für Hochschulentwicklung 18 (4): 175–89. https://doi.org/10.21240/zfhe/18-04/10.
Salhab, R. 2024. “AI Literacy Across Curriculum Design: Investigating College Instructor’s Perspectives.” Online Learning 28 (2). https://doi.org/10.24059/olj.v28i2.4426.
Shiri, A. 2024. “Artificial Intelligence Literacy: A Proposed Faceted Taxonomy.” Digital Library Perspectives 40 (4): 681–99. https://doi.org/10.1108/dlp-04-2024-0067.
Stade, E. C., J. C. Eichstaedt, J. P. Kim, and S. W. Stirman. 2024. “Readiness Evaluation for AI–Mental Health Deployment and Implementation (READI): A Review and Proposed Framework.” Technology, Mind, and Behavior 6 (2). https://doi.org/10.1037/tmb0000163.
Yeter, I., W. Yang, and J. Sturgess. 2024. “Global Initiatives and Challenges in Integrating Artificial Intelligence Literacy in Elementary Education: Mapping Policies and Empirical Literature.” Future in Educational Research 2 (4): 382–402. https://doi.org/10.1002/fer3.59.
Zhao, L., X. Wu, and H. Luo. 2022. “Developing AI Literacy for Primary and Middle School Teachers in China: Based on a Structural Equation Modeling Analysis.” Sustainability 14 (21): 14549. https://doi.org/10.3390/su142114549.
Zhou, M., and S. Peng. 2025. “The Usage of AI in Teaching and Students’ Creativity: The Mediating Role of Learning Engagement and the Moderating Role of AI Literacy.” Behavioral Sciences 15 (5): 587. https://doi.org/10.3390/bs15050587.