Explore how AI tools transform student writing—from brainstorming partner to ghost-writer. Research reveals the cognitive impacts, detection failures, and ethical dilemmas reshaping education.
Chapter One: The Digital Quill and the Question of Authorship
Picture this: It’s 2:47 AM in a college dorm room. The assignment is due in five hours. The student—let’s call her Maya—has a blank document open, a cold coffee beside her, and ChatGPT pulled up in an adjacent tab. She types a prompt, receives three paragraphs of coherent prose, reads it over, deletes two-thirds of it, rewrites the introduction in her own voice, and continues. Is Maya cheating? Is she being resourceful? Is she learning anything, or is she watching her writing skills atrophy in real-time like unused muscles?
Welcome to the most contentious frontier in modern education: the intersection of artificial intelligence and academic writing. This isn’t a story about right and wrong—it’s far more interesting than that. It’s a story about a spectrum so vast and nuanced that educators, students, and institutions are still trying to map its boundaries. And unlike the clear-cut plagiarism cases of yesteryear, where copying from SparkNotes was obviously misconduct, AI-assisted writing occupies a murky territory where support bleeds into substitution in ways that challenge our very definitions of authorship, learning, and intellectual labor.
The stakes are higher than you might think. In a 2025 study published in Scientific Reports, researchers examined how undergraduate students utilized AI in a large General Education course at a research university, finding that students documented AI use across multiple writing tasks, including understanding complex topics, revising and editing content, and enhancing efficiency. But here’s where it gets interesting: within that population, the variation is so wild that lumping all AI use together is like saying everyone who touches a piano is equally a musician. Some students are using AI as a sophisticated brainstorming partner; others are essentially having it write entire essays they then lightly edit. Most fall somewhere in between, navigating an ethical landscape with no clear map.
Research from a 2024 study of university students in Jordan found that students had moderate familiarity with generative AI writing tools: they engaged with them readily but lacked technical knowledge of how the tools work. At the same time, students recognized the benefits, especially the tools’ capacity to simulate creativity and foster innovation. That complexity—the simultaneous recognition of power and peril—is what makes this moment so fascinating.
Chapter Two: Mapping the Writing Process in the Age of Algorithms
To understand where AI fits into student writing, we first need to revisit how writing actually works. The cognitive process model developed by Linda Flower and John Hayes in their 1981 publication in College Composition & Communication identified writing not as a linear march from introduction to conclusion, but as a recursive dance between planning, translating (putting thoughts into words), and reviewing. These stages are messy, iterative, and deeply cognitive—they’re where the actual learning happens.
Now, here’s where AI enters like an overeager dance partner: it can theoretically assist at every single stage. Need help brainstorming? AI can generate twenty ideas in seconds. Stuck on your thesis statement? AI can draft five variations. Struggling with transitions? AI can smooth them out. Can’t figure out if your argument makes sense? AI can restructure your entire essay. And therein lies both the promise and the peril.
University students value autonomy and interactivity in their AI interactions and may prefer AI-generated feedback for certain tasks, but important questions remain about how students maintain intellectual independence while leveraging AI support. Anna Mills, a faculty member at the College of Marin and an influential voice on AI in composition pedagogy, co-authored guidance suggesting that AI-generated text encourages students to think of writing as task-specific labor disconnected from learning and the application of critical thinking.
Let me paint four portraits of students at different points on this spectrum, based on patterns identified in research:
The Strategic Architect: Marcus uses ChatGPT exclusively during the prewriting phase. He’ll input his assignment prompt and ask the AI to suggest possible thesis statements, counterarguments he should address, and organizational structures. Then he closes the AI tab and writes everything himself. His logic? “I’m not paying tuition for my brain to be idle, but I like having a conversation partner that helps me think through angles I might miss.” Research has identified patterns where students use AI for higher-order writing tasks like understanding complex topics and finding evidence while maintaining control over the final product.
The Collaborative Editor: Priya writes her entire first draft independently—always. But then she feeds it to Claude or ChatGPT with specific prompts: “Where is my argument weakest?” “Suggest better word choices for academic tone.” “Check my citations.” She treats AI like a writing center tutor available at 3 AM. The crucial distinction? Priya maintains agency over every change. She evaluates AI suggestions critically and often rejects them.
The Dependent Drafter: Jake’s approach is different. He’ll write a rough outline, feed it to AI, receive a polished essay, and then modify perhaps 15-20% of it—changing some words, adding a personal anecdote, tweaking the introduction. He’s not technically submitting AI-generated work unchanged, but the intellectual labor is dramatically asymmetrical. Jake knows this occupies ethical gray space, but rationalizes it: “I’m an engineering major. This gen-ed writing requirement isn’t teaching me anything I’ll use.” Here’s where learning scientists start worrying.
The Ghost-Writer Dependent: And then there’s Emma, who simply inputs the assignment prompt, receives a complete essay, and submits it with minimal changes. This isn’t a spectrum issue—it’s academic misconduct, full stop. But here’s what makes the current landscape so slippery: distinguishing Emma from Jake from an external evaluation standpoint is nearly impossible.
Chapter Three: What the Data Actually Tells Us
Let’s get empirical for a moment, because this conversation needs to be grounded in evidence rather than anxiety. A 2024 systematic review published in the Arab World English Journal found that while AI helps with grammar and style, questions remain about its impact on creativity and critical thinking. The findings were simultaneously reassuring and concerning.
Research from Indonesian universities revealed that EFL teachers unanimously agreed AI writing tools improved students’ writing quality, particularly in terms of content and organization. However, the systematic review also emphasized that AI is not replacing university writing courses, which teach critical thinking, research, citation, argumentation, creativity, originality, and ethics—skills that AI lacks.
Then there’s the detection question. Educational technology companies have rushed to market AI detection tools, promising administrators the ability to identify AI-generated text with high accuracy. The reality is considerably messier. A 2023 study published in the International Journal for Educational Integrity tested detection tools and found that none scored above 80% accuracy, with only five scoring over 70%; the tools misclassified some human-written documents as AI-generated (false positives) and often classified AI-generated texts as human-written (false negatives).
In a Bloomberg test of two AI detectors (GPTZero and CopyLeaks), false positive rates were 1-2% when 500 essays written before generative AI’s release were run through the checkers. While that might sound small, if there are 2.235 million first-time degree-seeking college students in the U.S., and each writes 10 essays, that’s 22.35 million essays—meaning a 1% false positive rate could result in 223,500 essays falsely flagged as AI-generated.
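To make that scale concrete, here is a minimal back-of-the-envelope sketch in Python. It simply multiplies out the figures quoted above; the enrollment and essays-per-student numbers are the illustrative assumptions stated in this paragraph, not measured data.

```python
# Back-of-the-envelope estimate of essays falsely flagged as AI-generated.
# The inputs are the illustrative figures quoted above, not measured data.

students = 2_235_000        # assumed first-time degree-seeking college students
essays_per_student = 10     # assumed essays written per student
total_essays = students * essays_per_student

for false_positive_rate in (0.01, 0.02):  # the 1-2% range from the Bloomberg test
    flagged = total_essays * false_positive_rate
    print(f"At a {false_positive_rate:.0%} false positive rate: "
          f"{flagged:,.0f} of {total_essays:,} essays falsely flagged")
```

At the lower end of that range, the sketch reproduces the 223,500 figure cited above; at 2%, the number doubles.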
Even more concerning: research reveals that AI detectors disproportionately target non-native English writers, and Black students are more likely to be accused of AI plagiarism by their teachers, while neurodiverse students are also more likely to be falsely flagged for AI-generated writing. The technological arms race between AI generation and AI detection has created a surveillance climate in education that many find pedagogically counterproductive.
Ethan Mollick, associate professor at the Wharton School of the University of Pennsylvania and author of Co-Intelligence: Living and Working with AI, offers a pragmatic perspective. Writing in April 2023, Mollick argued that AI cheating will remain undetectable and widespread, citing considerable evidence that even small amounts of editing can defeat AI detection software. Additionally, research has shown that it is theoretically impossible to build perfectly accurate detection systems. That pragmatic view is gaining traction among educators who see prohibition as both impossible and pedagogically misguided.
Chapter Four: The Cognitive Calculus—What Gets Lost and What Gets Gained
Now we arrive at the heart of the matter: What happens to student cognition when AI enters the writing process? This is where cognitive science meets educational practice, and the answers are deliciously complicated.
Cognitive Load Theory, with roots tracing back to 1982 and first fully described in John Sweller’s 1988 article, holds that all novel information is first processed by a working memory limited in both capacity and duration, then stored in an effectively unlimited long-term memory for later use. Once knowledge is stored in long-term memory, working memory’s limits no longer constrain its use, transforming our ability to function.
Cognitive Load Theory divides cognitive load into three categories: intrinsic (the inherent complexity of the content), extraneous (load imposed by how information is presented rather than by the content itself), and germane (the processing that builds lasting understanding). AI assistance can theoretically help by reducing extraneous cognitive load while preserving germane cognitive load—the deep processing that actually leads to learning.
For example: if an English language learner spends 80% of their cognitive energy just trying to construct grammatically correct sentences, they have little bandwidth left for developing sophisticated arguments. AI that helps with grammar and syntax could free up cognitive resources for higher-order thinking. This is the optimistic interpretation—AI as scaffolding that enables learners to operate just beyond what they could do alone.
But here’s the rub: research on calculator use in mathematics education provides a cautionary tale. When calculators became ubiquitous in the 1980s and 1990s, educators debated whether they would free students to focus on conceptual understanding or prevent them from developing computational fluency. Decades later, the evidence suggests both happened—it depended entirely on how calculators were integrated pedagogically. Parents and educators feared that calculators would impair students’ paper-and-pencil skills, that children would become dependent on the devices, and that students would forget how to do math; yet research found that, when calculators were properly integrated, students’ learning was not hindered in their mathematics education or in their paper-and-pencil calculations.
The writing parallel is obvious and ominous. Recent research examining AI-driven well-being in higher education found that while generative AI can reduce instructional pressure and increase engagement, it may also lead to growing levels of technological anxiety as dependence on generative AI tools increases, particularly when students lack sufficient training or AI-related competencies.
But wait—there’s a counter-narrative emerging. Some researchers argue that prompt engineering (the skill of effectively communicating with AI to get useful outputs) represents a new form of literacy that’s genuinely valuable. Ethan Mollick contends that constructing a good AI prompt requires clarity of thought, specificity, and iterative refinement. In his book Co-Intelligence, Mollick describes AI as working “in many ways, as a co-intelligence” that “augments, or potentially replaces, human thinking to dramatic results”.
Research conducted by Mollick and colleagues with consultants at Boston Consulting Group found that those using GPT-4 completed 12.2% more tasks on average, finished tasks 25.1% more quickly, and produced 40% higher quality results than those without the tool. However, another common finding in such studies is that people who use AI for work are happier with their jobs because they outsource boring work to AI—a dynamic that may differ significantly in educational contexts, where the “boring work” is often where skill development happens.
So who’s right? Probably both. The cognitive impact of AI assistance on writing development isn’t uniform—it’s contingent on usage patterns, student metacognition, and pedagogical context. The tool itself is neutral; the outcomes depend on implementation.
Chapter Five: The Authorship Paradox—Philosophy Meets Practicality
Here’s where we need to wade into deeper philosophical waters: What is authorship in the context of student work, and does AI assistance fundamentally violate it?
Traditional academic integrity frameworks define student authorship as original thinking expressed in the student’s own words. But this definition has always been somewhat fictional. Every student writer is influenced by sources they’ve read, lectures they’ve attended, peer feedback they’ve received, and writing center tutors they’ve consulted. We’ve always allowed—even encouraged—these forms of intellectual support. So why does AI assistance feel different?
This leads us to what I’ll call the Authorship Paradox: At what point does assistance become substitution? And more importantly—who gets to decide?
Consider these scenarios and ask yourself where you’d draw the line:
1. A student discusses essay ideas with roommates, taking notes on their suggestions, then writes independently.
2. A student visits the writing center, receives extensive feedback on structure and argument, then revises accordingly.
3. A student feeds their draft to Grammarly, which suggests not just grammatical corrections but also tone adjustments and word choice improvements.
4. A student asks ChatGPT to “explain the themes in The Great Gatsby,” reads the response, then writes an essay incorporating those insights without citation.
5. A student asks ChatGPT to “write an essay outline on sustainable architecture,” uses that structure, but writes all content themselves.
6. A student asks ChatGPT to “draft an introduction paragraph on climate policy,” receives it, modifies 40% of the wording, and includes it in their essay.
If you found yourself comfortable with scenarios 1 and 2 but uncomfortable with scenarios 4 and 6, you’re tracking the intuitive boundary most educators recognize. But notice how scenarios 3 and 5 occupy ambiguous middle ground. This ambiguity is where policy breaks down and where student confusion—and frustration—multiplies.
A 2024 study in the International Journal for Educational Integrity found that teacher respondents highly value the diverse ways algorithmically-driven writing tools can support their educational goals (perceived usefulness), though the study called for empirical investigation concerning the affordances and encumbrances of these tools and their implications for academic integrity.
The institutional response has been scattershot. According to a 2024 survey by the National Education Technology Consortium, 78% of universities have implemented or updated AI policies in the past year, yet the nature of those policies varies dramatically. Students navigating this landscape face a patchwork of conflicting expectations—what’s encouraged in one class is prohibited in another, even within the same institution.
Chapter Six: Reframing the Question—Assessment Design in an AI-Saturated World
Perhaps the most radical proposition emerging from this debate is that we’re asking the wrong question entirely. Instead of “How do we prevent AI use in writing?” maybe we should ask: “What are we actually trying to assess, and does AI assistance undermine that assessment?”
This is where learning objectives and assessment design become crucial. If the goal of a writing assignment is to evaluate students’ ability to construct grammatically correct sentences, AI assistance fundamentally undermines that assessment. But if the goal is to evaluate students’ ability to synthesize sources, develop original arguments, or apply theoretical frameworks to real-world problems—well, AI is considerably less helpful with those cognitive tasks.
In a 2024 paper, Ethan Mollick and Lilach Mollick argued that instructors can leverage their content and pedagogical expertise to design AI-enhanced learning experiences, putting them in the role of builders and innovators. This instructor-driven approach, they suggest, has the potential to democratize the development of educational technology by enabling individual instructors to create AI exercises and tools tailored to their students’ needs.
This means designing assignments that either:
- Embrace AI as a tool and explicitly teach students to use it effectively (e.g., “Use AI to generate three possible thesis statements, then analyze the strengths and weaknesses of each in a reflection memo”)
- Create AI-resistant assessments that privilege cognitive processes AI can’t easily replicate (e.g., in-class essay exams, oral defenses, portfolio development showing revision over time)
- Require transparency about AI use and make that use itself part of the learning objective (e.g., “Describe your writing process, including any AI tools used, and reflect on how they influenced your thinking”)
Mollick argued in 2023 that, far from falling apart, education will be able to adapt to AI more effectively than other industries, and in ways that will improve both learning and the experience of instructors.
Speaking to MIT Sloan in July 2024, Mollick emphasized that organizations need to set careful parameters for AI use, with clear distinctions between high-stakes applications requiring compliance and less-risky applications like creative inspiration, noting that vague guidelines that discourage AI use are counterproductive and that leaders should model successful use rather than shutting things down.
Epilogue: Living in the Liminal Space
So where does this leave us—students navigating an ethically ambiguous landscape, educators trying to preserve learning outcomes in rapidly changing conditions, and institutions struggling to craft coherent policies?
The honest answer is: in a liminal space. A threshold. A moment of transition where old frameworks no longer quite fit and new ones haven’t fully emerged. This is uncomfortable, but it’s also generative. The debates happening now in classrooms, faculty meetings, and student forums are shaping what academic integrity will mean for the next generation.
What seems clear is that blanket prohibition isn’t working—it’s driving AI use underground rather than eliminating it. What also seems clear is that uncritical embrace isn’t the answer either—letting students offload cognitive work during formative learning stages has real consequences for skill development.
The middle path—the one being charted by thoughtful educators and engaged students—involves transparency, intention, and constant reflection. Research emphasizes that understanding how students actually choose to use AI tools when given explicit permission is crucial for developing evidence-based integration strategies; as institutions craft policies around AI use, those policies become meaningful only when they reflect real student practices and perspectives.
As Mollick writes in Co-Intelligence, “AI is what those of us who study technology call a General Purpose Technology,” advances that are “once-in-a-generation technologies, like steam power or the internet, that touch every industry and every aspect of life”. Where previous technological revolutions often targeted mechanical and repetitive work, this one augments human thinking itself.
In the end, perhaps the most important skill we can teach isn’t how to write without AI—it’s how to think with it, critically and carefully, while maintaining the intellectual agency that makes the writing genuinely ours. That’s a harder pedagogical challenge than simply banning the tools, but it’s probably the one most aligned with educational goals in an AI-saturated world.
The spectrum from support to substitution isn’t going away. Our task—students, educators, and institutions alike—is learning to navigate it with wisdom, integrity, and a clear-eyed understanding of what we stand to lose and gain along the way. As Mollick noted, AI cheating will remain undetectable and widespread, but education can adapt far more effectively than many fear if we focus on thoughtful integration rather than impossible prohibition.
The calculator debates of the 1970s and 1980s eventually reached a practical consensus. In 1975, the National Advisory Committee on Mathematical Education issued a report suggesting eighth graders and above should have access to calculators for all class work and exams, and five years later, the National Council of Teachers of Mathematics recommended that mathematics programs take full advantage of calculators. Math education did not fall apart. Instead, it evolved.
We’re in the early chapters of a similar story with AI and writing. The ending hasn’t been written yet—and unlike an AI-generated essay, this is one narrative we all get to shape together.
References
- Banks, S. (2011). A historical analysis of attitudes toward the use of calculators in junior high and high school math classrooms in the United States since 1975 [Master’s thesis, Cedarville University]. https://digitalcommons.cedarville.edu/education_theses/31/
- Black, R. W., Tomlinson, B., Baek, C., & others. (2025). University students describe how they adopt AI for writing and research in a general education course. Scientific Reports, 15, Article 92937. https://www.nature.com/articles/s41598-025-92937-2
- Flower, L., & Hayes, J. R. (1981). A cognitive process theory of writing. College Composition & Communication, 32(4), 365-387. https://doi.org/10.58680/ccc198115885
- Hirsch, A. (2024, December 12). AI detectors: An ethical minefield. Center for Innovative Teaching and Learning. https://citl.news.niu.edu/2024/12/12/ai-detectors-an-ethical-minefield/
- Indrawati, I. (2023). The impact of AI writing tools on the content and organization of students’ writing: EFL teachers’ perspective. Cogent Education, 10(1), Article 2236469. https://doi.org/10.1080/2331186X.2023.2236469
- Jarrah, M. A. A., Wardat, Y., & Gningue, S. (2024). University students’ insights of generative artificial intelligence (AI) writing tools. Education Sciences, 14(10), Article 1062. https://doi.org/10.3390/educsci14101062
- McKenzie, L. (2024, September 10). AI detectors are easily fooled, researchers find. EdScoop. https://edscoop.com/ai-detectors-are-easily-fooled-researchers-find/
- Mills, A., & Goodlad, L. M. E. (2023, January 17). Adapting college writing for the age of large language models such as ChatGPT: Some next steps for educators. Critical AI. https://criticalai.org/2023/01/17/
- Mollick, E. (2023, April 9). The future of education in a world of AI. One Useful Thing. https://www.oneusefulthing.org/p/the-future-of-education-in-a-world
- Mollick, E. (2024). Co-intelligence: Living and working with AI. Portfolio/Penguin.
- Mollick, E. (2024, January 6). Signs and portents. One Useful Thing. https://www.oneusefulthing.org/p/signs-and-portents
- Mollick, E. R., & Mollick, L. (2024). Instructors as innovators: A future-focused approach to new AI learning opportunities, with prompts. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4802463
- National Education Technology Consortium. (2024). How AI paper writers are changing the way students tackle research assignments. Yomu.ai. https://www.yomu.ai/resources/how-ai-paper-writers-are-changing-the-way-students-tackle-research-assignments
- Paas, F., & van Merriënboer, J. J. G. (2020). Cognitive-load theory: Methods to manage working memory load in the learning of complex tasks. Current Directions in Psychological Science, 29(4), 394-398. https://doi.org/10.1177/0963721420922183
- Rashwan, K. E., & Aljuaid, H. (2024). The impact of artificial intelligence tools on academic writing instruction in higher education: A systematic review. Arab World English Journal (AWEJ) Special Issue on ChatGPT, 26-55. https://doi.org/10.24093/awej/ChatGPT.2
- Sweller, J., van Merriënboer, J. J. G., & Paas, F. (2019). Cognitive architecture and instructional design: 20 years later. Educational Psychology Review, 31(2), 261-292. https://doi.org/10.1007/s10648-019-09465-5
- University of San Diego Legal Research Center. (n.d.). The problems with AI detectors: False positives and false negatives. Generative AI Detection Tools. https://lawlibguides.sandiego.edu/c.php?g=1443311&p=10721367
- Watters, A. (2015, March 13). A brief history of calculators in the classroom. The History of the Future of Education. https://medium.com/the-history-of-the-future-of-education/a-brief-history-of-calculators-in-the-classroom-4b448b7426d4
- Weber-Wulff, D., Anohina-Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero-Dib, J., Popoola, O., Šigut, P., & Waddington, L. (2023). Testing of detection tools for AI-generated text. International Journal for Educational Integrity, 19(1), Article 26. https://doi.org/10.1007/s40979-023-00146-z
- Witt, B. (2024, July 15). How to tap AI’s potential while avoiding its pitfalls in the workplace. MIT Sloan. https://mitsloan.mit.edu/ideas-made-to-matter/how-to-tap-ais-potential-while-avoiding-its-pitfalls-workplace
- Zawacki-Richter, M. A., Zhang, J., Rosé, C. P., & Warschauer, M. (2024). Algorithmically-driven writing and academic integrity: Exploring educators’ practices, perceptions, and policies in AI era. International Journal for Educational Integrity, 20, Article 3. https://doi.org/10.1007/s40979-024-00153-8
Additional Reading
- Akgun, S., & Greenhow, C. (2022). Artificial intelligence in education: Addressing ethical challenges in K-12 settings. AI and Ethics, 2(3), 431–440. https://doi.org/10.1007/s43681-021-00096-7
- Hembree, R., & Dessart, D. J. (1986). Effects of hand-held calculators in precollege mathematics education: A meta-analysis. Journal for Research in Mathematics Education, 17(2), 83–99.
- Ruthven, K. (1998). The use of mental, written and calculator strategies of numerical computation by upper-primary pupils within a ‘calculator-aware’ number curriculum. British Educational Research Journal, 24(1), 21–42.
- Selwyn, N. (2022). The future of AI and education: Some cautionary notes. European Journal of Education, 57(4), 620–631.
- Sullivan, M., & Kelly, A. (2023). ChatGPT seems too good to be true: College students’ use and perceptions of generative AI. Computers & Education: Artificial Intelligence, 7, Article 100294.
Additional Resources
- Writing Across the Curriculum (WAC) Clearinghouse – AI Text Generators Resource Area. Curated by Anna Mills, this comprehensive collection includes articles, sample AI essays, and teaching strategies for AI in writing instruction. https://wac.colostate.edu/repository/collections/ai-text-generators-and-teaching-writing-starting-points-for-inquiry/
- AI Pedagogy Project. A collaborative resource for educators exploring ethical and effective AI integration in teaching, featuring case studies and teaching reflections. https://aipedagogy.org/
- Modern Language Association (MLA) Task Force on Writing and AI. Resources and guidelines for composition instructors navigating AI in writing pedagogy. Contact through the MLA or CCCC for task force information.
- Ethan Mollick’s “One Useful Thing” Newsletter. Practical insights on AI in education and work, with regular updates on research findings and implementation strategies. https://www.oneusefulthing.org/
- International Journal for Educational Integrity. Peer-reviewed research on academic integrity issues, including multiple studies on AI detection tools and educational ethics. https://edintegrity.biomedcentral.com/

