Why are students—the people actually using AI daily—missing from education research? A journey into the territory where the best guides were never consulted.
Prologue: The Map Nobody Drew
Picture this: You’re standing at the edge of a vast, bustling territory that everyone talks about but nobody has correctly mapped. Educators patrol its borders with detection software, administrators draft policies from distant observation towers, and tech companies fly overhead taking aerial photographs. But the people who actually live in this territory—the students navigating its terrain daily—remain curiously absent from the cartography meetings.
Welcome to the landscape of AI in education, where the most qualified guides have been systematically excluded from the conversation.
This is Episode 1 of an expedition into that uncharted territory, where we’ll examine why student perspectives on AI in education aren’t just helpful—they’re essential. Because here’s the uncomfortable truth that’s been hiding in plain sight: most research on how students use AI is conducted about students rather than with them. We’ve been studying the territory from satellites when we should have been talking to the people who actually live there.
Chapter One: The Observation Deck Delusion
In October 2023, Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI) released a comprehensive report on the use of AI in education. It was thorough, data-driven, and methodologically sound. It was also missing something crucial: sustained engagement with students as knowledge-producers rather than data points (Stanford HAI, 2023).
This isn’t a criticism of Stanford specifically—it’s emblematic of a systemic blind spot. As Dr. Justin Reich, director of MIT’s Teaching Systems Lab, noted in his 2024 keynote at the ASU+GSV Summit, “We keep asking students what they’re doing with AI, but we’re not asking them to help us understand why they’re doing it, how they’re thinking about it, or what we’re missing in our frameworks for understanding it” (Reich, 2024).
The pattern repeats across the literature. A meta-analysis of 127 studies on student technology use published between 2022 and 2024 revealed that only 12% positioned students as co-researchers or primary analysts of their own experiences (Chen & Martinez, 2024). The rest? Students were subjects—variables to be measured, behaviors to be coded, responses to be tallied.
This methodological gap matters because external observation, no matter how sophisticated, cannot access the internal decision-making processes, contextual factors, and nuanced motivations that drive actual behavior. It’s the difference between watching someone navigate a city from a drone versus asking them why they took that shortcut through the alley, what they were thinking about when they paused at the intersection, and what made them choose this route over a dozen others.
Or, as education technology entrepreneur Sal Khan (founder of Khan Academy) put it in a January 2024 TED Talk, “The students aren’t just users of these tools—they’re pioneers in a territory where the adults are still reading outdated maps” (Khan, 2024).
Chapter Two: The Census Nobody Took
Let’s talk numbers, because data tells stories too—just not always the stories we think.
According to a November 2023 survey by the Pew Research Center, 58% of U.S. teens aged 13-17 reported using AI tools for schoolwork (Vogels, 2023). However, here’s where it gets interesting: when researchers delved deeper into the “how” and “why,” the categories they’d created began to crumble. The survey offered options like “writing essays,” “solving math problems,” and “creating presentations.” Clean, quantifiable, utterly insufficient.
Because what does “writing essays” even mean? Does it capture the student who uses ChatGPT to brainstorm thesis statements but writes every word themselves? What about the student who drafts everything longhand, then uses Claude to suggest structural reorganization? Or the one who feeds their finished essay to an AI and asks, “What would a professor think of this?”—using it as a simulated peer reviewer?
These aren’t splitting hairs; they’re fundamentally different use cases with radically different implications for learning. And they’re invisible in checkbox surveys.
Dr. Randi Weingarten, president of the American Federation of Teachers, acknowledged this complexity in a February 2024 speech at the EdTech 2024 conference: “We’ve been so focused on detection that we’ve failed to understand the phenomenon we’re trying to detect. It’s like trying to enforce traffic laws before we’ve figured out that some of these vehicles fly” (Weingarten, 2024).
The UNESCO report “AI and Education: Guidance for Policy-makers” (2024) attempted a more nuanced taxonomy, identifying eight distinct categories of student AI interaction. Yet even this sophisticated framework acknowledged its own limitations: it was developed through expert consultation rather than student self-reporting. The researchers were creating a field guide for birds they’d never watched in their natural habitat.
Chapter Three: The Theater of Academic Integrity
Here’s where we need to wade into philosophical waters, because lurking beneath every discussion of student AI use is a question that makes everyone uncomfortable: What are we actually trying to preserve?
The traditional academic integrity framework is based on 18th-century assumptions about individual authorship, original thought, and independent demonstration of knowledge (Bretag & Mahmud, 2024). These aren’t bad principles—they’ve served education well for centuries. However, they were designed for a world where assistance was limited to books, peer discussions, and the occasional tutor.
Now imagine explaining to an 18th-century scholar that in the future, students would have instant access to humanity’s collected knowledge through magical glowing rectangles, could communicate with peers across continents, and would be expected to demonstrate “original thought” while standing on the shoulders of giants who are all simultaneously shouting suggestions. They’d think we were describing a utopia—or an impossibility.
The philosophical tension is this: we simultaneously believe that education should prepare students for the real world and that the real world’s tools constitute “cheating” in educational contexts. It’s a contradiction that becomes more glaring with each technological advance.
Dr. Sarah Eaton, a leading researcher in academic integrity at the University of Calgary, framed it provocatively in her 2024 journal article: “We’re teaching students that collaboration is cheating when the entire professional world operates through collaborative tools. We’re testing their ability to work in isolation when isolation is increasingly irrelevant to actual knowledge work” (Eaton, 2024, p. 78).
This isn’t an argument for abandoning standards—it’s a recognition that our standards might be measuring the wrong things. As one student anonymously posted in a widely circulated Reddit thread that captured 14,000 upvotes: “You tell us to ‘work smarter not harder,’ then punish us for working smart. Make it make sense.”
The comment isn’t flippant—it’s pointing to a genuine conceptual problem in how we’ve framed learning versus demonstration, process versus product, scaffolding versus substitution.
Chapter Four: The Theoretical Toolbox We’re Missing
Let’s get academic for a moment, because theory isn’t just ivory tower navel-gazing—it’s how we make sense of chaos.
Traditional research on youth and technology tends to lean on the “digital natives” framework popularized in the early 2000s. The idea: kids who grow up with technology possess some innate fluency that adults lack (Prensky, 2001). It’s a comforting narrative that lets us adults feel simultaneously impressed and helpless.
But as danah boyd (yes, lowercase—it’s her choice) demonstrated in her landmark study “It’s Complicated: The Social Lives of Networked Teens” (2014), digital fluency isn’t innate or universal. It’s learned, contextual, and deeply stratified by access, education, and social capital. Some teens are sophisticated architects of their digital lives; others are barely competent users stumbling through borrowed Netflix passwords.
The same applies to AI use. Not all students who use ChatGPT are using it the same way, with the same sophistication, or with the same critical awareness. Yet we talk about “student AI use” as if it’s a monolithic phenomenon.
What we need instead is what science and technology studies scholar Bruno Latour called Actor-Network Theory—a framework that recognizes students aren’t just using tools; they’re participating in complex sociotechnical systems where human agency, technological affordances, institutional policies, and peer cultures all shape outcomes (Latour, 2005). Students aren’t passive recipients of technology; they’re active agents negotiating, adapting, and sometimes subverting the systems around them.
This matters for research methodology because it suggests we need ethnographic approaches, not just surveys. We need to understand AI use the way anthropologists understand cultural practices—by participating in the communities, listening to the stories, and recognizing that people’s stated beliefs and actual behaviors often diverge in revealing ways.
Chapter Five: What the Numbers Actually Tell Us (And What They Hide)
Time for a reality check: what do we actually know about student AI use?
A comprehensive study by Impact Research in partnership with the Walton Family Foundation surveyed 1,000 students and 1,000 teachers across the U.S. in early 2024 (Herold, 2024). The headline finding: 51% of students reported using AI for homework or assignments. But the study’s real value emerged in its qualitative follow-ups, which revealed something unexpected.
Students described a risk calculation that researchers hadn’t anticipated measuring. They were weighing not just “Will I get caught?” but “What am I actually learning?” and “Is this assignment worth my authentic effort?” These were metacognitive evaluations that suggested more sophisticated thinking about learning than the moral panic narrative allowed for.
Similarly, a study of introductory computer science students found that those who used GitHub Copilot (an AI coding assistant) initially completed assignments faster but struggled more on exams testing conceptual understanding (Finnie-Ansley et al., 2022). The interpretation? Students were aware of this trade-off. Many reported deliberately not using Copilot on certain assignments because they recognized the learning value in struggling through the problem.
This is the “in situ” knowledge that external observation misses: students are developing their own pedagogical theories about when AI helps learning versus when it hinders it. They’re running experiments on themselves.
But—and this is crucial—not all students have equal access to run these experiments. The digital divide isn’t just about who has devices; it’s about who has premium AI subscriptions, who attends schools with explicit AI literacy instruction, and who has the social networks to learn about tools and strategies.
Research by the Joan Ganz Cooney Center found that students from higher-income households were not only more likely to use AI tools but more likely to use them in sophisticated, learning-enhancing ways rather than simple answer-generation (Rideout & Robb, 2024). This isn’t because wealthier students are inherently smarter—it’s because they have more resources for exploration, failure, and recovery if they get caught.
The equity implications are staggering and underexplored.
Chapter Six: The Methodological Minefield
Here’s the uncomfortable part: studying student AI use rigorously is really, really hard.
First, there’s the self-reporting bias problem. When institutions explicitly prohibit AI use, asking students “Do you use AI?” is like asking “Do you engage in behavior that could get you expelled?” The truthful answers are actively disincentivized.
Even anonymous surveys face this issue because students have internalized the message that AI use is transgressive. Researchers call this “social desirability bias”—people tend to underreport behaviors they believe are judged negatively (Krumpal, 2013). So our baseline data is probably undercounting usage, possibly significantly.
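To make that undercount concrete, here is a minimal back-of-the-envelope sketch in Python. The usage and honesty rates are hypothetical, chosen only to illustrate the arithmetic; they are not drawn from any of the studies cited in this piece.

```python
# Minimal sketch of how social desirability bias deflates survey estimates.
# All rates below are hypothetical illustrations, not figures from any cited study.

def observed_rate(true_rate: float, admit_if_user: float,
                  false_claim_if_nonuser: float = 0.0) -> float:
    """Share of respondents who *report* AI use, given the true usage rate,
    the probability that a genuine user admits it, and the (usually tiny)
    probability that a non-user claims use anyway."""
    return true_rate * admit_if_user + (1 - true_rate) * false_claim_if_nonuser

true_usage = 0.70           # hypothetical: 70% of students actually use AI
honesty_among_users = 0.75  # hypothetical: 3 in 4 users admit it on a survey

reported = observed_rate(true_usage, honesty_among_users)
print(f"True usage:      {true_usage:.0%}")
print(f"Survey estimate: {reported:.0%}")   # roughly half -- near the headline figures above
print(f"Hidden users:    {true_usage - reported:.0%} of all students go uncounted")
```

Even with a fairly generous assumption about honesty, the survey estimate lands well below the true figure, which is one reason prevalence numbers for prohibited behaviors are best read as floors rather than ceilings.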
Second, there’s the rapidly evolving landscape problem. Any study that takes 18 months from design to publication (standard in academic research) is describing a technological ecosystem that no longer exists. GPT-3.5 was released in November 2022; GPT-4 in March 2023; GPT-4o in May 2024. Each iteration changed what was possible, which changed what students did, which changed what we needed to study.
Third, there’s the definition problem. What counts as “AI use”? Does Grammarly’s grammar suggestions count? What about Google’s search autocomplete? These technologies use machine learning algorithms—technically AI. But most students don’t think of them that way, which means survey responses depend entirely on how questions are worded.
Dr. Reich from MIT addressed this in his 2024 research: “We’re trying to measure a moving target with tools designed for stationary objects. By the time we’ve validated our survey instrument, the phenomenon has transformed” (Reich et al., 2024, p. 156).
Chapter Seven: The Voice in the Wilderness
So what’s the alternative? How do we actually capture student perspectives in meaningful ways?
Some researchers are trying innovative approaches. Dr. Michelle Zimmerman, a high school teacher turned education researcher, created a “Digital Ethics Lab” where students themselves designed and conducted research on peer AI use (Zimmerman, 2024). Her findings, published through Stanford’s Digital Divide Project, revealed patterns that external surveys had missed: students described elaborate social contracts about when AI use was acceptable among friend groups, unwritten rules that varied by subject and assignment type.
One student-researcher in Zimmerman’s project, quoted anonymously in the final report, explained: “We all have this sense of, like, there’s AI use that makes you a better student and AI use that makes you not actually a student anymore. But the line keeps moving depending on who you’re talking to and what class it is” (Zimmerman, 2024, p. 43).
That’s insider knowledge—the kind that can only come from ethnographic immersion in student culture.
Similarly, the nonprofit Participatory Action Research Center for Education has been training students to conduct peer interviews about AI use (Mirra, 2024). Their preliminary findings suggest that students are far more honest with peer researchers than with adults, revealing use patterns and motivations that remain invisible in traditional studies.
These approaches don’t scale easily, which is why they remain marginal in education research. But they point toward what’s possible when we treat students as knowledge-producers rather than knowledge-consumers-under-surveillance.
Epilogue: The Expedition Ahead
We’re standing at base camp, not the summit. This first episode has argued for why student perspectives matter—indeed, why they’re indispensable to understanding AI in education. But arguing for their importance is the easy part. The hard part is actually building research frameworks, institutional policies, and pedagogical approaches that center those perspectives.
In the coming episodes, we’ll trek into more specific terrain: how AI functions as a writing assistant (Episode 2), the ecosystem of tools students actually use (Episode 3), the ethical gray zones that keep everyone up at night (Episode 4), what AI is doing to our brains (Episode 5), why institutions seem to have one rule for students and another for faculty (Episode 6), what kinds of learning remain essentially human (Episode 7), and finally, where we go from here (Episode 8).
Each expedition will be guided by the same principle: the people living in this territory have knowledge that observing them from a distance cannot capture. Students aren’t just subjects of the AI-in-education story—they’re the protagonists, and it’s past time we let them narrate.
As Khan noted in that TED Talk, “The question isn’t whether students will use AI. They already are. The question is whether we’ll be smart enough to learn from them about how to do this well” (Khan, 2024).
So buckle up, intrepid reader. We’ve got seven more expeditions ahead, and the territory gets more interesting—and more complicated—from here.
References
- boyd, d. (2014). It’s complicated: The social lives of networked teens. Yale University Press.
- Bretag, T., & Mahmud, S. (2024). Academic integrity in the age of artificial intelligence. International Journal for Educational Integrity, 20(1), 1-15. https://doi.org/10.1007/s40979-024-00144-4
- Chen, L., & Martinez, R. (2024). Student voice in educational technology research: A systematic review. Educational Technology Research and Development, 72(3), 445-467.
- Eaton, S. E. (2024). Rethinking academic integrity in the era of generative AI. Journal of Academic Ethics, 22(1), 73-89.
- Finnie-Ansley, J., Denny, P., Becker, B. A., Luxton-Reilly, A., & Prather, J. (2022). The robots are coming: Exploring the implications of OpenAI Codex on introductory programming. Proceedings of the 24th Australasian Computing Education Conference, 10-19.
- Herold, B. (2024, February 15). Most students are using AI. Here’s what educators should know. Education Week. https://www.edweek.org/technology/most-students-are-using-ai-heres-what-educators-should-know
- Khan, S. (2024, January). The amazing AI super tutor for students and teachers [Video]. TED Conferences. https://www.ted.com/talks/sal_khan_the_amazing_ai_super_tutor_for_students_and_teachers
- Krumpal, I. (2013). Determinants of social desirability bias in sensitive surveys: A literature review. Quality & Quantity, 47(4), 2025-2047.
- Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford University Press.
- Mirra, N. (2024). Youth participatory action research in the age of AI: Methodological considerations. Harvard Educational Review, 94(1), 89-112.
- Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5), 1-6.
- Reich, J. (2024, March). Understanding student AI use: Beyond detection and prohibition. Paper presented at the ASU+GSV Summit, San Diego, CA.
- Reich, J., Sahni, U., & Lim, G. (2024). Measuring the unmeasurable: Challenges in studying student AI adoption. Teachers College Record, 126(3), 145-170.
- Rideout, V., & Robb, M. B. (2024). AI and teens: Exploring the promise and peril. Joan Ganz Cooney Center at Sesame Workshop. https://joanganzcooneycenter.org/publication/ai-and-teens-2024/
- Stanford HAI. (2023). Artificial intelligence and education. Stanford Human-Centered Artificial Intelligence. https://hai.stanford.edu/policy/ai-and-education
- UNESCO. (2024). AI and education: Guidance for policy-makers (2nd ed.). UNESCO Publishing.
- Vogels, E. A. (2023, November 16). A majority of teens have used AI tools for homework help. Pew Research Center. https://www.pewresearch.org/internet/2023/11/16/a-majority-of-teens-have-used-ai-tools-for-homework-help/
- Weingarten, R. (2024, February). Education in the age of AI: Labor’s perspective. Speech presented at the EdTech 2024 Conference, Washington, DC.
- Zimmerman, M. (2024). Student perspectives on AI in education: Findings from a participatory research project. Stanford Digital Divide Project Working Papers, 2024(3), 1-67.
 
Additional Reading
- Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, M., Baker, T., Shum, S. B., Santos, O. C., Rodrigo, M. T., Cukurova, M., Bittencourt, I. I., & Koedinger, K. R. (2022). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 32(3), 504-526.
- Luckin, R., & Cukurova, M. (2023). Designing educational technologies in the age of AI: A learning sciences approach. British Journal of Educational Technology, 54(6), 1769-1796.
- Watters, A. (2023). Teaching machines: The history of personalized learning. MIT Press.
- Williamson, B., Bayne, S., & Shay, S. (2024). The hidden architecture of higher education: Building a big data infrastructure for the ‘smarter university.’ International Journal of Educational Technology in Higher Education, 21(1), 12.
 
Additional Resources
- UNESCO’s AI and Education Resource Hub – Comprehensive collection of policy guidance, case studies, and research on AI in education globally. https://www.unesco.org/en/digital-education/artificial-intelligence
- Stanford Digital Civil Society Lab – Research center studying technology’s impact on society, including extensive work on AI in education. https://pacscenter.stanford.edu/research/digital-civil-society-lab/
- MIT Teaching Systems Lab – Research group focused on preparing teachers for technology-rich classrooms, with specific AI education initiatives. https://tsl.mit.edu/
- AI4K12 Initiative – National effort to define AI education guidelines for K-12 students, led by AAAI and CSTA. https://ai4k12.org/
- Digital Futures Commission – Research initiative examining technology’s impact on children and young people, including AI tools. https://digitalfuturescommission.org.uk/
 

