Over 90% of students are using AI — but the cheating story is more complicated than the headlines.
Explore what educators are doing right now to build better assessments.
The Robot Wrote My Essay
(Or Did It?)
Cheating headlines, broken detectors, and the quiet revolution happening inside classrooms right now. AI didn’t kill the assignment — it just exposed every weak one we’d been leaning on for decades.
This Week on The Friday Download
This week’s episode tackles the question that’s been haunting every teacher, professor, and parent since ChatGPT landed in classrooms: did a student write this, or did a chatbot? Spoiler — the answer is usually “both, kind of, and the line is blurrier than anyone wants to admit.”
We dig into what the actual data shows about student AI use (it’s more nuanced than the headlines), why AI detectors failed spectacularly, and — here’s the hopeful twist — how the pressure from AI is quietly forcing educators to build better assessments than the ones they’ve been recycling for thirty years.
- The Big Weird: The cheating apocalypse that wasn’t — and the weird data behind how students are actually using AI
- Wait… That’s Actually Cool: How AI-resistant assessments are accidentally producing better education
- The Tiny Tech Snack: Five terms every teacher, student, and parent needs to know right now
Segment 1 — The Big Weird
If you only read the clickbait, you’d think every student on earth was feeding entire assignments into a chatbot, pressing Enter, and wandering off to binge Netflix while the robot earned them a degree. And honestly? Some did. But when researchers started looking at the actual data, the picture got a lot messier.
Surveys suggest the vast majority of college students — over 90% in some samples — are using AI somewhere in their study workflow. But only a much smaller slice admit to using it to fully complete an assignment. Most students are doing something more human: using AI the way we once used Google, SparkNotes, or the smart kid in the group chat. “Explain this concept.” “Give me practice questions.” “Help me brainstorm.”
Here’s where it gets weirder: a big chunk of students believe that turning in AI-written work is cheating — and a not-tiny percentage also say they’ve done it anyway. And a large majority are pretty sure they won’t get caught. The student brain, summarized: “I know this is wrong, but I am invincible.”
“A robot is accusing me of using another robot?”
The 2026 version of a cheating investigation
The AI Detector Arms Race — and Why It Failed
Schools responded with AI detectors — tools that felt a little like horoscope apps for essays. They flagged perfectly human writing as “probably AI.” They missed obviously AI-generated content. They disproportionately penalized non-native English speakers. Institutions started backing away, updating policies to say: “We might use these as one piece of evidence, but we cannot rely on them as proof.”
At some universities, both the suspected cheating and the accusation were mediated by AI — the essay came from a chatbot, and the evidence came from another chatbot. At some point, the humans had to step back in and say: okay, this is ridiculous. We need a different approach.
Access gap: When the choice is between a private tutor your family can’t afford and a free chatbot that explains every physics problem at 2 a.m., the temptation is structural, not just moral.
High stakes + easy shortcut: Maximum pressure combined with a tool that is always awake and never charges by the hour creates a predictable outcome.
The better question: Instead of “How do we catch cheaters?” — maybe ask: “Why are our assessments so easy for a robot to fake in the first place?”
Segment 2 — Wait… That’s Actually Cool
Here’s the twist: a lot of people in higher ed are now arguing that the biggest risk of AI isn’t cheating — it’s the possibility that we keep pretending our old assignments still work. If an AI can write your standard five-paragraph essay better than your students can, that might say as much about the assignment as it does about the AI.
We built decades of schooling on tasks that were easy to grade, easy to copy, and now — very easy to automate. Summarize this chapter. Explain this theory. Do 20 nearly identical math problems. Those were never great measures of deep learning. They were measures of “Can you follow the formula?” And AI is excellent at formulas.
“In trying to design AI-resistant assessments, a lot of educators are accidentally designing better assessments.”
JR DeLaney, The Friday Download
AI-Vulnerable vs. AI-Resistant — The Key Distinction
Educators are starting to classify tasks into two buckets. AI-vulnerable tasks are the ones a chatbot can nail in seconds: generic summaries, basic definitions, cookie-cutter essays on overused prompts. AI-resistant tasks still allow AI in the mix, but they require human context, judgment, or performance that the tool can’t fake.
The Policy Shift Happening Right Now
A lot of schools and universities are moving from “Ignore AI” or “Ban AI” to something much more practical: define what counts as acceptable AI support, what requires disclosure, and what clearly crosses the line. The emerging consensus is landing somewhere like this: AI can help you brainstorm or get explanations, as long as the final work is yours. Significant AI use must be disclosed. Submitting AI-generated work as your own, without disclosure, is still cheating.
Crucially, institutions are writing these expectations down in plain language — so students aren’t guessing. And more are saying: we will not accuse someone of cheating based only on what an AI detector says. We need real evidence.
Reduces paranoia: Students know what’s actually allowed, and so do instructors.
Invites transparent use: AI becomes a visible tool, not a secret weapon.
Refocuses the goal: Can you think? Can you learn? Can you do something meaningful with knowledge — not just generate words about it?
Segment 3 — The Tiny Tech Snack
Five three-bite explainers to make you sound smarter in your next staff meeting, parent-teacher conference, or group chat meltdown.
The Takeaway
The old question — “Is this cheating?” — is still important. But the bigger, better question is: “Is this assessment worthy of a world where everyone has access to AI?”
For teachers: You don’t have to outsmart the bots. You just have to ask better questions — ones that require real thinking, real context, and a real human voice.
For students: AI can absolutely be your study buddy. But if it’s doing all the work, it’s also stealing your learning. Future-you — sitting in a job interview or a lab or a boardroom — is going to notice.
For parents and leaders: Don’t just ask, “Is my kid allowed to use AI?” Ask, “How is their school designing assessments so that, with or without AI, my kid is actually learning something real?”
“The robot forced us to admit: we can do better than the worksheet.”
JR DeLaney, The Friday Download
Got a wild “Was this written by my student or a robot?” story — or an assignment that worked better in the age of AI? Send it in. We might feature it in a future Big Weird segment.