Automation vs. Authority: Who’s Really Making Decisions in Your Classroom?
AI recommendation engines are quietly stepping into the role of decision-maker in classrooms and homeschool environments alike — flagging struggling students, routing learning paths, assigning interventions. The technology is impressive. The accountability gap is alarming. And most educators never got a memo about any of it.
The Story We’re Telling Ourselves
Here’s the version of AI in education that gets the most airtime: a teacher, overwhelmed and under-resourced, discovers an intelligent platform that helps her personalize instruction, identify struggling students early, and free up forty-five minutes of grading time each day. The AI is the sidekick. She is still the hero. Everyone wins.
It’s a compelling narrative — and it’s not wrong, exactly. The time savings are real. The early-alert capabilities are genuinely useful. But somewhere in the middle of that success story, a quieter plot twist has been unfolding. In classrooms, homeschool setups, and district dashboards across the country, AI systems aren’t just assisting decisions anymore. In many cases, they’re making them — and the human in the room is nodding along, assuming the algorithm has done its homework.
Teachers and homeschool educators are hearing a lot about “adaptive learning” and “intelligent tutoring.” What they’re hearing less about is the behavioral science phenomenon powering much of that adoption: the Default Effect. Or the documented research on automation bias — our deeply human tendency to over-trust machine outputs, even when those outputs are quietly, confidently wrong.
Education media tends to frame AI decision-making in schools as a future concern. Something to watch. A horizon issue. But parents, teachers, and administrators who are paying attention to what’s happening inside their learning management systems right now would tell you the horizon is already behind us. The decisions are being made. The question is whether anyone is interrogating them.
Episode 1 mapped the five layers of the AI Classroom Stack — the interconnected system of LMS platforms, tutoring tools, grading assistants, and analytics dashboards operating in today’s K–12 and homeschool environments.
Episode 2 (this post) zooms in on the tension between automation and human authority — what happens when the stack starts making decisions, why educators tend to trust those decisions, and what’s at stake when they shouldn’t.
Episodes 3 & 4 will follow: who controls the data pipeline, and how to design an AI-ready classroom with guardrails that actually work.
What’s Actually Happening Inside the Stack
To understand the automation-versus-authority tension, it helps to understand what AI recommendation engines actually do inside educational platforms — and how quietly they do it.
Modern adaptive learning platforms — tools like DreamBox, i-Ready, Newsela, and dozens of others embedded in school districts — are not passive repositories of content. They are active routing systems. They observe how a student moves through material: where they pause, where they skip, how long they spend on a problem, whether their response patterns change over time. They use that behavioral data to generate recommendations: next lesson, flagged for intervention, elevated to enrichment track, referred to the reading specialist.
On the surface, this is exactly what good teaching looks like — attentive, responsive, individualized. The difference is that a skilled teacher doing this work is drawing on rich contextual knowledge: the student who seemed distracted because her parents separated last month, the kid who always rushes through digital work but lights up with a physical manipulative, the homeschooler whose “slow” progress on fractions is actually deliberate mastery-based pacing. The algorithm is drawing on clickstream data.
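To make that contrast concrete, here is a deliberately simplified sketch, in Python, of the kind of routing logic an adaptive platform might run. Every feature name, threshold, and routing label below is hypothetical; real vendor models are proprietary and far more elaborate. What the sketch does capture is the shape of the decision: it is computed entirely from behavioral signals, with no input for context.

```python
# A toy routing engine. All features, thresholds, and labels are invented
# for illustration; no real platform's model is shown here.
from dataclasses import dataclass

@dataclass
class ClickstreamSummary:
    avg_response_seconds: float  # mean time spent per problem
    skip_rate: float             # fraction of items skipped
    accuracy: float              # fraction answered correctly

def route(student: ClickstreamSummary) -> str:
    """Return a routing decision from behavioral signals alone.

    Note what is absent: the separated parents, the visual processing
    difference, the mastery-based pacing. The model sees clicks, not kids.
    """
    if student.accuracy >= 0.9 and student.avg_response_seconds < 20:
        return "enrichment_track"
    if student.accuracy < 0.6 or student.skip_rate > 0.3:
        return "flagged_for_intervention"
    return "next_lesson_in_sequence"

print(route(ClickstreamSummary(avg_response_seconds=45.0,
                               skip_rate=0.05, accuracy=0.72)))
# -> next_lesson_in_sequence
```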
Automation Bias: The Cognitive Trap
The deeper problem isn’t that AI systems make recommendations. It’s that humans, presented with those recommendations in authoritative-looking dashboards, tend to follow them without sufficient scrutiny — a well-documented phenomenon called automation bias.
First formally described in the aviation and medical literature, automation bias refers to the human tendency to over-rely on automated systems and to reduce independent information-seeking when a machine recommendation is present (Parasuraman & Manzey, 2010). It shows up in two forms: omission errors, where we fail to notice a problem because the system didn’t flag one, and commission errors, where we act on a system recommendation even when our own judgment would have pushed back.
In the cockpit, automation bias has contributed to serious accidents. In the emergency room, it has led clinicians to accept faulty diagnostic suggestions. And in the classroom — in a lower-stakes but no-less-consequential way — it is quietly shaping which students get flagged for intervention, which get accelerated, and which get left in an algorithmic holding pattern while the teacher assumes the system has it handled.
“Automation bias is not a personality flaw or a sign of laziness — it is a predictable response to cognitive overload in a system that provides authoritative-looking outputs. When teachers have thirty students, six preps, and a twelve-tab dashboard, the algorithm is going to win the attention battle every time.”
JR DeLaney · AI Innovations Unleashed
The Default Effect in Educational Technology
Alongside automation bias sits a related force: the Default Effect. Behavioral economists Richard Thaler and Cass Sunstein popularized the concept in their landmark 2008 book Nudge — the insight that whatever option is pre-selected or presented as the default will be chosen at dramatically higher rates, not because people consciously prefer it, but because changing a default requires active effort, and most people in most contexts are running low on exactly that.
In educational technology, the Default Effect operates at scale in ways that educators rarely pause to examine. When a platform’s recommended learning path is pre-loaded and requires a deliberate override to change, most teachers don’t override. When an AI-generated intervention list is the first thing that populates a teacher dashboard on Monday morning, most teachers work from it. When a homeschool parent’s curriculum app auto-advances a student based on quiz scores, most parents accept the advancement as a reasonable proxy for mastery.
None of these defaults are necessarily wrong. The dangerous assumption is that they are necessarily right — that they were designed with your specific students, your specific values, and your specific definition of educational success in mind. They weren’t. They were designed for the average of a training dataset, optimized for engagement or completion metrics, and deployed at scale across millions of learners who are decidedly not average.
What This Looks Like in the Real World
Abstract concepts get a lot clearer when you put them in a room with an actual teacher or a homeschool parent sitting at a kitchen table at 8 p.m.
The K–12 Scenario: The Intervention That Wasn’t
Picture a fifth-grade reading teacher using a widely deployed adaptive reading platform — one of the major names you’d recognize immediately. Every Monday, she opens her dashboard to a color-coded list: green students are on track, yellow students need monitoring, red students are flagged for reading intervention. She has twenty-six students. She has forty-five minutes of unstructured instructional time per day. She works from the list.
What she may not know is that the platform’s risk-flagging model was trained predominantly on data from students in suburban, English-dominant households — and that the “risk” signals it weights most heavily include response latency, re-reading behavior, and skipped passages. For her English Language Learner students, those exact behaviors often reflect active, effortful comprehension — the opposite of a risk signal. The algorithm flags them in red. She routes them to intervention. They spend time on below-grade phonics drills instead of grade-level content that would actually advance their academic language development.
Nobody lied to her. Nobody made a dramatic mistake. The system did exactly what it was designed to do. And a group of students got routed in the wrong direction because no one asked whether the system’s definition of “struggling” matched reality.
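To see how this misrouting can happen without a single bad actor, consider a toy risk score. The weights and features below are invented for this post, not taken from any vendor, but they mirror the proxies just described: latency, re-reading, and skipping all push the score up.

```python
# A toy "risk" score built from the proxies described above. The weights
# are hypothetical; the failure mode is not.

def risk_score(response_latency_s: float, reread_rate: float,
               skip_rate: float) -> float:
    # Slower responses, more re-reading, more skipping => "riskier"
    return 0.02 * response_latency_s + 1.5 * reread_rate + 2.0 * skip_rate

fluent_reader = risk_score(response_latency_s=15, reread_rate=0.05, skip_rate=0.02)
ell_reader    = risk_score(response_latency_s=40, reread_rate=0.50, skip_rate=0.02)

print(f"fluent reader: {fluent_reader:.2f}")  # ~0.4 -> stays "green"
print(f"ELL reader:    {ell_reader:.2f}")     # ~1.6 -> flagged "red"
# The second student reads slowly and re-reads because she is doing
# effortful, active comprehension. The score cannot tell the difference.
```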
The Homeschool Scenario: The Algorithm as Co-Parent
The dynamics in homeschool settings are different but no less complex. Homeschool families who adopt AI-powered curriculum platforms — and their numbers have grown significantly since 2020 — often do so precisely because they want personalized, responsive instruction. The platforms deliver it. But personalization powered by an algorithm is not the same as personalization powered by a parent who knows their child.
A parent using a popular adaptive math platform notices her son has been stuck on the same multiplication unit for three weeks. The platform keeps cycling him through variations of the same problem set, increasing difficulty incrementally, reporting progress in the dashboard. What the platform can’t see is that the child has a visual processing difference that makes the platform’s primary format — small-screen, dense-grid multiplication tables — almost impossible to parse. He’s not struggling with multiplication. He’s struggling with the interface. The algorithm, measuring only outcome data, keeps optimizing within a broken loop.
The parent, trusting the platform’s mastery assessments, assumes her child needs more repetition. Three weeks pass before she overrides the system, switches to a hands-on manipulative approach, and watches her son master the concept in two days.
That parent was paying close attention. Many aren’t — or feel they don’t have the expertise to override an algorithm that presents itself as authoritative. The Default Effect, combined with the implicit credential of the technology, can be a powerful force pushing against human judgment even in the most autonomous educational setting imaginable.
Risks, Tradeoffs, and the Accountability Gap
It would be easy — and lazy — to conclude that the lesson here is simply “don’t trust AI.” That’s not it. Adaptive platforms, recommendation engines, and intelligent dashboards have real value. The early-alert capability alone, when functioning well and used thoughtfully, can catch students who might otherwise fall through the cracks. The problem isn’t the technology. It’s the accountability vacuum that surrounds it.
When the Algorithm Makes a Mistake, Who Answers for It?
In traditional educational decision-making, accountability flows in identifiable directions. A teacher makes a placement decision; a parent can question it, request documentation, ask for a meeting. A district adopts a curriculum; the school board can be held accountable. The decision-maker is, at least in principle, reachable.
When an AI system makes a recommendation — or when the Default Effect ensures that recommendation is quietly enacted — the accountability chain gets murky fast. The teacher may not have realized she was deferring to an algorithm. The platform vendor’s model is proprietary. The district’s technology coordinator doesn’t have visibility into the model weights. The parent doesn’t know a recommendation was ever made. The student just finds herself in a different class.
Researcher Ben Williamson at the University of Edinburgh has written extensively on what he calls “the datafication of education” — the shift toward algorithmic systems in schools that embed particular assumptions about learning, ability, and progress that often go unexamined and unchallenged (Williamson, 2017). His central concern isn’t that algorithms are malicious; it’s that they are authoritative in ways that resist scrutiny, precisely because their inner workings are invisible to the people most affected by them.
The Bias Problem
Educational AI systems trained on historical data inherit the inequities baked into that data. If high-achieving students in a training dataset were disproportionately from well-resourced districts, the model learns patterns that were shaped by resource advantage — and then applies those patterns in contexts where they don’t belong. Flagging systems can disadvantage students of color, multilingual learners, and students with disabilities not because anyone programmed them to, but because the proxies they use for “risk” and “readiness” were calibrated in environments that didn’t reflect those students’ strengths.
UNESCO’s 2023 guidance on generative AI in education specifically calls for scrutiny of how AI systems perform across demographic groups, noting that “bias in AI-generated educational content or recommendations can reinforce existing inequalities and limit students’ opportunities in ways that are difficult to detect and correct” (UNESCO, 2023). That document was aimed at generative AI, but the principle applies across the board — to adaptive platforms, analytics dashboards, and every other layer of the stack that routes students based on data.
The Engagement Optimization Trap
There’s a subtler risk worth naming. Many adaptive platforms optimize for engagement and completion rates — metrics that are measurable, reportable, and beloved in vendor dashboards. But engagement is not the same as learning. Completion is not the same as mastery. A student can spend forty-five minutes in a highly “engaging” AI tutoring session and come away with a reinforced misconception, a polished surface fluency over a fragile foundation, or simply a high confidence score on a narrow skill that won’t transfer.
When teachers delegate pacing and sequencing to platforms optimized for these proxy metrics, they risk building an educational experience that looks excellent in the data and falls apart on the test or in real-world application. The algorithm doesn’t care about transfer. It cares about the next click.
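Stripped to its essentials, the trap looks something like the sketch below. The catalog, scores, and field names are invented; the one faithful detail is the objective function, which is the entire point: mastery and transfer never appear in it.

```python
# A minimal sketch of the proxy-metric trap: a recommender that picks the
# next activity purely by predicted engagement. Catalog values are made up.

catalog = [
    {"item": "gamified_drill",     "predicted_engagement": 0.92, "builds_transfer": False},
    {"item": "worked_example_set", "predicted_engagement": 0.55, "builds_transfer": True},
    {"item": "open_ended_project", "predicted_engagement": 0.48, "builds_transfer": True},
]

def recommend_next(items):
    # Objective: maximize engagement. That is the whole objective.
    # "builds_transfer" exists in the data but never enters the decision.
    return max(items, key=lambda i: i["predicted_engagement"])

print(recommend_next(catalog)["item"])  # -> gamified_drill, every time
```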
Who has the moral authority to make consequential decisions about a child’s learning path? When we allow AI systems to make those decisions by default — through the architecture of dashboards, the inertia of auto-populated recommendations, and the cognitive pressure of automation bias — we are not answering that question. We are simply deferring it. And deferred authority is not neutral. It accumulates. It shapes trajectories. And eventually, a student who needed a human judgment finds they’ve been on an algorithmic track for years.
What Teachers and Homeschool Educators Can Do Right Now
None of this requires abandoning adaptive platforms or treating AI recommendations as the enemy. It requires developing what might be called critical automation literacy — the habit of using AI tools as a starting point for professional judgment, not a replacement for it.
Name the Default, Then Decide
The single most powerful intervention against the Default Effect is making it visible. At the start of each week, before acting on any AI-generated recommendation or routing decision, pause and ask: Is this what the system suggested, or is this what I actually believe? That five-second question creates the cognitive interrupt that turns passive acceptance into active professional judgment. You may still follow the recommendation — and often you should. But it will be a choice, not a reflex.
Know Your Platform’s Training Data
Most educators have never asked their EdTech vendor a pointed question about the population their model was trained on. Start asking. When evaluating adaptive platforms — or advocating for transparency about ones already in use — push for answers to: What demographic data does this system use? What proxies does it use for “risk” or “readiness”? How has the model been validated across multilingual learners, students with IEPs, and students from under-resourced communities? If the vendor can’t answer these questions clearly, that’s itself important information.
Build in Human Override Protocols
Develop a personal or team protocol for the categories of AI recommendations that always require a human cross-check before action: reading level placements, intervention referrals, acceleration decisions, and any recommendation that would change a student’s instructional grouping. These are consequential enough that the Default Effect cannot be permitted to operate unchecked. A brief human review — even two minutes of context-checking — dramatically reduces the risk of automation bias in high-stakes decisions.
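Where your team can intercept a recommendation before it is enacted, the protocol can even be made mechanical. A minimal sketch, assuming hypothetical category names you would adapt to your own platform’s vocabulary:

```python
# A minimal human-override gate. Category names are illustrative and
# should be mapped to whatever your platform actually calls these events.

REQUIRES_HUMAN_REVIEW = {
    "reading_level_placement",
    "intervention_referral",
    "acceleration_decision",
    "instructional_grouping_change",
}

def enact(recommendation: dict) -> str:
    """Hold consequential AI recommendations until a human signs off."""
    if recommendation["category"] in REQUIRES_HUMAN_REVIEW:
        if not recommendation.get("reviewed_by"):
            return "HELD: queue for teacher review before any action"
    return f"APPLIED: {recommendation['category']} for {recommendation['student']}"

print(enact({"category": "intervention_referral", "student": "S-104"}))
# -> HELD: queue for teacher review before any action
print(enact({"category": "next_lesson", "student": "S-104"}))
# -> APPLIED: next_lesson for S-104
```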
Make AI Reasoning Visible to Students
One underused strategy, particularly in middle and high school settings: let students see and interrogate the AI recommendations made about them. Ask a student to look at their adaptive platform’s progress report and explain, in their own words, what the system thinks they know and what it thinks they need to work on. Does the student agree? Where does their self-assessment diverge from the algorithm’s? This isn’t just a metacognitive exercise — it’s a form of AI literacy that will serve students in every domain of their lives.
Homeschool-Specific: Reset the Default Manually
For homeschool parents, the most powerful practice is periodic deliberate override. Every four to six weeks, set aside the platform’s recommended next steps and instead conduct your own informal assessment: a conversation, a hands-on task, a project. Use that human assessment to either confirm or revise the platform’s routing. This keeps you in the seat of authority and trains the habit of treating AI recommendations as one input among several, rather than the final word.
What Education Leaders Should Be Considering
The automation-versus-authority tension is not only a classroom-level issue. It has structural dimensions that only district and school leaders can address — and that most have not yet taken up with appropriate urgency.
AI Governance Starts With a Question You Probably Haven’t Asked
Most district technology plans focus on deployment: which platforms are in use, how they’re being accessed, what the contract terms are. Fewer plans include governance frameworks that specify which decisions AI systems may make or recommend, which decisions require mandatory human review, and what accountability mechanisms exist when AI-influenced decisions cause harm. The Consortium for School Networking (CoSN) has published AI governance frameworks that offer useful starting points, but adoption has been slow (CoSN, 2024).
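One way to make such a framework concrete is a declarative policy mapping decision types to authority tiers. The tiers and decision names below are illustrative, written in the spirit of the CoSN-style frameworks mentioned above rather than copied from any published standard:

```python
# A sketch of a district governance policy as data. Everything here is
# hypothetical; the useful idea is that the mapping is explicit and auditable.

GOVERNANCE_POLICY = {
    # AI may act autonomously
    "problem_sequencing_within_lesson": "ai_may_decide",
    # AI may recommend; a named human must approve
    "intervention_referral":            "human_review_required",
    "course_placement":                 "human_review_required",
    # AI output is advisory context only; humans decide
    "special_education_evaluation":     "human_only",
}

def authority_for(decision_type: str) -> str:
    # Default to the most restrictive tier for anything unlisted.
    return GOVERNANCE_POLICY.get(decision_type, "human_only")

print(authority_for("intervention_referral"))  # -> human_review_required
print(authority_for("novel_ai_feature"))       # -> human_only (safe default)
```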
The governance question is urgent partly because the legal landscape is shifting. FERPA protections around student data were written before adaptive AI systems existed, and there is growing advocacy — and early legislative activity in several states — to extend algorithmic accountability requirements to educational platforms. Districts that build governance structures now will be ahead of compliance requirements later. More importantly, they will have protected students earlier.
Teacher Training Must Include Automation Literacy
Professional development on AI in education has grown substantially: RAND reported that nearly half of U.S. districts provided some AI training in the 2024–25 school year, nearly double the rate from the prior year (Diliberti, Lake, & Weiner, 2025). But the content of that training matters enormously. Training that focuses only on how to use platforms more efficiently is not sufficient. Educators need frameworks for evaluating AI recommendations critically, understanding the limitations of algorithmic systems, and maintaining professional authority in environments designed to nudge them toward deference.
Engage Families as Partners in Algorithmic Transparency
Parents and homeschool educators deserve to know when AI systems are making or influencing recommendations about their children. Districts and platforms should move toward proactive disclosure: clear, plain-language communication about which AI systems are in use, what decisions they influence, and what recourse families have when they disagree with an AI-influenced recommendation. Transparency here is not just ethical — it is strategically wise. Family trust is a precondition for sustainable EdTech adoption.
Episode 2: “When AI Starts Making Decisions”
This blog post pairs with Episode 2 of The EdTech Investigation Podcast — dropping soon. JR goes deeper into AI recommendation engines, the Default Effect in action, and real K–12 and homeschool accountability scenarios, with guest commentary from ARIA, our resident AI analyst. Subscribe now so you don’t miss it.
Where This Is Heading — and What We Should Be Preparing For
The automation-versus-authority tension is going to intensify before it resolves. AI systems in education are becoming more capable, more embedded, and more confident. Recommendation engines are giving way to generative tutoring systems with persistent memory. Analytics dashboards are evolving toward predictive models that don’t just flag current risk but forecast future trajectories. The Default Effect will not weaken as the technology improves — if anything, it will strengthen, because more capable systems will generate more compelling outputs and educators will have fewer obvious reasons to push back.
This makes the present moment unusually important. The habits of mind we build now — the practice of naming defaults before accepting them, the expectation of algorithmic transparency, the professional identity of the educator as the irreducible authority in consequential decisions about children — these are not just good practices for 2026. They are the cultural scaffolding that will either survive the next wave of AI capability or collapse under it.
Sal Khan has argued, compellingly, that AI tutoring done right could give every student access to the kind of one-on-one support that has historically been available only to the privileged few (Khan, 2023). He’s right. The technology’s potential in that direction is genuine and significant. But potential is not destiny. Whether AI in education narrows gaps or widens them, empowers teachers or quietly displaces them, serves children or optimizes for vendor metrics — these are not questions the technology will answer on its own. They are questions that require human authority, human accountability, and human willingness to ask hard things of systems that present themselves as already knowing the answers.
The AI stack is not going away. The question is not whether it will make recommendations about your students. The question is whether you will be the one deciding what to do with them.
“The goal is not for educators to become AI skeptics. It is for them to become AI-literate professionals who know the difference between a tool that assists their judgment and a system that has quietly replaced it.”
JR DeLaney · AI Innovations Unleashed
References
- Child Trends. (2025, November 4). Most public schools lack AI policies for students. Child Trends. https://www.childtrends.org/publications/public-schools-ai-policies-students (Source: U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, School Pulse Panel 2024–25.)
- Diliberti, M., Lake, R., & Weiner, S. (2025). More districts are training teachers on artificial intelligence: Findings from the American School District Panel. RAND Corporation. https://www.rand.org/pubs/research_reports/RRA956-31.html
- Doss, C. J., Bozick, R., Schwartz, H. L., Chu, L., Rainey, L. R., Woo, A., Reich, J., & Dukes, J. (2025). AI use in schools is quickly increasing but guidance lags behind: Findings from the RAND Survey Panels. RAND Corporation. https://www.rand.org/pubs/research_reports/RRA4180-1.html
- Kaufman, J. H., Woo, A., Eagan, J., Lee, S., & Kassan, E. B. (2025). Uneven adoption of artificial intelligence tools among U.S. teachers and principals in the 2023–2024 school year. RAND Corporation. https://www.rand.org/pubs/research_reports/RRA134-25.html
- Khan, S. (2023, March). Sal Khan’s 2023 TED Talk: AI in the classroom can transform education. Khan Academy Blog. https://blog.khanacademy.org
- Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381–410. https://doi.org/10.1177/0018720810376055
- Schwartz, H. L., & Diliberti, M. K. (2026). More students use AI for homework, and more believe it harms critical thinking: Selected findings from the American Youth Panel. RAND Corporation. https://www.rand.org/pubs/research_reports/RRA4742-1.html
- Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.
- UNESCO. (2023). Guidance for generative AI in education and research. UNESCO. https://www.unesco.org/en/digital-education/ai-future-learning
- Williamson, B. (2017). Big data in education: The digital future of learning, policy and practice. SAGE Publications.
- Zhai, X., & Nehring, J. H. (2025). Artificial intelligence policies in K-12 school districts in the United States: A content analysis shaping education policy. Journal of Research on Technology in Education. https://doi.org/10.1080/15391523.2025.2476589
Additional Reading
- Selwyn, N. (2019). Should robots replace teachers? AI and the future of education. Polity Press.
- Watters, A. (2021). Teaching machines: The history of personalized learning. MIT Press.
- Williamson, B., Bayne, S., & Shay, S. (2020). The datafication of teaching in higher education: Critical issues and perspectives. Teaching in Higher Education, 25(4), 351–365.
- Center for Democracy and Technology. (2025). AI in schools: Equity, privacy, and student rights. CDT. https://cdt.org
- Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Shum, S. B., Santos, O. C., Rodrigo, M. T., Cukurova, M., Bittencourt, I. I., & Koedinger, K. R. (2022). Ethics of AI in education: Towards a community-wide agenda. Journal of Learning Analytics, 9(1), 163–182.