From chalkboards to chatbots, discover how AI reshaped classrooms,
where we are now, and what the future of learning holds
Prologue: The Classroom Time Machine
Picture this: It’s 1965. You’re plopped into a schoolroom somewhere in the U.S. The teacher writes something on the chalkboard; you scribble notes, ask questions, wait for that dread-inducing bell. The future is computers — hulking, older-than-your-grandpa behemoths — and yet some dreamers are already plotting: What if machines could help teach, too?
Fast-forward to 2025. You walk into a classroom and you might see a chatbot quietly answering a student’s question, dashboards tracking which kids are slipping behind, adaptive lessons shifting in real time. The dream isn’t sci-fi anymore. It’s unfolding now.
So buckle up. In this post, we’ll:
- Travel through the history of AI in education — from curious prototypes to real-world experiments
- Assess where we are now — with successes, stumbles, controversies
- Pose the central ethical and philosophical tension: what it means to learn, teach, and trust in an age augmented by machines
- Give you a sense of where this series is headed (minus the “here’s Post 2, 3, 4, 5…” list)
- Arm you with stats, stories, and spark for the ride ahead
Your guide on this journey? A fictional teen (Shay) whose questions and doubts mirror what real students, teachers, and parents are feeling. Let’s roll.
Chapter 1: The Dream Is Older Than You Think (and Stranger)
The seeds of “AI in classrooms” were planted long ago.
Ancient Roots (Sort of)
If you stretch the definition of “machine that teaches,” you might argue that the first roots go to mechanical automata or teaching aids in the 18th–19th centuries. But the real intellectual precursor arrived in the 20th century, when researchers began formalizing how learning works.
Thinkers like John Dewey, Jean Piaget, and Lev Vygotsky pondered how students internalize knowledge — scaffolding, zones of proximal development, learning by doing. Meanwhile, computing and logic theory galloped forward. The two strands — cognitive science and computer science — were quietly courting each other (Luckin et al., 2022).
Dawn of AI + Education (1960s–1970s)
Between 1960 and 1965, computer scientists ran the first educational software experiments. Many point to PLATO (Programmed Logic for Automatic Teaching Operations), an early system built at the University of Illinois to deliver lessons in math, science, and reading (TechLearning, 2023). These were not yet AI — more programmed instruction — but they planted the belief that machines could do more than display text (TechLearning, 2023).
Soon, researchers began building intelligent tutoring systems (ITS). In the early 1970s, Jaime Carbonell built SCHOLAR, a system for teaching geography that tracked which questions students got right or wrong and gave hints (Luckin et al., 2022). It was crude by today’s standards — a semantic network of facts and scripted dialogue — but the idea was bold: a system that adapts to the student.
Through the 1970s and 1980s, dozens of ITS prototypes emerged: SOPHIE (electronics troubleshooting), DEBUGGY (diagnosing arithmetic errors), LISP Tutor (for code). Some experiments showed promise: in tightly controlled settings, ITS could rival human tutors for narrow tasks (VanLehn, 2011). But the limitations were enormous:
- They were brittle — changing topics or slightly altered curricula could break them
- They required painstaking engineering for every new domain
- They lacked context, social awareness, or real student motivation
That said, the pioneers laid the scaffolding: students interacting with machines, machines monitoring responses, and early feedback loops.
Chapter 2: Winters, Hope, and a New Dawn
The Chill: AI’s False Promise
By the late 1970s and early 1980s, lofty promises collided with reality. Expectations were sky-high; outcomes were messy. The UK’s Lighthill Report (1973) critiqued AI’s overpromises and recommended limiting public funding (Lighthill, 1973). That snowballed into what’s known as an AI winter — a time when many projects stalled or died for lack of resources.
In education, ITS prototypes shrank back into labs. The leap from lab to real classroom was too steep. Schools lacked data infrastructure, computational power, or even buy-in. It wasn’t that the tech was useless — it was just ahead of its time.
The Slow Thaw
By the 1990s and 2000s, two trends reignited the flame:
- Cheap computing, ubiquitous internet, and big data: More students online, more logs of how they learn, more opportunity for mining patterns.
- Learning analytics and adaptive learning: Instead of rigid rule sets, systems could now statistically adjust paths based on student performance data.
Researchers began exploring learning dashboards, predictive analytics, early-warning systems, and student modeling (Baker & Siemens, 2014). The narrative shifted: AI wasn’t here to replace teachers but to assist and augment them.
By 2016, Stanford’s AI100 report observed that AI had begun to creep into education in earnest through personalization and analytics (Stanford AI100, 2016). The idea of “tailored learning at scale” became plausible — not just in elite labs, but in pilot classrooms.
Chapter 3: The Present — A Hybrid Jungle
Today, classrooms are messy hybrids — part chalkboard, part code, part dashboard, part AI.
The Landscape: What Tools Are Out There?
Here are some of the major categories and how they show up:
- Intelligent Tutoring & Adaptive Systems
Modern ITS tools track how students solve problems, where they hesitate, and what mistakes they make — then tailor the next problem or hint. A recent systematic review found that AI-driven adaptive learning improved test results by about 62%, on average and in certain contexts (Yu et al., 2023). Meanwhile, in higher ed, another meta-analysis found that 59% of studies saw improvement in academic performance, and 36% saw improved student engagement (PMC, 2024). That’s not a magic wand, but it is a meaningful effect, especially when layered with human support.
- Generative AI & LLMs in Education
The arrival of ChatGPT and its siblings changed the game. Students can now ask the model to draft essays, explain concepts, or brainstorm ideas (ChatGPT in education, 2023). In many institutions, adoption exploded, and education leadership had to scramble. One EDUCAUSE article noted that in just a year, generative AI shifted from fringe concern to priority in higher ed, with faculty and administrators wrestling with how to integrate it responsibly (EDUCAUSE, 2023).
But generative AI is a double-edged sword. The strengths? Speed, creativity, scaffolding ideas, drafting outlines, and alternative explanations. The risks? Hallucinated facts, bias, opacity, and the temptation for students to outsource thinking (Ferreira et al., 2023; Mello et al., 2023).
- Administrative & Assessment AI
Behind the scenes, AI helps with auto-grading (especially multiple choice and short answers), plagiarism detection, early-dropout prediction, scheduling optimization, and curriculum recommendation (UIowa, 2024). Much of the innovation lands on work teachers dislike — grading, logistics, and identifying at-risk students.
- District-scale Pilots & Experiments
Real-world trials are messy but insightful. For example:
- LAUSD’s “Ed” chatbot was launched in March 2024 to act as a personal assistant for students in multiple languages (EdWeek Staff, 2024), but was shut down in June 2024 after the vendor behind it collapsed. The project became a cautionary tale: ambitious, full of promise, fragile (Blume, 2024; Chapman, 2024; Young, 2024).
- Houston ISD & Prof Jim Inc. partnered to generate 2,200+ reading passages aligned with curriculum for grades 3–10 (Houston Chronicle, 2024). But critics—including two professors at Rice University—warned that such AI-generated content must be carefully curated and not treated as a panacea (Rice University, 2025).
More recently, in October 2025, California Community Colleges announced a roll-out of AI “learning assistants” to serve 2.1 million students, reporting early pilot gains: a 20% GPA bump, a 13% rise in final scores, and a 36% boost in motivation (Axios, 2025). That’s the kind of experiment that will either fuel or break faith in AI in real institutions.
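To make the adaptive loop described under intelligent tutoring above concrete, here is a deliberately tiny, hypothetical sketch — not any vendor’s actual algorithm — of the core idea: keep a running mastery estimate per skill, nudge it with each right or wrong answer, and always serve the skill the student is currently weakest in.

```python
from dataclasses import dataclass

@dataclass
class AdaptiveTutor:
    """Toy adaptive tutor (illustrative only): tracks a per-skill
    mastery estimate and serves the weakest skill next."""
    skills: tuple
    rate: float = 0.3  # how strongly each answer shifts the estimate
    mastery: dict = None

    def __post_init__(self):
        # Start every skill at 0.5: "no evidence either way".
        self.mastery = {s: 0.5 for s in self.skills}

    def record(self, skill, correct):
        # Exponential moving average toward 1.0 (correct) or 0.0 (wrong).
        target = 1.0 if correct else 0.0
        self.mastery[skill] += self.rate * (target - self.mastery[skill])

    def next_skill(self):
        # The adaptive step: practice whatever looks weakest right now.
        return min(self.mastery, key=self.mastery.get)

tutor = AdaptiveTutor(skills=("fractions", "decimals", "percent"))
tutor.record("fractions", correct=True)
tutor.record("decimals", correct=False)
print(tutor.next_skill())  # -> decimals (lowest estimate after one miss)
```

Real systems replace this moving average with far richer student models — Bayesian knowledge tracing, item response theory, or learned embeddings — but the feedback loop (observe, update, re-route) is the same one SCHOLAR gestured at fifty years ago.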
What Teachers Think (and Worry About)
Teachers aren’t naive. A 2025 World Economic Forum piece reports that 73% of educators see AI’s role in education as balanced, positive, or open to exploration — not blindly hostile (WEF, 2025). That cautious optimism matters.
Some of the common concerns:
- Loss of human agency: If AI suggests everything, who is guiding the learning?
- Bias and fairness: The data behind AI is not neutral.
- Overreliance: Students (or even teachers) may abdicate critical thinking.
- Privacy, transparency, ownership: Who owns the data? Who sees how decisions are made?
A recent Business Insider article shared that academics fear students are outsourcing thinking to AI (Lubbe, 2025). Many argue that instead of banning AI, educators should design assignments that AI cannot fully do — e.g., requiring reflection, drafts, iterative thinking, and human-AI dialogues.
In short, we’re in a period of experimentation, adaptation, and tension.
Chapter 4: The Ethical Frontier — Who Teaches When Machines Help?
If you zoom out, there’s a deeper tension pulsing through this entire adventure: What does it mean to teach and learn when machines are part of the process?
Let me sketch out three overlapping philosophical axes we’ll revisit through the series:
1. Augmentor vs. Replacement — The Role of the Teacher
Fei-Fei Li, one of AI’s leading voices, argues that AI should augment human intelligence, not replace it (Li, 2025). In education, that’s especially crucial. Teaching is about judgment, empathy, adaptability, relationships — things AI cannot fully replicate (Chan & Tsi, 2023). Indeed, research suggests that while AI can support exposition, it struggles with emotional nuance and context (Chan & Tsi, 2023).
In our metaphor, AI shouldn’t be captain of the ship but a co-navigator — suggesting routes, flagging obstacles, but letting the human steer.
2. Trust, Transparency, and Epistemic Agency
When an AI gives an answer, does the student own that answer? Does the student know why it gave that answer? Good education aims not just to produce correct answers, but to foster deep understanding, skepticism, and inquiry.
Yet many generative AI models are “black boxes.” If a student says, “Why did you recommend that explanation?” the model might shrug. One major challenge is explainability — the ability for models to justify their steps (Yan et al., 2023). Without that, you risk turning students from active thinkers into passive consumers of AI.
3. Equity, Access, and Power
AI is only as fair as its data. If you train models on privileged populations, they may fail on marginalized or underrepresented groups. The “rich get richer” effect is real: schools with better infrastructure, more data, more funding probably benefit more from AI initially (Bulathwela et al., 2021).
Michael Ananny argues that AI in education must be seen as a public problem — not only a product to sell (Ananny, 2024). Who owns the models? Who regulates them? Are they audited for fairness? The power dynamics matter deeply.
Chapter 5: Connecting the Dots — What This Series Will Do (and Why Stay Tuned)
By now, you (Shay, my skeptical yet hopeful reader) have a rough idea of where AI in classrooms began and where it stands now. But the story is far from over — in fact, the real adventure begins in how we use, govern, and live with these tools.
Over the upcoming posts, expect a blend of in-depth analysis, exploratory insights, and strategic thinking. I won’t just trot out topics like “AI tools” or “ethics” one by one — I’ll layer them, cross them, and interweave them so the series feels like a journey rather than a syllabus.
Here’s the style of what’s ahead (not exact titles):
- We’ll decode AI jargon together — “prompt engineering,” “student modeling,” “few-shot learning” — but make it feel like you’re unlocking a secret language you already half-speak
- We’ll turn toward real tools in classrooms — what works, what flops, what’s hype, what’s hiding
- We’ll wrestle with fears — plagiarism, cheating, automation, alienation — but also explore the hopes that AI might democratize great teaching
- We’ll look forward — what might classrooms feel like in 2035? Virtual tutors? AI co-mentors? Smart augmented reality?
- We’ll ground it in teaching practice — rubrics, checklists, guardrails, ideas you (or a teacher you know) can try tomorrow
- We’ll hear from voices — educators, students, technologists, ethicists — and work to amplify what’s often under-seen
If you’ve felt either skeptical, terrified, thrilled, or curious — you’re in the right place. I invite you to read, question, comment, and share with a friend or teacher. As we go, throw me your toughest questions. Let’s make this not just a history lesson, but a toolkit for the next 20 years.
Epilogue: Why You Should Sit Up for This Story
Why dig into this? Because AI in classrooms is not a sci-fi dream. It’s already reshaping who learns how, when, and with what tools. Decisions made now — about transparency, pedagogy, access, ethics — will ripple for decades.
The history teaches humility: our prototypes were flawed, overpromised, and fragile. But each generation built on the last. The present offers messy promise. The future demands thoughtful design.
So for now: ask questions. Read critically. Bookmark this series. Bring it to your classroom, your school board, your curious friends. Because the next chapter — your chapter — is already being written.
See you in the next post, where we crack the code on AI jargon and turn “something that sounds scary” into something we can wield.
===========
References
- Ananny, M. (2024, February 26). AI in education is a public problem. Code Acts in Education. https://codeactsineducation.wordpress.com/2024/02/26/ai-in-education-is-a-public-problem/
- Baker, R. S., & Siemens, G. (2014). Educational data mining and learning analytics. In K. Sawyer (Ed.), Cambridge Handbook of the Learning Sciences (2nd ed., pp. 253–274). Cambridge University Press.
- Blume, H. (2024, July 3). LAUSD shelves its hyped AI chatbot to help students after collapse of firm that made it. Los Angeles Times. https://www.latimes.com/california/story/2024-07-03/lausds-highly-touted-ai-chatbot-to-help-students-fails-to-deliver
- Bulathwela, S., Perez-Ortiz, M., & Yilmaz, E. (2021). Toward automatic, equitable, and explainable student success prediction. Frontiers in Artificial Intelligence, 4(678767). https://doi.org/10.3389/frai.2021.678767
- Chapman, B. (2024, June 26). Future of LAUSD’s AI student chatbot in doubt 3 months after launch as ed-tech firm furloughs staff. LA School Report. https://www.laschoolreport.com/future-of-lausds-ai-student-chatbot-in-doubt-3-months-after-launch-as-ed-tech-firm-furloughs-staff/
- Chan, C., & Tsi, N. (2023). Artificial intelligence and the teacher’s role: A review of capabilities and limits. Teaching and Teacher Education, 126, 104019. https://doi.org/10.1016/j.tate.2023.104019
- Conati, C., Gertner, A., VanLehn, K., & Druzdzel, M. (2002). On-line student modeling for coached problem solving using Bayesian networks. User Modeling and User-Adapted Interaction, 12(4), 371–417. https://doi.org/10.1023/A:1021258506583
- EdWeek Staff. (2024, March 21). Los Angeles Unified bets big on ‘Ed,’ an AI tool for students. Education Week. https://www.edweek.org/technology/los-angeles-unified-bets-big-on-ed-an-ai-tool-for-students/2024/03
- Ferreira, M., Watanabe, H., & Hosoya, T. (2023). Risks and promises of generative AI in education: A systematic review. Computers & Education: Artificial Intelligence, 5, 100169. https://doi.org/10.1016/j.caeai.2023.100169
- Houston Chronicle. (2024, December 20). What to know about Prof Jim Inc, the AI company Houston ISD is using to generate reading passages. Houston Chronicle. https://www.houstonchronicle.com/news/houston-texas/education/hisd/article/hisd-ai-prof-jim-inc-19958093.php
- Li, F. (2025, February 1). Human-centered AI is about augmenting human intelligence, not replacing it. The Edinburgh Reporter. https://theedinburghreporter.co.uk/2025/02/professor-fei-fei-li-at-edinburgh-human-centred-ai/
- Lighthill, J. (1973). Artificial intelligence: A general survey. In Artificial Intelligence: A paper symposium. Science Research Council, UK.
- Luckin, R., Cukurova, M., Millán, E., & Mavrikis, M. (2022). The intertwined histories of artificial intelligence and education. Springer. https://doi.org/10.1007/978-3-031-04083-2
- Mello, R., Holmes, W., & Porayska-Pomsta, K. (2023). Large language models in education: A cautionary review. British Journal of Educational Technology, 54(5), 1267–1285. https://doi.org/10.1111/bjet.13331
- Rice University. (2025, June 6). Dateline Rice: We’re professors. We’re parents. HISD students don’t deserve AI slop. Rice News. https://news.rice.edu/news/2025/dateline-rice-june-6-mcdaniel-srinivasan
- Stanford AI100 Standing Committee. (2016). Artificial intelligence and life in 2030: One hundred year study on artificial intelligence: Report of the 2015–2016 study panel. Stanford University. https://ai100.stanford.edu/2016-report
- Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433
- VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46(4), 197–221. https://doi.org/10.1080/00461520.2011.611369
- World Economic Forum. (2025, February 18). 73% of educators see AI’s role in education as balanced, positive, or open. World Economic Forum. https://www.weforum.org/agenda/2025/02/educators-ai-role-in-education/
- Yan, L., Sha, L., Zhao, L., & Martinez-Maldonado, R. (2023). Practical and ethical challenges of large language models in education: A systematic scoping review. arXiv preprint. https://arxiv.org/abs/2310.11588
- Young, J. R. (2024, July 15). An education chatbot company collapsed. Where did the student data go? EdSurge. https://www.edsurge.com/news/2024-07-15-an-education-chatbot-company-collapsed-where-did-the-student-data-go
- Yu, D., Lei, Z., Gu, Y., Li, Y., Yin, J., Lin, J., Ye, L., Tie, Z., Zhou, Y., Wang, Y., Zhou, A., He, L., & Qiu, X. (2023). EduChat: A large-scale language model-based chatbot system for intelligent education. arXiv preprint. https://arxiv.org/abs/2308.02773
📌 Additional Resources (Practical, Reputable, Ongoing)
- International Artificial Intelligence in Education Society (IAIED)
https://iaied.org/
A leading global society advancing research on AI in education, with conferences, journals, and community discussions.
- EDUCAUSE – Generative AI in Higher Education Hub
https://www.educause.edu/focus-areas-and-initiatives/teaching-and-learning/generative-ai
Offers toolkits, case studies, and policy insights for educators grappling with generative AI adoption.
- Institute of Education Sciences (IES), U.S. Dept. of Education
https://ies.ed.gov/
The federal research arm for education; provides data, evaluation reports, and funding insights into technology-driven learning.
- Stanford Human-Centered Artificial Intelligence (HAI)
https://hai.stanford.edu/
A cross-disciplinary research hub dedicated to ensuring AI augments rather than replaces human capabilities, with resources relevant to education.
- OECD Centre for Educational Research and Innovation (CERI) – AI & Education
https://www.oecd.org/education/ceri/
International reports and policy briefs on the future of education in the age of AI.
📚 Further Reading (Books, Reports, & Collections)
- Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence Unleashed: An Argument for AI in Education. Pearson.
A seminal (and accessible) report that lays out why and how AI can positively reshape learning.
- Selwyn, N. (2024). On the Limits of Artificial Intelligence in Education. Policy Futures in Education.
A critical exploration of what AI can’t (and shouldn’t) do in classrooms, useful for balancing hype.
- Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Center for Curriculum Redesign.
A practical overview of AI tools, risks, and opportunities for teachers.
- Williamson, B., Eynon, R., & Potter, J. (2020). AI in Education: Learning, Policy and Practice. Routledge.
A broad academic collection examining the cultural and political implications of AI in learning systems.
- UNESCO (2021). AI and Education: Guidance for Policy-makers. UNESCO Publishing.
A globally focused guide on how policymakers and educators should frame AI adoption responsibly.