Graduation Requirements, Million-Dollar Bets, and the Great Phone Paradox
Boston just made AI fluency a graduation requirement. Stanford is paying skeptics to prove AI is a bad idea. And somehow, schools are supposed to embrace AI tools while simultaneously banning the phones that run them. This week in AI education was absolutely bonkers—and we’ve got your 10-minute download.
Listen to This Week’s Friday Download
🎲 The Big Weird
The Paradox at the Center of It All
If you’ve been following the AI and education space this week, you’ve already felt the whiplash. We’re apparently supposed to teach kids to use AI in school… while also confiscating their phones. The same device administrators are locking in a Yondr pouch is the one teachers want students using to run ChatGPT.
The AI vs. Phone Paradox: Embrace Technology, Ban Technology
EdWeek published a piece on April 13th asking the very reasonable question: can schools resolve this tension? The honest answer is: probably not anytime soon. Because the policy is moving faster than the philosophy.
We’re in this bizarre moment where two legitimate concerns—digital engagement and distraction-free learning—are colliding head-on. Do we want digitally literate students who can navigate AI responsibly? Or do we want distraction-free classrooms? Right now, we’re trying to have both, and the result is a set of rules that contradict themselves before the bell rings.
It’s chef’s kiss ironic. And nobody has a clean answer yet.
31 States, 134 Bills, and Zero Consensus
This spring alone, 134 bills related to AI in education have been introduced across 31 states. California wants to ban using student data to train AI models. Oklahoma says AI can only be used under educator supervision with mandatory human review. Other states are proposing completely contradictory approaches—and nobody’s quite agreed on what “AI literacy” even means yet.
It’s like everyone’s building the plane while flying it, except there are 31 different flight manuals and half of them were written in crayon. The pace of legislative activity is real—the coherence, not so much.
✨ Wait… That’s Actually Cool
The Genuinely Good Stuff This Week
Not everything this week was chaos. Buried under the policy noise were three genuinely exciting developments that deserve more attention than they’re getting.
Boston’s Bold Graduation Requirement—and the $1M Behind It
Boston Public Schools is about to become the first major urban school district in the U.S. to make AI fluency a graduation requirement. Starting September 2026, graduating from a Boston high school means demonstrating real competency with AI tools.
And here’s the critical part: this isn’t an unfunded mandate. Tech entrepreneur Paul English—co-founder of Kayak—just committed $1 million to make it happen. That money is going toward training one teacher from each of Boston’s roughly two dozen high schools, so the infrastructure is actually being built.
This is significant not because AI is magic, but because we’re finally treating digital literacy as a requirement, not a nice-to-have extra credit project. Kids are growing up in a world where AI is embedded in job applications, healthcare, and daily decision-making. Pretending it doesn’t exist isn’t protecting them—it’s setting them up to fail.
“Pretending AI doesn’t exist isn’t protecting students—it’s setting them up to fail in a world where it’s embedded in everything from job applications to healthcare.”
JR DeLaney · The Friday Download, May 2026
Stanford’s Million-Dollar Skeptic Fund
Stanford just launched a $1 million grant program through AI Meets Education at Stanford—AIMES—and they’re specifically inviting proposals from faculty who are skeptical of AI in the classroom.
Let that land for a second. They are funding people who don’t like AI. Not just the evangelists, not just the early adopters—the people who think this whole thing might be a bad idea. Grants go up to $100,000 to build a course or $50,000 to research alternatives.
Most institutional AI initiatives are drowning in rah-rah disrupt-everything energy. Stanford is saying: if you think this is wrong, prove it. That’s how you build responsible innovation—by funding the people trying to poke holes in your assumptions.
Rasmussen University: A 125-Year-Old Institution Goes AI-Native
Rasmussen University—a 125-year-old institution with campuses across six states—is ditching Blackboard and switching to D2L Brightspace, going all-in on AI-native tools. They’re prioritizing their nursing programs first, which makes a lot of sense: nursing education is demanding, and anything that can personalize study recommendations or provide instant feedback has real stakes.
The tools rolling out—Lumi for study recommendations, Lumi Tutor for interactive help, Lumi Feedback—aren’t gimmicks. They’re thoughtful integrations designed to support learning, not replace teaching. That distinction matters. And watching a 125-year-old institution make a deliberate, structured pivot toward AI-native tools is its own kind of proof of concept.
🍿 The Tiny Tech Snack
Four Terms Worth Actually Knowing
Quick, digestible explainers to make you sound smarter at your next faculty meeting—or at any meeting, honestly.
AI fluency: Not about knowing how to code AI. It’s about understanding when to use it, when not to, and how to evaluate its outputs critically.
AI-native: Tools built with AI from the ground up, not AI features bolted onto existing software as an afterthought.
AI literacy vs. digital literacy: Digital literacy = knowing how to use technology. AI literacy = understanding how AI works, its limitations, and its biases.
Human-in-the-loop: AI makes suggestions, but a human makes the final decision. Oklahoma’s “educator supervision” legislation is exactly this model in practice.
The pace is real, the philosophy is lagging. Legislative activity on AI in education is accelerating fast, but the conceptual frameworks for what “AI literacy” actually means are still being built.
Institutional investment is the signal. When a 125-year-old university and a major urban school district both make structural moves in the same week, that’s a trend, not a coincidence.
Skepticism is underrated. Stanford funding its critics isn’t a contradiction—it’s exactly the kind of institutional humility that separates good AI implementation from hype cycles.
Sources
- EdWeek. (April 13, 2026). Schools Are Urged to Embrace AI—and Ban Phones. Education Week.
- Pursuit. (2026). Latest AI in Education News: Policies and Innovations.
- Multistate. (April 8, 2026). AI in Education Legislation: 2026 State Policy Trends.
- D2L / Rasmussen University. (April 20, 2026). Platform transition announcement.