🎙️ New Episode

Listen to This Week’s Friday Download

134 · AI education bills introduced this spring
31 · States with active AI education legislation
$1M · Paul English’s gift to fund Boston’s AI requirement
$1M · Stanford AIMES grant fund—including for AI skeptics

🎲 The Big Weird
The Paradox at the Center of It All

If you’ve been following the AI and education space this week, you’ve already felt the whiplash. We’re apparently supposed to teach kids to use AI in school… while also confiscating their phones. The same device administrators are locking in a Yondr pouch is the one teachers want students using to run ChatGPT.

Story 01 · The Big Weird

The AI vs. Phone Paradox: Embrace Technology, Ban Technology

EdWeek published a piece on April 13 asking the very reasonable question: can schools resolve this tension? The honest answer: probably not anytime soon, because the policy is moving faster than the philosophy.

We’re in this bizarre moment where two legitimate concerns—digital engagement and distraction-free learning—are colliding head-on. Do we want digitally literate students who can navigate AI responsibly? Or do we want distraction-free classrooms? Right now, we’re trying to have both, and the result is a set of rules that contradict themselves before the bell rings.

It’s chef’s kiss ironic. And nobody has a clean answer yet.

Story 02 · The Big Weird

31 States, 134 Bills, and Zero Consensus

This spring alone, 134 bills related to AI in education have been introduced across 31 states. California wants to ban using student data to train AI models. Oklahoma says AI can only be used under educator supervision with mandatory human review. Other states are proposing completely contradictory approaches—and nobody’s quite agreed on what “AI literacy” even means yet.

It’s like everyone’s building the plane while flying it, except there are 31 different flight manuals and half of them were written in crayon. The pace of legislative activity is real—the coherence, not so much.

✨ Wait… That’s Actually Cool
The Genuinely Good Stuff This Week

Not everything this week was chaos. Buried under the policy noise were three genuinely exciting developments that deserve more attention than they’re getting.

Story 01 · Wait… That’s Actually Cool

Boston’s Bold Graduation Requirement—and the $1M Behind It

Boston Public Schools is about to become the first major urban school district in the U.S. to make AI fluency a graduation requirement. Starting September 2026, graduating from a Boston high school means demonstrating real competency with AI tools.

And here’s the critical part: this isn’t an unfunded mandate. Tech entrepreneur Paul English—co-founder of Kayak—just committed $1 million to make it happen. That money is going toward training one teacher from each of Boston’s roughly two dozen high schools, so the infrastructure is actually being built.

This is significant not because AI is magic, but because we’re finally treating digital literacy as a requirement, not a nice-to-have extra credit project. Kids are growing up in a world where AI is embedded in job applications, healthcare, and daily decision-making. Pretending it doesn’t exist isn’t protecting them—it’s setting them up to fail.

“Pretending AI doesn’t exist isn’t protecting students—it’s setting them up to fail in a world where it’s embedded in everything from job applications to healthcare.”

JR DeLaney · The Friday Download, May 2026

Story 02 · Wait… That’s Actually Cool

Stanford’s Million-Dollar Skeptic Fund

Stanford just launched a $1 million grant program through AI Meets Education at Stanford—AIMES—and they’re specifically inviting proposals from faculty who are skeptical of AI in the classroom.

Let that land for a second. They are funding people who don’t like AI. Not just the evangelists, not just the early adopters—the people who think this whole thing might be a bad idea. Grants go up to $100,000 to build a course or $50,000 to research alternatives.

Most institutional AI initiatives are drowning in rah-rah disrupt-everything energy. Stanford is saying: if you think this is wrong, prove it. That’s how you build responsible innovation—by funding the people trying to poke holes in your assumptions.

Story 03 · Wait… That’s Actually Cool

Rasmussen University: A 125-Year-Old Institution Goes AI-Native

Rasmussen University—a 125-year-old institution with campuses across six states—is ditching Blackboard and switching to D2L Brightspace, going all-in on AI-native tools. They’re prioritizing their nursing programs first, which makes a lot of sense: nursing education is demanding, and anything that can personalize study recommendations or provide instant feedback has real stakes.

The tools rolling out—Lumi for study recommendations, Lumi Tutor for interactive help, Lumi Feedback—aren’t gimmicks. They’re thoughtful integrations designed to support learning, not replace teaching. That distinction matters. And watching a 125-year-old institution make a deliberate, structured pivot toward AI-native tools is its own kind of proof of concept.

🍿 The Tiny Tech Snack
Four Terms Worth Actually Knowing

Quick, digestible explainers to make you sound smarter at your next faculty meeting—or at any meeting, honestly.

Snack 01
AI Fluency

Not about knowing how to code AI. It’s about understanding when to use it, when not to, and how to evaluate its outputs critically.

Why it matters: “I used ChatGPT” isn’t a skill. Knowing when it’s helpful and when it’s garbage? That’s a skill.

Snack 02
AI-Native Tools

Tools built with AI from the ground up—not AI features bolted onto existing software as an afterthought.

Why it matters: AI-native tools tend to work better because they’re designed around what AI is actually good at, not just checking a marketing box.

Snack 03
AI Literacy vs. Digital Literacy

Digital literacy = knowing how to use technology. AI literacy = understanding how AI works, its limitations, and its biases.

Why it matters: You can be digitally literate and still get completely fooled by AI-generated misinformation. We need both.

Snack 04
Human-in-the-Loop

AI makes suggestions, but a human makes the final decision. Oklahoma’s “educator supervision” legislation is exactly this model in practice.

Why it matters: This is the model most educators actually want—AI as assistant, not replacement.

This Week’s Big Picture

The pace is real, the philosophy is lagging. Legislative activity on AI in education is accelerating fast, but the conceptual frameworks for what “AI literacy” actually means are still being built.

Institutional investment is the signal. When a 125-year-old university and a major urban school district both make structural moves in the same week, that’s a trend, not a coincidence.

Skepticism is underrated. Stanford funding its critics isn’t a contradiction—it’s exactly the kind of institutional humility that separates good AI implementation from hype cycles.

Sources

  1. EdWeek. (April 13, 2026). Schools Are Urged to Embrace AI—and Ban Phones. Education Week.
  2. Pursuit. (2026). Latest AI in Education News: Policies and Innovations.
  3. Multistate. (April 8, 2026). AI in Education Legislation: 2026 State Policy Trends.
  4. D2L / Rasmussen University. (April 20, 2026). Platform transition announcement.