I. The Map We’re Still Using

A photograph used to settle arguments. Now it opens them.

This is not a claim about some distant synthetic future. It is a description of the present. Generative AI has crossed a threshold where fabricated text, images, and video are no longer the province of experts with expensive equipment—they are available to anyone with a browser and a prompt. The perceptual shortcuts we built our epistemic lives around have been quietly revoked.

To understand how disorienting this is, it helps to remember how deeply we once trusted those shortcuts. When Nicéphore Niépce captured the first permanent photograph in 1826, and photographic processes spread over the decades that followed, the cultural impact was profound: here, finally, was evidence that didn’t depend on a human hand or a human memory. The camera was a witness without agenda. Over the next century and a half, photographs became the gold standard of proof in courtrooms, in journalism, in science, in history. We built legal frameworks, journalistic ethics codes, and educational epistemologies on the bedrock assumption that captured images bore a reliable relationship to reality. That assumption was never perfect—darkroom manipulation has existed since the darkroom—but it was good enough. The effort required to fake something convincingly served as a natural speed bump on the road to mass deception.

Generative AI removed that speed bump entirely. Not gradually. Overnight.

Picture a courtroom. A video plays. A man is clearly visible committing the act in question. The defense attorney stands up, adjusts her glasses, and says—calmly, confidently—“That’s a deepfake.” She can’t prove it. She doesn’t need to. The seed of doubt, once planted, is enough. That is the world we have entered: not with a bang, not with a manifesto, but with a quiet, unsettling shrug from the technology sector and a dawning collective awareness that something fundamental has shifted.

And nowhere is that shift more consequential, or more under-discussed, than inside the classroom.

When generative AI first muscled its way into schools in late 2022, the dominant conversation was disciplinary. Students were submitting AI-generated essays. Teachers were exasperated. Administrators were scrambling for policies. The instinct was relatable: identify the problem, contain it, enforce your way through it. Call it the Whack-a-Mole theory of educational technology.

But here’s the thing about framing generative AI primarily as a cheating crisis: it’s like describing the invention of the printing press as a forgery problem. Technically true in the narrowest sense. Wildly insufficient as an analysis.

The real disruption isn’t that students can now shortcut their homework. It’s that we’ve stumbled into a new epistemic reality—one where the foundational question isn’t did a student write this, but can anyone verify anything, and who gets to decide? That’s a much bigger frontier. And we’ve barely started exploring it.

II. What’s Actually Happening (It’s Weirder Than You Think)

[Visual 4 · The Liar’s Dividend: How Deepfakes Destabilize Truth. Four-step mechanism: (1) synthetic media becomes culturally normalized; (2) authentic evidence becomes deniable; (3) doubt becomes a deliberate tactic; (4) verification systems become the new battleground. Sources: Chesney & Citron (2019), California Law Review, 107(6), 1753–1820; World Economic Forum (2023), Global Risks Report 2023.]

Let’s talk about the “liar’s dividend,” a concept introduced by legal scholars Robert Chesney and Danielle Citron in their landmark 2019 California Law Review paper. Their argument was chillingly prescient: the danger of synthetic media isn’t only that fake things will be believed. It’s that real things will stop being believable. Once deepfakes become culturally normalized, anyone accused of anything can simply point at the technology and say: that could have been faked. Authentic evidence becomes deniable. Truth becomes a matter of contested provenance rather than observable fact (Chesney & Citron, 2019).

Six years later, that’s not a thought experiment. It’s a legal strategy.

The World Economic Forum identified AI-generated misinformation as one of the top global risks in its 2023 Global Risks Report (World Economic Forum, 2023). The concern wasn’t about individual bad actors producing individual fake videos. It was systemic: a corrosion of the shared epistemic ground that democratic societies depend on. When citizens can no longer reliably assess the authenticity of information, public reasoning doesn’t just get harder—it can unravel.

What makes this moment genuinely new isn’t the existence of fakes. Forgery, propaganda, and manipulated imagery have existed for millennia. What’s new is the democratization of sophisticated deception and the simultaneous collapse of the friction that once made large-scale fabrication prohibitively expensive. The 2016 U.S. election interference campaigns required significant state-level resources to produce disinformation at scale. Today, a motivated teenager with a free account can produce a comparable flood of convincing content over a lunch break. The asymmetry between production and verification has never been wider.

Now zoom in from geopolitics to a high school English classroom in suburban Ohio. Same problem, different stakes.

According to Pew Research Center data from 2023, approximately one in five U.S. teenagers who had heard of ChatGPT reported using it for schoolwork within a year of the tool’s launch (Pew Research Center, 2023). That’s not a niche behavior. That’s a behavioral shift moving faster than any educational technology adoption in recent memory—faster than calculators into math class, faster than Wikipedia into research papers, faster than smartphones into everything. And unlike those earlier disruptions, this one doesn’t just change where students find information. It changes what the word “producing” even means.

[Visual 1 · 1 in 5 U.S. Teens Using ChatGPT for Schoolwork. Source: Pew Research Center (2023), How teens navigate school in the age of AI.]

Schools noticed. Schools responded. Schools reached for the nearest available tool: AI detection software. And here’s where the story gets genuinely, uncomfortably interesting.

A rigorous 2023 study by Weber-Wulff et al., published in the International Journal for Educational Integrity, systematically evaluated a range of leading AI detection tools and found significant error rates across the board—both false positives (flagging genuine student work as AI-generated) and false negatives (missing actual AI output) (Weber-Wulff et al., 2023). The research exposed a structural problem that no amount of software iteration is likely to solve: generation and detection are in an arms race, and generation will always move faster. The moment a detection tool learns to flag a particular stylistic fingerprint, the generative systems producing that fingerprint get updated. Detection is, by design, always chasing.

This creates a deeply uncomfortable situation for educators. You can’t see the problem with the naked eye. You can’t reliably detect it with available software. And the tool you’re relying on to enforce fairness may itself be generating unfair outcomes—penalizing authentic students while missing sophisticated AI use. That’s not a cheating crisis. That’s a verification crisis.
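To see why “failing in both directions” is structural rather than a bug awaiting a patch, consider a toy sketch (invented numbers, not the study’s data): as long as the score distributions for human-written and AI-written text overlap, every detection threshold trades one kind of error for the other.

```python
# Toy illustration (not any real detector): when the score distributions of
# human-written and AI-written text overlap, every threshold trades false
# positives against false negatives. All numbers below are invented.
import random

random.seed(42)

# Hypothetical "AI-likeness" scores in [0, 1]; higher = more AI-like.
human_scores = [min(max(random.gauss(0.35, 0.15), 0.0), 1.0) for _ in range(10_000)]
ai_scores = [min(max(random.gauss(0.65, 0.15), 0.0), 1.0) for _ in range(10_000)]

def error_rates(threshold: float) -> tuple[float, float]:
    """Return (false_positive_rate, false_negative_rate) at a given threshold."""
    fp = sum(s >= threshold for s in human_scores) / len(human_scores)  # real work flagged as AI
    fn = sum(s < threshold for s in ai_scores) / len(ai_scores)         # AI output passing as human
    return fp, fn

for t in (0.4, 0.5, 0.6, 0.7):
    fp, fn = error_rates(t)
    print(f"threshold={t:.1f}  false positives={fp:5.1%}  false negatives={fn:5.1%}")
```

Raise the threshold and fewer real students get flagged, but more AI output slips through; lower it and the reverse happens. And because generative systems can be tuned against any fixed detector, the two distributions keep drifting back together, which is why detection is always chasing.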

[Visual 2 · AI Detection Tools Are Failing in Both Directions: significant false positives (authentic student work flagged as AI-generated) and false negatives (AI output passing undetected), with false positives falling hardest on non-native English speakers and students with atypical writing styles. Source: Weber-Wulff, D., et al. (2023), International Journal for Educational Integrity, 19(1).]

III. Where AI Has Already Moved In

Here’s what doesn’t make the headlines but absolutely should: AI isn’t just in the essays. It’s in the infrastructure.

AI-powered translation tools are supporting multilingual learners in real time, collapsing barriers that previously required dedicated human interpreters. Speech-to-text and text-to-speech systems are enabling meaningful access for students with dyslexia, visual impairments, and motor challenges. Intelligent tutoring platforms—systems like Khanmigo, Carnegie Learning’s MATHia, and DreamBox—are personalizing learning pathways, adjusting difficulty and pacing based on each student’s response patterns in ways no single teacher could replicate across a class of thirty. Automated feedback tools are giving students more revision cycles than any human instructor could manually provide. And administrative AI is already drafting schedules, processing accommodations requests, and flagging at-risk students based on attendance and engagement patterns before a counselor has noticed anything is wrong.

The schoolhouse, in other words, is already partially automated. Most of that automation is beneficial. Some of it is quietly consequential in ways institutions haven’t fully examined. When an algorithm decides a student is “at risk,” what are the training data, the error rates, and the appeals process? When an AI tutoring system determines a student has “mastered” a concept, what does mastery mean to the model? These are live governance questions dressed in the clothing of technical progress.

Sal Khan, founder of Khan Academy, articulated a vision that captured both the scale of the opportunity and the weight of the responsibility. In his widely viewed 2023 TED Talk, Khan described AI as potentially providing every student with something like “a brilliant tutor”—the kind of personalized, patient, adaptive instruction previously available only to the privileged few (Khan, 2023). The analogy he reached for was Aristotle tutoring Alexander the Great: one-on-one, responsive, transformative. It’s a compelling vision. It also depends entirely on the AI being trustworthy, the data being accurate, and the system being governed well.

Series Arc — Trust & Autonomy

Part I (this post): The verification crisis — when perception fails, where does trust go?

Part II: Authenticity as infrastructure — C2PA, provenance standards, and who controls the definition of authentic.

Part III: From tools to actors — what makes AI “agentic” and what it means when AI initiates.

Part IV: When machines act (and fail) — accountability, cascading errors, and the manager-of-machines workforce.

IV. The Philosophical Interlude We Can’t Skip

At some point in any serious conversation about AI and epistemics, someone needs to say the quiet part loud. The verification crisis isn’t just a technical problem. It’s a philosophical one. And the philosophical dimension has implications that outlast any particular technology.

Hannah Arendt, writing in 1971 about the Pentagon Papers, argued that factual truth functions as the foundation of political judgment—that without a shared, stable sense of what actually happened, democratic deliberation becomes impossible (Arendt, 1971). Her concern wasn’t about deception per se. It was about the conditions under which shared reality could be maintained at all. When powerful actors can simply deny facts, she warned, it isn’t that citizens believe the denial—it’s that they become exhausted by the impossibility of verification and retreat into private certainties. Apathy dressed as pragmatism.

Generative AI industrializes the mechanism Arendt feared. It doesn’t require powerful actors. It doesn’t require state resources. It requires a browser and an intention.

The C2PA response—embedding cryptographic provenance into digital artifacts—is, in many ways, a technical answer to Arendt’s political problem: if we can’t maintain shared reality through perception, perhaps we can maintain it through infrastructure. There is genuine merit in this. But the infrastructure solution carries its own philosophical payload.

When verification becomes cryptographic, the citizen’s epistemic autonomy is partially transferred to whatever institution controls the authentication standard. This is not unlike the shift that occurred when mechanical timekeeping replaced solar observation: we gained precision and coordination, but we also surrendered direct relationship with the phenomenon being measured. You no longer know what time it is; you know what time the clock says it is. These are meaningfully different things, even when they happen to agree.

The question for education, then, is not merely “how do we teach students to use AI responsibly?” It is: how do we cultivate epistemic agents who can navigate systems of delegated verification without losing the capacity for independent judgment? That’s a genuinely hard question. It’s also one of the most important educational challenges of the next two decades.

“The goal isn’t to restore naïve visual trust. It’s to cultivate informed skepticism—citizens who can interrogate verification systems, not just consume their outputs.”

V. Risks and Tradeoffs: Let’s Be Honest

This is the section where responsible writers resist the urge to either catastrophize or cheerfully hand-wave. Both moves are lazy. The reality is messier and more interesting.

The risks of generative AI in education are real, layered, and—critically—not evenly distributed. When an AI detection tool incorrectly flags authentic student work as machine-generated, research on language model behavior and human writing diversity suggests that non-native English speakers, students from linguistic minorities, and those with atypical writing styles may face higher false positive rates—precisely because their prose patterns diverge from the training distributions these tools were optimized on. The burden of proof then falls on the student, who must somehow prove that their voice is their own. That’s an epistemically inverted situation. It is also, potentially, a discriminatory one.

Educational advocacy organizations have raised concerns about algorithmic decision-making in academic integrity contexts, noting that error-prone automated systems can reproduce and amplify existing inequities when used to make high-stakes judgments about students. When detection tools fail asymmetrically, the students already carrying the heaviest burdens bear the cost.

Then there’s the provenance question. The C2PA standard—cryptographic metadata embedded in digital files, endorsed by Adobe, Microsoft, Google, and other major players—represents a genuine and important technical advance (C2PA, 2023). But it also represents a meaningful shift in where epistemic authority lives.

[Visual 3 · The Architecture of Trust Has Moved: perception-based trust (distributed; anyone could judge with their own eyes) gives way to provenance-based trust (centralized; platforms authenticate via cryptographic metadata such as C2PA Content Credentials). Risks: power consolidation and system compromise. Sources: C2PA (2023), Content Credentials: Technical Specification; Chesney & Citron (2019); Arendt (1971), Lying in Politics.]

Provenance systems can be gamed. They can be compromised. They can be selectively applied. And if the companies that control those systems make business decisions that affect the integrity of the authentication layer, there may be no independent authority to appeal to. The verification crisis doesn’t disappear when we adopt technical solutions. It relocates—and whoever controls the new location controls the definition of authentic.

None of this means we shouldn’t adopt provenance standards. We absolutely should—with eyes open, with regulatory oversight built alongside the technology, and with students who understand the systems they are trusting rather than merely trusting them.
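What it means for trust to live in infrastructure becomes clearer with a minimal sketch, written in plain Python rather than the actual C2PA toolchain: the artifact travels with a signed manifest, and “authentic” reduces to “a key we already trust signed this, and the bytes still match.” The manifest fields, issuer, and trust arrangement below are invented for illustration.

```python
# Conceptual sketch of provenance-based trust (NOT the C2PA specification).
# An issuer signs a manifest binding an asset's hash to its claimed origin;
# a verifier re-hashes the asset and checks the signature. All names invented.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Issuer side (e.g., a camera vendor or an editing tool) ---
issuer_key = Ed25519PrivateKey.generate()
asset_bytes = b"...raw image bytes..."

manifest = {
    "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    "claimed_source": "ExampleCam Model X",   # hypothetical
    "edit_history": ["capture", "crop"],      # hypothetical
}
payload = json.dumps(manifest, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# --- Verifier side (e.g., a platform or a newsroom) ---
trusted_public_key = issuer_key.public_key()  # in practice: drawn from a trust list

def is_authentic(asset: bytes, manifest: dict, signature: bytes) -> bool:
    """Authenticity here means: the hash matches AND a trusted issuer signed the manifest."""
    if hashlib.sha256(asset).hexdigest() != manifest["asset_sha256"]:
        return False
    try:
        trusted_public_key.verify(signature, json.dumps(manifest, sort_keys=True).encode())
        return True
    except InvalidSignature:
        return False

print(is_authentic(asset_bytes, manifest, signature))        # True
print(is_authentic(b"tampered bytes", manifest, signature))  # False
```

Note where the authority sits: everything hinges on which issuer keys make it onto the trust list. That list, not the image itself, becomes the thing worth controlling.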

[Key figures: roughly 1 in 5 U.S. teens using ChatGPT for schoolwork (Pew Research Center, 2023); significant false positive and false negative rates in AI detection tools (Weber-Wulff et al., 2023); AI-generated misinformation among the top global risks (World Economic Forum, 2023).]

VI. What Teachers Can Do Right Now

Good news: the path forward for educators isn’t waiting for policy or technology to catch up. It’s a pedagogical reframe—and teachers are, historically, very good at those.

The core move is this: shift the locus of assessment from artifact to process. When AI can generate a polished essay in thirty seconds, the polished essay proves very little about the student who submitted it. What AI cannot fake—at least not yet, and not easily—is a student’s demonstrated ability to think, respond, adapt, and explain in real time. Assessment strategies that move toward process documentation, staged drafting, and live conversation are not just AI-resistant. They are, arguably, better measures of learning than single-submission final products ever were.

Here are five concrete approaches already working in classrooms:

  • The AI Interaction Log. Require students to document their AI use the same way a researcher documents methodology. What prompts did you use? What did the AI produce? What did you change, and why? This doesn’t punish AI use—it makes it visible, teachable, and assessable. It also develops a metacognitive habit that transfers directly to professional contexts where AI use will be routine. (A sketch of what a single log entry might contain follows this list.)
  • The Oral Defense. For major written assignments, add a short ten-to-fifteen-minute conversation where students walk through their argument and respond to follow-up questions. If they wrote it—or meaningfully engaged with AI-assisted drafts—they can talk about it. If they didn’t, they can’t. Low-tech, high-validity authentication that also develops oral communication skills most curricula underserve.
  • Process Portfolios. Instead of single-submission assignments, collect multiple drafts with reflective commentary at each stage. AI can produce a draft; it cannot fabricate a student’s genuine intellectual history. The revision arc—where ideas develop, get challenged, and deepen—is precisely what learning looks like.
  • Source Provenance Assignments. Use the C2PA conversation as curriculum. Have students investigate where their sources come from: Who published it? When? What’s the modification history? Can they find the C2PA content credentials? Verification literacy in practice, not just in theory.
  • Redesign the Prompt. “Write an essay about the causes of World War I” can be generated in seconds. “Interview your grandmother about what she remembers of the Cold War, then analyze her account against two primary sources and explain where her memory and the historical record diverge” cannot. Specificity is a natural AI deterrent—and tends to produce more interesting work anyway.
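For teachers who want the interaction log to be more than a free-form journal, here is a minimal sketch of what one structured entry might contain. The field names and example values are hypothetical, not a prescribed format; a shared spreadsheet works just as well.

```python
# A hypothetical structure for one AI interaction log entry. Field names are
# illustrative; the point is that prompts, outputs, and the student's own
# revisions sit side by side and can be discussed in an oral defense.
from dataclasses import dataclass, field, asdict
from datetime import datetime
import json

@dataclass
class AIInteractionEntry:
    assignment: str
    tool: str                      # e.g., "ChatGPT" (student-reported)
    prompt: str                    # what the student asked
    ai_output_summary: str         # what came back, in the student's words
    what_i_kept: str               # which ideas or phrases survived into the draft
    what_i_changed_and_why: str    # the metacognitive piece: the student's reasoning
    timestamp: str = field(default_factory=lambda: datetime.now().isoformat(timespec="minutes"))

entry = AIInteractionEntry(
    assignment="WWI causes essay, draft 2",
    tool="ChatGPT",
    prompt="List historians' competing explanations for the July Crisis.",
    ai_output_summary="Five explanations, one paragraph each, no sources.",
    what_i_kept="The framing of alliance entanglement versus deliberate escalation.",
    what_i_changed_and_why="Replaced unsupported claims with citations from our two assigned primary sources.",
)

print(json.dumps(asdict(entry), indent=2))  # easy to submit alongside the draft
```

The format matters far less than the habit: the log turns AI use into evidence of thinking that a student can defend in conversation, rather than a shortcut that must be hidden.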
[Visual 5 · 5 Things Teachers Can Do Right Now: the AI interaction log, the oral defense, process portfolios, source provenance assignments, and redesigned prompts. Core shift: move the locus of assessment from artifact to process. Framework informed by Weber-Wulff et al. (2023) and Wineburg et al. (2016), Stanford History Education Group.]

Stanford’s History Education Group documented persistent failures in students’ ability to evaluate online sources well before generative AI arrived (Wineburg et al., 2016). The solution they recommended—“lateral reading,” verifying sources by leaving them and checking what others say about them—turns out to be exactly the disposition needed for the provenance era. What AI does is intensify the cost of those failures and accelerate the urgency of repair. The curriculum has needed this upgrade for a decade. Now there’s no more deferring it.

VII. What Leaders Should Be Considering

For principals, superintendents, curriculum directors, and board members: this is a strategic moment, not just a policy moment. The decisions made in the next two to three years will shape institutional credibility and student outcomes for a decade.

Don’t over-index on detection. The Weber-Wulff et al. (2023) findings should be read as a systemic risk disclosure. If your integrity policy relies primarily on AI detection tools, you have built your enforcement architecture on an unreliable foundation. That creates legal exposure when a false positive results in disciplinary action, equity exposure when error rates fall unevenly, and credibility exposure when the tools publicly fail. Treat detection as one signal among many—never as a verdict.

Engage with provenance infrastructure now. C2PA is not yet mandatory, and not yet widely understood. Institutions that develop internal expertise in what content provenance means—and build it into curriculum and assessment design—will be better positioned to prepare students for what comes next.

Reframe professional development. Most current AI PD for educators focuses on using AI tools. What’s needed alongside that is conceptual fluency: how these systems work, where they fail, and what their adoption means for assessment design and equity. That’s a curriculum problem, not a software training problem.

Think long-term about assessment architecture. High-stakes assessments taken at home on unmonitored devices are now essentially unverifiable without additional authentication layers. That’s worth a strategic conversation before it becomes a credibility crisis.

Build adaptive systems, not static policies. Resist the temptation to treat this as a temporary disruption. Generative AI capabilities are improving on a timeline measured in months. Build policies that can evolve and faculty development that is ongoing—not one-time.

VIII. The Forward Horizon

“Seeing is believing” was never a philosophical claim. It was a practical heuristic—a rule of thumb that worked well enough, for long enough, that we mistook it for something more durable. Generative AI didn’t destroy it. It just made the fragility visible, the way a hard winter makes visible the cracks in a foundation that were always there.

What comes next isn’t less trust. It’s differently structured trust—more technical, more infrastructure-dependent, more explicitly governed, and more legible to those who understand how it works. The question for education, and for democracy, is whether we can build the literacy to participate intelligently in that kind of world. Whether we can teach students not just to consume content, but to interrogate its provenance. Not just to produce work, but to account for the processes that generated it. Not just to use AI, but to understand what it’s doing and who benefits when they do.

Dr. Debora Weber-Wulff, professor of media and computing at HTW Berlin and lead author of the landmark detection tools study, has been consistent in her framing: the issue is not any particular technology, but the assumptions we build around it and the speed at which institutions allow those assumptions to harden before they’ve been tested (Weber-Wulff et al., 2023). Systems designed to restore trust can, if poorly governed, simply relocate where the failures occur.

That’s a harder ask than installing a detection tool or writing an AI policy. It requires genuine intellectual humility from leaders, ongoing investment in educator development, and a willingness to hold the current moment as a genuine educational opportunity rather than a threat to be managed. It requires, in short, the same habits of mind we’re trying to develop in students: curiosity about how systems work, skepticism about easy answers, and the patience to follow a question further than the first available response.

“The verification crisis is not the end of truth. It is an invitation to get more rigorous, more honest, and more intentional about how we establish it—and more candid about who gets to decide.”

Education’s role in that project isn’t peripheral. It’s foundational. And classrooms are exactly the right place to begin.

References

  1. Arendt, H. (1971). Lying in politics: Reflections on the Pentagon Papers. The New York Review of Books.
  2. C2PA. (2023). Content credentials: Technical specification v1.3. Coalition for Content Provenance and Authenticity. https://c2pa.org
  3. Chesney, R., & Citron, D. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107(6), 1753–1820.
  4. Khan, S. (2023, April). How AI could save (not destroy) education [Video]. TED Conferences. ted.com
  5. Pew Research Center. (2023). How teens navigate school in the age of AI. pewresearch.org
  6. Weber-Wulff, D., Anohina-Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero-Dib, J., Popoola, O., Šigut, P., & Waddington, L. (2023). Testing of detection tools for AI-generated text. International Journal for Educational Integrity, 19(1). doi.org/10.1007/s40979-023-00146-z
  7. Wineburg, S., McGrew, S., Breakstone, J., & Ortega, T. (2016). Evaluating information: The cornerstone of civic online reasoning. Stanford History Education Group. sheg.stanford.edu
  8. World Economic Forum. (2023). Global risks report 2023. weforum.org
