This is Part II of the Trust & Autonomy series. Part I — “The Death of ‘Seeing Is Believing’” — introduced the deepfake crisis, the liar’s dividend, and the collapse of perceptual trust. This post picks up where that one left off, examining the technical and pedagogical response being built right now.
Sources: Chesney & Citron (2019); C2PA (2022, 2025); Adobe Content Authenticity Initiative (2025)
The Current Narrative: “Can We Trust Anything Anymore?”
Here is the story playing out in faculty lounges, parent Facebook groups, homeschool co-op newsletters, and district professional development sessions from coast to coast: AI has made everything fake. Students are submitting AI-written essays. Teachers can’t tell what’s real. Images are manufactured. Videos are doctored. Nobody knows who made what, and detection tools — the ones that were supposed to save us — don’t actually work.
That narrative isn’t wrong, exactly. But it’s incomplete in ways that matter enormously for educators. It treats the crisis as purely a problem of deception and stops there, when in reality something far more interesting is happening underneath it. The world is actively building a response. Not a perfect one, not a fast one, but a structural one — and it runs through the infrastructure of the internet itself.
The public perception, understandably, is doom-flavored. Media headlines oscillate between “AI Will End Authenticity as We Know It” and “New Tool Can Detect AI Writing with 99% Accuracy” (spoiler: the second type is almost always wrong). Administrators are drafting AI policies that read like terms of service. Homeschool families are debating whether to ban generative tools entirely. Teachers are toggling between zero-tolerance and hopeful experimentation, often in the same week.
What’s missing from most of these conversations is an understanding of what the technology industry is actually building to address the problem — and how those efforts translate into something educators can both teach and use. Because here’s the thing: the solution to the authenticity crisis isn’t a better detection algorithm. It’s provenance. And provenance is becoming infrastructure.
What’s Actually Happening: From Trusting Content to Trusting Systems
In Part I of this series, we introduced the concept of the “liar’s dividend” — the deeply unsettling idea that awareness of deepfakes may actually help liars, because it gives them a rhetorical tool to dismiss real evidence as fake. As legal scholars Robert Chesney and Danielle Citron wrote in their landmark 2019 California Law Review paper:
“The liar’s dividend flows, perversely, in proportion to success in educating the public about the dangers of deep fakes.”
— Chesney & Citron, 2019, California Law Review, 107, 1753–1819
The answer the technology industry has converged on is neither AI detection nor a return to simpler times. It’s cryptographic provenance — a system where content carries verifiable metadata about who made it, when, with what tools, and whether it has been altered since. Think of it less like a lie detector and more like a nutrition label for digital content: not telling you whether to eat it, but giving you the ingredients so you can decide for yourself.
The leading standard for this approach is called C2PA — the Coalition for Content Provenance and Authenticity. C2PA was formed through an alliance between Adobe, Arm, Intel, Microsoft, and Truepic, unifying the efforts of the Adobe-led Content Authenticity Initiative and Project Origin, a Microsoft- and BBC-led initiative that tackles disinformation in the digital news ecosystem (C2PA, 2022). This is not a startup pitch deck. These are the companies that build the cameras, the operating systems, the creative software, and the browsers through which essentially all digital content flows.

Figure 2. The C2PA ecosystem — founding members, adopters, and partner organizations driving the Content Credentials standard.
The mechanics are elegant in concept. Content Credentials function like a nutrition label for digital content, making the content’s history available for anyone to access, at any time (C2PA, 2022). Each time an asset is changed, its existing provenance is preserved, and the new change is appended to that history. Cryptographic signing means that if the content — or its attached credentials — is tampered with, the signature breaks.
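The append-only history described above can be sketched as a hash chain, where each new edit entry commits to the entry before it, so rewriting any earlier step breaks every link after it. This is an illustrative Python sketch, not the actual C2PA manifest format (which uses JUMBF containers and COSE signatures); every name here is hypothetical.

```python
import hashlib
import json

def add_edit(chain: list, action: str, tool: str) -> None:
    """Append a provenance entry that commits to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "genesis"
    entry = {"action": action, "tool": tool, "prev": prev}
    # Hash the entry body; any later change to this entry invalidates it.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)

def chain_intact(chain: list) -> bool:
    """Recompute every hash and link; return False if history was rewritten."""
    prev = "genesis"
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

history = []
add_edit(history, "captured", "camera")
add_edit(history, "cropped", "photo editor")
assert chain_intact(history)

history[0]["tool"] = "image generator"  # attempt to rewrite the origin
assert not chain_intact(history)
```

The point of the structure, not the particular hashes, is what transfers to C2PA: each manifest carries its predecessors, so a clean verification implies the whole recorded history is intact.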
C2PA adds cryptographically signed metadata (“manifests”) to media files containing provenance information. Any tampering breaks the signature, making modifications detectable. It uses standard PKI (like HTTPS certificates), not blockchain (c2pa.wiki, 2025). The parallel to HTTPS is instructive: we don’t think about SSL certificates when we browse the web, but we do notice the padlock in our browser’s address bar. Content Credentials are designed to work the same way — invisible infrastructure surfacing as a simple, scannable signal.
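The tamper-evidence described above can be illustrated with a toy signature. A real C2PA manifest is signed with an X.509 certificate under standard PKI; the sketch below substitutes an HMAC with a demo key purely to show the behavior that matters, namely that any change to the content or to its claims makes verification fail. All names and values are hypothetical.

```python
import hashlib
import hmac

SIGNING_KEY = b"demo-key"  # stand-in for a real private key and certificate

def sign(content: bytes, manifest: dict) -> str:
    """Sign the manifest together with a hash of the content it describes."""
    payload = (
        repr(sorted(manifest.items())).encode()
        + hashlib.sha256(content).digest()
    )
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify(content: bytes, manifest: dict, signature: str) -> bool:
    """Re-derive the signature; a mismatch means content or claims changed."""
    return hmac.compare_digest(sign(content, manifest), signature)

photo = b"...raw image bytes..."
claims = {"creator": "Newsroom", "tool": "Camera", "edit": "crop"}
sig = sign(photo, claims)

assert verify(photo, claims, sig)              # untouched: passes
assert not verify(photo + b"!", claims, sig)   # pixels altered: fails
claims["creator"] = "someone else"
assert not verify(photo, claims, sig)          # claims altered: fails
```

The HTTPS parallel holds here too: the verifier never inspects the content for plausibility, it only checks that the signature still matches what was signed.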
The momentum behind this standard is real. BBC News has implemented Content Credentials. OpenAI announced support for images generated by DALL·E 3. Meta announced plans to build on the C2PA standard across Facebook, Instagram, and Threads. Google joined the C2PA steering committee. Content Credentials are steadily being embedded in the production pipelines of the internet’s largest platforms (Adobe Blog, 2024).
“We’re working to combat misinformation by advocating for widespread adoption of Content Credentials as an industry standard for establishing trust in all of the digital content that’s being created.”
— Shantanu Narayen, Chair & CEO, Adobe (Adobe Summit, March 2024)
For educators, the shift being described here is not just technical. It is philosophical. We are moving from a world where trust was a feeling — an intuition built on appearance, familiarity, and authority signals — to a world where trust is a system — an infrastructure of verifiable claims, cryptographic signatures, and traceable histories.
The academic researcher who has perhaps thought longest about the cognitive skills required to navigate this shift is Sam Wineburg of Stanford, co-founder of the Digital Inquiry Group. His research identifies lateral reading — the practice of leaving a source immediately to search for what other trusted sources say about it — as the core skill that expert fact-checkers use. His SIFT framework operationalizes this for learners. Wineburg describes SIFT as standing for “Stop, Investigate, Find a better source, Trace back to the original,” noting that many people skip “stop,” the first step (Wineburg, 2024).

Figure 3. The SIFT framework — four steps to disciplined verification, developed by Mike Caulfield and Sam Wineburg. In a provenance-enabled world, “Trace back to original” is a feature built into the content itself.
Where AI Is Already Being Used: What the Classroom Looks Like Tomorrow Morning
Let us get specific. Because “cryptographic metadata” sounds like something that happens in server rooms, not schools. What does any of this actually look like when a teacher walks into third period?
The most immediate implication is a new category of media literacy instruction. Not “is this fake?” — that question is increasingly unanswerable in isolation — but “where did this come from, and how do I know?” That reframing changes the classroom activity entirely. Instead of playing “spot the deepfake,” students can learn to check Content Credentials on images, trace a photograph’s chain of edits, and distinguish between content signed at the camera and content with no provenance at all.
This approach also transforms research assignments. The 2025 landscape for student AI use is striking:

Figure 4. AI adoption in education — key 2025 data points. Student adoption outpaces educator comfort levels, creating a governance gap that provenance literacy can help close.
The 2025 results from Michigan Virtual’s statewide AI educator survey confirm that educators remain both cautious and curious about AI. Trust in AI tools is growing, but actual use appears to be expanding at an even faster rate, suggesting that practice may be outpacing comfort levels (Michigan Virtual AI Statewide Workgroup, 2025). That gap — between what students are doing and what teachers feel equipped to supervise — is exactly where provenance literacy can help.
In practical terms, the classroom implications cluster around four areas:
- Sourcing for research projects — Students can be taught to prioritize sources carrying verifiable provenance signals and flag sources that lack them, not as automatically false, but as requiring additional lateral reading.
- Original creative work — When students create images, audio, or video using AI-assisted tools, they can be required to use tools that attach Content Credentials, making their creative process visible and citable.
- Journalism and civics units — The C2PA standard is being adopted by major news organizations precisely because the news industry faces the same crisis that classrooms do. Teaching students how newsrooms are responding gives civics instruction a contemporary anchor.
- Digital portfolio work — In a world where employers will increasingly value verifiable portfolios, students who understand how to create provenance-tagged work have a genuine advantage.
Risks and Tradeoffs: Who Verifies the Verifiers?
This is where the series’ most important question arrives: even if we can verify content, who verifies decisions? But before we get there, we have to confront the philosophical trap door beneath the provenance solution itself: who controls the verification systems?
C2PA’s trust model depends on a Trust List — a registry of certificate authorities authorized to issue Content Credentials. The C2PA established an official Trust List as part of their 2.0 specification, open to any organization meeting defined requirements (c2pa.wiki, 2025). That sounds democratizing. But the steering committee of C2PA is composed of the largest technology companies on earth. The certification infrastructure, while technically open, is practically dominated by organizations with enormous market power.
| ⚠️ The Philosophical Question at the Heart of This Post: When a system is designed to establish what is “trust-worthy,” the entity controlling that system holds extraordinary power. The history of internet standards is a history of open protocols captured and reshaped by concentrated interests. Teaching provenance literacy must come bundled with structural literacy: students need to understand not just how Content Credentials work, but who decides which signers are trusted — and what incentives they carry. |
There is also the stripping problem. Content Credentials can be removed — C2PA proves authenticity when present, rather than preventing removal (c2pa.wiki, 2025). A malicious actor can strip metadata, present content without credentials, and benefit from the resulting ambiguity. The absence of credentials proves nothing — a legitimate photograph taken with an older camera may simply lack the infrastructure to sign its output.
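This asymmetry matters for how verification results get presented to students: a checker can report “valid,” “tampered,” or simply “no credentials,” and that last outcome is a question, not a verdict. A hypothetical sketch of that three-way framing (the trust list and field names are illustrative; this is not the C2PA validator API):

```python
# Hypothetical trust list; real validators consult the official C2PA Trust List.
TRUSTED_SIGNERS = {"Example News Org", "Example Camera Maker"}

def assess(asset: dict) -> str:
    """Map an asset's credentials to an outcome; absence proves nothing."""
    creds = asset.get("credentials")
    if creds is None:
        # Stripped, or never signed: treat as unknown, not as fake.
        return "no credentials: apply lateral reading"
    if not creds.get("signature_valid", False):
        return "tampered: credentials do not match content"
    if creds.get("signer") not in TRUSTED_SIGNERS:
        return "signed, but by an unrecognized signer"
    return "verified provenance"

assert assess({}) == "no credentials: apply lateral reading"
assert assess({"credentials": {"signature_valid": True,
                               "signer": "Example News Org"}}) == "verified provenance"
```

Notice that only one of the four outcomes is a positive verification; the other three all route the student back to human judgment, which is exactly the pedagogical point.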
Then there is the bias embedded in what gets credentialed and what doesn’t. If Content Credentials require camera-level hardware signing or institutional-grade software adoption, then community journalists shooting on older phones, documentary filmmakers in under-resourced contexts, and students in schools without updated equipment are systematically disadvantaged.
For classrooms, the balanced framing is this: provenance standards are a genuine and meaningful improvement over a world with no provenance infrastructure at all. But they are not the end of the critical thinking requirement — they are a new starting point for it. Teaching students that a Content Credentials badge means “verified true” would be as misleading as teaching them that an official-looking logo means “legitimate source.”
What Teachers Can Do Now: Five Concrete Steps
The good news is that provenance literacy doesn’t require a curriculum overhaul, a new budget line, or waiting for the district to issue a policy. It requires teachers who understand the shift and can introduce it purposefully.
- Incorporate provenance questions into existing sourcing instruction. Add one layer to research assignments: not just “who published this?” but “can I trace where this content was created, and what has happened to it since?” The Content Authenticity Initiative’s verify tool at contentcredentials.org/verify allows students to check images for attached credentials at no cost.
- Teach the SIFT framework explicitly. Sam Wineburg and Mike Caulfield’s four-step process — Stop, Investigate the source, Find better coverage, Trace claims back to the original — is free, peer-reviewed, and designed specifically for the current information environment. The Digital Inquiry Group (diginquiry.org) provides free, classroom-ready materials.
- Have students produce provenance-tagged work. Adobe Express, Firefly, and several widely available tools now support Content Credentials for AI-generated content. Requiring students to create AI-assisted work using credentialed tools shifts the academic integrity conversation from “did you use AI?” to “how did you use AI, and can you show me?”
- Build a unit around a real-world case. The BBC’s adoption of Content Credentials, or coverage of a recent election featuring AI-generated content, can anchor a rich discussion about why provenance matters beyond the classroom.
- Model the uncertainty. Teachers who tell students “here’s a tool that tells you if something is AI-generated” are inadvertently teaching false confidence. Teachers who say “here’s what this tool shows us — and here’s what it still leaves open” are teaching genuine critical thinking.
What Leaders Should Be Considering: The Strategic View
For administrators, district leaders, and homeschool families functioning as their own curriculum directors, the provenance shift has two strategic implications that deserve explicit attention.
Policy Design
Most existing AI policies are prohibition-focused: lists of what students may not do. A provenance-aware policy would be design-focused: specifying not just whether AI may be used, but which tools may be used and under what conditions of transparency. Requiring that AI-assisted student work be submitted with attached Content Credentials (where the tool supports it) is a policy that doesn’t ban a technology — it builds accountability into the workflow.
Professional Development
Research on K–12 teachers finds that knowledge of AI is a robust and substantial predictor of teachers’ trust in AI tools (Nazaretsky et al., 2022). In plain language: teachers who understand AI better trust it more appropriately — neither blindly nor reflexively. Professional development that teaches content provenance serves both the institutional interest in responsible AI adoption and the individual teacher’s interest in feeling competent and confident.
The C2PA specification is expected to progress toward ISO international standardization, and its adoption at the browser level is under active discussion by the W3C (NSA Cybersecurity, 2025). Leaders who build familiarity with this standard now are positioning their institutions ahead of a transition that is coming regardless.
A Forward-Looking Close: The Bigger Question This Series Is Building To
Part I of this series asked: if perception is unreliable, what replaces it? Part II’s answer is: systems — technical infrastructure designed to make content’s origin traceable and verifiable. But provenance infrastructure, as we’ve seen, is not a neutral technology. It embeds choices about who is trusted, who controls the trust list, and whose content enters the ecosystem with the imprimatur of verification.
That brings us to the question that will anchor Part III: even if we can verify content, what happens when we need to verify decisions? Because increasingly, AI systems are not just generating images and text. They are making choices — scheduling meetings, filtering applications, recommending interventions, routing resources. Those decisions don’t come with Content Credentials.
What students are learning in classrooms right now about how to evaluate the origin, history, and trustworthiness of digital content is not just a media literacy skill. It is the foundational civic competency for a world where the systems making consequential decisions are themselves opaque, dynamic, and deeply difficult to audit.
Teaching students to ask “where did this come from, who built it, what are its incentives, and what questions does it leave unanswered?” — about an image, about a news article, about an algorithm, about a policy — is the same intellectual move every time. That is the adventure ahead.
Reference List
- Chesney, R., & Citron, D. K. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107, 1753–1819. https://doi.org/10.2139/ssrn.3213954
- Coalition for Content Provenance and Authenticity. (2022). Overview: C2PA. https://c2pa.org
- Coalition for Content Provenance and Authenticity. (2025). Content credentials: C2PA technical specification (Version 2.2). https://spec.c2pa.org
- c2pa.wiki. (2025). Content provenance & authenticity standard. https://c2pa.wiki
- Michigan Virtual AI Statewide Workgroup. (2025). AI in education: A 2025 snapshot of trust, use, and emerging practices. Michigan Virtual. https://michiganvirtual.org/research/publications/ai-in-education-a-2025-snapshot-of-trust-use-and-emerging-practices/
- Narayen, S. (2024, March 26). Adobe Summit 2024 keynote [Conference presentation]. Adobe Summit, Las Vegas, NV. As cited in The Karo Startup. https://thekarostartup.com/shantanu-narayen-ceo-of-adobe-discusses/
- Nazaretsky, T., Cukurova, M., & Alexandron, G. (2022). An instrument for measuring teachers’ trust in AI-based educational technology. LAK22: 12th International Learning Analytics and Knowledge Conference. https://dl.acm.org/doi/10.1145/3506860.3506866
- NSA Cybersecurity. (2025, January). Content credentials: Establishing trust in digital content (TLP:CLEAR Report). U.S. Department of Defense. https://media.defense.gov/2025/Jan/29/2003634788/-1/-1/0/CSI-CONTENT-CREDENTIALS.PDF
- Wineburg, S., & McGrew, S. (2019). Lateral reading and the nature of expertise. Teachers College Record, 121(11), 1–40. https://doi.org/10.1177/016146811912101102
- Wineburg, S. (2024, October). The high-speed connection between digital literacy and civic engagement. Education Next. https://www.educationnext.org/the-high-speed-connection-between-digital-literacy-and-civic-engagement/
Additional Reading
- Caulfield, M., & Wineburg, S. (2023). Verified: How to think straight, get duped less, and make better decisions about what to believe online. University of Chicago Press.
- Chesney, R., & Citron, D. K. (2023, January 18). All’s clear for deepfakes: Think again. Lawfare. https://www.lawfaremedia.org/article/alls-clear-deepfakes-think-again
- Engageli. (2026). 25 AI in education statistics to guide your learning strategy in 2026. https://www.engageli.com/blog/ai-in-education-statistics
- Higher Education Policy Institute. (2025). HEPI student generative AI survey 2025. https://www.hepi.ac.uk
- Stanford Digital Education. (2024). Sam Wineburg on Verified. https://digitaleducation.stanford.edu/book-series/2024/sam-wineburg-verified
Additional Resources
- Coalition for Content Provenance and Authenticity (C2PA) — https://c2pa.org
- Content Authenticity Initiative (Adobe CAI) — https://contentauthenticity.org
- Digital Inquiry Group (formerly Stanford History Education Group) — https://diginquiry.org
- Michigan Virtual AI Lab — https://michiganvirtual.org/research
- NIST AI Safety Institute — https://www.nist.gov/artificial-intelligence