Authenticity as Infrastructure: Why the Future of Trust Isn’t a Feeling — It’s a Protocol
We’ve built systems that can fake anything. Now we’re building systems that can verify everything. Here’s what cryptographic content provenance, the C2PA standard, and the death of “just Google it” mean for the classroom — and for every educator trying to prepare students for a world where trust has moved from a gut feeling to a technical specification.
The Current Narrative: “Can We Trust Anything Anymore?”
Here is the story playing out in faculty lounges, parent Facebook groups, homeschool co-op newsletters, and district professional development sessions from coast to coast: AI has made everything fake. Students are submitting AI-written essays. Teachers can’t tell what’s real. Images are manufactured. Videos are doctored. Nobody knows who made what, and detection tools — the ones that were supposed to save us — don’t actually work.
That narrative isn’t wrong, exactly. But it’s incomplete in ways that matter enormously for educators. It treats the crisis as purely a problem of deception and stops there, when in reality something far more interesting is happening underneath it. The world is actively building a response. Not a perfect one, not a fast one, but a structural one — and it runs through the infrastructure of the internet itself.
The public perception, understandably, is doom-flavored. Media headlines oscillate between “AI Will End Authenticity as We Know It” and “New Tool Can Detect AI Writing with 99% Accuracy” (spoiler: the second type is almost always wrong). Administrators are drafting AI policies that read like terms of service. Homeschool families are debating whether to ban generative tools entirely. Teachers are toggling between zero-tolerance and hopeful experimentation, often in the same week.
What’s missing from most of these conversations is an understanding of what the technology industry is actually building to address the problem — and how those efforts translate into something educators can both teach and use. Because here’s the thing: the solution to the authenticity crisis isn’t a better detection algorithm. It’s provenance. And provenance is becoming infrastructure.
What’s Actually Happening: From Trusting Content to Trusting Systems
In Part I of this series, we introduced the concept of the “liar’s dividend” — the deeply unsettling idea that awareness of deepfakes may actually help liars, because it gives them a rhetorical tool to dismiss real evidence as fake. As legal scholars Robert Chesney and Danielle Citron wrote in their landmark 2019 California Law Review paper:
“The liar’s dividend flows, perversely, in proportion to success in educating the public about the dangers of deep fakes.”
— Chesney & Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security (2019)

The answer the technology industry has converged on is neither AI detection nor a return to simpler times. It’s cryptographic provenance — a system where content carries verifiable metadata about who made it, when, with what tools, and whether it has been altered since. Think of it less like a lie detector and more like a nutrition label for digital content: not telling you whether to eat it, but giving you the ingredients so you can decide for yourself.
The leading standard for this approach is called C2PA — the Coalition for Content Provenance and Authenticity. C2PA was formed through an alliance between Adobe, Arm, Intel, Microsoft, and Truepic, unifying the efforts of the Adobe-led Content Authenticity Initiative and Project Origin, a Microsoft- and BBC-led initiative that tackles disinformation in the digital news ecosystem (C2PA, 2022). This is not a startup pitch deck. These are the companies that build the cameras, the operating systems, the creative software, and the browsers through which essentially all digital content flows.
Content Credentials make a content item’s history available for anyone to access, at any time (C2PA, 2022). C2PA adds cryptographically signed metadata — called “manifests” — to media files. Any tampering breaks the signature, making modifications detectable. The system uses standard PKI, the same certificate infrastructure behind HTTPS, rather than blockchain (c2pa.wiki, 2025). The parallel to HTTPS is instructive: we don’t think about SSL certificates when we browse the web, but we do notice the padlock in our browser’s address bar. Content Credentials are designed to work the same way — invisible infrastructure surfacing as a simple, scannable signal.
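The tamper-evidence mechanism is worth seeing concretely. The sketch below is a conceptual illustration only: real C2PA manifests are binary structures signed with X.509 certificate keys under the spec’s PKI trust model, not JSON signed with a shared-secret HMAC, and the key, field names, and helper functions here are hypothetical. What the sketch does show faithfully is the core property: the signature binds the claims to a hash of the content, so changing either one breaks verification.

```python
import hashlib
import hmac
import json

# Hypothetical shared key, standing in for a real signer's private key.
SIGNING_KEY = b"demo-signing-key"

def make_manifest(content: bytes, tool: str, author: str) -> dict:
    """Bind provenance claims to the content via a hash, then sign the claims."""
    claims = {
        "claim_generator": tool,
        "author": author,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Any change to the content or to the claims breaks verification."""
    claims = manifest["claims"]
    if hashlib.sha256(content).hexdigest() != claims["content_sha256"]:
        return False  # content was altered after signing
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

photo = b"\x89PNG...original image bytes"
manifest = make_manifest(photo, tool="ExampleCamera 1.0", author="Newsroom")
print(verify_manifest(photo, manifest))            # True: intact
print(verify_manifest(photo + b"edit", manifest))  # False: tampering detected
```

Note what verification does and does not say: a passing check means the bytes and claims are unchanged since signing, not that the claims are true. That distinction is the whole pedagogical point.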
The momentum behind this standard is real. BBC News has implemented Content Credentials, embedding them in its images to verify content provenance and authenticity. OpenAI announced support for Content Credentials for images generated by DALL·E 3. Meta announced plans to build on the C2PA standard across Facebook, Instagram, and Threads. Google joined the C2PA steering committee and is actively working to implement Content Credentials across its products and services (Adobe Blog, 2024).
“We’re working to combat misinformation by advocating for widespread adoption of Content Credentials as an industry standard for establishing trust in all of the digital content that’s being created.”
— Shantanu Narayen, Chair & CEO, Adobe (Adobe Summit, March 2024)

For educators, the shift being described here is not just technical. It is philosophical. We are moving from a world where trust was a feeling — an intuition built on appearance, familiarity, and authority signals — to a world where trust is a system — an infrastructure of verifiable claims, cryptographic signatures, and traceable histories. That shift has profound implications for what we teach students about how knowledge works.
The academic researcher who has perhaps thought longest about the cognitive skills required to navigate this shift is Sam Wineburg of Stanford, co-founder of the Digital Inquiry Group. His research identifies lateral reading — the practice of leaving a source immediately to search for what other trusted sources say about it — as the core skill that expert fact-checkers use and that ordinary readers almost never do. The SIFT framework, developed by his collaborator Mike Caulfield, operationalizes this for learners. Wineburg describes SIFT as standing for “Stop, Investigate, Find a better source, Trace back to the original,” noting that many people skip “stop,” the first step (Wineburg, 2024).
Where AI Is Already Being Used: What the Classroom Looks Like Tomorrow Morning
Let us get specific. Because “cryptographic metadata” sounds like something that happens in server rooms, not schools. What does any of this actually look like when a teacher walks into third period?
The most immediate implication is a new category of media literacy instruction. Not “is this fake?” — that question is increasingly unanswerable in isolation — but “where did this come from, and how do I know?” That reframing changes the classroom activity entirely. Instead of playing “spot the deepfake,” students can learn to check Content Credentials on images, trace a photograph’s chain of edits, and distinguish between content that was signed at the camera and content that arrived with no provenance at all.
The 2025 landscape of AI use in schools is striking.
The 2025 results from Michigan Virtual’s statewide AI educator survey confirm that educators remain both cautious and curious about AI. Trust in AI tools is growing, but actual use appears to be expanding at an even faster rate, suggesting that practice may be outpacing comfort levels (Michigan Virtual AI Statewide Workgroup, 2025). That gap — between what students are doing and what teachers feel equipped to supervise — is exactly where provenance literacy can help. It gives teachers a framework that doesn’t require them to be AI experts. You don’t need to understand a transformer model to teach students to look for a Content Credentials badge. You need to understand why provenance matters.
In practical terms, the classroom implications cluster around four areas:
- Sourcing for research projects: students can be taught to prioritize sources carrying verifiable provenance signals and flag sources that lack them — not as automatically false, but as requiring additional lateral reading.
- Original creative work: when students create images, audio, or video using AI-assisted tools, they can be required to use tools that attach Content Credentials to their work, making their creative process visible and citable.
- Journalism and civics units: the C2PA standard is being adopted by major news organizations precisely because the news industry faces the same crisis classrooms do.
- Digital portfolio work: in a world where employers will increasingly value verifiable portfolios, students who understand how to create provenance-tagged work have a genuine advantage.
Risks and Tradeoffs: Who Verifies the Verifiers?
This is where the series’ most important question arrives: even if we can verify content — who verifies decisions? And before we get there, the philosophical trap door beneath the provenance solution itself: who controls the verification systems?
C2PA’s trust model depends on a Trust List — a registry of certificate authorities authorized to issue Content Credentials. The C2PA established an official Trust List as part of their 2.0 specification, open to any organization meeting defined requirements (c2pa.wiki, 2025). That sounds democratizing. But the steering committee of C2PA is composed of the largest technology companies on earth. The certification infrastructure, while technically open, is practically dominated by organizations with enormous market power.
The power problem: When a system is designed to establish what is “trustworthy,” the entity controlling that system holds extraordinary power. The history of internet standards is a history of open protocols captured and reshaped by concentrated interests.
The stripping problem: Content Credentials can be removed — C2PA proves authenticity when present, rather than preventing removal (c2pa.wiki, 2025). The absence of credentials proves nothing. A legitimate photograph taken with an older camera may simply not have the infrastructure to sign its output.
The access problem: If Content Credentials require camera-level hardware signing or institutional-grade software, then community journalists, documentary filmmakers in under-resourced contexts, and students in under-equipped schools are systematically disadvantaged. A trust infrastructure that tracks institutional access more than truth is not neutral.
For classrooms, the balanced framing is this: provenance standards are a genuine and meaningful improvement over a world with no provenance infrastructure at all. But they are not the end of the critical thinking requirement — they are a new beginning of it. Teaching students that a Content Credentials badge means “verified true” would be as misleading as teaching them that an official-looking logo means “legitimate source.” What the badge means is: this is where this content says it came from, and the signature hasn’t been broken. The evaluation of what that origin means still belongs to a human mind trained to ask the right questions.
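That framing lends itself to a simple decision model worth teaching explicitly: a credential check has at least four distinct outcomes, and none of them is “true” or “false” about the content itself. The sketch below is illustrative decision logic, not the C2PA spec’s validation algorithm; the trust-list names, the `signer` field, and the `evaluate` function are all hypothetical.

```python
from enum import Enum
from typing import Optional

class Provenance(Enum):
    VERIFIED = "intact signature from a signer on the trust list"
    UNTRUSTED = "intact signature, but the signer is not on the trust list"
    TAMPERED = "credentials present, but the signature check failed"
    ABSENT = "no credentials attached; proves nothing either way"

# Hypothetical allow-list for illustration. The real C2PA Trust List is a
# registry of certificate authorities, not a hard-coded set of signer names.
TRUST_LIST = {"Example News Agency", "ExampleCamera Inc."}

def evaluate(manifest: Optional[dict], signature_valid: bool) -> Provenance:
    """Illustrative four-outcome check, not the spec's validation procedure."""
    if manifest is None:
        # Credentials may have been stripped, or the capture device may
        # simply predate signing hardware: absence is not evidence of fakery.
        return Provenance.ABSENT
    if not signature_valid:
        return Provenance.TAMPERED
    if manifest.get("signer") not in TRUST_LIST:
        return Provenance.UNTRUSTED
    return Provenance.VERIFIED

print(evaluate(None, signature_valid=False))  # Provenance.ABSENT
```

Even in the best case, `VERIFIED` only certifies origin and integrity; deciding what that origin means still falls to the reader.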
What Teachers Can Do Now: Five Concrete Steps
The good news is that provenance literacy doesn’t require a curriculum overhaul, a new budget line, or waiting for the district to issue a policy. It requires teachers who understand the shift and can introduce it in ways that feel purposeful rather than tacked-on.
- Incorporate provenance questions into existing sourcing instruction. Add one layer to research assignments: not just “who published this?” but “can I trace where this content was created, and what has happened to it since?” The Content Authenticity Initiative’s verify tool at contentcredentials.org/verify allows students to check images for attached credentials at no cost.
- Teach the SIFT framework explicitly. Sam Wineburg and Mike Caulfield’s four-step process — Stop, Investigate the source, Find better coverage, Trace claims back to the original — is free, grounded in peer-reviewed research on fact-checking, and designed specifically for the current information environment. The Digital Inquiry Group (diginquiry.org) provides free, classroom-ready materials.
- Have students produce provenance-tagged work. Adobe Express, Firefly, and several widely available tools now support Content Credentials for AI-generated content. Requiring students to create AI-assisted work using credentialed tools shifts the academic integrity conversation from “did you use AI?” to “how did you use AI, and can you show me?”
- Build a unit around a real-world case. The BBC’s adoption of Content Credentials, or coverage of a recent election featuring AI-generated content, can anchor a rich discussion about why provenance matters beyond the classroom. Students who understand why the BBC made an institutional decision to tag every image it publishes are engaging with the information ecosystem at a genuinely sophisticated level.
- Model the uncertainty. Teachers who tell students “here’s a tool that tells you if something is AI-generated” are inadvertently teaching false confidence. Teachers who say “here’s what this tool shows us — and here’s what questions it still leaves open” are teaching genuine critical thinking. The discomfort of living with partial information is not a problem to be solved by better tools. It is the fundamental condition of knowledge in a complex world.
What Leaders Should Be Considering
Policy Design
Most existing AI policies are prohibition-focused: lists of what students may not do. A provenance-aware policy would be design-focused: specifying not just whether AI may be used, but which tools may be used and under what conditions of transparency. Requiring that AI-assisted student work be submitted with attached Content Credentials (where the tool supports it) is a policy that doesn’t ban a technology — it builds accountability into the workflow. That is a fundamentally different posture, and one that prepares students for workplaces where AI use is expected to be declared, documented, and traceable.
Professional Development
Research on K–12 teachers finds that knowledge of AI is a robust and substantial predictor of teachers’ trust in AI tools (Nazaretsky et al., 2022). In plain language: teachers who understand more trust AI more appropriately — neither blindly nor reflexively. Professional development that teaches content provenance serves both the institutional interest in responsible AI adoption and the individual teacher’s interest in feeling competent and confident in a changing landscape.
The C2PA specification is progressing toward ISO international standardization, and its adoption at the browser level is under active discussion by the W3C (NSA Cybersecurity, 2025). Leaders who build familiarity with this standard now are positioning their institutions ahead of a transition that is coming regardless of whether any individual school is ready for it.
A Forward-Looking Close: The Bigger Question This Series Is Building To
Part I of this series asked: if perception is unreliable, what replaces it? Part II’s answer is: systems — technical infrastructure designed to make content’s origin traceable and verifiable. But provenance infrastructure, as we’ve seen, is not a neutral technology. It embeds choices about who is trusted, who controls the trust list, and whose content enters the ecosystem with the imprimatur of verification.
That brings us to the question that will anchor Part III: even if we can verify content, what happens when we need to verify decisions? Because increasingly, AI systems are not just generating images and text. They are making choices — scheduling meetings, filtering applications, recommending interventions, routing resources. Those decisions don’t come with Content Credentials. There is no cryptographic manifest for “why did the algorithm flag this student as at-risk?” or “what training data shaped this admissions recommendation?”
What students are learning in classrooms right now about how to evaluate the origin, history, and trustworthiness of digital content is not just a media literacy skill. It is the foundational civic competency for a world where the systems making consequential decisions are themselves opaque, dynamic, and deeply difficult to audit. Teaching students to ask “where did this come from, who built it, what are its incentives, and what questions does it leave unanswered?” — about an image, about a news article, about an algorithm, about a policy — is the same intellectual move every time.
That is the adventure ahead. Not a simple one. Not a finished one. But one that the best educators have always known how to begin: by sitting with students inside a genuinely hard question and refusing to pretend the answer is easier than it is.
References
- Chesney, R., & Citron, D. K. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107, 1753–1819. https://doi.org/10.2139/ssrn.3213954
- Coalition for Content Provenance and Authenticity. (2022). Overview: C2PA. https://c2pa.org
- Coalition for Content Provenance and Authenticity. (2025). Content credentials: C2PA technical specification (Version 2.2). https://spec.c2pa.org
- c2pa.wiki. (2025). Content provenance & authenticity standard. https://c2pa.wiki
- Michigan Virtual AI Statewide Workgroup. (2025). AI in education: A 2025 snapshot of trust, use, and emerging practices. Michigan Virtual. https://michiganvirtual.org/research
- Narayen, S. (2024, March 26). Adobe Summit 2024 keynote [Conference presentation]. Adobe Summit, Las Vegas, NV. As cited in The Karo Startup. https://thekarostartup.com/shantanu-narayen-ceo-of-adobe-discusses/
- Nazaretsky, T., Cukurova, M., & Alexandron, G. (2022). An instrument for measuring teachers’ trust in AI-based educational technology. LAK22: 12th International Learning Analytics and Knowledge Conference. https://dl.acm.org/doi/10.1145/3506860.3506866
- NSA Cybersecurity. (2025, January). Content credentials: Establishing trust in digital content (TLP:CLEAR). U.S. Department of Defense. media.defense.gov
- Wineburg, S., & McGrew, S. (2019). Lateral reading and the nature of expertise. Teachers College Record, 121(11), 1–40. https://doi.org/10.1177/016146811912101102
- Wineburg, S. (2024, October). The high-speed connection between digital literacy and civic engagement. Education Next. educationnext.org
Additional Reading
- Caulfield, M., & Wineburg, S. (2023). Verified: How to think straight, get duped less, and make better decisions about what to believe online. University of Chicago Press.
- Chesney, R., & Citron, D. K. (2023, January 18). All’s clear for deepfakes: Think again. Lawfare. lawfaremedia.org
- Engageli. (2026). 25 AI in education statistics to guide your learning strategy in 2026. engageli.com
- Higher Education Policy Institute. (2025). HEPI student generative AI survey 2025. hepi.ac.uk
- Stanford Digital Education. (2024). Sam Wineburg on Verified. digitaleducation.stanford.edu
Additional Resources
- Coalition for Content Provenance and Authenticity (C2PA) — The official home of the open technical standard. c2pa.org
- Content Authenticity Initiative (Adobe CAI) — Free verification tool at contentcredentials.org. contentauthenticity.org
- Digital Inquiry Group (Sam Wineburg) — Free, research-based K–12 curriculum for lateral reading and SIFT. diginquiry.org
- Michigan Virtual AI Lab — Ongoing research and practitioner resources on AI in education. michiganvirtual.org/research
- NIST AI Safety Institute — U.S. government body developing AI safety frameworks and standards. nist.gov/artificial-intelligence