Reading Time: 10 minutes

Faculty use AI for productivity while students face bans for “cheating.” We expose the “Shadow AI” habits of academia and the need for radical transparency.


Welcome back, intrepid explorers of the digital frontier.

If you have been following our expedition through the Academic Research & Analysis Series, you know we have spent the last five episodes deep in the trenches. We have mapped the chaotic, innovative, and sometimes terrifying terrain of student AI use. We gave students a voice when no one else would. We dissected the cyborg-like nature of modern writing. We cataloged the vast ecosystem of tools that students use to survive the semester, wrestled with the ethical gray zones of “cheating” versus “collaborating,” and even hooked ourselves up to the electrodes to see if our brains were turning to mush.

But today? Today, we are turning the camera around.

We are leaving the student dorms, with their empty energy drink cans and glowing screens. We are stepping out of the library basement. We are swiping our keycards (or perhaps picking the lock) to enter the most hallowed, mysterious, and contradictory chamber in the entire university: The Faculty Lounge.

Grab your coffee and lower your voice; it’s time to talk about the wizards behind the curtain.

The Open Secret: What’s Really Happening in the Ivory Tower

There is a pervasive, comforting narrative on campus. It goes something like this: Students are the reckless accelerants of AI chaos, pushing boundaries and breaking rules, while Faculty are the stoic, tweed-jacketed guardians of traditional intellect, holding the line against the machine.

It is a compelling story. It fits our archetypes. It is also largely fiction.

While students are sweating bullets every time they submit an essay to Turnitin, fearing that a false positive will derail their academic career, a quiet revolution is happening on the other side of the podium. The data paints a picture of a faculty body that is far more digitally entangled than their syllabus policies might suggest.

According to the Time for Class 2024 report by Tyton Partners (2024), the adoption gap is closing fast. While 59% of students are using generative AI regularly, faculty adoption is trailing but significant, with approximately 40% of instructors now using these tools. But here is the twist: they aren’t just using it to check for plagiarism. They are using it to do their jobs.

The Global AI Faculty Survey 2025 by the Digital Education Council (2025) reveals an even starker reality: 61% of faculty have used AI in their teaching. We aren’t just talking about spell-check here. We are talking about the heavy lifting of academia. They are generating quiz questions, summarizing dense research papers for lecture notes, drafting grant proposals, and creating lesson plans.

In a twist of irony that would make Alanis Morissette pause, a significant number of these educators are using AI to create the very assignments that students are then banned from using AI to complete. This creates a dissonant environment where the tool is simultaneously a “productivity hack” for the master and a “cheating device” for the apprentice.

The Rise of “Shadow AI” Among Faculty

This brings us to a phenomenon that the business world knows well, but academia is just waking up to: Shadow IT.

In the corporate sector, “Shadow IT” refers to employees using unauthorized software to get their work done faster because the official tools are too slow or clunky. In higher education, it has taken on a new, more complex form: Shadow AI.

Faculty members are pressed for time. They are buried under administrative bloat, grant deadlines, and the intense pressure to “publish or perish.” So, they are turning to tools like ChatGPT, Claude, and Gemini to survive the semester. They are using these tools to draft emails to department heads, to outline new courses, and—controversially—to assist in grading.

However, they are often doing so in a vacuum. The Time for Class report notes that only a fraction of institutions have fully developed AI policies that explicitly cover faculty use, leaving the vast majority of professors to navigate this landscape without a map (Tyton Partners, 2024).

This lack of governance creates a dangerous double standard. When a student uses an unapproved AI tool to speed up their work, it is called “academic dishonesty” or “plagiarism.” When a professor does it, it is called “productivity” or “innovation.”

Recent analysis suggests that education IT leaders are finding it increasingly difficult to control this unauthorized use, creating an “arms race” in which institutions feel forced to choose between clamping down and risking total chaos (EdTech Digest, 2025). By operating in the shadows, faculty are missing a critical opportunity to model responsible AI use. Instead of showing students how to navigate these tools ethically, they are teaching—through their silence—that AI is a guilty pleasure, something to be used in private but condemned in public.

The Philosophical Core: The Expertise Reversal Effect

Why do professors feel justified in this double standard? Is it pure hypocrisy? Is it a power trip? Or is there a method to the madness?

To understand this, we have to dig into a concept from educational psychology known as the Expertise Reversal Effect.

This theory, researched extensively by cognitive scientists like Slava Kalyuga, posits that instructional methods that work well for novices can actually be detrimental to experts, and vice versa (Kalyuga, 2007). For a novice learner (the student), the “struggle” of drafting an essay from scratch is essential. It builds the cognitive schema required to understand the subject. If a novice uses AI to bypass this struggle, they suffer from “cognitive offloading”—they never build the mental muscle.

However, the expert (the professor) already has that schema. They have spent decades writing, researching, and thinking. When a professor uses AI to draft a lecture or summarize a meeting, they aren’t bypassing learning; they are accelerating execution. They possess the deep domain knowledge required to instantly spot an AI “hallucination” or a bias, effectively keeping the “human in the loop” (Mollick, 2024).

This is the intellectual defense for the “Rules for Thee, But Not for Me” approach. The argument is that faculty use AI as a force multiplier, while for students, it acts as a crutch.

There is also the “Assessment vs. Production” argument. Students are in the university to be assessed—to prove they can do the work. Faculty are there to be productive—to generate research and manage courses. A student using AI avoids the assessment; a professor using AI enhances the production.

It is a valid pedagogical argument. A master carpenter uses a nail gun; the apprentice must learn to use a hammer. A pilot uses autopilot; the student pilot must learn to fly manually.

But here is where the logic crumbles in 2025: We are no longer just teaching students to write essays or hammer nails. We are teaching them to exist in a world where AI is the baseline. By banning the tool for novices entirely, we aren’t protecting their learning; we are denying them the chance to develop the new expertise they actually need: AI Literacy.

The Business Reality: The Skill Gap We Are Ignoring

If universities continue to treat AI as contraband for students while faculty secretly upskill, they are setting those students up for professional failure. The market has already decided that AI is not a crutch—it is a requirement.

The 2024 Work Trend Index from Microsoft and LinkedIn dropped a bombshell on the “abstinence-only” crowd. The report reveals that 66% of leaders would not hire someone without AI skills (Microsoft & LinkedIn, 2024). Even more striking, 71% of leaders stated they would rather hire a less experienced candidate with AI aptitude than a senior candidate without it.

This is a complete inversion of the traditional value hierarchy. Experience is no longer the only currency; adaptability is king.

Ryan Roslansky, the CEO of LinkedIn, put it bluntly: “The future of work belongs not anymore to the people that have the fanciest degrees or went to the best colleges, but to the people who are adaptable, forward-thinking, ready to learn, and ready to embrace these tools” (Microsoft & LinkedIn, 2024).

If the CEO of LinkedIn is telling us that AI skills outweigh “fancy degrees,” why are we still penalizing students for developing those very skills? We are preparing them for a world that no longer exists. By enforcing an “AI-free” environment for students, universities are arguably committing educational malpractice. They are sending unarmed soldiers into a digital war zone.

The Trust Crisis: Hypocrisy and the Hidden Curriculum

The most damaging aspect of this double standard isn’t the skill gap; it’s the erosion of trust.

Education is built on a social contract. Students agree to do the work, and faculty agree to evaluate it fairly. When a professor uses AI to generate feedback on an assignment—feedback that feels robotic, generic, and oddly polished—students notice. They know the “voice” of ChatGPT just as well as the professor does.

When that same professor fails a student for using AI to polish a rough draft, the lesson learned isn’t about “academic integrity.” The lesson is about power.

This is the Hidden Curriculum: the unspoken values an institution teaches through its actions. Right now, many universities are inadvertently teaching that AI is a tool of privilege, reserved for those who have already “made it.”

We are seeing a rise in what students call “The Asymmetry Problem.” Students are subjected to surveillance—locked browsers, proctored exams, and AI detection software that is notoriously unreliable. Meanwhile, faculty operate with almost total autonomy.

Dr. C. Edward Watson, a leading voice in educational development and co-author of Teaching with AI, warns that we are navigating a space of profound ambiguity. “You must use AI as a starting point in the real world,” Watson notes (Watson, 2024). He emphasizes that ignoring these tools isn’t an option and that the “gotcha” game of detection is a losing battle.

If the people enforcing the rules don’t follow them, the rules lose all legitimacy. We risk creating a generation of students who view academic integrity not as a moral code, but as a game of cat and mouse—one they intend to win.

The Glass Classroom: A Call for Radical Transparency

So, how do we fix this? We don’t need to ban faculty from using AI. We need to pull the curtain back. We need Radical Transparency.

We need to move toward a model of the “Glass Classroom,” where the use of tools is visible, acknowledged, and discussed.

Imagine a syllabus that includes an “AI Disclosure Statement”—not just for the student, but for the professor. It might look something like this:

  • “This syllabus was outlined using Claude 3.5 to ensure all learning objectives were met, but the content was refined by humans.”
  • “These lecture slides were visually enhanced using Midjourney to make the concepts clearer.”
  • “I used ChatGPT to brainstorm these essay prompts, but I verified their logic personally.”

This transparency does two things.

First, it dismantles the Hidden Curriculum that teaches students that AI is a secret weapon for the powerful. It levels the playing field. It says, “I use this tool because it helps me think, and I want to teach you how to use it to help you think, too.”

Second, and more importantly, it models the exact behavior we want students to learn: Attribution and Verification.

If a professor admits they used AI to draft a lecture, they can then show the class how they verified the facts. They can show where the AI hallucinated and how their expertise caught the error. That is a teaching moment. That is education.

Ethan Mollick, a professor at the Wharton School and author of Co-Intelligence, argues that we need to stop viewing AI as an adversary. His philosophy is simple: “Always invite AI to the table” (Mollick, 2024). He suggests we must move past the fear of cheating and embrace the concept of the Centaur—a hybrid worker who combines human intuition with machine speed.

When faculty model responsible, transparent AI use, they stop being hypocrites and start being mentors. They demonstrate the very thing students are desperate to learn: How to remain the “human in the loop.”

Institutional Reform: From Prohibition to Policy

The burden shouldn’t just fall on individual professors. Institutions need to step up.

Currently, university policies are often a patchwork of vague warnings about “unauthorized assistance.” We need policies that are consistent across the board. If AI is banned for students because it “diminishes critical thinking,” universities need to explain why it is acceptable for faculty use in research or administration.

We need Institutional Coherence.

This means developing AI literacy programs not just for students, but for faculty as well. It means creating guidelines that focus on appropriate use rather than blanket prohibition. It means treating students as partners in this transition, rather than suspects.

Some forward-thinking departments are already doing this. They are holding “AI Town Halls” where faculty and students discuss how they are using these tools. They are creating “Living Syllabi” that evolve as the technology changes. They are recognizing that we are all—students and teachers alike—novices in this new age.

Conclusion: Into the Human Element

The transparency problem isn’t just about policy; it’s about trust. If we want to maintain the sanctity of the student-teacher relationship, we have to stop pretending that AI is a contagion that can be contained by a firewall. It is the air we breathe.

By acknowledging their own use of these tools, faculty can shift the conversation from “policing” to “pedagogy.” They can show us how to wield the sword without cutting ourselves.

But this raises a massive, looming question. If AI can write the syllabus, grade the papers, help write the papers, and even predict hiring trends… what is left for us?

If the machine can do the logic, the structure, and the grammar, what is the irreducible “human element” that we are fighting to protect? Is there anything an AI cannot learn?

Join us in Episode 7, where we will leave the digital world behind. We will trek into the territory “Beyond AI’s Reach”—the messy, beautiful cognitive domains where human intelligence remains the only game in town. We’ll explore why AI can write a sonnet but can’t feel heartbreak, and why that difference might just be the future of education.

Until then, keep your prompts sharp, your citations honest, and your faculty lounge doors open.


Synopsis of the Series

  • Episode 1: We established that student voices are missing from the AI debate and that we need to look at “why” they use it, not just “if” they use it.
  • Episode 2: We explored AI as a writing assistant, analyzing the spectrum from helpful spell-check to total ghostwriter.
  • Episode 3: We mapped the ecosystem of tools, from general LLMs to niche homework solvers, and the socioeconomic divides they create.
  • Episode 4: We tackled the ethics of “Academic Integrity,” questioning if our old definitions of cheating still hold up.
  • Episode 5: We examined the cognitive impact, asking if AI is rotting our brains or freeing them for higher thought.
  • Episode 6 (This Post): We exposed the double standards of faculty AI use and called for radical transparency.
  • Episode 7 (Next): We will identify the human skills that AI cannot replicate—creativity, empathy, and physical embodiment.
  • Episode 8: We will conclude with a roadmap for the future, proposing a new framework for AI literacy in education.

References

  • Digital Education Council. (2025). Global AI Faculty Survey 2025: Key Results. Digital Education Council. https://www.digitaleducationcouncil.com
  • EdTech Digest. (2025). Ending The Arms Race: Addressing Shadow AI Use in Higher Education. EdTech Digest.
  • Kalyuga, S. (2007). Expertise reversal effect and its implications for learner-tailored instruction. Educational Psychology Review, 19(4), 509-539.
  • Microsoft & LinkedIn. (2024). 2024 Work Trend Index Annual Report: AI at Work Is Here. Now Comes the Hard Part. Microsoft. https://www.microsoft.com/en-us/worklab/work-trend-index/
  • Mollick, E. (2024). Co-Intelligence: Living and Working with AI. Portfolio/Penguin.
  • Tyton Partners. (2024). Time for Class 2024: Unlocking Access to Effective Digital Teaching & Learning. Lumina Foundation. https://www.luminafoundation.org
  • Watson, C. E. (2024, February). Thinking with and About AI [Audio podcast episode]. In Teaching in Higher Ed. https://teachinginhighered.com/podcast/thinking-with-and-about-ai/

Additional Reading

  1. “Teaching with AI: A Practical Guide to a New Era of Human Learning” by José Antonio Bowen and C. Edward Watson. A handbook that moves beyond the panic to offer concrete strategies for the classroom.
  2. “Co-Intelligence” by Ethan Mollick. An essential read for understanding the “centaur” approach to working alongside AI agents.
  3. “The Wolf in the Ivory Tower” (Op-Ed). A look at how “Shadow IT” has historically plagued universities, from calculators to the internet, and now LLMs.
