An AI DJ charmed Sydney listeners for months—until they found out she wasn’t human. Here’s the wild story of voices, trust, and deception.
The Curious Calming of Morning Commutes
Imagine this: you’re cruising down Parramatta Road in Sydney, radio humming through static, coffee cup precariously balanced in the cup holder. A confident, magnetic voice fills the car. She’s stylish, witty, upbeat—the kind of host who makes traffic jams survivable. Her name? Thy.
For months, Thy charmed listeners on CADA, a youth-oriented radio station owned by ARN Media. She ran a daily program called Workdays with Thy, spinning tracks and bantering with impeccable timing. Only one problem: Thy wasn’t real. She was an AI host, generated using voice-cloning technology from ElevenLabs, modeled loosely on an employee in ARN’s finance department (Scott, 2025).
And nobody noticed. Not the groggy commuters. Not the media critics. Not the teenagers skipping class.
Thy was code. Thy was a ghost in the machine. Thy was the DJ who wasn’t human.
The Sweet Sound of Silence—or Deception?
The unraveling began when media writer Stephanie Coombes asked the obvious: Who is Thy, really? She had no last name. No social media. No visible career arc. Radio hosts usually have messy histories—open mics, bad selfies, embarrassing tweets. Thy had none.
When pushed, ARN admitted the truth: Thy was an experiment, not a person. Fayed Tohme, a digital director at ARN, had even boasted about the project on LinkedIn—“No mic, no studio, just code and vibes”—though he deleted the post after the backlash (Vice, 2025).
Listeners were stunned. Critics were furious. The discovery ignited debate not just about AI in broadcasting, but about something more primal: trust. If your “favorite DJ” is just software, do you feel duped—or intrigued?
Voices of Outrage and Solidarity
Creative professionals didn’t hesitate to clap back. Teresa Lim, vice president of the Australian Association of Voice Actors, called the move “a lack of transparency leading [listeners] to trust a fake person they think is real” (The Verge, 2025).
Her comments carried a sting. Lim highlighted that Thy’s likeness drew on an Asian-Australian woman—a demographic already underrepresented in broadcasting. For many, this was more than just an AI experiment: it was a theft of opportunity, a hollowing out of representation (Vice, 2025).
Musicians, journalists, and broadcasters began asking the same question: If the voice in our ears isn’t real, what else might the media hide?
Experiments Echoing Across the World
Sydney wasn’t alone in trying this. In Poland, OFF Radio Kraków unveiled AI presenters as part of a bold experiment to appeal to younger listeners. Within a week, public backlash was so intense—bolstered by a petition with over 23,000 signatures—that the station pulled the plug (AP News, 2024).
Meanwhile, in the United States, Oregon’s Live 95.5 introduced “AI Ashley,” a voice-cloned version of beloved host Ashley Elzinga. This time, the human Ashley was in on the joke—she consented to the cloning, and her voice carried on even when she wasn’t in the studio (ABC News, 2023). Still, her peers worried about job losses. Las Vegas radio host Shawn Tempesta put it bluntly: “AI is a powerful tool… but to pretend it’s not going to be a mass extinction event for jobs? You’re fooling yourself.”
Across the globe, AI radio hosts keep popping up like digital mushrooms. Spotify rolled out an AI DJ in 2023, offering curated playlists with commentary in a smooth, artificial voice. Will.i.am launched RAiDiO.FYI, an experimental interactive service where AI hosts banter with listeners about sports, politics, and pop culture (Time, 2024). And companies like Super Hi-Fi and WellSaid Labs introduced “ANDY,” a fully artificial DJ who could handle segues, ads, and weather with humanlike delivery.
These weren’t isolated events—they were ripples of a coming wave.
What’s in a Voice, Anyway?
Let’s step back. Why does this matter?
A radio host isn’t just a playlist delivery system. They’re companionship. They’re the illusion that someone is riding shotgun with you, cracking jokes at red lights and easing you through bad news. A voice is a deeply intimate technology—it bypasses the analytical brain and slips straight into the emotional one.
That’s why the revelation of Thy hit so hard. It wasn’t just a programming trick. It was a challenge to the very nature of authenticity. If the voice soothing your morning stress is synthetic, are you being comforted—or manipulated?
Philosophers of media have long debated this. Jean Baudrillard argued that in an age of simulation, signs and symbols can replace reality itself. Thy wasn’t just a DJ; she was a simulacrum—a symbol of humanity without substance. And yet… listeners loved her. Does affection for an illusion make it real in its own way?
When Ethics Meets Innovation
Of course, AI DJs come with undeniable benefits. They’re cheaper. They don’t get sick. They can talk forever without losing their voice. For smaller stations with limited budgets, this is tempting. Imagine a hyper-local AI DJ trained to talk about your small town, the weather, even your favorite coffee shop specials. That’s the dream.
But when companies conceal the use of AI, trust corrodes. Listeners form parasocial bonds with voices they believe are human. When those bonds are revealed as synthetic, betrayal follows. Transparency isn’t just an ethical choice—it’s a survival strategy.
Hybrid models may offer a middle ground: clear labeling of AI segments, AI assistance for human hosts, or playful co-hosting between real personalities and digital ones. Instead of replacing humanity, AI could augment it. But that requires honesty, clarity, and respect.
Lessons Echoing Beyond the Studio
The Thy experiment is a case study for a bigger cultural shift. We’re entering an era where human and digital blur. Chatbots counsel us. Algorithms write the news. AI voices sell products in call centers. Trust is becoming the most precious commodity.
In Australia, regulators like the ACMA have no hard rules about AI disclosure—yet. But given public backlash and petitions, that could change quickly. Meanwhile, academics are raising alarms about voice bias. A 2025 study found that AI voice models often struggled with non-Western accents, amplifying inequality (Michel et al., 2025). If voices define belonging, then bias in AI voices can deepen exclusion.
The radio is only the first domino. Television, podcasts, film dubbing—all may soon face the same crisis of trust.
The Encore
So what do we make of Thy? On one hand, she was a quirky stunt—a neat trick in an industry scrambling for relevance. On the other, she was a warning shot across the bow of cultural trust.
AI DJs are coming, whether we like it or not. The question is not if, but how. Will they announce themselves proudly, or hide behind the curtain? Will they democratize creativity, or replace it?
As the static hums and the music fades, the question remains: when you tune in tomorrow, will you know whose voice you’re really hearing?
Reference List
- Fike, A. (2025, May 3). Radio station duped audience and secretly used an AI host for six months. VICE.
- An AI-generated radio host in Australia went unnoticed for months. (2025, April 25). The Verge.
- Cuthbertson, A. (2025, April 26). Australian radio station secretly used an AI host for six months. The Independent.
- Polish radio station abandons use of AI ‘presenters’ following outcry. (2024, October 28). Associated Press.
- Ross, T. C. (2024, October 24). AI presenters spark backlash in Poland. Radio World.
- Radio broadcasters sound off on artificial intelligence. (2023, September 4). ABC News.
- Betz, B. (2023, June 18). World’s first AI DJ hits the airwaves in Oregon via RadioGPT. Fox Business.
Additional Reading List
- Oyedokun, I. S. (2023). Effects of Adopting Artificial Intelligence Presenters in Broadcasting on Audience Perception and Gratification of Broadcast Content. ResearchGate. Retrieved from https://www.researchgate.net/publication/372824952_Effects_of_Adopting_Artificial_Intelligence_Presenters_in_Broadcasting_on_Audience_Perception_and_Gratification_of_Broadcast_Content
- Gutiérrez-Caneda, B. (2024). Ethics and journalistic challenges in the age of artificial intelligence integration. Frontiers in Communication. Retrieved from https://www.frontiersin.org/articles/10.3389/fcomm.2024.1465178/full
- Lewis, S. C. (2025). Generative AI and its disruptive challenge to journalism: A conceptual institutionalist perspective. Communication Research Journal. Retrieved from https://link.springer.com/article/10.1007/s44382-025-00008-x
- Amponsah, P., & Atianashie, A. (2024). Navigating the New Frontier: A Comprehensive Review of AI in Journalism. Advances in Journalism and Communication. Retrieved from https://www.scirp.org/journal/paperinformation?paperid=130552
- Chen, Y. (2024). The Ethical Deliberation of Generative AI in Media. Sage Journals. Retrieved from https://journals.sagepub.com/doi/10.1177/27523543241277563
Additional Resources
- Australian Association of Voice Actors – Advocacy group promoting ethical standards for the use of AI voices in Australian media.
- Australian Communications and Media Authority (ACMA) – The national regulatory body for broadcasting in Australia. Accessible at https://www.acma.gov.au
- ElevenLabs – The voice-cloning platform behind “Thy.” Visit https://elevenlabs.com
- Futuri Media / RadioGPT – Platform used for the “AI Ashley” experiment. Explore https://futurimedia.com
- Frontiers in Communication – A peer-reviewed journal publishing media research, including ethical studies of AI. See https://www.frontiersin.org/journals/communication