Deep Dive: AARON and the Origins of Computational Creativity: An Academic Reassessment of the First AI Artist

Reading Time: 18 minutes – Before diffusion models existed, Harold Cohen’s AARON was quietly making art with AI. This is the fifty-year story the tech world forgot.


Editor’s Note

AI Innovations Unleashed has covered hundreds of stories at the intersection of artificial intelligence, policy, and culture. None have resonated the way our original investigation into AARON did. Published in April 2025, The Untold Story of AARON: The AI That’s Been Creating Art for 50 Years quickly became the most-read article in our blog’s history — shared by researchers, cited in discussion threads, and referenced by readers who had never heard of Harold Cohen before clicking that link.

That kind of response carries a responsibility. Developments since that original publication — including the Whitney Museum’s landmark 2024 retrospective, new U.S. Copyright Office guidance on AI-generated works, and a wave of fresh academic research — made it clear that AARON’s story wasn’t finished. Neither was ours.

This expanded investigation is the piece we wish we could have published the first time. We recommend reading the original first to ground yourself in the history, then returning here for the full picture. For those of you who’ve already made that journey — thank you for being the reason this update exists.




The Origin Story Nobody Talks About

Here is a question worth sitting with for a moment: What if the most important AI artist in history isn’t Midjourney, isn’t DALL-E, and isn’t Stable Diffusion — but a piece of software that has been generating original artwork since the Ford administration?

The dominant cultural narrative about artificial intelligence and art goes something like this: around 2022 and 2023, a handful of tech companies released generative image tools, the internet lost its collective mind, artists panicked, copyright lawyers got busy, and the world woke up to the fact that machines could make pictures. It was, depending on your vantage point, either a revolution or a catastrophe. But embedded inside that story is a quiet, inconvenient historical fact: a British-born painter named Harold Cohen had already been doing this — methodically, rigorously, and with extraordinary philosophical depth — since the early 1970s.

His creation was called AARON. It is not an acronym. According to the Whitney Museum of American Art, the name is an allusion to the biblical figure anointed as speaker for his brother Moses — a deliberate nod to questions about how artistic creation is glorified as a form of divine communication (Whitney Museum of American Art, 2024). Cohen, who passed away in 2016 after devoting over four decades to the project, understood his work with AARON to be a collaboration: a lifelong conversation between a human artist’s accumulated knowledge and a machine capable of expressing it in ever-new visual configurations.

In February 2024, the Whitney mounted a full retrospective titled Harold Cohen: AARON, running through May 2024 — the first major American museum exhibition to shine a sustained spotlight on the program. Live plotters drew AARON’s compositions in the gallery, much as Cohen’s machines had in the 1970s. Visitors watched algorithms become artworks in real time. For many of them, it was the first time they had heard of AARON at all.

That gap — between what AARON represents intellectually and how little the mainstream technology conversation has engaged with it — is exactly what this deep dive is designed to close. Because if we are serious about understanding generative AI: where it came from, what it means ethically, how it intersects with law and economics, and whether machines can truly be creative — we have to start here. We have to start with Harold Cohen, a plotter robot, and a rule-based system that dared to ask whether artistic knowledge could be encoded.

Fig. 1 — Harold Cohen and AARON: The Human–Machine Creative Dyad (AI Innovations Unleashed, 2025)

“If what AARON is making is not art, what is it exactly, and in what ways, other than its origin, does it differ from the ‘real thing?’”
— Harold Cohen

“Cohen tried to encode the artistic process and sensibility itself, creating an AI with knowledge of the world.”
— Christiane Paul, Whitney Museum

Encoding the Act of Drawing

Cohen began the work that would become AARON at the University of California San Diego in the late 1960s, after representing Great Britain at the Venice Biennale in 1966 and exhibiting at Documenta in 1964 — credentials that positioned him as one of the most established painters of his generation. He arrived in California and promptly walked away from painting, not because he had lost interest in art, but because he had developed a consuming fascination with a more fundamental question: what is art, and can it be formalized?

From 1973 to 1975, Cohen refined AARON during a residency at Stanford University’s Artificial Intelligence Laboratory, working alongside some of the field’s founding luminaries, including John McCarthy and Ed Feigenbaum (Computer History Museum, 2019). It was in this environment — surrounded by researchers building the theoretical scaffolding of what we now call symbolic AI — that AARON took its definitive shape.

The architecture of AARON was radically different from what we associate with modern generative AI. It did not train on image datasets. It did not perform statistical inference across millions of pixel values. It operated through explicit rule encoding. Cohen spent years formalizing his own understanding of art-making: the rules governing spatial relationships, figure-ground dynamics, enclosure, compositional balance, the way a drawing moves from foreground to background, the way a mark generates the expectation of another. All of this was translated into code.

As the Whitney’s description explains, AARON “combines formal rules — such as starting in the foreground of a drawing and moving to the background — with random events to generate elements like curved lines, straight lines, or closed figures,” with an internal feedback mechanism that evaluates “the success of a composition” (Whitney Museum of American Art, 2024). The program seeded its code with knowledge of external objects — their size, shape, and spatial position — accessible in long-term memory as needed.
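To make that architecture concrete, here is a minimal sketch in Python of the general pattern the Whitney description implies: formal rules (work from foreground to background, scale figures by depth) combined with random events, plus a feedback check on the composition. It is an illustration of the paradigm only, not Cohen’s code, which was written first in C and later in Lisp; every name in it (propose_figure, composition_score, the scoring heuristics) is invented for the example.

```python
import random
from dataclasses import dataclass


@dataclass
class Figure:
    kind: str      # "closed", "curve", or "line"
    x: float       # horizontal position, 0.0 (left) to 1.0 (right)
    depth: float   # 0.0 = foreground, 1.0 = background
    size: float


def propose_figure(depth: float) -> Figure:
    """A formal rule plus a random event: the element type is chosen at
    random, but an encoded rule (distant figures are smaller) shapes it."""
    kind = random.choice(["closed", "curve", "line"])
    size = (1.0 - 0.6 * depth) * random.uniform(0.5, 1.0)
    return Figure(kind=kind, x=random.random(), depth=depth, size=size)


def composition_score(figures: list) -> float:
    """Toy internal feedback: reward horizontal balance, penalize crowding."""
    if not figures:
        return 1.0
    balance = 1.0 - abs(sum(f.x for f in figures) / len(figures) - 0.5)
    crowding = max(0, len(figures) - 8) * 0.1
    return balance - crowding


def draw_composition(steps: int = 12, threshold: float = 0.4) -> list:
    """Work from foreground to background, keeping only those additions
    the feedback rule judges to leave the composition 'successful'."""
    figures = []
    for step in range(steps):
        depth = step / max(steps - 1, 1)          # foreground first
        candidate = propose_figure(depth)
        if composition_score(figures + [candidate]) >= threshold:
            figures.append(candidate)
    return figures


if __name__ == "__main__":
    for f in draw_composition():
        print(f"{f.kind:6s}  x={f.x:.2f}  depth={f.depth:.2f}  size={f.size:.2f}")
```

The point of the sketch is its structure: every element the system places can be traced back to an explicit rule or an explicit roll of the dice, which is the property that most sharply separates this paradigm from statistical image models.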

The early outputs were monochromatic line drawings produced by “turtle” robots: small mechanical devices equipped with markers that physically executed drawing instructions on paper. The 1979 exhibition Drawings at the San Francisco Museum of Modern Art featured one of these turtle robots creating works live in the gallery — a demonstration of machine-made art that was, if anything, more viscerally present than what modern generative tools produce on a screen. Cohen wrote of the system’s early years: “In all its versions prior to 1980, AARON dealt exclusively with internal aspects of human cognition” (as cited in Computer History Museum, 2019).

What makes that statement remarkable is its implication. AARON was not simply generating pleasing shapes. It was modeling the structure of human perceptual and artistic cognition, encoding the cognitive architecture that underlies the act of drawing. This is a very different proposition than the data-driven inference that powers today’s diffusion models — and, as we will see, it carries significant and underappreciated implications for how we think about authorship, creativity, and intellectual property.


Fifty Years of Evolution: From Turtle Robots to the Whitney

One of the most striking things about AARON is the sheer duration of its evolution. This was not a proof-of-concept demonstrated once and shelved. Cohen continued developing the program for the rest of his life, from the early 1970s through 2016, across multiple programming languages (the system migrated from C to Lisp in the early 1990s), multiple output technologies, and multiple aesthetic phases.

By the 1980s, AARON had developed figurative capabilities. It could generate rocks, plants, and human figures, and place them in coherent spatial contexts. The works from this period are dense and colorful: still lifes, lush exterior scenes, figures in bright clothing. Cohen credited the transition to Lisp with enabling the color capabilities that had previously eluded him (Wikipedia, AARON entry, consulted as a secondary source for technical detail). In the 1990s, digital painting machines replaced the turtle plotters, outputting AARON’s images in ink and fabric dye. His final iterations used large-scale inkjet printers on canvas.

In the last years of his life, Cohen returned to a form of physical painting himself — using his fingers on a screen to apply color and texture to AARON’s drawn images, layering intentionality over the algorithm’s output in a recursive loop that blurred the boundary between human and machine contribution (Brooklyn Rail, 2024). In its final iteration before Cohen’s death, the program had looped back to generating line drawings reminiscent of its earliest phase.

What we see across five decades is not a technological artifact frozen in time but a living creative practice — one in which the boundaries between programmer, artist, and algorithm were deliberately and continuously interrogated. Cohen himself described AARON as his “doppelganger” (Studio International, 2024). Not a tool. Not a product. A double.

Fig. 2 — Timeline: AARON’s Evolution, 1966–2024. Fifty years of the world’s longest-running AI art system, from Cohen representing Great Britain at the 1966 Venice Biennale onward. Source: Whitney Museum of American Art (2024); Computer History Museum (2019).

The Symbolic Road Not Taken

To understand why AARON matters in 2025, it helps to understand the paradigm it represented — one that modern AI largely abandoned, and may now be circling back toward.

AARON belongs to the tradition of symbolic artificial intelligence, sometimes called Good Old-Fashioned AI (GOFAI). In this framework, intelligence is modeled through the explicit representation of knowledge — rules, relationships, structured hierarchies of concepts — rather than through statistical pattern recognition over large datasets. The symbolic approach prioritized interpretability. You could, in principle, look at AARON’s code and understand why it made any given compositional decision. The system’s reasoning was transparent by design.
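That transparency is easier to see in miniature. The toy rule engine below is a hypothetical sketch, not anything drawn from AARON itself, but it shows why symbolic systems are auditable by construction: every decision is returned together with the named rule that justified it.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Rule:
    name: str                          # the rule's label, used to explain decisions
    condition: Callable[[dict], bool]  # when the rule applies
    action: str                        # what the system should do


# Hypothetical compositional rules in the GOFAI style: explicit and inspectable.
RULES = [
    Rule("foreground-first", lambda s: s["figures"] == 0,
         "place a large closed figure near the bottom edge"),
    Rule("avoid-crowding", lambda s: s["figures"] >= 8,
         "stop adding figures to this region"),
    Rule("fill-background", lambda s: 0 < s["figures"] < 8,
         "add a smaller figure behind the existing ones"),
]


def decide(state: dict) -> tuple:
    """Return both the chosen action and the rule that justified it:
    the 'why' of every decision is recoverable by construction."""
    for rule in RULES:
        if rule.condition(state):
            return rule.action, rule.name
    return "do nothing", "no rule matched"


action, reason = decide({"figures": 3})
print(f"action: {action}  (fired rule: {reason})")
```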

The deep learning revolution that accelerated through the 2010s largely displaced this tradition. Neural networks, trained on enormous corpora, proved dramatically more capable at tasks like image recognition and language modeling than hand-crafted symbolic systems. But they brought a significant trade-off: opacity. A modern large language model or image diffusion system operates through billions of learned parameters that resist human-interpretable explanation. The decision-making is distributed across a vast numerical substrate that no single person can fully trace or articulate.

This opacity has become a live concern in AI governance, creative industries, and intellectual property law alike. We will return to the legal implications in a moment. But it is worth pausing here to note that AARON’s symbolic architecture also gave it a clean ethical profile in an area where modern systems are embattled: training data.

AARON’s generative capacity did not derive from mass extraction of existing artworks. Cohen did not feed the system millions of paintings and ask it to statistically interpolate between them. He encoded his own knowledge, his own artistic cognition, into the system’s rule structures. There was no cultural scraping, no ingestion of artists’ work without consent, no derivative inference from unlicensed corpora. AARON’s creative base was Harold Cohen — and nothing else.

This distinction has become central to the most contentious legal battles in contemporary AI. When artists sue generative AI companies for training on their work without permission, the AARON paradigm represents a historical counterexample: proof that powerful, exhibition-worthy generative art was achievable through a methodology that raised none of these concerns.


The Economic Stakes: From Galleries to Trillions

AARON spent most of its operational life in galleries and academic circles. The economic footprint of its outputs was, by any commercial measure, modest. The landscape it now inhabits is categorically different.

According to McKinsey & Company’s landmark 2023 report, The Economic Potential of Generative AI: The Next Productivity Frontier, generative AI could contribute between $2.6 trillion and $4.4 trillion annually to the global economy across use cases in software engineering, customer operations, marketing, and research and development (McKinsey & Company, 2023). That is a range comparable to the entire GDP of the United Kingdom, injected into the global economy each year through machine-generated content.

Fig. 3 — Generative AI’s Economic Impact
  • $2.6T–$4.4T: projected annual economic contribution of generative AI (McKinsey & Company, 2023)
  • 50+ years: the span over which AARON validated machine-generated creative content
  • >10,000: public comments submitted to the U.S. Copyright Office’s AI initiative
  • 1972: first LACMA exhibition of AARON’s works
Source: McKinsey & Company (2023), The Economic Potential of Generative AI.

The creative industries sit squarely within this projection. Visual art, marketing design, video production, game asset generation, and architectural visualization are among the sectors already being restructured by generative tools. Studios are experimenting with AI-generated concept art. Advertising agencies are integrating text-to-image pipelines into campaigns. The economic logic is compelling: faster, cheaper, at scale.

The intellectual premise underlying all of this economic transformation — that machines can generate novel, contextually appropriate, aesthetically functional content — was experimentally validated by AARON in gallery settings decades before the venture capital arrived. Cohen’s work demonstrated, through sustained practice rather than theoretical argument, that machine-generated visual content could satisfy sophisticated aesthetic judgment. The modern generative economy did not invent that premise. It industrialized it.

Christiane Paul, Curator of Digital Art at the Whitney Museum, articulated this connection precisely when describing the 2024 exhibition: “Harold Cohen’s AARON has iconic status in digital art history, but the recent rise of AI artmaking tools has made it even more relevant. Cohen’s software provides us with a different perspective on image making with AI. What makes AARON so remarkable is that Cohen tried to encode the artistic process and sensibility itself, creating an AI with knowledge of the world that tries to represent it in ever-new freehand line drawings and paintings” (as cited in GothamToGo, 2024).

Paul’s framing points to something the economic projections tend to skip over: that generative AI is not simply a content-production efficiency tool. It is an ongoing experiment in what kinds of knowledge can be encoded, and what kinds of output that encoding can produce. AARON was always that experiment. It just ran in a different era, with smaller hardware and no venture backing.


Who Owns What a Machine Makes? The Copyright Earthquake

If the economic stakes are staggering, the legal terrain is rapidly shifting beneath them. The past two years have produced an extraordinary volume of regulatory activity around AI-generated content and intellectual property — activity that AARON’s history illuminates in ways that purely technical analysis cannot.

In March 2023, the U.S. Copyright Office issued formal policy guidance confirming its longstanding position: copyright protection requires human authorship, and works generated solely by AI are not eligible for registration (U.S. Copyright Office, 2023). The policy statement, effective March 16, 2023, was clear: “It is well-established that copyright can protect only material that is the product of human creativity.” The Office would examine AI-generated works on a case-by-case basis, but the baseline was unambiguous — a prompt is not authorship, and an AI system is not an author.

In January 2025, the Copyright Office reinforced this position in Part 2 of its Copyright and Artificial Intelligence report, stating that “given current generally available technology, prompts alone do not provide sufficient human control to make users of an AI system the authors of the output” (U.S. Copyright Office, 2025). The principle has been upheld by federal courts as well: a 2023 U.S. District Court ruling affirmed that “human authorship is a bedrock requirement of copyright,” finding that copyright “has never stretched so far as to protect works generated by new forms of technology operating absent any guiding human hand” (as cited in Congress.gov, 2024).

Fig. 4 — The Human Authorship Spectrum in AI-Generated Art, ranging from no protection to maximum protection, per U.S. Copyright Office policy (March 2023) and the Part 2 Copyrightability Report (January 2025). Source: U.S. Copyright Office (2023, 2025); Congress.gov (2024).

The question of how these principles apply to AARON is genuinely complex — and that complexity is instructive. Cohen encoded the system. He defined every rule, every constraint, every decision boundary. AARON’s generative capacity was entirely downstream of Cohen’s intellectual labor. In that sense, AARON’s outputs could be understood as the product of Cohen’s creative control, mediated through a procedural system he authored. This is structurally very different from a user typing a three-word prompt into Midjourney and receiving an image that was generated by inference over millions of scraped artworks.

The U.S. Copyright Office’s own framework acknowledges these distinctions matter. The current guidance allows copyright protection where a human provides “creative input or control” over the final expression (U.S. Copyright Office, 2023). Under that rubric, Cohen’s dense encoding of artistic knowledge across decades of iterative development looks significantly more like creative control than a text prompt does.

The Supreme Court may yet have the final word on all of this: as of early 2026, a petition from Stephen Thaler seeking Supreme Court review of the human-authorship requirement in AI-generated works remains a possibility, and a ruling would reshape the entire landscape (IP.com, 2025). The outcome will determine, among other things, whether AI companies can hold copyright in their systems’ outputs — and whether the concept of machine authorship will ever receive legal recognition in the United States.

Harold Cohen never sought to resolve this question legally. But he posed it philosophically with characteristic precision. His documented challenge to critics is one of the most elegant formulations in the entire debate: “If what AARON is making is not art, what is it exactly, and in what ways, other than its origin, does it differ from the ‘real thing?’ If it is not thinking, what exactly is it doing?” (Cohen, The Further Exploits of AARON, Painter, as cited in the Wikipedia entry on AARON, 2024).

That question has not aged a day.


The Philosophical Fault Line: Is the Machine Creative, or Is It Just Following Orders?

Let us stay in that philosophical territory for a moment, because it is where the most interesting and unresolved thinking happens — and because it connects directly to the ethical stakes of how we build and deploy generative systems today.

The core debate can be framed simply: when AARON generates a painting, who — or what — is being creative?

Position one: The creativity is entirely Cohen’s. He encoded the rules. He designed the decision spaces. He tested and refined the system over decades. AARON is a very sophisticated paintbrush, and Cohen is the artist. The machine is not creating; it is executing.

This is essentially the position articulated by Aaron Hertzmann, Principal Research Scientist at Adobe and one of the field’s most rigorous academic voices on this question. In his widely cited 2018 paper “Can Computers Create Art?” — published in the peer-reviewed journal Arts and presented at TEDx — Hertzmann argues that art is fundamentally a product of social agents, and that computers cannot be credited with authorship “in our current understanding” (Hertzmann, 2018). His reasoning is precise: “Creative and intelligent people write software that creates art; the software itself is not intelligent or creative” (Hertzmann, 2018). Hertzmann’s framework does not diminish the interest or value of systems like AARON — it locates their creativity in the human who built them.

Position two: Something genuinely novel is happening in the generative act itself, something that was not fully specified by the programmer and cannot be fully attributed to them. Cohen encoded constraints, not outcomes. AARON’s specific compositional decisions within those constraints were not predetermined. The system explored a possibility space that Cohen defined but could not exhaustively inhabit. Each drawing was genuinely new, even to its maker.

This view aligns with cognitive scientist Margaret Boden’s framework of “exploratory creativity,” developed across decades of work in cognitive science and AI. Boden (2004) describes this type of creativity as the generation of novelty through systematic traversal of structured conceptual spaces — a process that can, she argues, be meaningfully attributed to computational systems. By Boden’s criteria, AARON’s outputs are not merely executions of pre-specified instructions. They are explorations of a structured but open-ended domain, generating configurations that constitute genuine novelty within that domain.
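Boden’s notion can be made concrete with a small sketch: the author specifies a structured space, here a toy grammar of shapes, positions, and colors plus one constraint, and the system generates valid configurations the author never individually wrote down. The grammar, the constraint, and the function names are all invented for illustration; the point is only that even a tiny constrained space contains tens of thousands of compositions its designer could not have enumerated.

```python
import itertools
import random

# A toy conceptual space: the designer specifies parts and a constraint,
# not individual outcomes. All of this is invented for illustration.
SHAPES = ["closed figure", "open curve", "zigzag line"]
POSITIONS = ["foreground", "middle ground", "background"]
PALETTES = ["ochre", "cobalt", "vermilion", "viridian"]


def is_valid(composition) -> bool:
    """Designer-specified constraint: the background never takes the boldest color."""
    return not any(pos == "background" and color == "vermilion"
                   for _shape, pos, color in composition)


def explore(n_elements=3, seed=None):
    """Traverse the structured space by sampling until a valid composition appears."""
    rng = random.Random(seed)
    while True:
        composition = [(rng.choice(SHAPES), rng.choice(POSITIONS), rng.choice(PALETTES))
                       for _ in range(n_elements)]
        if is_valid(composition):
            return composition


# Count every valid three-element composition the designer never wrote down.
elements = list(itertools.product(SHAPES, POSITIONS, PALETTES))
space_size = sum(1 for combo in itertools.product(elements, repeat=3)
                 if is_valid(combo))
print(f"valid three-element compositions: {space_size}")  # 35,937 of 46,656
print(explore(seed=7))
```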

Neither position is obviously wrong, and the tension between them is not merely academic. It shapes how we answer the questions that now face courts, policymakers, and creative professionals: Who owns what a machine makes? Who is responsible for the machine’s outputs? And — most profoundly — does the machine’s lack of conscious experience disqualify it from participating in something we call creativity?

The phenomenological objection is worth engaging directly. Critics argue that genuine creativity requires intentionality, subjective experience, emotional investment — qualities that AARON clearly lacks. Cohen agreed, at least partially. By all documented accounts, he was careful not to claim that AARON was creative in the full human sense. But his challenge to critics, quoted above, reveals where he located the real difficulty: not in what AARON lacks, but in what its outputs demonstrably are. If the outputs produce an aesthetic response in human observers — if they move people, provoke reactions, earn institutional recognition — on what principled basis do we deny the process that generated them a place in the creative ecology?

This question resonates differently in 2025 than it did in 1985. We now live in a world where AI-generated images have won art competitions, where galleries are integrating machine-generated works into their programs, and where the economic infrastructure of creative production is being restructured around generative tools. The philosophical question of machine creativity has ceased to be an interesting thought experiment and become a live governance problem.


Neuro-Symbolic AI and the Return of Rules

There is a striking irony in the current trajectory of AI research: having largely abandoned symbolic approaches in favor of deep learning through the 2010s, the field is now investing significantly in what researchers call neuro-symbolic AI — systems that integrate the statistical pattern recognition of neural networks with the explicit rule-based reasoning of symbolic architectures.

The motivation is precisely the trade-off identified earlier: interpretability. Large neural networks are extraordinarily capable but extraordinarily opaque. They cannot explain their outputs. They cannot be audited against principled rules. In high-stakes domains — medicine, law, autonomous systems, creative industries where attribution matters — this opacity is increasingly unacceptable.

AARON, seen through this lens, is not a relic. It is a precedent and a model. The explicit encoding of domain knowledge, the transparent rule structures, the ability to inspect and understand why the system made a given decision — these are the properties that neuro-symbolic research is working to recover and integrate with the raw capability of modern deep learning.
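What that integration might look like, in the most schematic terms, is sketched below: a symbolic proposer generates candidates that each carry a human-readable trace of the rules that produced them, and a learned scorer, stubbed here with a placeholder that returns random numbers rather than a trained model, selects among them. Everything in the sketch (symbolic_generator, learned_scorer, the rule names) is hypothetical; it illustrates the division of labor neuro-symbolic research aims at, not any particular system.

```python
import random
from dataclasses import dataclass, field


@dataclass
class Candidate:
    description: str
    rule_trace: list = field(default_factory=list)  # symbolic side: an auditable record


def symbolic_generator(n: int = 5) -> list:
    """Rule-based proposer: every candidate carries its own explanation."""
    candidates = []
    for i in range(n):
        motif = random.choice(["figure group", "still life", "plant form"])
        candidates.append(Candidate(
            description=f"composition {i}: {motif}, foreground-first layout",
            rule_trace=["foreground-first", f"motif:{motif}"]))
    return candidates


def learned_scorer(candidate: Candidate) -> float:
    """Stand-in for the neural half: a real hybrid would score candidates with a
    trained model; this stub returns noise so the sketch stays self-contained."""
    return random.random()


def choose(candidates: list) -> Candidate:
    """The hybrid loop: learned scoring selects, the symbolic trace explains."""
    best = max(candidates, key=learned_scorer)
    print("chosen:", best.description)
    print("because rules fired:", " -> ".join(best.rule_trace))
    return best


if __name__ == "__main__":
    choose(symbolic_generator())
```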

Cohen’s approach was also notable for its ethical design discipline. He built a system whose generative basis was his own artistic knowledge, not a mass extraction of others’ labor. In an era when the training pipelines of major generative AI systems are being scrutinized for copyright infringement — ongoing lawsuits from artists, illustrators, photographers, and publishers against AI companies reflect a genuine structural problem with how these systems were built — the AARON methodology offers a historically validated alternative.

None of this is to suggest that the symbolic approach would scale to the commercial applications driving the $4.4 trillion economic projection. It would not, at least not in its original form. What it does suggest is that the intellectual lineage running from Cohen’s Stanford laboratory through today’s neuro-symbolic research agenda is direct and underappreciated. The questions AARON asked about encoding artistic knowledge have not been superseded by deep learning. They have been deferred.


Institutional Recognition and the Legitimacy Question

AARON’s exhibition history is, in retrospect, remarkable. Works were shown at the Los Angeles County Museum of Art as early as 1972 — the year before the Stanford residency began (Whitney Museum of American Art, 2024). They appeared at the San Francisco Museum of Modern Art in 1979. An early article in Computer Answers documented AARON running on a DEC VAX 750 minicomputer and described works exhibited at the Tate Gallery in London (as cited in the Wikipedia entry on AARON). The 2024 Whitney retrospective stands as the most prominent recent institutional recognition, but it consolidates a legitimacy that major art institutions had been conferring for over fifty years.

This institutional validation is not incidental. It connects to one of the core debates in the philosophy of art: whether artistic value is intrinsic to objects and experiences, or whether it is socially constructed through institutional processes of recognition, exhibition, criticism, and acquisition. The Whitney’s acquisition of multiple AARON works in 2023 — purchased with funds from the Digital Art Committee — represents an institutional affirmation that the program’s outputs have art-historical standing, not merely technological interest (Whitney Museum of American Art, 2024).

For educators, that standing offers an unusually productive entry point into AI literacy. AARON demonstrates, without mystification or hype, what AI actually does: encode knowledge, apply rules, generate outputs within structured possibility spaces. Understanding AARON requires no background in machine learning theory. Its architecture is teachable. Its philosophical implications are accessible. And its history spans a long enough arc to situate the current moment within a coherent intellectual narrative rather than treating it as unprecedented and inexplicable.

For policymakers, AARON illuminates something specific and urgent: not all generative AI systems are architecturally equivalent, and regulatory frameworks built solely around data-hungry neural networks may be inadequate for the full range of systems being deployed. A policy environment that fails to distinguish between systems that derive generative capacity from mass data extraction and systems that encode explicit human knowledge will produce incentive structures that inadvertently penalize more ethically designed approaches.


What AARON Teaches Us About the Future of Generative AI

Let us close by bringing the threads together.

Generative AI is not new. The questions it poses are not new. The challenges it raises for authorship, creativity, economic value, and institutional legitimacy were posed — with extraordinary clarity and philosophical seriousness — by one painter in California, working through a series of rule-based programs on increasingly capable hardware, across five decades of continuous practice.

What is new is scale. The reach of modern generative systems is global; their economic impact is measured in trillions; their outputs are embedded in advertising, entertainment, education, and design at a level that Harold Cohen and his plotter robots never approached. The acceleration of adoption has dramatically compressed the time available for the kind of careful philosophical and regulatory thinking that AARON’s long development trajectory afforded.

Fig. 5 — The Central Philosophical Question: Is Machine Creativity Real, or Merely Convincing? (Unresolved; an ongoing debate.)

“If what AARON is making is not art, what is it exactly, and in what ways, other than its origin, does it differ from the ‘real thing?’ If it is not thinking, what exactly is it doing?”
— Harold Cohen, The Further Exploits of AARON, Painter

Position 1: Tool, not author (Aaron Hertzmann, Principal Research Scientist, Adobe). The creativity resides entirely in the human programmer. AARON executes; it does not imagine. Cohen made every decision about what the system could and could not do, and the program is an instrument of Cohen’s intent, not an independent agent. “Creative and intelligent people write software that creates art; the software itself is not intelligent or creative” (Hertzmann, 2018, Arts, 7(2), 18).

Position 2: Exploratory creativity (Margaret Boden, cognitive scientist, University of Sussex). AARON explores a structured conceptual space that Cohen defined but could not exhaustively inhabit. Each output is genuinely novel — even to its maker. The system traverses possibility-space in ways its author never anticipated, which satisfies the core definition of exploratory creativity: the generation of novelty through systematic traversal of structured conceptual spaces (Boden, 2004, The Creative Mind: Myths and Mechanisms, 2nd ed.).

Legal implication: Cohen’s rule-encoding likely qualifies as “creative control” under U.S. Copyright Office guidance — structurally distinct from prompt-based generation.
Policy implication: Regulatory frameworks must distinguish between data-extraction AI and knowledge-encoding AI like AARON; they are fundamentally different systems.
Neither position fully satisfies; the question remains an ongoing governance problem.

Sources: Hertzmann (2018), Arts 7(2); Boden (2004), The Creative Mind; U.S. Copyright Office (2023, 2025).

That compression is dangerous. When we treat generative AI as a phenomenon born in 2022, we strip it of historical context that would make it legible. We lose the intellectual frameworks that earlier thinkers developed for navigating exactly the terrain we now face. We reinvent debates that were already conducted, and miss the hard-won insights that resulted.

The answer is not nostalgia for symbolic AI or skepticism about neural approaches. It is historiographical seriousness: a commitment to understanding that the current moment in AI has a history, and that history is full of people who asked exactly the right questions with extraordinary rigor.

Harold Cohen was one of them. AARON was his answer — incomplete, provisional, generative in the best sense. Not a solution, but a sustained, disciplined, decades-long inquiry into the nature of artistic knowledge, the possibility of machine creativity, and the question of what it means to collaborate with something that is not quite a tool and not quite an artist.

In 2024, the Whitney Museum gave that inquiry the institutional recognition it deserved. The rest of us should catch up.


REFERENCE LIST

  • Boden, M. A. (2004). The creative mind: Myths and mechanisms (2nd ed.). Routledge.
  • Brooklyn Rail. (2024, April). Harold Cohen: AARON. https://brooklynrail.org/2024/04/artseen/Harold-Cohen-AARON/
  • Computer History Museum. (2019). Harold Cohen and AARON — A 40-year collaboration. https://computerhistory.org/blog/harold-cohen-and-aaron-a-40-year-collaboration/
  • GothamToGo. (2024, January 27). The Whitney Museum to showcase first AI artmaking software created by artist Harold Cohen. https://gothamtogo.com/the-whitney-museum-to-showcase-first-ai-artmaking-software-created-by-artist-harold-cohen/
  • Hertzmann, A. (2018). Can computers create art? Arts, 7(2), 18. https://doi.org/10.3390/arts7020018
  • IP.com. (2025, November 3). AI authorship heads to the U.S. Supreme Court: Can machines hold copyright? https://ip.com/blog/ai-authorship-heads-to-the-u-s-supreme-court-can-machines-hold-copyright/
  • McKinsey & Company. (2023). The economic potential of generative AI: The next productivity frontier. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier
  • Studio International. (2024). Harold Cohen: AARON. https://www.studiointernational.com/index.php/harold-cohen-aaron-review-whitney-museum-of-american-art
  • U.S. Copyright Office. (2023, March 16). Copyright registration guidance: Works containing material generated by artificial intelligence (88 Fed. Reg. 16190). https://www.copyright.gov/ai/ai_policy_guidance.pdf
  • U.S. Copyright Office. (2025, January). Copyright and artificial intelligence, Part 2: Copyrightability. https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-2-Copyrightability-Report.pdf
  • Whitney Museum of American Art. (2024). Harold Cohen: AARON [Exhibition page]. https://whitney.org/exhibitions/harold-cohen-aaron

ADDITIONAL READING LIST

  1. Boden, M. A. (2010). Creativity and art: Three roads to surprise. Oxford University Press.
  2. Hertzmann, A. (2018, TEDx). Can computers create art? [Video]. TED. https://www.ted.com/talks/aaron_hertzmann_can_computers_create_art
  3. Computer History Museum. (2019). Harold Cohen and AARON — A 40-year collaboration. https://computerhistory.org/blog/harold-cohen-and-aaron-a-40-year-collaboration/
  4. McKinsey & Company. (2023). The economic potential of generative AI: The next productivity frontier. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier
  5. U.S. Copyright Office. (2025). Copyright and artificial intelligence, Part 2: Copyrightability. https://www.copyright.gov/ai/

ADDITIONAL RESOURCES

  1. Whitney Museum of American Art — Harold Cohen: AARON Exhibition Archive https://whitney.org/exhibitions/harold-cohen-aaron
  2. Computer History Museum — Harold Cohen Collection https://computerhistory.org/blog/harold-cohen-and-aaron-a-40-year-collaboration/
  3. U.S. Copyright Office — AI Initiative https://www.copyright.gov/ai/
  4. Aaron Hertzmann’s Research on Computational Creativity https://arxiv.org/abs/1801.04486
  5. McKinsey Global Institute — Generative AI Research Hub https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier
Doctor JR
Dr. JR is the founder of AI Innovations Unleashed—an educational podcast and consulting platform helping educators, leaders, and curious minds harness AI to build smarter learning environments. He has 22 years of project management experience (PMP certified) and is an AI strategist who translates complex tech into practical, future-focused insights. Connect with him on LinkedIn, Medium, Substack, and X—or visit him at aiinnovationsunleashed.com.
