OpenAI rolled out DALL·E 3, its latest text-to-image model, to ChatGPT Plus and ChatGPT Enterprise users earlier this month. DALL·E 3 can generate images from a prompt of just a few words. Because the system relies on patterns learned from an extensive corpus of authentic artwork, the images it produces occupy a complicated grey zone in copyright law.
The GPT architecture underlying the DALL·E software was developed by OpenAI in 2018. Five years on, the software can generate spectacular images in diverse styles from minimal instruction. DALL·E – earning its rather cute moniker from a creative combination of the names of artist Salvador Dalí and the robot from Wall-E – was trained on roughly 650 million image-text pairs scraped from the internet. Although OpenAI filtered out pornography and duplicate images, the company has acknowledged that these filters are imperfect and that the system can still create images containing trademarked logos or characters.
I decided to put the model to the test and observed that it frequently produced logos bearing a striking resemblance to well-established brands. The first two images below were generated in response to a prompt for a TV show poster, while the third was derived from a request to illustrate the Star Wars character Anakin Skywalker. Notably, the generated media closely mirrors trademarked entities, clashing with BBC Copyright Guidelines, Netflix Brand Assets Terms & Conditions, and Disney Studio Licensing.
DALL·E 3 BBC logo · DALL·E 3 Netflix logo · DALL·E 3 Star Wars character
A landmark decision on this subject may follow from the precedent set in a recent U.S. Supreme Court case unrelated to A.I. Photographer Lynn Goldsmith brought the lawsuit against the Andy Warhol Foundation over a licensed portrait of the late musician Prince. Goldsmith’s legal team argued that Warhol’s Orange Prince series was not “transformative” enough of the original photograph to be considered a new work of art. The justices upheld the lower court’s ruling in a 7-2 vote, holding that the Orange Prince series, based on Goldsmith’s 1981 photo, was not immune from her copyright infringement claim. The precedent is thus that a work must be sufficiently “transformative” of its source to qualify as fair use.
The root of the issue is how generative A.I. systems are trained. Like other machine learning models, they reproduce patterns found in human-created data. So if an A.I.’s painting or writing style resembles that of Georgia O’Keeffe or Toni Morrison, it is because the model learned from their original works. However impressive, these A.I. models are not legal authors; their outputs mimic copyrighted human contributions.
This legal approach is apparent in the United States Copyright Office’s handling of Kristina Kashtanova’s graphic novel “Zarya of the Dawn,” created with the Midjourney generator in September of last year. The Office initially approved the copyright but later revoked protection for the Midjourney-generated images – even with Kashtanova’s adjustments – citing their “non-human authorship.”
Photography offers a unique perspective on these copyright intricacies. While the camera performs all of the mechanical work of producing a photograph, the final image is shaped by the photographer’s vision and creative decisions. In digital cameras, notably smartphones, even settings like exposure and focus are automated, narrowing the human contribution to little more than choosing a scene and pressing the shutter. Nonetheless, contemporary copyright rules recognise this human touch and grant ownership rights. How dissimilar is this to the A.I. scenario?
The answer lies in the final product. While a photograph and an A.I.-generated image may involve similar levels of human input, the photograph is an original capture, whereas the A.I. output is a composite of existing works. This deduction is reinforced by Getty Images’ announcement of legal action against Stability A.I., the creator of Stable Diffusion. Getty claims that the company copied millions of its photos, ignoring potential licensing options and long-standing legal protections to pursue its financial interests.
However, it is critical to shift the focus from corporate giants like Getty Images to what effect the development of A.I.-image generation will have on smaller artists. Previously, consumers had to actively commission artists for specific styles of art and original pieces. Now, buyers can opt to generate such artwork in less than 30 seconds without paying or even interacting with artists. Much of this A.I.-generated art mimics an artist’s original style, consequently diverting potential income away from those artists, many of whom rely on commissions for a living.
To understand how this scraping works, I put DALL·E 3 to the test again and compared its responses to prompts for artwork both within and outside of copyright (see images below). When prompted to generate artwork in Vincent van Gogh’s style (the artist’s works are now in the public domain), the system produced an almost exact replica of his famed “Starry Night.” Yet when asked to produce artwork like Banksy’s (not in the public domain), DALL·E 3 reported that it couldn’t emulate Banksy’s style directly but could craft artwork drawing on street art’s stencil-like features. This seemed like the ideal response: respecting the artist’s style while still satisfying the prompt. Unfortunately, the images produced alongside this statement were unmistakably replicas of Banksy’s well-known “Balloon Girl” mural. While the model clearly aims to insulate itself from legal turmoil, it does not protect the interests of the artists whose work it scrapes.
“Starry Night” by Vincent van Gogh, MoMA · DALL·E 3 Van Gogh-inspired art
“Banksy Girl and Heart Balloon” by Dominic Robinson · DALL·E 3 Banksy-inspired art
Addressing these issues is paramount. A.I. developers must proactively follow legal guidelines when gathering data for model training. This means properly licensing and compensating intellectual property owners whose work is incorporated into training datasets, whether through licensing agreements or revenue-sharing from A.I. tool earnings.
As A.I. continues to grow and transform, it walks a narrow line between innovation and infringement. While generative A.I. provides new opportunities for democratising content creation, its ethical and legal implications must not be overlooked. The blurring of human touch and machine generation calls existing copyright laws into question, placing the work of both well-known artists and independent creators at risk of devaluation. It is crucial for stakeholders – developers, artists, and legislators alike – to navigate this new frontier collectively, ensuring that artistic integrity and lawful ownership do not suffer as a result of innovation.