Jimmy Chin is a North Face athlete, ski mountaineer, climber, adventure photographer, and all-around badass. He directed the Oscar-winning movie Free Solo where Alex Honnold climbs El Capitan with no ropes or protection. If you haven’t seen that movie, stop reading and go watch it right now.
Below I’ve included some of Jimmy’s dramatic shots of Himalayan peaks.
Except these weren’t shot by Jimmy Chin and these mountains don’t exist. Using OpenAI’s text-to-image tool DALL-E 2, I entered the prompt
A mountain landscape photo in the style of Jimmy Chin.
What if an AI, prompted by you, created an artistic work? What if an AI creation is derived in part from training data containing your work?
Did I just infringe on Jimmy Chin’s copyrights?
That third question is at the heart of a new batch of lawsuits against London-based StabilityAI, the creator of the text-to-image generation tool Stable Diffusion. A group of artists in the US filed a proposed class action, and in the UK, Getty Images sued on behalf of content creators on its platform. In a statement, Getty Images detailed its complaint:
“It is Getty Images’ position that Stability AI unlawfully copied and processed millions of images protected by copyright and the associated metadata owned or represented by Getty Images absent a license to benefit Stability AI’s commercial interests and to the detriment of the content creators.” -Getty Images
Meanwhile, the US class action filed in the Northern District of California describes StabilityAI and related programs as “merely complex collage tools” responsible for “derivative work.”
Note: I refer to both Stable Diffusion and DALL-E 2 throughout this piece. They are functionally similar text-to-image tools, just from different companies.
Stable Diffusion, much like DALL-E 2, is an artificial intelligence model capable of generating high-quality images from textual descriptions. When given a prompt, the model draws on patterns learned from its training data, processing the description through many layers of a neural network to produce an image matching the features in the description. When DALL-E created those mountain images, it is hard to imagine that Jimmy Chin's actual photography isn't somewhere in the training data.
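For the curious, the core idea these models share is iterative denoising: start from pure noise and repeatedly remove a little of the predicted noise until an image emerges. The toy sketch below illustrates only that loop and is not the real architecture; a real model runs a trained neural network conditioned on a text embedding, whereas here `target`, `denoise_step`, and `strength` are invented stand-ins for illustration.

```python
import numpy as np

# Toy sketch of the reverse-diffusion loop behind text-to-image models.
# NOT the real thing: a real model predicts noise with a trained network
# conditioned on the text prompt. Here a fixed `target` array stands in
# for "the image the prompt describes".

rng = np.random.default_rng(0)

target = rng.random((8, 8))           # stand-in for the prompt's image
image = rng.standard_normal((8, 8))   # generation starts from pure noise

def denoise_step(noisy, target, strength=0.1):
    """One reverse step: subtract a fraction of the predicted noise."""
    predicted_noise = noisy - target  # a trained network would predict this
    return noisy - strength * predicted_noise

for _ in range(200):                  # many timesteps, refining each pass
    image = denoise_step(image, target)

error = float(np.abs(image - target).mean())
print(error)  # tiny: the noise has been iteratively stripped away
```

The point of the sketch is simply that nothing is copied wholesale at generation time; the output is assembled step by step from learned noise predictions, which is exactly why the legal questions below are so slippery.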
Here is one of his real photos:
So what now?
Copyright Law Isn’t Ready For AI
The crux of this area of law revolves around 1) copyrightability and 2) the exclusive rights of the copyright holder. Simply put, is the work in question able to be copyrighted? If it is a significantly creative “work” (art, music, novels), then yes. If it is a fact, like today’s weather, then no. The holder of a copyrighted work is entitled to several exclusive rights, among which are reproduction, public distribution, and public display. When these rights are infringed upon, the holder is entitled to legal remedy and therein lies the complaint from Getty and the artists.
Does the company operating an AI image generation tool, by using web-scraped copyrighted material in training data, commit infringement against the creators of the original works?
We can all agree that photography is copyrightable work, so the first step in any infringement case is to identify an infringer. For example, if I set up an online store and started selling Jimmy Chin’s photography for money, I’m clearly infringing on his rights of reproduction, distribution, and display.
There are several legitimate defenses that an accused infringer might employ. For these AI disputes, however, the most relevant defense is fair use, which turns in large part on whether a work is truly transformative and significantly varies from the original copyrighted work.
Images produced by DALL-E and Stable Diffusion are recreations in the style of a particular artist, not necessarily copies of an exact work. The pictures at the beginning of this post resemble many peaks in the Himalaya, and if I printed them and hung them on a wall, guests would be none the wiser.
If I grabbed a camera, went to the Himalaya and sought to recreate that exact photo of Mount Everest, at worst I’m lacking creativity—but I’m not infringing on Jimmy Chin’s copyright. If an AI program “views” his pictures by ingesting training data and creates a mountain photo in the likeness of his style of photography, and the company profits from this activity, the question is whether the use of his photos in the training data constitutes infringement.
Is StabilityAI a direct infringer?
Unlikely. To show direct infringement, a plaintiff has to 1) demonstrate ownership of the work and 2) demonstrate that the defendants committed an act of "copying" the work. But we are dealing with the actions of a system created by StabilityAI, technically not the company in its own capacity. A 1995 case, Religious Technology Center v. Netcom On-Line Communication Services, might inform how US courts will approach this question. In Netcom, Judge Ronald Whyte concluded that—
…direct copyright infringement requires "some element of volition or causation which is lacking where a defendant's system is merely used to create a copy by a third party." (Religious Technology Center v. Netcom, 1995)
The facts of Netcom and the StabilityAI cases differ significantly. AI-generated images are emulations rather than exact copies, and their creation depends on the actions of a third-party user. Both factors strengthen a company's fair use defense against direct infringement. Opening StabilityAI and other developers of AI content creation tools to direct infringement claims over user actions would create unreasonable liability for the entire industry.
Secondary Infringement
We’re already pretty far down the rabbit hole, so if you’re still with me, kudos, you’re a nerd. Without unpacking secondary infringement in its entirety, there is a concept known as contributory infringement that might apply. It creates liability when a defendant knowingly does something that contributes to someone else’s act of infringement. We can turn to the landmark 1984 case of Sony Corp. v. Universal City Studios, where the question was whether Sony committed contributory infringement by selling VCRs, which could be used by anyone to make unauthorized copies of copyrighted film and television works.
The only way Sony could have prevented infringement was to stop selling VCRs. The court concluded that VCRs are capable of substantial non-infringing uses, and that ruling against Sony would force it to exit the VCR business entirely. Behold, the “Sony defense”: a dual-use product is legal, and the company behind it cannot be subject to contributory liability merely because some buyers infringe. AI tools like Stable Diffusion are capable of virtually limitless non-infringing uses, constrained only by the imagination of the user.
The Sony defense works for physical products, but the AI tools in question are web-based applications where the service provider maintains a degree of control over the program. Recognizing this, Getty Images’ complaint draws parallels with the case of file-sharing service Napster, held liable in 2001 for contributory infringement because it had “actual and constructive knowledge” that users engaged in infringing acts. Napster was made aware by copyright holders of the infringing acts committed on its platform and neglected to take action.
From a technical standpoint, identifying to what extent an AI output relies on any particular copyrighted material is complicated if not impossible. Furthermore, once we establish that the output from a user prompt is a new image rather than an exact copy, the analogy between Napster and StabilityAI falls apart. Napster users knew what they were copying and that it was prohibited. When I enter a prompt in DALL-E, I have no way of tying the output to a specific copyrighted work, and neither does OpenAI.
Final Thoughts
AI programs merely emulate the style of other artists, albeit with stunning accuracy. Does an artist’s copyright extend to their style? No, of course not. The country music industry wouldn’t exist if that were true. Hopefully you can see that StabilityAI and their peers have a strong fair use defense against any potential infringement claims.
Vivek Jayaram of intellectual property firm Jayaram Law cited the 2021 Supreme Court case of Google v. Oracle, which established that “using collected data to create new works can be transformative.” In that case, Google had copied portions of the Java API to build its Android operating system. But if an AI image generator’s output isn’t transformative, is it anything more than digital art counterfeit?
StabilityAI responded to these lawsuits in a statement to Bloomberg Law—
“Anyone that believes that this isn’t fair use does not understand the technology and misunderstands the law.”
Artists just want compensation for their work, and that seems logical. But creative people—artists, authors, and musicians alike—take inspiration from those who came before them without needing to compensate their industry idols. If a court ruled that StabilityAI is infringing on copyrights through the contents of its training data, compensating artists would require a licensing regime akin to the one that allows platforms like Spotify to exist. Lawyers for the artists believe there is a path forward with such a solution.
Regardless, the clash between AI and intellectual property has arrived in full force. These disputes will result in fascinating new case law, and hopefully spur Congress to amend our intellectual property laws to accommodate AI.
In the meantime, Getty Images is on notice.
With that, I’ll leave you with DALL-E’s output for the prompt:
Life in a future society built with artificial intelligence in the style of cyberpunk.
Creepy. Glad I won’t be around for that… or will I? Who’s to say.
Any intellectual property attorneys among my 25 loyal readers? Forward along if you know one, always love to hear from the experts.
Cheers,
Ryan