First, do no harm.

Ethics and AI art

What are the ethical and legal implications of AI art?

Okay, now that we’re all experts in how this stuff works, we can turn to the reason I started writing these posts in the first place. A lot of people are wondering “Is using software like Stable Diffusion, DALL-E, and Midjourney ethical?” and “is using these tools copyright infringement?” and “who owns the copyright on these images, anyway?”. Alright, let’s dig in!

Disclaimer: I am not an Intellectual Property lawyer, and none of this is legal advice. Every country has its own laws about copyright, and anything I say about the law might be different where you live. I will mostly be talking about US law, since that's where I live.

Who owns the copyright on AI-generated images?

Nobody knows, actually (yet)

One of the wackiest things about copyright law is that infringement cases are almost entirely decided on the basis of previous case law and common-law custom. There is a very complicated copyright code on the books in the USA and other countries, but it mostly exists to carve out exceptions to the main guiding principle of copyright, which is: the copyright belongs to the creator.

If you draw a picture, or paint a portrait, or write a story, you are entitled to copyright protection for your art from the moment it’s created. This is true regardless of what medium it’s created in, and regardless of whether you “register” it with the Copyright Office, file it in a drawer of your desk, or post it on the Internet.

But AI art tools are a new technology, and as I just mentioned, case law rules the world when it comes to copyright claims. As far as I know, nobody has been taken to court yet for copyright infringement via AI generation, or tried to defend a copyright on their AI-generated art from someone else infringing on it.

In the meantime, there are some differences of opinion on who owns the copyright on the generated images.

It might be nobody (really!)

Case law in the USA has established that art created without human creative input is not covered by copyright protections.

This photograph, for instance, apparently has no human author for copyright purposes. It was taken by an animal, and so it is not covered by copyright.

A selfie of a Celebes crested macaque
Monkey Selfie

The US Copyright Office has published an official opinion on this issue:

“The Office will not register works produced by nature, animals, or plants.”

More relevant for our purposes here, the same publication says this about works produced by a machine:

“the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author”

Okay, but what does “creative input or intervention from a human author” mean? Well, that’s for the courts to decide, of course. We won’t know until a few cases have been decided whether writing a text prompt and selecting one image from a set of outputs count as sufficient “creative input” to allow copyright to be asserted.

But most likely, copyright belongs to the person running the software

My opinion, and several more-educated folks agree with me, is that there is most likely enough human input in creating AI art for it to qualify as a copyrightable work. The fact that someone else could give the same inputs to the software and get the same output does not make it any less a creative work. The process is creative, even if it’s dependent on a tool. You can copyright photographs, and they’re just as tool-dependent as AI-generated images are.

Copyright almost certainly doesn’t belong to the authors of the software, or to the operators of AI art generation services.

The various online AI art tools have different licensing terms for the images produced through their services, but in general they do not try to claim any copyright interest in the images their clients produce. They will often claim a right to reproduce the images for promotional purposes, because that’s how they get new users to sign up: by showing them what they can make.

Bad takes on the ethics of AI imagery

There are a bunch of bad takes floating around out there on the internet, which are clouding the debate about what these tools are, and what they do, and how they should and shouldn’t be used.

“AI Art is not art!”

There are people out there who claim that AI art isn’t actually art. Often this comes down to the idea that if something takes “that little” human effort to create, then it can’t be art. That’s just…not how this works.

From a legal perspective, a remarkably small amount of creative output can in fact be covered by copyright. The work can take seconds to create, or years; the law doesn’t care. And in general, the art community doesn’t either. Taping a banana to a wall is art. Dumping a pile of candy on the floor is art.

“AI art is theft!”

I’ve seen a couple of different versions of this. One is simply a misunderstanding of the relationship between the AI’s training data and the process of creating the output. You’ll sometimes see claims that AI art is made by copying and pasting elements from the training data into the output. After reading the previous installment of this series, you know that’s just not how the images are generated, at all.

Interestingly, even if the AI did make images by copying and pasting elements of other copyrighted works, it would probably still be perfectly fine. Collage has been an art form for centuries, and is often made out of copies of pieces of other people’s art.

“AI is just copying the style of human artists, there’s no creativity there!”

This one is actually not that far off base. While the AI system itself doesn’t know or care about individual artists’ styles, the input to the model often includes the name of the artist who created the image. This makes it relatively easy to use an artist’s name as a filter, to force a particular look to be more likely to come out. For example, if we take my previous “a painting of a horse” prompt, change nothing else, and add the name of an artist, we get this:

“A Keith Haring painting of a horse” via Stable Diffusion (25 steps)

I mean, that’s probably not going to fool anyone, but it is a pretty reasonable copy of the style for almost no effort. With some additional tweaking, you could get something pretty convincing, I bet.

I think there’s some ethical and legal hazard to using the name of a real person as part of a prompt for an AI art generator, which I’ll talk about a bit more below. Trying to duplicate the style of an artist, or attempting to generate an identifiable image of a particular person, without their permission, is definitely icky, and you shouldn’t do it.

“The AI model contains a copy of all of the training images, and is therefore a violation of copyright”

Stable Diffusion was trained on about 5 billion images at 512×512 resolution. Even at 8 bits per pixel (which would be very low fidelity), that’s about 1.3 petabytes of data. The compiled model is only about 9.5 GB. If it were “simply” storing every image in the training set, that would be a remarkable breakthrough in compression efficiency, with each training image compressed down to two bytes or so.
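That arithmetic is easy to check for yourself, using the same rough figures as above:

```python
# Back-of-envelope check: could the model literally contain its training set?
num_images = 5_000_000_000                  # ~5 billion training images
bytes_per_pixel = 1                         # 8 bits per pixel, very low fidelity
image_bytes = 512 * 512 * bytes_per_pixel   # one 512x512 image, uncompressed

training_set_bytes = num_images * image_bytes
petabytes = training_set_bytes / 1e15
print(f"Training set: ~{petabytes:.1f} PB")  # ~1.3 PB

model_bytes = 9.5e9                          # ~9.5 GB compiled model
bytes_per_image_in_model = model_bytes / num_images
print(f"Storage budget: ~{bytes_per_image_in_model:.1f} bytes per image")  # ~1.9 bytes
```

Two bytes is not enough to store even a single pixel's worth of useful detail per image, which is the whole point of the argument.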

But as we already know, the individual images aren’t stored in the model, just some correlations between particular words or phrases, and particular image characteristics.

Because the training dataset is composed of imagery taken from the Internet at large, it’s reasonable to assume that it includes every famous public-domain painting you’ve ever heard of, probably multiple times. So, if there’s any image you ought to be able to pull out of the model with very high fidelity, it’s one of those, right? Well, if you’re used to the incredibly detailed and intricate images you can get out of these tools, you’re likely to be disappointed by the results of recreating Vermeer’s “Girl with a Pearl Earring” or van Gogh’s “Sunflowers”.

You do get a representation of the requested painting, but it looks substantially different than the original painting, like a low-resolution copy of a copy, or a version drawn by a child from memory. Typically, it’ll get the basic ideas of what’s in the painting, and some details will be similar, but the colors are wrong, the composition is weirdly off, etc.

“Sunflowers by van Gogh” via Stable Diffusion

None of these looks very similar to the original, though they’re all clearly “inspired” by it, to varying degrees. And they’re all very “rough”, with thick lines, garish colors, etc. By contrast, asking for a “Sci-Fi landscape” or a “panda riding a bicycle” will generate something much more pleasant. I think this is a result of over-training, but I’m not actually sure.

The ethical minefield of LAION and Stable Diffusion

I was going to call this section “stable diffusion of responsibility”, but I chickened out…

LAION: We’re just an index

As mentioned previously, LAION disclaims any responsibility for the contents of their datasets, because they’re “just an index” of links to images, along with text descriptions of those images. They have done some work to filter out the worst of the offensive images (based on whose standards?), but there are still a lot of highly questionable images in there.

Quite a few of those images will have been taken from copyrighted material, and represent violations of copyright in themselves. LAION is not actually performing or enabling copyright infringement (they say), because anyone making use of their datasets needs to download the images locally, and therefore the entity using the dataset is responsible for any copyright infringement that might happen.
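To make the “just an index” claim concrete: a LAION record is essentially a URL plus a caption, and pixels only come into existence on the machine of whoever runs a downloader against it. Here’s a minimal sketch of that division of labor. The uppercase URL/TEXT field names match LAION’s published schema, but the records and the download helper are simplified stand-ins, not the real tooling:

```python
import urllib.request

# A LAION "image" is just a pointer plus a caption -- no pixels.
# (Hypothetical example records; the real dataset is billions of rows.)
records = [
    {"URL": "https://example.com/cat.jpg", "TEXT": "a photo of a cat"},
    {"URL": "https://example.com/horse.png", "TEXT": "a painting of a horse"},
]

def fetch(record):
    """Download one image. The copy -- and, on LAION's theory, any
    infringement liability -- happens here, on the user's machine."""
    with urllib.request.urlopen(record["URL"]) as resp:
        return resp.read()

# The dataset itself contains only links and text:
print(sorted(records[0].keys()))  # ['TEXT', 'URL']
```

Whether "we only distribute the pointers" actually insulates them legally is, like everything else here, something a court would have to decide.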

This is technically true, but does put a bit of a stain of copyright infringement on the whole enterprise. Given LAION’s origins as a research tool, their attitude is probably justified, since most uses of images for research would qualify for a “fair use” defense in court.

It’s not clear to me whether an AI model could be considered a derivative work of the training data; that’ll have to be settled in court. But we can assume that many of the artists whose work was used as input to these models never intended or agreed to that use.

Stable Diffusion: We just used the data we were given

On the other hand, the Stable Diffusion project doesn’t make any claims about auditing the datasets, either. They do provide an output filter, which is supposed to stop disturbing outputs, and which can be adjusted to be more or less conservative. But it’s still very possible to feed in a relatively innocuous prompt and get out porn, violent content, or other disturbing imagery.

Commercial Service Providers: If it’s not a direct copy, it’s not infringement.

This is an argument that I’ve seen put forward by people running commercial AI art generator products. Given that the generator will never create an exact copy of any of the training data, they claim that using their tool can’t violate copyright. I don’t think that’s a really solid legal foundation, but they really want it to be true, so I see why they claim it.

But, again – copyright infringement claims are generally settled in court. If someone can get sued for copyright infringement (and lose) for art they’ve produced with their own two hands, then it stands to reason that anything that comes out of an AI that looks “close enough” to a copyrighted work could be the basis of a successful lawsuit.

So, if nobody else is responsible, is it all down to the end user?

Currently, yes – if you use one of these tools, you’re assuming all liability for copyright or trademark infringement. The penalties for that can be quite severe, especially if you can be shown to have “knowingly infringed”.

How to use AI Art generation ethically and safely

I would like to see someone start a project to ethically source a training set for art generation that only uses donated and public domain works, and if such a thing comes along, I’ll heartily recommend that everybody use it. In the meantime, if you want to use AI art generation, here are my recommendations:

Don’t try to duplicate the work of a living artist

A lot of guides to creating art with AI tools explicitly recommend adding an artist’s name to your prompt, and this tends to produce more coherent results. I think this works because an artist’s name is a kind of shorthand for all of the other things you could say about their art. Adding “by Vermeer” to your prompt biases the model toward images from the training set that were tagged with the artist’s name, which will mostly be their art, or art based on their work. This is simpler than trying to come up with all of the tags that could have been used instead – “Baroque, Dutch Master, mostly monotone”, etc.
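The shortcut effect is easy to illustrate with a pair of prompts. The tag list below is my own guess at descriptors that might co-occur with Vermeer’s work in image captions, purely for illustration, not anything extracted from the dataset:

```python
# Two prompts aiming at a similar "look". The explicit version spells out
# descriptive tags; the shortcut version lets the artist's name stand in
# for all of them. (Tag list is illustrative, not taken from the dataset.)
explicit_prompt = "a portrait, Baroque, Dutch Master, mostly monotone, oil on canvas"
shortcut_prompt = "a portrait, by Vermeer"

# The name packs comparable steering information into far fewer words:
print(len(explicit_prompt.split()), "vs", len(shortcut_prompt.split()))  # 10 vs 4
```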

If you’re going to use these tools, and particularly, if you’re going to use them for commercial purposes, using the name of a living artist in your prompt is probably going to make it harder to argue in court that you didn’t intentionally try to copy their work.

An additional concern is that many of these tools make it easy to publish your art to a sharing site, with the prompts included. This means that a lot of AI art out there is tagged with the name of artists that were not involved with its creation. This is particularly bad for artists whose names appear in the prompt-generation interface of popular AI art tools, for example Greg Rutkowski, whose actual art on the web is being drowned out by AI-generated pastiches. Even if you’re not legally exposed by posting this stuff, it’s making a hardworking artist’s life worse. Maybe just don’t do that?

For artists whose works are already in the public domain, though – go nuts.

“Robots fighting on the moon, by Vermeer” via Stable Diffusion

Don’t use an AI tool to generate an image of an identifiable person or character

Because of the way these training sets were created, they include a lot of examples of real-world personalities and fictional characters. You can absolutely feed Stable Diffusion a prompt like “Katy Perry eating a hotdog in Paris” and get something that looks reasonably like her. But don’t do that: personality rights are separate from copyrights, and you could get in trouble for using someone’s image even in an otherwise completely non-copyright-infringing way.

And yes – “Captain Kirk”, “Mickey Mouse”, and “Iron Man” are all well-known to the Stable Diffusion model. That doesn’t mean that the Disney corporation or Paramount won’t come after you for using them in a composition.

Mickey Mouse, dancing on the graves of those foolish enough to challenge Disney in court.

Yes, I’m violating my own advice, here. Arguably, this is protected fair use. But I’m honestly slightly nervous about it, despite the low circulation of my blog. I definitely wouldn’t dare to use this image in a non-educational/non-commentary role. Printing it on a mug or t-shirt would almost certainly get me at the very least a cease-and-desist letter from the Disney Corporation.

Don’t use an AI tool instead of paying an artist

And finally, if you do want a particular scene featuring your favorite character, or a new piece of corporate logo artwork, or a concept image for your Kickstarter, or a book cover, or an illustration of some concept that you can’t find in a standard clip art library, consider commissioning an artist to create it. It’s pretty easy to find someone in the various online art communities who will be happy to turn your vision into reality.

You can even use the AI art as a way to communicate what you’re looking for to an artist – “I want something like this, but without the Lovecraftian nightmare of limbs”, for example.

Next Up

Some more thoughts on “creative input” and “is AI Art art”?

