AI Art feels kinda shitty.....

I work as a concept artist these days and I have to admit I find myself feeling pretty crappy about a lot of the attitudes about AI Art. First off I’ll preface this a bit by saying that I actually think the underlying technology is kinda cool. Not of personal interest to me, since I’d rather draw by hand, but I am glad people are enjoying it.

My issue comes in where there is a lot of sentiment from artists that they don’t really want their work being used by this tech (I’m one such artist). The attitude of the Pro AI crowd overwhelmingly seems to be “fuck you, we take what we want. Who cares if it’s your work and you’d rather it not be used in this way”.

It’s always worded more like “well human artists use it so why should the machine be any different” and these people just don’t seem to give a fuck that artists see it as different and don’t want their work being used in this way by the AI.

Do people hate artists that much? That when an artist says they don’t want their work a part of this thing for whatever reason people’s reaction is “who cares what you think, we take what we want”.

I don’t understand why the Pro AI folks aren’t like “Ok cool, we can totally respect that you don’t want your work in this, there should be opt out options and maybe we limit the scraping to public domain work pre 1926 (I think that’s the year we are on).”

I honestly had no idea so many people hated artists so much that they feel it’s their right to totally disregard the intent or wishes of an artist regarding that artists work.

This is a bit of a rant, but as an artist I’m definitely not feeling great about how little respect so many people seem to have regarding our intentions for our work.

(If QT3 is like everywhere else these days I’m sure I’m about to get a heaping helping of “fuck you artist, we take what we want” so don’t be surprised if this winds up deleted.)

Nah man, that’s a very reasonable thing to feel. The level of mimicry some of these AI art systems can have on certain styles is certainly… a thing. It’s like music: you can’t copyright a chord, but a larger segment such as a solo or a specific longer riff can absolutely meet a threshold, such that a musician using it is unacceptable.

The difference is that music is, and has been, arbitrated by the courts for decades, and there is a bespoke intentionality in music that does not exist here. Using artists’ styles and works for inspiration is less an intentional choice by the AI programmers, and more a consequence of how they collected their data sets.

How that would play legally? I don’t know. But I can definitely see how it allows output to cross lines that would not be legally allowed elsewhere.

Don’t they train the AI models on real art though? That does seem kinda crappy, to do that without permission or compensation.

Some very much do.

It’s one of those things: the AI should be able to understand what ‘anime’ or ‘80s Hong Kong action film’ looks like stylistically, while its being able to specifically generate ‘Miyazaki’ or ‘Jackie Chan in Police Story’ images crosses a line.

I think AI art, as a concept, is fine. I would never pay anyone money for it, but it’s fine that it exists somewhere. I can’t imagine anyone being proud of having ‘made’ it, though.

A major part of the joy of art is knowing how much effort, practice, time, and craft goes into making it. Being the end result of someone’s search string is OK for some, maybe, but it’s not for me no matter how cool it looks.

@merryprankster: Can I ask why you don’t want your art being used by AI in this way?
(To be clear, this question is not meant to imply you should want this. I’m interested in your reasons.)

As mentioned in the other thread, a couple of uses for AI that I think are really interesting without being shitty are for creating textures and using existing material as inputs to create higher fidelity versions of things (e.g. in the creation of remasters of old games).

I absolutely love AI art, but your concerns are extremely valid. The creative, copyrighted work of artists is the raw material used to create these models. Then these models generate tons of value - tons of dollars - but the artists do not get any portion of that value. That is not just. There’s no obvious or easy resolution, but our society and our law should try their best.

I think it’s a scary time for all kinds of workers. Artists got hit early, but others are next: programmers, musicians, voice actors, low-level office workers, mid-level office workers, even live-action actors further down the line. Music, animation, and voice acting are coming real soon. Video a bit later, but not much.

There’s a little bit of an analogous situation with GitHub Copilot and programmers already. The AI was trained on all the code on GitHub. It was supposedly trained only on public code. But just like how copyrighted art is all over the internet, lots of public repositories actually contain copyrighted pieces of code or algorithms. So the AI does see a lot of that anyway. And now any random programmer can use the AI to generate a solution, and Copilot copies one of these copyrighted solutions into a new project. It’s basically laundering private/copyrighted code into the public domain.

I don’t think these human jobs will go away entirely. But the jobs will be pushed ‘up the stack’ and this could mean you don’t need as many artists. For example, this might mean artists are less focused on practical skills and more focused on setting higher level design goals and styles for projects, having a good eye and ability to analyze and judge aesthetics, refining and combining generated work into a final product, working with AI teams to train their own unique art models for projects, etc.

In the current models they train AI art on a dump of the public internet. So the Matrix isn’t in the dataset on purpose, but because screenshots of the Matrix are all over the internet, it basically is in the dataset anyway.

Probably the private models DO intentionally use copyrighted work. I know the Chinese ones do.

Scraping the public internet is intentionally using copyrighted work.

There’s a case before the Supreme Court right now about Warhol’s images of Prince, which were adapted from photographs. There was a limited licensing agreement at one point, but the photographer’s attorneys are saying the continued use of Warhol’s work and the details of the adaptation go well beyond what was authorized.

I suspect that part of the challenge posed by this issue overall (and thanks to @merryprankster for bringing it up, because it’s important) goes back to my peeps in the ivory tower. For decades literary criticism scholars, joined by all sorts of deconstructionists, post-modernists, and students of media (all areas I find fascinating and often productive, FWIW) have mounted all-out assaults on the concept of intellectual or creative ownership and authorial intentionality. It started in the realm of more or less pure theory, but like a lot of this sort of thing has migrated into the material world. Sort of like how Plato’s cryptofascist musings on authoritarian, deterministic societies managed to become actual political economy blueprints I suppose.

In this framework, where once a creative work is released into the wild it ceases to have any connection to its creator, and the “text” speaks for itself, the role of the author is reduced quite often to that of a gatekeeper whose main function is to literally let something loose. While I highly doubt the scientists, tech bros, and engineers working on AI art (another very cool and fascinating bit of tech to be sure) are carrying around copies of Baudrillard, McLuhan, Derrida, or Lacan, some of the highly academic theory crafting seems to have seeped out of the academy and into the real world’s water supply.

Of your options, I can’t imagine any good-faith actor in this space categorically refusing opt-out, just like search engines will not index a website that opts out. The problem is more practical: how would a useful opt-out system even work? It’s a hard problem, and not hard in a way that would be considered a viable research problem. What you need is:

  • A way to authenticate that the person wanting to opt out an image actually is the relevant rights holder. Since you’re approaching this from what’s morally right rather than what’s legally required, I assume you’d want to extend the same courtesy to stakeholders other than the copyright holder. If somebody else takes a photo of me, or of my house, should I be able to opt those photos out? Should the architect who designed my house?

  • A way to make the opt-out status known to anyone with access to the image. The mechanism should work regardless of the image format, work even when the image is copied from one site to another, the opt-out status should survive when the image is optimized / processed / converted with software tools, should work if the image is used as a component of another image (e.g. if a painting is opted out, so should any photos of the painting hanging on a wall), and it needs to apply retroactively to all existing copies of the image too.

The only practical method is probably something like YouTube’s Content ID system: a database of perceptual fingerprints of claimed content, except it’d be even more complicated since it’d need to be usable by everyone training a ML model rather than just a single company.
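To make the fingerprint idea concrete, here’s a minimal sketch in Python of an “average hash”, one of the simplest perceptual fingerprints. This is my own illustration under stated assumptions, not how Content ID actually works; the registry, function names, and threshold are all hypothetical, and real systems use far more robust fingerprints that survive cropping, rotation, and recompression.

```python
def average_hash(pixels):
    """Simplest perceptual fingerprint ('average hash'): one bit per
    pixel, set when that pixel is brighter than the mean brightness.
    `pixels` is a flat list of grayscale values (0-255), assumed to be
    an already-downscaled thumbnail of the original image."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    # Count differing bits; a small distance means "probably the same image".
    return bin(h1 ^ h2).count("1")

# Hypothetical opt-out registry: fingerprints of images artists have claimed.
optout_registry = {average_hash([10] * 32 + [200] * 32)}

def is_opted_out(pixels, threshold=5):
    """Check a candidate training image against every claimed fingerprint."""
    h = average_hash(pixels)
    return any(hamming_distance(h, claimed) <= threshold
               for claimed in optout_registry)
```

A lightly altered copy (say `[12] * 32 + [198] * 32`) hashes to the same bits and still matches the registry, while an unrelated image does not, which is the property a scraping-time opt-out check would need.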

Your other option of “limit to pre-1926” feels obviously unworkable. It’d require accurately knowing the age of the image, and would mean that the models could not know anything about concepts newer than that. Nobody is going to voluntarily sign up for that restriction. If that’s how you want things to work, I think you need to legislate it.

They don’t hate artists. The majority of people just don’t care enough about craft and they’re cheap. It’s like microwave frozen food. It’s cheaper and takes less time to make than a real home cooked meal. The fact that a pepperoni hot pocket tastes terrible compared to a homemade pizza is outweighed in most people’s minds by the convenience.

On the other hand:

Looked at another way, an AI using publicly viewable art is sort of functionally similar to a human looking at publicly viewable art in order to see different styles and gain expertise. The difference is really only one of degree: the AI can look at more art, which I’d think would make it less likely to produce outright mimicry. Can you sue an AI for copyright infringement if it produces art that is too similar to yours?

Sure, and I think that counts for some hot pocket folks too. After all, to many people, “cooking is too hard” or “I can’t learn to cook” is a valid justification for lots of behavior.

Where the analogy falls down is people who don’t cook aren’t able to hit a button and have food appear for free. Whether they buy frozen food or a restaurant meal, someone is getting paid to feed them.

AI art is substantially different because many sources are “free” for users. I demonstrated in the other thread that if you’re not super-picky and just want some decent “good enough” art for a project that’s miles better than what most people can create by themselves, then AI art is easy, fast, and free. I believe that is dangerous to artists’ livelihood.

Have you been able to identify examples of AI generated art that you suspect have been influenced by your own artwork? I’m not in any way challenging you on this topic, but your concerns made me curious about the scope of the impact to people like yourself.