AI Art feels kinda shitty.....

Very interesting discussion… I’m not a visual artist, but I am somewhat sympathetic with @merryprankster

Are you familiar with the work of Jaron Lanier? It seems like he would be in agreement and have something to say on the matter; he is very critical of the erosion of intellectual property rights by large corporations. He’s not… an anti-technologist so much as a pro-individualist. One of his theses seems to be that the soul of the individual artist must never be replaced.

That stated, this new… medium of AI-generated art is fascinating to me, in the same way that AI story generators can be fascinating. Right now, it’s relatively easy for me to tell the difference. I don’t think there is a Turing-test-related existential crisis yet. Perhaps not even a slippery slope. However, I am very sympathetic to your objection to art being used without the artist’s permission.

You also bring up a fascinating point about art as a skill that can be developed.

“Practice any art, music, singing, dancing, acting, drawing, painting, sculpting, poetry, fiction, essays, reportage, no matter how well or badly, not to get money and fame, but to experience becoming, to find out what’s inside you, to make your soul grow.
Seriously! I mean starting right now, do art and do it for the rest of your lives” -Kurt Vonnegut

Everything I’ve seen about it agrees with this viewpoint, though admittedly my view is mostly from artists getting fucked over. The “best” response I’ve seen is them saying “well people shouldn’t put those in as data points, that would be wrong, also our system totally allows you to do it”.

Once AI figures out fingers and eyes, they’ll basically look like actual art in a lot of cases.
Right now it tends towards cross-eyed people with too many fingers, but otherwise it’s basically got things down. Now how much of that is just straight ripping off images other people made? I suspect quite a lot.

Edit: If anyone is interested, I know Karla Ortiz is an artist that talks a lot about it and isn’t happy.
Mind you she sometimes goes political on Twitter as well, but when it comes to this subject she’s on top of things from what I’ve seen.

I haven’t encountered anyone on the pro side who seems worth following for it, but I’m also not really looking (it also smells crypto-adjacent on the pro side, so maybe those people don’t exist). I followed Ortiz for the art, and this is sort of just a side effect of that.

https://twitter.com/kortizart/status/1583716918040530946
https://twitter.com/kortizart/status/1583716919923789829

Edit2: Expect her to be hyperbolic at times, but still her view as a good and relatively famous artist is a valid one despite that.

There is one thing, re: skill, that I think needs to be brought up.

There is a lot of skill that goes into designing these systems, and even some skill in using them well (though the skill floor for producing a basic image is rock bottom). So from the perspective of the programmers, a lot has gone into getting to this place. Additionally, because of how machine learning works, it requires huge data sets. So some form of opt-in system, like what @merryprankster seems to want, does at some point become unworkable.

And really, from an academic design perspective, that would be fine. Where we really run into issues is how people can, with a few words, make an image that looks similar to what Frank Frazetta would make.

It’s not the model’s understanding that’s the issue; it’s that anyone with a few minutes can use the models to create images so close to the original.

My Twitter feed is all about AI-art, ironically, right now.

Between Ortiz and places like FIRE (who are looking at First Amendment angles), it’s all over the place for me, right after I read this thread. Which obviously means the AIs have infiltrated my computer and brain and the end is coming.

Not the AI, but likely the person who publishes the infringing work, and maybe the company that developed the model.

An easy example of obviously infringing art, one that would leave no doubt in court about whether it’s a breach, would be the AI generating art with a copyrighted character (Mickey Mouse is the perfect example).

Of course, most of the generated images will never be so blatant and won’t really breach copyright in the sense of being too-similar copies, but I’ve maintained for a while that these systems are a legal quagmire.

But the problem, and I agree with @merryprankster here, is that the training itself is an unauthorized use of images (a modification through a technical process) and can breach some accessory rights too (moral rights, etc…). And in a broader sense, of course the artists (or the copyright holders) should be able to choose how their images are used. I see the use of copyrighted works for training as ethically untenable if you use the results commercially. It’s one thing to scour the internet to train a model and prove a technology (research, and likely fair use under copyright laws) and another thing entirely to use that research for commercial exploitation (which is no longer fair use).

This is to say I support and am amazed by the research in this field, but if these companies want to monetize their models they should:
-Retrain their models only on images that are free of copyright or for which authorization has been obtained.
-Keep a record of all used images for audit.

Of course, the above makes the model significantly worse at its task and carries huge costs (mostly the curation of the images). And these companies are operating under confused ethical and legal terms (the research/exploitation distinction above) and taking advantage of a legal gray area (or hoping the business grows fast enough that when they are challenged there’s no chance of stepping back).

I love the technology, and I think it’s an amazing tool for artists (especially concept artists, for inspiration and quick iterations). But the monetization of it is problematic.

I honestly don’t see how this is complicated. If you train your model on my art, and you make a work based on that model, then it is a derivative work. It is no different than if I took a photograph and you cut out part of the photograph and put it in your collage.

I’m not a lawyer, but copyright law definitely defines what is allowed and what isn’t as far as derivative works go.

The argument that it’s the same as a human artist looking at my work and learning from it and then making their own work is 100% bullshit. That’s not how current “AI” works, and it probably never will be.

Yeah, I don’t know why people say that these algorithms are “AI” either.

All AI is algorithms. AI is a label we give to algorithms of a certain kind and degree of complexity, such that the inner workings of the algorithm are largely opaque and it produces unexpectedly “intelligent” results. It’s obviously a bit subjective.

This may be, but can you articulate how it’s different? The art algorithms aren’t producing collages. They’re not just mixing and matching the source materials. They’re somehow using weighting and machine learning to quantify and reproduce style, lighting, subject, color, etc from the sampled images.

Right, and if the work is “transformative”, then it’s allowed. Is this transformative? I have no idea, and we won’t know until a court rules on it.

Link with some explanation: Fair Use: What Is Transformative? | Nolo

So, yeah, I think there are a lot of assholes out there, and fancy AI tech is no exception. (As someone mentioned upthread, this seems to be attracting a lot of crypto-adjacent assholes, which, ugh.) But I would urge you to take heart–most of the world is not assholes, and the internet has concentrated AI art assholes together, and people on “your side” are not yet concentrated together.

I’m just trying to say that I think you’re seeing a lot more asshole energy out there than there “truly is” because of where we are right now. The “hey, let’s respect artists” messages aren’t getting amplified.

But I’m sorry, I do empathize with how depressing this all must be for you.

Are you getting that message from the people actually working on making those systems, or from blowhards who haven’t done anything except use those models to generate art? I’d be surprised at the former. There’s just no benefit in antagonizing the artists with talk like that, and a lot of potential downside. Very few people opt out of anything, and losing any one artist’s images will have basically no impact on the models, so promising ineffective self-regulation seems like the preferred outcome (to these people) if it reduces the odds of actual regulation.

But as the OP points out, this is how human artists are trained. You observe art, and incorporate it into your own.

On some level, if you are putting your art into the public space, then it’s going to be observed. Generally, we don’t pay artists simply to observe their art.

There seems to be some ill-defined line being drawn that suggests that some entities observing artwork and incorporating it into their own art is fine, but others aren’t allowed to. On some level, this is stemming from the fact that we are generally talking about non sentient AI at this point.

But what if that changed? What if the AI was sentient in a more general sense, but still possessed this ability? Would it be ok to discriminate against that entity on the basis that it isn’t human? I don’t think it would be.

I don’t know if it’s clear from the context, but in general overfitting is a bad thing if you’re developing machine learning models. The goal is to learn broad patterns in the data that are robust enough to generate reasonably good answers to many questions. Overfitting means that instead of learning these broad patterns, the machine memorized a specific answer to a specific question and didn’t learn very much at all. That can make many tools pretty damn useless: nobody wants a model that reproduces the training data but fails utterly outside the bounds of that data.
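To make that memorize-vs-generalize distinction concrete, here’s a toy Python sketch. Obviously this has nothing to do with how image models are actually built; the two “models” and the data are made up purely for illustration:

```python
# Toy contrast between memorizing and generalizing, using a trivial
# prediction task: learn y ≈ 2*x from three noisy examples.

train = {1: 2.1, 2: 3.9, 3: 6.2}  # x -> noisy observation of 2*x

def memorizer(x):
    """The 'overfit' model: a lookup table of the training answers.

    Perfect on the training set, but it raises KeyError on any
    input it has never seen.
    """
    return train[x]

# The 'generalizing' model learns one broad pattern: the average slope.
slope = sum(y / x for x, y in train.items()) / len(train)

def generalizer(x):
    return slope * x

print(round(generalizer(10), 1))  # ≈ 20.4 -- sensible on an unseen input
try:
    memorizer(10)
except KeyError:
    print("memorizer fails outside its training data")
```

The memorizer scores perfectly on its training set and is useless everywhere else, which is the failure mode the paragraph above describes.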

I wouldn’t want an AI art generator to emit one of the training data inputs. It should emit the influence of hundreds or thousands of works, where perhaps an astute observer might be able to identify some of the sorts of inputs that influenced the work. This is similar to how we look at an artist’s work and see the influence of the generation of artists before them and of their contemporaries. To repeat others’ work is to fail.

[My ML work has nothing to do with art!]

These big models (GPT-3, diffusion models) have so much data that they barely get through the dataset even once, so overfitting seems to have fallen out of the discussion in the big-model space.

Also worth noting you don’t need to overfit to break copyright. And especially trademark. For example, any art model trained now will probably know what Mickey Mouse and Harry Potter look like, no matter the dataset. That knowledge is in the public sphere. So if you just take an art model and expect to generate royalty free art, you will certainly get some Harry Potter looking people in there. Not to mention Coke branding or whatever else. I can’t throw Frozen characters all over my product packaging just because Stable Diffusion generated them.

This always pisses me off, from the Stable Diffusion founder:

But Mr Mostaque says he’s not worried about putting artists out of work.

So what is his message to young artists worried about their future career, perhaps in illustration or design? “My message to them would be, ‘illustration design jobs are very tedious’. It’s not about being artistic, you are a tool”.

He suggests they find opportunities using the new technology: “This is a sector that’s going to grow massively. Make money from this sector if you want to make money, it’ll be far more fun”.

That seems very antagonistic to me while trying to be all friendly and welcoming about it.

From: "Art is dead Dude" - the rise of the AI artists stirs debate - BBC News

I’ve got a lot of thoughts about all this but I know writing them down will be exhausting and probably conflicting so it’s for another time.

Also:

someone could use “Photoshop’s merge tool to stick someone’s head on a nude”.

A merge tool? I’ve used Photoshop most days for the last twenty years or so and, maybe I’m wrong, but I don’t think there’s a ‘merge tool’. It’s a process that often takes time, skill, experience, and different tools and techniques to do well. And that’s just a ‘simple’ photo merge! My guess is that he’s a clever guy and a coder besides being an investor, but from his statements above it sounds like he places very little value on what goes into actually making visual media. And I think that’s what hurts me about AI ‘art’: it devalues the process even further.

In the other thread I saw someone say ‘I spent several hours on those prompts/pieces!’ Dude, that doesn’t even come close to what creatives put into their craft. My girlfriend is an illustrator, it takes hours just to do drafts and sketches. That’s why it’s so difficult to make a living in that field.

Anyway, my flood gates are opening. Time to grab breakfast.

The goal was always to make money off someone else’s work.

Art costs money, so what if we just made it free by stealing everyone’s work?
Or we could then sell it ourselves, we don’t need those artsy people anyway, this is a technocracy.

Yup. The tech itself is basically an act of bad faith, it’s premised fundamentally on profiting off other people’s work without giving them credit.

I honestly don’t think that was the goal of developing these models, at least not beyond, “having a super intelligent AI would be useful”.

I think these models evolved out of research that was attempting to see how far we can push these large language models, and what we can get them to do… And it turns out, the answer is “way more than anyone expected.”

This isn’t really what happened. It was more a case of asking, “can we make a computer consume and understand publicly available information like a human does.”

And, again, in most of these cases what the computer is doing is the same thing you or other humans do.

For instance, if I asked you to draw a picture of a dragon, you could do that, right? Depending on your skill as an artist, it might look better or worse, but you could do it.

But how is that possible? You’ve never seen a dragon. They don’t exist.

You have a mental image of a dragon which was formed largely by seeing pictures that other humans drew, that you observed. Should you not be allowed to draw dragons?

wow did not know this, but crazy!