cross-posted from: https://hexbear.net/post/3613920
Get fuuuuuuuuuuuuuucked
“This isn’t going to stop,” Allen told the New York Times. “Art is dead, dude. It’s over. A.I. won. Humans lost.”
“But I still want to get paid for it.”
Ok, here’s an image I generated with a random seed:
Here’s the UI showing it as a result:
Then I reused the exact same input parameters. Here you can see it in the middle of generating the image:
Then it finished, and you can see it generated the exact same image:
Here’s the second image, so you can see for yourself compared to the first:
You can download Flux Dev, the model I used for this image, and input the exact same parameters yourself, and you’ll get the same image.
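If you want to sanity-check that yourself, here's roughly what the setup looks like in code (a minimal sketch using the Hugging Face diffusers library with the FLUX.1-dev checkpoint; the prompt, step count, and guidance values are placeholders, not the exact ones from my screenshots):

```python
# Minimal sketch: generating the "same" image twice by fixing the seed.
# Assumes the diffusers library and the black-forest-labs/FLUX.1-dev checkpoint;
# prompt/steps/guidance below are placeholders, not my actual parameters.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

def generate(seed: int):
    # Same seed + same parameters -> same starting latents -> same output.
    generator = torch.Generator("cpu").manual_seed(seed)
    return pipe(
        "a lighthouse on a cliff at sunset",
        num_inference_steps=28,
        guidance_scale=3.5,
        generator=generator,
    ).images[0]

img_a = generate(42)
img_b = generate(42)

# Pixel-identical when run on the same hardware/software stack.
print(list(img_a.getdata()) == list(img_b.getdata()))
```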
But you’re using the same seed. Isn’t the default behaviour to use a random seed?
And obviously, you’re using the same model for each of these, while these people would probably be using a custom-trained model that you have no access to.
That’s not really proof that you can replicate their art by typing the same sentence like you claimed.
If you didn’t understand that I clearly meant with the same model and seed from the context of talking about it being deterministic, that’s a you problem.
Bro, it’s you who said type the same sentence. Why are you saying the wrong thing and then trying to change your claims later?
The problem is that you couldn’t be bothered to try and say the correct thing, and then have the gall to blame other people for your own mistake.
And in what kind of context does using the same seed even make sense? Do people determine the seed first before creating their prompt? This is a genuine question, btw. I’ve always thought that people generally use a random seed when generating an image until they find one they like, then use that seed and modify the prompt to fine-tune it.
In the context that I’m explaining that the thing is deterministic. Do you disagree? Because that was my point. Diffusion models are deterministic.
That’s about as deterministic as tracing someone’s artwork, really.
If you have to use a different creation process from how someone would normally create the artwork, whether by hand or using AI, then it’s not really a criticism of that method in the first place.
I was seriously thinking you found a way to get similar enough results to another person’s AI output just from knowing the prompt. That would actually prove that AI artwork requires zero effort to reproduce.
Edit: To expand on that first paragraph, yes, AI is deterministic as much as a drawing tablet and app is deterministic, that is if you copy exactly what another person does using the tool, it will produce the same result.
You might be able to copy one stroke of a pen exactly, but the thousands or tens of thousands of strokes it takes to paint a painting? Like, yeah, you can copy a painting “close enough”, but it’s not exactly the same, because paint isn’t deterministic.
As far as making a “close enough” copy that isn’t exactly the same with AI, you can just use any image as the input image and set the denoising strength to like .1. Then you’ll get basically the same image but it’ll have a different checksum. So if you wanna steal art, AI makes it way easier.
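Concretely, that looks something like this (a rough sketch using a Stable Diffusion img2img pipeline from diffusers; the checkpoint name, input file, and prompt are just stand-ins, not a specific workflow):

```python
# Rough sketch: making a near-copy of an existing image via img2img with
# very low denoising strength. Checkpoint/file/prompt are stand-ins.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

source = load_image("someone_elses_artwork.png").convert("RGB")

# strength=0.1 means only the tail end of the diffusion schedule runs,
# so the output stays visually almost identical to the input image
# while the resulting file (and its checksum) comes out different.
result = pipe(
    prompt="artwork",  # the prompt barely matters at this strength
    image=source,
    strength=0.1,
).images[0]

result.save("near_copy.png")
```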
There’s not really any human creativity in this process, or even using your own prompts, which is the whole point behind the copyright office denying this guy’s copyright claim. Maybe you could copyright your prompt, if it’s detailed enough.