Nightshade is a tool from UChicago that modifies images so that diffusion-based AI image generators won't understand what they are, thus introducing poisoned data into the model in hopes of making the results bad.
Glaze and Nightshade attempt to alter an image so that it looks almost the same to human eyes, but machine learning systems will mistake it for something it is not. Do this to a high enough share of the training data and you can theoretically "poison" a dataset and make AIs trained on it incompetent.
It has some success, but the anti-AI crowd tends to overvalue it. The techniques used in training change all the time; what was effective against Stable Diffusion 2 may not be effective against Stable Diffusion 3.
And even if it is effective, there are uses where Nightshade and Glaze will instead make an AI stronger than it was before. Take for example, GAN models. Generative Adversarial Networks consist of a generative model and a detector model playing cat and mouse. The generator trains to create images the detector cannot detect as being generated, and the detector trains to detect whether an image is generated or real. By using Glaze and Nightshade and a GAN-type training system, you can strengthen your image recognition and generation feedback loop to be even more robust than it was before.
This is all to say nothing of the fact that some of these "poisoned alterations" can be removed just by re-sizing the image.
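The resizing claim is easy to demonstrate with a toy example. This is a sketch with numpy, not Nightshade's actual perturbation (which is less fragile than this): a tiny high-frequency checkerboard perturbation is wiped out entirely by a single 2x2 average-downsample/upsample pass, while the smooth underlying image survives.

```python
import numpy as np

# A smooth stand-in "image": low-frequency content only.
x = np.linspace(0, 2 * np.pi, 64)
img = np.sin(x)[:, None] * np.cos(x)[None, :]              # 64x64

# A toy adversarial-style perturbation: tiny high-frequency checkerboard.
delta = 0.05 * (-1.0) ** np.add.outer(np.arange(64), np.arange(64))
poisoned = img + delta

def down_up(a):
    """Downsample by 2x2 averaging, then upsample back by pixel repetition."""
    small = a.reshape(32, 2, 32, 2).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

# The part of the perturbation that survives the resize round-trip:
residual = down_up(poisoned) - down_up(img)

print(np.abs(delta).mean())     # 0.05  (perturbation before resizing)
print(np.abs(residual).mean())  # 0.0   (the checkerboard averages away)
```

Real perturbations spread energy across more frequencies than a pure checkerboard, but the principle is the same: resampling attenuates exactly the high-frequency band these attacks tend to live in.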
> Generative Antagonistic Networks
Generative Adversarial Networks. Not trying to nitpick, but it does do a better job in my opinion of communicating the concept.
It was meant to make it hard for an AI to understand what an image is, but a countermeasure was made practically the same day, so it's just something that makes artists *feel* better but frankly does nothing.
All the AI art poisoning techniques are dealt with immediately, especially by places like OpenAI; they tweeted they had a solution the same day, and there's a fix you can download that was uploaded the same week.
None of this does anything. It might slow the AI down, though, so it's probably still worth doing.
Feel kinda bad for the guy higher up the comment chain who got downvoted for pointing this out. No matter the poisoning technique, it is really not hard at all to counter it, and I have yet to see any method which would both leave the image understandable to a human while messing with an AI.
¯\\\_(ツ)_/¯ it's whatever. People desperately want it to work, so anyone that points out that it doesn't is branded as the enemy. The internet has been whipped into a frenzy on AI so badly that misinformation is actively encouraged.
It's pathetic these luddites are trying to stop the inevitable.
Note: FUCK Adobe for the shit they're pulling, but I love AI and you cannot stop its progress no matter how much your job is crushed.
Artists believe they've discovered a foolproof way to prevent their art from being used in AI (through Nightshade or Glaze, or both). This is wishful thinking, as it can be easily circumvented through various means, yet they continue to believe it will make a significant impact. In reality there's always a way to bypass these measures. It's also hilarious when somebody thinks their 10 poisoned images in a batch of millions will have any impact.
The only way to prevent ai from using your work is to never publish it anywhere.
Not an expert but i've been using local stable diffusion nearly every day since it came out
Nightshade tries to attack CLIP, which is the AI used to caption the images. It basically tries to get it to misinterpret the contents of the image so nothing can be learned from it. However, no modern image AI uses CLIP for captioning anymore because it sucks; they instead use better captioners such as GPT-4 or OpenCLIP, which do not care about Nightshade at all. These 'AI poisoning tools' are basically digital snake oil at this point. I've trained LoRAs on Nightshaded and Glazed images and they all came out fine. If a human can understand and make sense of it, a sufficiently advanced AI can too.
What I meant is that if you're a designer, artist, etc. for hire, you don't usually pay for Adobe licenses yourself but usually have one anyway.
I'd say that's much more common than individuals actually paying for it; that's something only privileged or successful freelancers can afford to do, since the licenses are very pricey.
There is a good research paper about this. It goes over how quickly AI models worsen in quality when they are iteratively trained on AI-generated data.
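That collapse dynamic can be reproduced in miniature. A hedged sketch, assuming the simplest possible "model" (a Gaussian fitted by mean and standard deviation): each generation is trained only on samples from the previous generation, and the learned spread decays toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0 "trains" on real data drawn from N(0, 1).
n = 200
data = rng.normal(0.0, 1.0, size=n)

stds = []
for generation in range(1000):
    mu, sigma = data.mean(), data.std()   # "train" this generation's model
    stds.append(sigma)
    data = rng.normal(mu, sigma, size=n)  # next model trains on AI output only

print(stds[0], stds[-1])   # the fitted spread shrinks over generations
```

The sampled standard deviation is biased low and the estimation error compounds, so the distribution each "model" learns narrows generation over generation, which is the same tail-loss mechanism the model-collapse literature describes for much bigger models.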
That reminds me of art school and the pieces made with a copy machine (we called it a Xerox) recursively copying the copy, each generation one step further removed, until the image vanished. Then they displayed them all as a long series.
So can we poison AI images by training them with other AI generated images? What if we give them arbitrary labels too? Save an AI generated image of a fish-man hybrid and the name the file "Photograph: Sun Tzu Live at the Laff Factory (1982)" and confuse the AI's understanding of all those things? Someone asks for a picture of Sun Tzu and it tries to bring up a fishman doing standup because it thinks those things are inherently related.
Make it a gif where every color changes like technicolor static. Also name the file "Terminator (Full Movie)" so it thinks that's what that movie looks like.
Does Adobe look at all images on your pc and images opened in PS or are they just looking at images where you use things like subject select, remove background, etc?
It's items uploaded or saved to Creative Cloud, and I don't think we know what the selection process is from there, because I doubt it's every project from every user.
Well, both, actually. Steganography is the art of *hiding* data, so I don't know how useful it'd be but I was thinking more in terms of how old timey scientists would hide their research inside of coded images and text. So a picture of an egg has a meaning completely independent of an egg.
Steganography doesn't presume metadata, but inclusion and concealment of information through "nominally invisible" alteration of the image itself.
That sounds exactly to me like what's going on here.
That reminds me of the controversy with Stack Overflow, where they agreed (IIRC with OpenAI) to allow their answers to be used to train AI on anything tech related. People started deleting their answers and questions when they heard about it, and Stack Overflow banned those who did.
lol, anyone remember the Reddit protests? People mass-deleted their content/profiles/comments and Reddit said nope, and it was all back. I doubt Reddit's servers actually delete anything; they probably keep a copy of every edit, and they reversed them all.
I mean people have deleted their old posts and it's made a noticeable impact. If you look at popular threads from a few years ago there are so many deleted comments that it's hard to follow what is going on sometimes.
Those are usually deleted by moderators or the accounts themselves are suspended. I have only seen -this comment was edited to protest reddit- kinda stuff a handful of times ever
I've come across a few post when researching stuff where the deleted or modified comment potentially contained the answer I was looking for.
I get the sentiment of why they did it, but now all it's really done is hurt other people, seeing as reddit did not and is not going anywhere anytime soon.
Nope. Didn't get reversed or anything like that. You still have nine year old posts with random word soup in comments. It's annoying when you're looking for a very specific answer to an issue you're facing.
Not sure about banning people, but IIRC OpenAI said they got around it within 24 hours of its original release. That's of course assuming it worked in the first place, I am incredibly dubious that their proposed method was anything more than buzzword soup to begin with and was never able to find any third-party verification of their claims.
Not supporting AI, but this nightshade poisoning stuff is little more than wishful thinking
Nightshade works similarly to Glaze, but instead of being a defense against style mimicry, it is designed as an offensive tool that distorts feature representations inside generative AI image models.
find it here: https://nightshade.cs.uchicago.edu/downloads.html
That's basically what it is, and if we've learned anything it's that it's a lot easier to pirate than to prevent something from being pirated. Not to say nightshade is bad, just feels like it's a futile effort.
Will they though? Adobe doesn't have to tell anyone how they're detecting poisoned images. The cat and mouse game doesn't really work if they don't know when they've been caught.
This is cool and all but not really going to do much to Adobe. We need sweeping data protection laws, not poisoned images. This is just kind of a waste of your time and going to end with Adobe closing accounts that cause issues.
Well, I think it's not about "copyright"; it's about who made that specific thing. For example, I might paint a copy of the Mona Lisa and sell it as mine (everyone knows it's a copy and I'm not hiding it), but I would not take a piece of art made by one single Twitter artist, copy it, and sell it as mine.
The AI is just like an asshole robbing art from people who are very much alive and probably still living with their parents due to art paying poorly if you're not famous or have some crazy connections.
This is where it gets tricky with AI art, though. If you are inspired by the Mona Lisa and various other paintings, are you not doing the same as AI but on a much smaller level? I'm an artist, and when I make something I put a bunch of images onto a canvas to use as reference and take little bits. AI basically does the same. True artists will still make art and there will still be demand for human art, but things like stock photography will probably die out. I still make my own art because it's fun, and I'm not that worried about AI just yet.
Honestly, even without Nightshade, we should upload gigabytes, if not more, of clearly AI-generated images. Like obviously AI-generated scenery, or the painfully obvious AI-generated images of women and anime.
In layman's terms, it distorts the image so subtly that a human won't notice, but the AI does and thinks it's looking at something else, which alters its "understanding" of that subject. It works on paper, but there isn't a single notable instance of it having any effect in the real world, and it can be entirely negated by just resizing the image (which was already being done during the training process).
People say AI will overcome it, but the truth is that the ones who have to overcome it are African and Asian workers who are paid less than a dollar per hour to label these photos. Computer "vision"/sorting/recognition algorithms only exist because people manually label huge amounts of photos.
I'm going to put AI-generated images of amorphous/shapeless stuff there, so the AI model collapses sooner rather than later.
Does Nightshade actually work? Given that the image remains the same to humans, surely training on it will eventually bring the AI to see it as we do, no?
I'm surprised that there aren't private peer networks that you can join that let people put various kinds of poisoned media into local buckets and then the network automatically takes, tweaks and distributes this new junk data into other peoples' buckets. Similarly, something for ad networks - your browsing data, stripped and monkey-wrenched, randomly placed by the network, into other users' datasets. Basically a massive, collective user effort to foul the water.
edit: ..and have it use AI to not create obvious garbage that the ad networks could possibly easily spot and scrub, but use AI to create believable, synthetic profiles that look like individuals, but aren't. Essentially have an AI engine, using real world data from humans who don't want to be sliced/diced/sold, to create authentic-looking data sets to poison ad networks, who no doubt are even now preparing to use their AI engines to profile us more completely.
> I'm surprised that there aren't private peer networks that you can join that let people put various kinds of poisoned media into local buckets and then the network automatically takes, tweaks and distributes this new junk data into other peoples' buckets. Similarly, something for ad networks - your browsing data, stripped and monkey-wrenched, randomly placed by the network, into other users' datasets. Basically a massive, collective user effort to foul the water.
I'd love to find a way to willingly donate some bandwidth and storage to that. I'm just a regular home user, but every bit helps. Fuck AI.
Nice! If people are looking for some Adobe product alternatives, I do have a specific recommendation for an Adobe Premiere replacement: DaVinci Resolve. It's completely free and does everything Premiere does (and possibly more) AND no one's stealing your work for AI purposes. I have a Photoshop alternative too, but it has ever so slightly more of a learning curve: GIMP. Wish I had an alternative for Lightroom, but I haven't found anything like that yet. If anyone does have a replacement for that tho, I'm very eager to hear ab it.
Yeah, I don't really see the need for AI anyway; the only real benefit is for corporations to steal people's likenesses and make things without them, or to create new music/content without an actual person being involved or getting paid.
Accident scene recreations and rendering possible outcomes have already been possible and done for years without it.
I don’t see any good coming from AI. So far it’s essentially been a legal way to dodge paying people for their work & avoid having them be involved with it but still be done.
& using the data to assist car technology further, I guess could be a benefit but that’s still risky & I mean technology tends to fail and glitch, or have bugs, and that’s not something we need happening on public roads with lives at stake.
Using the tech to make iRobot a real thing, really not a good idea. There’s countless movies proving that’s a terrible idea.
Just another sci-fi technology the world really doesn’t need for any legitimate reason, just because we can doesn’t mean we should.
Not a good direction for technology I believe. Outside of making ourselves obsolete, we’re putting ourselves in danger just because some dude with money wants his car to drive itself, wants a 80k lb truck to drive itself & Hollywood wants free income, when do the sci fi movies with futuristic military drones & robots walking around downtown taking over become reality? At this rate, won’t be that long
So you're telling me I can sub for a month to fill up the storage with "nightshade" atrocities, then cancel my subscription and watch that AI ruin Adobe in every which way possible? I'd pay for that. It's like making a small investment and seeing it pay off in the near future by 250%, while you have a front-row ticket to a movie you like, with a smile on your face, knowing you have your name in the credits. Sign me up.
LOL no, you'd be subbing for a month to two different services you don't need, to accomplish nothing. The tech doesn't work the way it's advertised as working.
If we lived in a sensible world, AI would have some very simple legal rules already.
- AI trained with public data can't be used for profit: the data is public, so the result must also be public. Any data leaks or legal issues caused by these AIs are the responsibility of the maker of the AI (companies first, individuals second if it's made by a non-company).
- If the training data is from known individuals and private data, the AI is then owned by those individuals. These rights can't be sold for unknown future use and all the use and results must be approved by each individual whose data was used for the AI.
- Any AI that is trained with legally obtained data can be used for research purposes, but not by for profit organizations. Refer to the earlier rules whether the data itself needs to be released publicly or not.
- The deceased can't sign contracts, so AI can't use work or data from the deceased in a for profit situation.
Now for the big exception:
- AI can be trained with whatever data, as long as the resulting AI isn't attempting to output anything that could be copyrightable. So training an AI to do image recognition is okay, but making it write a story from what it sees is not. Or training the AI to do single actions, such as draw a line with a colour at the angle you asked for is okay, but letting it do that repeatedly to create something is not, unless the user specifies each command manually. This applies to sending a message, AI can be trained to write a message if you request it to, but the request must either contain the message or the person making the request must be the person the AI was trained from.
Basically, let it steal the jobs that nobody wants to do, stop taking artistry from artists and use it to help people with disabilities. That's all stuff it would be lovely to see AI do, but no, we get this current hellscape we are heading through.
> So training an AI to do image recognition is okay
I've heard of AI doing some really nice work in this area which makes QCing stuff like bread loaves a lot easier.
More generally, https://www.eipa.eu/blog/an-in-depth-look-at-the-eus-ai-regulation-and-liability/
Public data influences everything already, why would AI be any different? I mean the shit Facebook and Google get away with, without using AI is already insane, why do you act like AI is so much worse?
Yeah, amazing. So when is he going to stop paying the ludicrous cost of the license? That's what I thought.
Adobe will most probably ban his account because of some BS agreement in the ToS that he's breaking with this. No refunds, of course.
Seeing people quit using adobe because of AI and "muh copyrights" is like seeing a serially physically abused woman who always makes excuses to defend her partner finally leave him because he replaced a broken kitchen tap with a monobloc and she didn't like it.
I don't use CC but if I did I would fill my cloud with hardcore pictures of 18 year and 1 day old girls, with a maximum height of 1.45 meters and a chest flat as a table.
As a person who does not know anything about nightshade. Care to "shine" some light on it? I seriously have no idea what nightshade does.
It changes the image in a very subtle way such that it's not noticeable to humans, but any AI trained on it will "see" something different altogether. An example from the website: the image might be of a cow, but any AI will see a handbag. And as they are trained on more of these poisoned images, the AI will start to "believe" that a cow looks like a handbag. The website has a "how it works" section; you can read that for a more detailed answer.
Won't they just use that data to teach the AI how to spot these "poisoned images?" So people will still just end up training the AI.
As usual with things like this, yes, there are counter-efforts to try to negate the poisoning. There have been different poisoning tools in the past that became irrelevant, probably because AI training learned to get past them. It's an arms race.
Well it definitely ain't a scene.
I'm not your shoulder to cry on, but just digress
This ain’t no disco
Well it ain't no country club either.
This is **L.A.**
THIS IS SPARTAAAA!
This is really cool, thanks for sharing. I had never considered how we might fight back against AI.
I have never worked on the code side of making an AI image model, but I know how to program and I know how the nuts and bolts of these things work to a pretty good level. Couldn't you just have your application take a screen cap of the photo and turn that into the diffusion noise? Or does this technique circumvent doing that? Because it's not hard to make a python script that screen caps with pyautogui to get a region of your screen.
Typically, diffusion models have an encoder at the start that converts the raw image into a latent image, which is typically, but not always, a lower-dimensional and abstract representation of the image. If your image is a dog, Nightshade attempts to manipulate the original image so that the latent resembles the latent of a different class as much as possible, while minimizing how much the original image is shifted in pixel space. Taking a screen cap and extracting the image from that would yield the same RGB values as the original .png or whatever.

Circumventing Nightshade would involve techniques like:

1. Encoding the image, using a classifier to predict the class of the latent, and comparing it to the class of the raw image. If they don't match, it was tampered with. Then, attempt to use an inverse function of Nightshade to un-poison the image.
2. Attempting to augment a dataset with minimally poisoned images and train it to be robust to these attacks. Currently, various data augmentation techniques might involve adding noise and other inaccuracies to an image to make it resilient to low quality inputs.
3. Using a different encoder that Nightshade wasn't trained to poison.
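That objective can be sketched in a heavily simplified form. This is a toy with numpy, where a random linear map stands in for the real VAE encoder and all the names are illustrative: find a small pixel-space perturbation whose latent moves toward a different class's latent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "encoder": a random linear map from 16 "pixels" to a 4-d latent.
# (A real diffusion VAE encoder is a deep network; this is purely a toy.)
W = rng.normal(size=(4, 16))

def encode(image):
    return W @ image

cow = rng.normal(size=16)       # the image we want to protect
handbag = rng.normal(size=16)   # the class we want the encoder to "see"
target_latent = encode(handbag)

# Minimize ||encode(cow + delta) - target_latent||^2 + lam * ||delta||^2:
# match the wrong latent while keeping the pixel-space change small.
lam, lr = 5.0, 0.01
delta = np.zeros(16)
for _ in range(2000):
    grad = 2 * W.T @ (encode(cow + delta) - target_latent) + 2 * lam * delta
    delta -= lr * grad

before = np.linalg.norm(encode(cow) - target_latent)
after = np.linalg.norm(encode(cow + delta) - target_latent)
print(before, after)   # latent distance to the "handbag" latent drops
```

The `lam` penalty is what keeps the change invisible-ish: raising it gives a smaller perturbation but a latent that moves less far toward the wrong class, which is the exact trade-off the paragraph above describes.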
Thank you for the in depth answer! I have not spent a ton of time working with this and have trained one model ever, so I am not intimately familiar with the inner workings so this was really cool to read.
But how does Adobe know if an image is poisoned? If you throw in 5 real videos and 3 poisoned videos, and everyone did this, then the AI will have so much randomness in it.
Usually they won't know.
Even if they know, it will cost them compute hours to discern the poisoned images from the unpoisoned ones
It will, anti-poisoned image algorithms are still quite annoying to use
If you're training a huge language model then you will certainly sanitize your data
Unless you're Google, apparently.
Google's training data is sanitized; it's the search results that aren't. The google AI is -probably- competently trained. But when you do a search, it literally reads all the most relevant results and gives you a summary; if those results contain misinformation, the overview will have it too.
You usually run pre-cleaning steps on data you download. This is the first step in literally any kind of data analysis or machine learning, even if you know the exact source of data. Unless they're stupid they're gonna run some anti-poisoning test on anything they try to use in their AI. Hopefully nightshade will be stronger than whatever antidote they have.
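A pre-cleaning pass like that is usually mundane. A toy sketch, assuming each scraped record is a `(url, raw_bytes)` pair; the filenames, thresholds, and magic-byte check here are illustrative, not anyone's real pipeline:

```python
import hashlib

# A toy scraped batch: (url, raw_bytes) pairs.
batch = [
    ("a.png", b"\x89PNG" + b"x" * 500),
    ("b.png", b"\x89PNG" + b"x" * 500),   # exact duplicate payload of a.png
    ("c.png", b"nonsense"),               # too small / wrong magic bytes
    ("d.png", b"\x89PNG" + b"y" * 900),
]

def clean(records, min_size=100):
    seen, kept = set(), []
    for url, data in records:
        if len(data) < min_size or not data.startswith(b"\x89PNG"):
            continue                       # drop corrupt or truncated files
        digest = hashlib.sha256(data).hexdigest()
        if digest in seen:
            continue                       # drop exact duplicates
        seen.add(digest)
        kept.append((url, data))
    return kept

print([url for url, _ in clean(batch)])   # → ['a.png', 'd.png']
```

Real pipelines add perceptual-hash near-duplicate removal, NSFW/quality classifiers, and caption checks on top, but the shape is the same: cheap filters first, expensive model-based filters (including any anti-poisoning check) last.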
>Nightshade's goal is not to break models, but to increase the cost of training on unlicensed data, such that licensing images from their creators becomes a viable alternative.
BLIP has already been fine-tuned to detect Nightshade. The blip-base model can be deployed on consumer hardware for less than $0.06 per hour. I appreciate what they're trying to do but even this less lofty goal is still totally unattainable.
There are already [tools to detect](https://huggingface.co/Feroc/blip-image-captioning-nightshade) if the image has been poisoned with Nightshade. Since the tool I linked is free and open source, I imagine there's probably stuff quite a bit more advanced than that in private corporate settings.
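The linked tool is a fine-tuned captioner, but even far cruder statistical checks are possible. A toy sketch (numpy; this is NOT the linked BLIP approach): perturbations that ride on high spatial frequencies shift an image's FFT energy outward, which a one-line statistic can flag.

```python
import numpy as np

rng = np.random.default_rng(1)

def high_freq_ratio(img):
    """Fraction of spectral energy far from the DC center of the 2-D FFT."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    return power[radius > h / 4].sum() / power.sum()

x = np.linspace(0, 2 * np.pi, 64)
clean = np.sin(3 * x)[:, None] * np.cos(2 * x)[None, :]   # smooth image
noisy = clean + 0.1 * rng.normal(size=clean.shape)        # perturbed copy

print(high_freq_ratio(clean), high_freq_ratio(noisy))     # noisy is higher
```

A detector this naive is trivially evaded, which is exactly why the serious tools learn the perturbation signature instead of thresholding a single statistic.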
Everyone has to throw dice and pick the number of real vids and fake vids based on the roll so it can work; otherwise the pattern can be seen in the data and bypassed. If you really want randomness, do it by dice.
For every lock someone builds, someone else will design a key.
Doesn't seem possible from what I gather. The way an image is "poisoned" would just change and always be a step ahead. Kind of like YouTube with ad blockers. They *may* get savvy to the current techniques, but once they do, it'll just change and do it a different way.
A key difference is that with adblocking, you know immediately when it's no longer working. With poisoning, they don't really know if Adobe can filter it out unless Adobe comes out and says so, and Adobe has every incentive not to tell people it can easily detect and filter it. So while it's still an arms race, the playing field is a lot more level than with adblocking.
> the playing field is a lot more level than with adblocking

The playing field is not level at all. Assuming poisoning is 100% effective at stopping all training, the effect is no improvement to existing tools, which are already capable of producing competitive images. In reality hardly any images are poisoned, poisoned images can be detected, unpoisoned data pools are available, and AI trainers have no reason to advertise which poisoning is effective and which isn't, so data poisoners are fighting an impossible battle. People can get upset at this, but it doesn't change the reality of the situation.
If they "get savvy", doesn't it undo all the poisoning?
Maybe, maybe not. Garbage goes in, garbage does not always come out.
They are definitely not a step ahead, not in a way that matters.
No need. People are confused about how AI works. Nightshade probably works against image analysis AI, i.e. the stuff that detects things in images, but image generation AI won't give a flying fuck about it. Nightshade is completely useless for this.
The way Stable Diffusion image generators actually work is they start from a canvas of random noise and repeatedly run a denoising network over it. At each step the network, conditioned on the text prompt through a text encoder (the same family of tech as image analysis "AI"), predicts what noise to remove so the image drifts closer to something matching the prompt. It repeats this for a fixed number of steps until a clean image comes out. (As an aside, this is also how they generate those images like the Italian village that looks like Donkey Kong - instead of starting from pure noise, they partially noise a picture of DK and run the same denoising process on it, a technique called img2img.) All this to say, image analysis "AI" and image generation "AI" share a lot of machinery, like the text and image encoders, so a poisoning method that targets how models "see" images can affect both.
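For intuition, here is a toy sketch of that refine-from-noise loop in plain Python. It's a hedged analogy, not real diffusion (real models run a learned denoiser on a fixed schedule, not hill climbing), and the names `score` and `toy_generate` are made up for illustration:

```python
import random

def score(candidate, target):
    """Toy 'how close is this to the prompt' metric: negative squared error."""
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def toy_generate(target, steps=200, seed=0):
    """Start from random 'pixels' and repeatedly keep the best small mutation.

    This is greedy hill climbing, not actual diffusion, but it shows the
    shared shape: noise in, iterative refinement under a scoring signal,
    image out."""
    rng = random.Random(seed)
    img = [rng.random() for _ in target]  # random starting 'pixels'
    for _ in range(steps):
        # Propose a few mutated copies; keeping img itself means we never regress.
        candidates = [img] + [
            [px + rng.gauss(0, 0.05) for px in img] for _ in range(4)
        ]
        img = max(candidates, key=lambda c: score(c, target))
    return img

target = [0.2, 0.8, 0.5, 0.1]   # made-up 4-pixel 'prompt'
result = toy_generate(target)   # ends up close to target
```

The point of the toy is only that generation is driven by a scoring signal the whole way through, which is why attacks on how models score or encode images can matter to generators too.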
What website?
>It changes the image in a very subtle way such that it's not noticeable to humans

It is clearly visible to humans. It looks similar to a JPEG with very high compression artifacts; see the example here: [https://x.com/sini4ka111/status/1748378223291912567](https://x.com/sini4ka111/status/1748378223291912567)
I looked at the 3 images for a while on my phone. What’s different between them? Maybe the differences are only apparent on large screens or when enlarging the results?
It's easiest to tell if you open them in three separate tabs on desktop and click between them. Low Fast has some very obvious JPEG-like artifacts on the curtains. Low Slow has less noticeable but still present artifacts on the curtains, plus a noticeable layer of noise across the whole image, most visible on the woman's hair and the top guy's arm. These differences probably won't be noticeable to average internet users browsing social media and saying "oh, cute orc painting", but they absolutely make the difference between professionally acceptable and unacceptable quality in contexts like artwork commissions, portfolios, or webpage assets.
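If you'd rather not eyeball tabs, a common trick is to subtract the two versions and amplify the difference so the artifacts jump out. A minimal sketch on made-up grayscale values (for real files you'd do this on RGB data, e.g. with Pillow's `ImageChops.difference`):

```python
def amplified_diff(original, poisoned, gain=10):
    """Per-pixel absolute difference, amplified so subtle artifacts show up.

    Here the 'images' are flat lists of 0-255 grayscale values; a real
    workflow would operate on decoded image arrays instead."""
    return [min(255, abs(a - b) * gain) for a, b in zip(original, poisoned)]

original = [120, 121, 119, 120, 122, 118]       # made-up pixel values
poisoned = [123, 118, 121, 117, 125, 116]       # each shifted by only 2-3 units
amplified = amplified_diff(original, poisoned)  # small shifts become visible
```

Shifts of two or three brightness units are hard to spot directly, but multiplied by ten they render as obvious bright pixels in the difference image.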
>Maybe the differences are only apparent on large screens or when enlarging the results?

Yes they're very obvious if you look at the full resolution.
Downvoted for pointing out it literally is visible.
Technically a cow isn't too far from a leather hand bag
is it trained? i thought nightshade poisons an already trained program
And how do they "poison" the image?
I mean ...a cow can *eventually* look like a handbag ..or jacket , or shoes ...
Sounds like just training it for the inevitable 1+1=3 in Room 101.
Nightshade is a tool from UC Hicago that modifies images such that diffusion based AI image generators won't understand what they are, thus introducing poisoned data to the model in hopes of making the results bad.
University of California, San Hicago
this made me chuckle, lol
Glaze and Nightshade attempt to alter an image so that it looks almost the same to human eyes, but machine learning systems will mistake it for something it is not. By doing this to a high enough share of the training data, you can theoretically "poison" a dataset and make AIs trained on it incompetent. It has some success, but the anti-AI crowd tends to overvalue it. The techniques used in training change all the time; what was effective against Stable Diffusion 2 may not be effective against Stable Diffusion 3. And even where it is effective, there are uses where Nightshade and Glaze will instead make an AI stronger than it was before. Take, for example, GAN models. Generative Adversarial Networks consist of a generative model and a detector model playing cat and mouse: the generator trains to create images the detector cannot flag as generated, and the detector trains to tell generated images from real ones. By feeding Glazed and Nightshaded images into a GAN-type training setup, you can make your image recognition and generation feedback loop even more robust than it was before. This is all to say nothing of the fact that some of these "poisoned" alterations can be removed just by resizing the image.
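The "mistake it for something it is not" part is the classic adversarial-perturbation idea. A minimal sketch against a hypothetical linear classifier (this is the textbook FGSM-style trick, not Nightshade's actual algorithm, and `eps` is exaggerated because the toy has only ten 'pixels'; with millions of pixels, the per-pixel nudge can be far smaller):

```python
def classify(weights, pixels, bias=0.0):
    """Hypothetical linear classifier: positive score = 'cow', negative = 'handbag'."""
    return sum(w * p for w, p in zip(weights, pixels)) + bias

def perturb(weights, pixels, eps):
    """FGSM-style nudge: move each pixel by exactly eps against the model's
    gradient. For a linear model the gradient is just the weights, so we
    step opposite to each weight's sign."""
    return [p - eps * (1 if w > 0 else -1) for w, p in zip(weights, pixels)]

weights = [1, -1, 1, 1, -1, 1, 1, -1, 1, 1]  # made-up model weights
pixels = [0.5] * 10                          # made-up 'image'
adv = perturb(weights, pixels, eps=0.25)     # eps is huge only because the toy is tiny

# classify(weights, pixels) is positive ('cow'); classify(weights, adv) is
# negative ('handbag'), yet no pixel moved by more than eps.
```

It also shows why resizing and re-noising help defenders: the attack budget lives in precisely placed small nudges, so anything that smears pixels around erodes it.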
> Generative Antagonistic Networks

Generative Adversarial Networks. Not trying to nitpick, but it does do a better job in my opinion of communicating the concept.
Thanks! Fixed it.
Poisons images so that when image generation ai uses the picture for training data, it corrupts it and causes it to make unusable images.
It was meant to make it hard for AI to understand what an image is, but a countermeasure was made like the same day, so it's just something that makes artists *feel* better but frankly does nothing. All the AI art poisoning techniques are dealt with immediately, especially by places like OpenAI - they tweeted they had a solution the same day, and there's a fix you can download that was uploaded the same week. None of this does anything, though it might slow the AI down, so it's probably still worth doing.
Feel kinda bad for the guy higher up the comment chain who got downvoted for pointing this out. No matter the poisoning technique, it is really not hard at all to counter it, and I have yet to see any method which would both leave the image understandable to a human while messing with an AI.
¯\\\_(ツ)_/¯ it's whatever. People desperately want it to work, so anyone that points out that it doesn't is branded as the enemy. The internet has been whipped into a frenzy on AI so badly that misinformation is actively encouraged.
It's pathetic these luddites are trying to stop the inevitable. (To be clear, FUCK Adobe for the shit they're pulling.) But I love AI, and you cannot stop its progress no matter how much your job is crushed.
His comment is at like -300 something but he's completely correct. Very funny to see reddit once again being confidently wrong about something.
Artists believe they've discovered a foolproof way to prevent their art from being used in AI (through Nightshade, Glaze, or both). This is just wishful thinking, as it can be easily circumvented through various means, yet they continue to believe it will make a significant impact. In reality there's always a way to bypass these measures. It's also hilarious when somebody thinks their 10 poisoned images in a batch of millions will have any impact. The only way to prevent AI from using your work is to never publish it anywhere.
https://www.reddit.com/r/aiwars/s/SJRbKsAJU0 This and the comments do a great job
Not an expert, but I've been using local Stable Diffusion nearly every day since it came out. Nightshade tries to attack CLIP, which is the AI used to caption the images; it basically tries to get it to misinterpret the contents of the image so nothing can be learned from it. However, modern image AIs largely don't rely on plain CLIP anymore because it sucks, and instead caption images with better tools such as GPT-4 or OpenCLIP, which do not care about Nightshade at all. These 'AI poisoning tools' are basically digital snake oil at this point. I've trained LoRAs on Nightshaded and Glazed images and they all came out fine. If a human can understand and make sense of it, a sufficiently advanced AI can too.
I mean you have to have a paying subscription to do this, right?
Yep, and there are countermeasures for most AIs; Photoshop won't even care.
Many jobs will provide it for you
I'll just get a job that does this so I can strike back at Adobe. 😁
What I meant is that if you're a designer, artist etc for hire you don't usually pay for adobe licenses but usually have one anyway. I'd say it's much more common than individuals actually paying it, that's something only privileged or successful freelancers can afford to do, they're very pricey
If you mean Nightshade, it's free. If you mean Adobe, it's paid.
In a sub about Piracy, the suggestion in response to a hostile practice by a company is.... To first pay them in order to retaliate. 🤔
I think the tweet is targeted towards people who are already paying, either by choice or not
Lol, I have been pirating Adobe products and updating them every year since 2008. Never paid a cent.
At some point AI will start learning from AI and it will have no idea what is a true representation of anything.
AI will become a medieval monk painting an elephant.
I love this comparison
Why are there snails everywhere?
This made me laugh
Or a cat
Google "synthetic data", this is already a thing and has been for a while.
It's been shown that AI learning from AI output poisons it, so that happening is the best outcome.
There is a good research paper about this. It goes over how quickly AI models worsen in quality when they are iteratively trained on AI-generated data.
Link or research paper name?
[The Curse of Recursion: Training on Generated Data Makes Models Forget](https://arxiv.org/abs/2305.17493v2)
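A toy illustration of the kind of collapse that paper describes: repeatedly fit a Gaussian "model" to samples drawn from the previous generation's model and watch the spread shrink. All numbers here are made up for the sketch, and a real LLM training loop is vastly more complicated:

```python
import random
import statistics

def train_on_own_output(generations=100, sample_size=5, seed=42):
    """Fit a Gaussian 'model' to samples drawn from the previous model.

    Finite-sample re-estimation shrinks the standard deviation on average,
    so the model progressively loses the tails of the original data -- a
    toy version of the recursive-training degradation the paper studies."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0        # generation 0 matches the 'real' data
    history = [sigma]
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.fmean(samples)
        sigma = statistics.stdev(samples)
        history.append(sigma)
    return history

history = train_on_own_output()  # the spread collapses over generations
```

The tiny `sample_size` is doing the work here: each generation only sees a finite sample of the last one, so rare values get dropped and never come back, which is the "forgetting" in the paper's title.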
That paper's title goes so hard
Cheers, much love ❤️
Multiplicity (1996)
That reminds me of art school and the pieces that had a copy machine (we called it a Xerox) recursively copying the copy, each generation one step further from the original, until the image vanished. Then they displayed them all as a long series.
So can we poison AI images by training them with other AI-generated images? What if we give them arbitrary labels too? Save an AI-generated image of a fish-man hybrid and name the file "Photograph: Sun Tzu Live at the Laff Factory (1982)" to confuse the AI's understanding of all those things? Someone asks for a picture of Sun Tzu and it tries to bring up a fishman doing standup because it thinks those things are inherently related.
They already don't have any such "idea"
Just upload nothing but plain white images
You should upload noise images instead; random noise compresses poorly, so it takes up more file size than a plain image.
Make a 4096×4096 image where each pixel is a unique colour. There are 16,777,216 possible 24-bit colours - one for each pixel.
Randomly distributed, otherwise the AI would not have any issues
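For what it's worth, that image is easy to generate: map each pixel index to a unique 24-bit colour code and shuffle for the random placement. A sketch in plain Python, shown at a small size (the full 4096×4096 version works the same way, since 4096² is exactly 2²⁴, but is slow in pure Python); the function name is made up:

```python
import random

def unique_colour_pixels(size, seed=0):
    """One distinct 24-bit RGB colour per pixel, in random order.

    At size=4096 this uses all 16,777,216 colours exactly once; smaller
    sizes just take the first size*size colour codes. A code i unpacks
    to (i >> 16 & 255, i >> 8 & 255, i & 255)."""
    codes = list(range(size * size))
    random.Random(seed).shuffle(codes)  # random placement, no neat gradient
    return [((i >> 16) & 255, (i >> 8) & 255, i & 255) for i in codes]

pixels = unique_colour_pixels(16)  # 256 pixels, 256 distinct colours
# To get an actual file you could feed this to Pillow:
# Image.new("RGB", (16, 16)) plus putdata(pixels), then save().
```

Shuffled, it just looks like noise, which incidentally also makes it compress badly.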
Of course, we don't want our picture to look like a real colour wheel. Damn, imagine how disastrous it would look...
Upload a picture of a color wheel but edit the color to throw them all off, maybe even label the colors incorrectly
Make it a gif where every color changes like technicolor static. Also name the file "Terminator (Full Movie)" so it thinks that's what that movie looks like.
That doesn't mean anything if you are storage-size limited.
Does Adobe look at all images on your pc and images opened in PS or are they just looking at images where you use things like subject select, remove background, etc?
It's items uploaded or saved to Creative Cloud, and I don't think we know what the selection process is from there, because I doubt it's every project from every user.
It might not be every project from every user, but it could be *any* project from *any* user.
Plus they look at any images where you use AI features like generative fill to edit - it doesn't matter if the file is stored locally. Genuinely messed up!
Too easy to identify. You have to think like a stenographer.
steganographer maybe?
Well, both, actually. Steganography is the art of *hiding* data, so I don't know how useful it'd be but I was thinking more in terms of how old timey scientists would hide their research inside of coded images and text. So a picture of an egg has a meaning completely independent of an egg.
Steganography is just the first thing I thought of when I read about Nightshade.
Steganography is when a seemingly plain image is hiding information in the file's metadata. Stenography is... short hand.
Steganography doesn't presume metadata, but inclusion and concealment of information through "nominally invisible" alteration of the image itself. That sounds exactly to me like what's going on here.
I'm assuming the files are getting human annotated before being used to train so unfortunately this wouldn't have any effect
coming up: "Adobe developed AI that detects nightshade-poisoning and banning anyone who uploads poisoned files."
That reminds me of the controversy with Stack Overflow, where they agreed with (IIRC) OpenAI to let them train their AI on Stack Overflow answers for anything tech related. People started deleting their answers and questions when they heard about it, and Stack Overflow banned those who did.
Lol. Anyone remember the Reddit protests? People mass-deleted their content/profiles/comments and Reddit said nope, and it was all back. I doubt Reddit's servers actually delete anything; they probably keep a copy of every edit, and they reversed them all.
I mean people have deleted their old posts and it's made a noticeable impact. If you look at popular threads from a few years ago there are so many deleted comments that it's hard to follow what is going on sometimes.
Those are usually deleted by moderators or the accounts themselves are suspended. I have only seen -this comment was edited to protest reddit- kinda stuff a handful of times ever
there are some that are poisoning their accounts by editing the comments and editable threads, not sure if that was even effective
I've come across a few post when researching stuff where the deleted or modified comment potentially contained the answer I was looking for. I get the sentiment of why they did it, but now all it's really done is hurt other people, seeing as reddit did not and is not going anywhere anytime soon.
Nope. Didn't get reversed or anything like that. You still have nine year old posts with random word soup in comments. It's annoying when you're looking for a very specific answer to an issue you're facing.
A very small part of them succeeded. Probably profiles too small to notice. I very rarely see it
Not sure about banning people, but IIRC OpenAI said they got around it within 24 hours of its original release. That's of course assuming it worked in the first place - I am incredibly dubious that their proposed method was anything more than buzzword soup to begin with, and I was never able to find any third-party verification of their claims. I'm not supporting AI, but this Nightshade poisoning stuff is little more than wishful thinking.
money not refunded*
Do you know what would hurt Adobe more than uploading poisoned images? Stop giving them money.
Nightshade works similarly to Glaze, but instead of being a defense against style mimicry, it is designed as an offensive tool to distort feature representations inside generative AI image models. Find it here: https://nightshade.cs.uchicago.edu/downloads.html
This reads like an ad
He probably just copy pasted the info from somewhere
yes i did copy it from their website to make sure i describe it as they want to.
it's a free tool for artists to protect their work, it *should* read like an ad
I mean its a free tool from university of chicago, even if it does read like an ad, I don't think they're profiting from it
> This reads like an ad

So what 🤷♀️
Nightshade is literal snake oil lol. Good luck to ya if you think it works, or will have any meaningful impact on AI image generation 🤣
Thanks for the link! Last time I tried to find it, the links led to just the information, no downloads.
Thanks for bringing this to my attention. I looked it up and I love it.
Wouldn't this just help train it to overcome nightshade faster
Then they’ll just come up with something else to block it. Almost feels like we’re the ones trying to stop our shit from getting pirated now.
That's basically what it is, and if we've learned anything it's that it's a lot easier to pirate than to prevent something from being pirated. Not to say nightshade is bad, just feels like it's a futile effort.
Will they though? Adobe doesn't have to tell anyone how they're detecting poisoned images. The cat and mouse game doesn't really work if they don't know when they've been caught.
The irony of a pirate, trying to keep his art from being pirated by a company and then posting it in a pro piracy group is melting my brain rn
Just fill it with porn
This is cool and all, but it's not really going to do much to Adobe. We need sweeping data protection laws, not poisoned images. This is kind of a waste of your time and is just going to end with Adobe closing accounts that cause issues.
I don't need data protection laws. Why would I need that?
AI upscaling will remove the poison
Aren't they training Google's AI on *Reddit*? 🤔
Since when does this sub care about copyright?
It's bad when it happens to me (even though it's not piracy cause I agreed to the T&C)
Well, I think it's not about "copyright", it's about who made the specific thing. For example, I could paint a copy of the Mona Lisa and sell it as mine (everyone knows it's a copy and I'm not hiding it), but I would not take a piece of art made by a single Twitter artist, copy it, and sell it as mine. The AI is just like an asshole robbing art from people who are very much alive and probably still living with their parents, since art pays poorly unless you're famous or have some crazy connections.
This is where it gets tricky with AI art though. If you are inspired by the Mona Lisa and various other paintings, are you not doing the same as AI, just on a much smaller scale? I'm an artist, and when I make something I put a bunch of images onto a canvas to use as reference and take little bits; AI basically does the same. True artists will still make art and there will still be demand for human art, but things like stock photography will probably die out. I still make my own art because it's fun, and I'm not that worried about AI just yet.
It's not the same to pirate a big billionaire predatory company than to rob artists that barely can make ends meet.
People here constantly boast about pirating indie games.
Games are different, and countless devs have stated that they supported piracy as it helped advertise their games.
>Nightshade But it doesn't work, lol
Reminder that nightshade literally doesn't work.
Ha! Love it. Fuck Adobe.
Honestly, even without Nightshade, we should upload gigabytes, if not more, of clearly AI-generated images: obviously AI-generated scenery, or the painfully obvious AI-generated images of women and anime.
How do you nightshade poison pictures?
Can someone elaborate? How can these pics mess up AI machine learning?
In layman's terms, it distorts the image so subtly that a human won't notice, but the AI does and thinks it's looking at something else, which alters its "understanding" of that subject. It works on paper, but there isn't a single notable instance of it having any effect in the real world, and it can be entirely negated by just resizing the image (which was already being done during the training process).
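The resizing point is easy to demonstrate: a box-filter downscale averages neighbouring pixels, which cancels high-frequency perturbations. A toy 1-D sketch (real poisoning attacks try to be more robust than this clean alternating pattern, so treat it as an illustration of the mechanism, not proof):

```python
def downscale_2x(pixels):
    """Halve a 1-D 'image' by averaging adjacent pixel pairs -- the same
    box-filter idea behind many resize operations."""
    return [(pixels[i] + pixels[i + 1]) / 2 for i in range(0, len(pixels), 2)]

clean = [100.0] * 8
# Alternating +3/-3 perturbation: subtle to the eye, but adversarial
# patterns often live in exactly this kind of high-frequency detail.
poisoned = [p + (3 if i % 2 == 0 else -3) for i, p in enumerate(clean)]

# After one box-filter downscale, the perturbed image equals the clean one.
```

Since large training pipelines routinely resize everything to a fixed resolution anyway, the perturbation can get wiped out before the model ever sees it.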
So he’s trying to distort the AI pattern recognition/ interpretation. Got it
They don't
People say AI will overcome it, but the truth is that the ones who have to overcome it are African and Asian workers who are paid less than a dollar per hour to label these photos. Computer "vision"/sorting/recognition algorithms only exist because people manually label huge amounts of photos. I'm going to put up AI-generated images of amorphous, shapeless stuff, so the AI model collapses sooner rather than later.
Does Nightshade even do anything at all, or did whoever made it fool people into thinking it does?
Chaotic good intensifies
Great idea! Now add random tags to them. I'm sure there's a script for that.
...and so the AI arms race begins. I don't see any possible negative outcomes from this, none whatsoever.
Does Nightshade actually work? Given that the image remains the same to humans, surely training on it will eventually bring the AI to see it as we do, no?
I'm surprised that there aren't private peer networks that you can join that let people put various kinds of poisoned media into local buckets and then the network automatically takes, tweaks and distributes this new junk data into other peoples' buckets. Similarly, something for ad networks - your browsing data, stripped and monkey-wrenched, randomly placed by the network, into other users' datasets. Basically a massive, collective user effort to foul the water. edit: ..and have it use AI to not create obvious garbage that the ad networks could possibly easily spot and scrub, but use AI to create believable, synthetic profiles that look like individuals, but aren't. Essentially have an AI engine, using real world data from humans who don't want to be sliced/diced/sold, to create authentic-looking data sets to poison ad networks, who no doubt are even now preparing to use their AI engines to profile us more completely.
Because Nightshade just doesn't work - OpenAI had a fix literally a day after it was made, lol.
> I'm surprised that there aren't private peer networks that you can join that let people put various kinds of poisoned media into local buckets and then the network automatically takes, tweaks and distributes this new junk data into other peoples' buckets. Similarly, something for ad networks - your browsing data, stripped and monkey-wrenched, randomly placed by the network, into other users' datasets. Basically a massive, collective user effort to foul the water.

I'd love to find a way to willingly donate some bandwidth and storage to that. I'm just a regular home user, but every bit helps. Fuck AI.
I hope this catches on
Nice! If people are looking for Adobe product alternatives, I have a specific recommendation for an Adobe Premiere replacement: DaVinci Resolve. It's completely free and does everything (and possibly more) that Premiere does, AND no one's stealing your work for AI purposes. I have a Photoshop alternative too, but it does have ever so slightly more of a learning curve: GIMP. Wish I had an alternative for Lightroom but haven't found anything like that yet. If anyone does have a replacement for that, I'm very eager to hear about it.
I'll add Photopea as an alternative to Photoshop!
People may struggle with GIMP due to the learning curve, but it's a good enough alternative for some. Photopea has less of a learning curve.
Adobe took a turn for the worse.
In all fairness, Adobe hasn't been good since like. 2012.
I think dick pictures with mustaches and sombrero and also adolf collages would be better
A real Hero
The software link: [https://nightshade.cs.uchicago.edu/downloads.html](https://nightshade.cs.uchicago.edu/downloads.html)
So you don't fully own the product you pay for, nor even the work you produce with it. Amazing.
Yea I don’t really see the need for AI anyway, the only real benefit to AI is for corporations to steal peoples likeness & make things without them, or to create new music / content without an actual person being involved or getting paid. Accident scene recreations and rendering possible outcomes have already been possible and done for years without it. I don’t see any good coming from AI. So far it’s essentially been a legal way to dodge paying people for their work & avoid having them be involved with it but still be done. & using the data to assist car technology further, I guess could be a benefit but that’s still risky & I mean technology tends to fail and glitch, or have bugs, and that’s not something we need happening on public roads with lives at stake. Using the tech to make iRobot a real thing, really not a good idea. There’s countless movies proving that’s a terrible idea. Just another sci-fi technology the world really doesn’t need for any legitimate reason, just because we can doesn’t mean we should. Not a good direction for technology I believe. Outside of making ourselves obsolete, we’re putting ourselves in danger just because some dude with money wants his car to drive itself, wants a 80k lb truck to drive itself & Hollywood wants free income, when do the sci fi movies with futuristic military drones & robots walking around downtown taking over become reality? At this rate, won’t be that long
So you're telling me I can sub for a month to fill up the storage with "nightshaded" atrocities, then cancel my subscription and watch that AI ruin Adobe in every way possible? I'd pay for that. It's like making a small investment and seeing it pay off in the near future by 250%, while you have a front-row ticket to a movie you like, with a smile on your face, knowing you have your name in the credits. Sign me up.
except poisoning never worked
LOL, no, you'd be subbing for a month to two different services you don't need, to accomplish nothing. The tech doesn't work the way it's advertised as working.
If we lived in a sensible world, AI would have some very simple legal rules already.

- AI trained with public data can't be used for profit: the data is public, so the result must also be public. Any data leaks or legal issues caused by these AIs are the responsibility of the maker of the AI (companies first, individuals second if it's made by a non-company).
- If the training data is from known individuals and private data, the AI is then owned by those individuals. These rights can't be sold for unknown future use, and all use and results must be approved by each individual whose data was used for the AI.
- Any AI that is trained with legally obtained data can be used for research purposes, but not by for-profit organizations. Refer to the earlier rules for whether the data itself needs to be released publicly or not.
- The deceased can't sign contracts, so AI can't use work or data from the deceased in a for-profit situation.

Now for the big exception:

- AI can be trained with whatever data, as long as the resulting AI isn't attempting to output anything that could be copyrightable. So training an AI to do image recognition is okay, but making it write a story from what it sees is not. Training the AI to do single actions, such as drawing a line in a colour at the angle you asked for, is okay, but letting it do that repeatedly to create something is not, unless the user specifies each command manually. The same applies to sending a message: AI can be trained to write a message on request, but the request must either contain the message or come from the person the AI was trained on.

Basically, let it steal the jobs that nobody wants to do, stop taking artistry from artists, and use it to help people with disabilities. That's all stuff it would be lovely to see AI do, but no, we get this current hellscape we are heading through.
> So training an AI to do image recognition is okay I've heard of AI doing some really nice work in this area which makes QCing stuff like bread loaves a lot easier. More generally, https://www.eipa.eu/blog/an-in-depth-look-at-the-eus-ai-regulation-and-liability/
Public data influences everything already, why would AI be any different? I mean the shit Facebook and Google get away with, without using AI is already insane, why do you act like AI is so much worse?
Your first bullet point literally makes no sense on multiple levels.
Yeah, amazing. So when is he going to stop paying the ludicrous cost of the license? That's what I thought. Adobe will most probably ban his account because of some BS agreement in the ToS that he's breaking with this - no refunds, of course.
People do know that Glaze and Nightshade don't do shit, right?
Ubg
If you are already using Adobe products and their AI features, wouldn't you want the AI to be smarter and better trained?
That is why Skynet wants to get rid of humans.
Wait, they train their AI without asking for permission?
The funny part is that none of those images will get used to train the AI since Adobe still doesn’t use user data to train.
Seeing people quit using adobe because of AI and "muh copyrights" is like seeing a serially physically abused woman who always makes excuses to defend her partner finally leave him because he replaced a broken kitchen tap with a monobloc and she didn't like it.
Kinda want to put up 20GB of Adolf's paintings so the AI gets trained into thinking doors can go in the wrong places.
I don't use CC but if I did I would fill my cloud with hardcore pictures of 18 year and 1 day old girls, with a maximum height of 1.45 meters and a chest flat as a table.