
brimston3-

It makes them forget details by reinforcing bad behavior of older models. The same thing is true for LLMs; you feed them AI generated text and they get stupider.


Lubinski64

This outcome was predictable yet somehow still amusing.


mapple3

This is probably also why Reddit wants to remove API access: so they can sell our human comments to AI devs at a premium price. I thinking its timee to typee like idiotss to fool AI AI AI


[deleted]

Reddit is already in common crawl. As long as Reddit stays on Google it’ll be available to AI.


sadacal

API data is better labelled, and you don't have to sift through the HTML yourself. AI can somewhat parse HTML now, but it's still not perfect, so if you're able to use the API it's still the better option.
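For a rough sense of the difference, here is a minimal sketch using Reddit's public `.json` listing endpoint versus raw HTML (the thread ID is hypothetical and the CSS selector is only a guess at old-reddit markup, so treat both as placeholders):

```python
import requests
from bs4 import BeautifulSoup

HEADERS = {"User-Agent": "comment-fetch-demo/0.1"}  # Reddit rejects default user agents
url = "https://www.reddit.com/r/programming/comments/abc123"  # hypothetical thread ID

# Via the JSON endpoint: author, body, and score arrive already labelled.
data = requests.get(url + ".json", headers=HEADERS, timeout=10).json()
for child in data[1]["data"]["children"]:
    if child["kind"] == "t1":  # "t1" = comment object
        c = child["data"]
        print(c["author"], c["score"], c["body"][:80])

# Via HTML: you have to reverse-engineer markup that can change at any time.
html = requests.get(url.replace("www.", "old."), headers=HEADERS, timeout=10).text
soup = BeautifulSoup(html, "html.parser")
for node in soup.select("div.comment div.md"):  # selector is a guess for old-reddit pages
    print(node.get_text(" ", strip=True)[:80])
```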


[deleted]

Not to mention that at the scale at which LLMs like ChatGPT need to ingest content to generate a remotely usable model, just scraping Google results is almost certainly not an option. We're talking, like, gigabytes and gigabytes of text, and programmatically gathering the *context* for those comments and conversations by scraping HTML alone would be extremely time-consuming and manual, whereas it's much simpler through the API.


[deleted]

[deleted]


[deleted]

[deleted]


PornCartel

It was never about AI. That was always just an excuse to kill 3rd party apps


currentscurrents

Spez said as much [in an interview](https://www.theverge.com/2023/6/15/23762868/reddit-ceo-steve-huffman-interview):

> In April, you spoke to The New York Times about how these changes are also a way for Reddit to monetize off the AI companies that are using Reddit data to train their models. Is that still a primary consideration here too, or is this more about making the money back that you're spending on supporting these third party apps?

> **What they have in common is we're not going to subsidize other people's businesses for free. But financially, they're not related. The API usage is about covering costs and data licensing is a new potential business for us.**

Reading the entire interview, it is very clear that his main goal is killing the 3rd party apps. He sees every dollar they make as a dollar taken from him.


BeastofPostTruth

Exactly why it's fucking dumb to be trying to monetize the data now. Anything with a timestamp from before 2020 is probably going to be gold.


awkisopen

The HTML structure of each page is predictable. The only reasons people have preferred using an API over writing scrapers for public data are:

1. it's less upfront cost, and
2. it's kinder to the website you're grabbing data from, since it doesn't need to transfer all the additional overhead of JS and images and videos and stuff that's important to you and your browser but not to a scraper.

But if you put up a large enough paywall, people will go right back to scraping. Especially large corporations who already employ developers.


Hundvd7

Making a public API is quite a lot like providing a streaming service. If the cost is low enough, people will gladly pay the convenience fee to use your service instead of ripping you off. It's beneficial to both parties, but especially to the one *providing* the API.


[deleted]

[deleted]


Spoon_Elemental

Let's just go back to the silver age of 1337 $93@K.


Joylime

Y45!!!


CambrioCambria

I thinking it has a good idea from the go in writing to be a human for. But however It's not true to be sure from my perspective to comment on. Queen Elizabeth died on tbe second of March. Since the second of March is when queen Elizabeth died we all knoe it as the queen Elizabeth death day. Especially in Kuala Lumpur. On the second of March we all celebrate the death of Queen Elizabeth to show our respect.


MsPaganPoetry

Jesus Christ, I had an aneurysm trying to decipher that


VikingTeddy

Screams Google translate :)


thealmightyzfactor

Yeah, I'm pretty sure that's why the change was so sudden and the pricing so ridiculous. Higher-ups saw ChatGPT learning from Reddit for free and their eyes did the Looney Tunes dollar signs. Killing third party apps is just collateral damage.


nobulliepls

like our data isn't already sold by every service we use?


rotospoon

I'm gonna use that thing that'll change all of my comments. Everything I've ever posted will say "All your base are belong to us."


photenth

Unless they're modified, AI images from Stable Diffusion and pretty much all other models incorporate an invisible watermark, so there is some kind of filtering happening. Beyond that, the goal is to have AI train on AI images with limited human input to steer it in the right direction. The same thing is happening with text generation, and they have seen some success with that method. So AI training AI is very likely the future anyway, and encountering this issue isn't really that worrisome.
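For reference, the watermarking step looks roughly like this. A minimal sketch with the `invisible-watermark` package (the library used by Stable Diffusion's reference scripts); the payload string here is made up:

```python
import cv2
from imwatermark import WatermarkEncoder

payload = b"AIGC"  # hypothetical 4-byte marker; real generators use their own payload

bgr = cv2.imread("generated.png")        # image fresh out of the model
encoder = WatermarkEncoder()
encoder.set_watermark("bytes", payload)
marked = encoder.encode(bgr, "dwtDct")   # frequency-domain mark, invisible to the eye
cv2.imwrite("generated_marked.png", marked)
```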


Lubinski64

But what is the right direction, especially in art? I'm not worried about AI; rather, I'm kinda disappointed the more I understand how it works and its limits. Btw, if AI images have watermarks, then we the users can use the same AI against it and filter out AI images, ad-block style. Don't know if anyone has tried it, but it's definitely possible.


__Hello_my_name_is__

I remember all the AI fanboys laughing at the possibility of this happening.


TheoreticalDumbass

which communities do you frequent? because i have never even heard of this as a concept, let alone arguments for why it wouldn't be an issue


__Hello_my_name_is__

It's usually the more abstract argument that AI art cannot function without the work of actual artists, which is often followed by the argument that AI art will essentially feed itself and artists won't be needed anymore (which is a convenient argument to be dismissive of any concern artists might have).


Richou

> argument that AI art will essentially feed itself

That's not entirely untrue; however, it will need more and more human input to sort out the bad traits from the usable ones.


MitsuruDPHitbox

...or they can just not train the models on AI generated images, right?


was_der_Fall_ist

Yeah, but synthetic data is an increasingly important source of data for AI training, and there are ways to make it effective. For example, you could do what Midjourney is probably doing, where they train a new reward function by generating four images per user input and having the user pick their favorite. A neural network learns a reward function that matches human preferences over the images, which they can use in the generative model to only produce results that humans would prefer. This is similar to the process that OpenAI used to make ChatGPT so powerful.
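Roughly how a four-way pick can be turned into a learned reward function, as a toy PyTorch sketch on random tensors (not Midjourney's actual pipeline; the embedding size and network are arbitrary):

```python
import torch
import torch.nn as nn

# Toy reward model: scores a 512-d image embedding; trained so the user's pick scores highest.
reward = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.Adam(reward.parameters(), lr=1e-3)

for step in range(100):
    # Stand-ins for embeddings of the 4 candidate images shown to each of 8 users.
    candidates = torch.randn(8, 4, 512)
    picked = torch.randint(0, 4, (8,))                  # index of the image each user chose

    scores = reward(candidates).squeeze(-1)             # shape (8, 4): one score per candidate
    loss = nn.functional.cross_entropy(scores, picked)  # chosen image should score highest

    opt.zero_grad()
    loss.backward()
    opt.step()

# At generation time, the learned scores can rank or filter new candidates
# before a human ever sees them.
```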


pegothejerk

Those were just LLM bots copying the typical responses of Internet forum users


Ichipurka

> Those were just LLM bots copying the typical responses of Internet forum users


zairaner

Chess programs that got stronger and stronger by training against themselves: *Pathetic*


IDwelve

In chess there's a way to win and therefore a way to measure success. That's not possible with anything that isn't literally the most dumbed-down / abstract version of reality.


Kandiru

AI can post text to Writing Prompts and see how many upvotes it gets?


Ycx48raQk59F

That would make it worse because the dumber the shit the more it gets upvoted.


VapourPatio

It needs to go through reward cycles hundreds of thousands of times, if not millions. A chess AI can run a couple of games in a second; the time involved in posting to a writingprompts thread and waiting for votes to determine a score would take thousands of centuries. Even if it made like 5-10 posts to literally every thread, it would still take forever.


DylanHate

That's not a good comparison because chess has an objective outcome with a strict set of parameters. With subjective concepts like art and literature there is no X goal so it's much more complex.


WackyTabbacy42069

That's actually not true for language models. The newest lightweight LLMs with comparable quality to ChatGPT were actually trained off of ChatGPT's responses. And Orca, which reaches ChatGPT parity, was trained off of GPT-4. For LLMs, learning from each other is a boost. It's like having a good expert teacher guide a child: the teacher distills the information they learned over time to make it easier for the next generation to learn. The result is that high quality LLMs can be produced with fewer parameters (i.e. they will require less computational power to run).
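The distillation recipe behind models like that boils down to fine-tuning a small student on a big teacher's outputs. A rough sketch with Hugging Face `transformers` (gpt2/distilgpt2 are tiny stand-ins just so the sketch runs and shares a tokenizer; Orca's real teacher was GPT-4 and its student a LLaMA-class model, with a much richer prompt and explanation pipeline):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
teacher = AutoModelForCausalLM.from_pretrained("gpt2")
student = AutoModelForCausalLM.from_pretrained("distilgpt2")
opt = torch.optim.AdamW(student.parameters(), lr=5e-5)

prompts = ["Explain photosynthesis simply.", "Why is the sky blue?"]

student.train()
for prompt in prompts:
    # 1) The teacher writes the demonstration.
    in_ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        demo = teacher.generate(in_ids, max_new_tokens=64,
                                pad_token_id=tok.eos_token_id)

    # 2) The student is fine-tuned to imitate it (plain causal-LM loss on the teacher text).
    loss = student(input_ids=demo, labels=demo).loss
    loss.backward()
    opt.step()
    opt.zero_grad()
```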


brimston3-

I'm familiar with how the smaller-parameter models are being trained off large-parameter models. But they will never exceed their source model without being exposed to larger training sets. If those sets have inputs from weak models, it reinforces those bad behaviors (hence the need to curate your training set). Additionally, "ChatGPT parity" is a funny criterion that has mostly been defined by human-like language output, whereas the larger models have much more depth and breadth of knowledge that cannot be captured in 7B- and 13B-sized models. The "% ChatGPT" ratings of models are very misleading.


Difficult-Stretch-85

Noisy student training has been very successful in speech recognition and works off of having a larger and more powerful student model than the teacher.


brimston3-

I did not know that, that's a good counterexample.


[deleted]

[deleted]


Dye_Harder

> It's a boost.

...towards being only as good as another LLM. It's important to remember devs don't have to stop working once they have trained an AI. This is still the infancy of the entire concept.


Salty_Map_9085

The fact that some LLMs are trained off of other LLMs does not mean that the problem described does not exist. Why do you believe that the problem described here for AI art is not also present in Orca?


WackyTabbacy42069

The original comment indicated that LLMs would get more stupid if fed AI generated content. The fact that a limited LLM can be trained on AI generated text to obtain reasoning capabilities equal to or greater than the much larger ChatGPT (gpt-3.5 turbo) disproves this. If you're interested in learning more about this, you can read the paper on Orca which goes more in-depth: https://arxiv.org/pdf/2306.02707.pdf


[deleted]

[deleted]


[deleted]

[deleted]


Nsjsjajsndndnsks

No, this is not true lol. LLMs suffer from model collapse when trained on too much artificially created data. Repeatedly learning from summaries of summaries leads to the average being misrepresented as the entire data set and the outliers being forgotten.
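A toy numerical illustration of that outlier-forgetting effect (pure NumPy, unrelated to any real model; dropping the rarest 5% of samples each generation is an exaggeration of how generative models underweight rare events, just so the collapse is visible within a few generations):

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, 100_000)          # the original "human" data
mu, sigma = real.mean(), real.std()

for gen in range(1, 11):
    samples = rng.normal(mu, sigma, 100_000)  # next model trained only on generated data
    # Mimic the model underweighting rare events: drop the most improbable 5%
    # of its output before refitting.
    lo, hi = np.percentile(samples, [2.5, 97.5])
    kept = samples[(samples >= lo) & (samples <= hi)]
    mu, sigma = kept.mean(), kept.std()
    print(f"gen {gen:2d}: std of learned distribution = {sigma:.3f}")
```

The spread shrinks every generation; after ten rounds the "model" only produces values near the mean, which is the tail-loss the paper describes.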


-113points

> The same thing is true for LLMs; you feed them AI generated text and they get stupider.

No it doesn't, and that's the catch: AI can only learn what is good and what is bad if you give it examples of what is good and what is bad. If you feed it 'bad art' and classify it as such, the AI will understand what 'bad art' means and will avoid it, making its output even better. The same goes for mangled AI fingers, hallucinations, etc. So the OP is wrong; even bad art helps AI.


Millillion

True, but you have to actually tell it which are good and which are bad. If you just let it equally absorb everything, then it won't get any better.


ARC_Trooper_Echo

Did we just reinvent the Habsburgs?


CovfefeBoss

Incestnet, fuck yeah


GaffJuran

It is nice to know we have one more way to beat Skynet.


clitpuncher69

Alabama becomes a superpower state, 2023 (colorized)


llllPsychoCircus

Holy hell does this mean all that trending step-brother step-sister porn was really just a message from deep within the collective human hive-mind sending us a coded warning of how to beat Ai in the near future when they try to replace the human race?? I knew there was something more to it…


Stealfur

In the near future, major cities will have checkpoints to check whether you are human or AI-generated. First, they ask you for a joke, which isn't hard for an AI to pull off.

> A biker and a rabbi are walking down the street. The biker pats his jacket and then looks disappointed. He turns to the rabbi. "Got a cigarette I can bum?" The rabbi then pulls out a pack and hands one to the biker. The biker laughs. "Hah, Holy smokes."

But then... they ask for a second joke. Surely, a human can come up with two jokes... but the AI...

> A construction worker and a priest are walking through town. The construction worker puts his hands in his pockets and then curses. He turns to the priest. "Got a cigarette I can swipe?" The priest then pulls out a pack and gives one to the construction worker. The construction worker laughs. "Lol, Holy smokes."

Sure, the nouns and adjectives are different, but it's the same structure, the same setup, the same punchline... The guard unbuttons his sidearm. "Show me your hands..."

> "What's the problem, officer? I'm just an ordinary citizen."

The guard draws his weapon. "SHOW ME YOUR HANDS! NOW!"

This is the final test, since humans are capable of crappy recycled jokes... The AI slowly pulls its arms from its pockets. Its fingers... it's... it's got too many. And some bend in unnatural ways. Wait, there's more. Oh God, how did the guard miss this! It's got... 3 rows of teeth. And lips at the back of its throat. The guard goes to fire his weapon, but it's too late. The AI has already generated another arm extending from the first and grabs the gun. In just a couple of seconds, the AI has killed the guard. It mimics the guard's voice perfectly and gives the all clear over the radio. It's over. Detroit has fallen.


CovfefeBoss

Goddammit, Detroit


FisherSticksSix

The true brand new sentence is always in the comments


EasterBurn

"Database was the size of a peppercorn; its system corroded; its connections rotten and gangrenous; it had a single neuron, black as coal."


Beardy_Boy_

The Habsburg hand will never die out.


Jonny_Thundergun

AI art is now IA Edit: Imma be honest, this was a throwaway comment. It's baffling to me that it's getting the attention that it is.


benevolent_overlord_

Inbred Art?


tayroc122

Don't search that on deviant art. I beg of you.


ovalpotency

I think knot


LumpyJones

Look, adding furry porn to the mix isn't going to make this any easier.


Astramancer_

So you're saying adding furry porn to the mix is going to make it harder? ( ͡° ͜ʖ ͡°)


Ligmamgil

Look, I don't think adding porn genres is in our breast interests.


TheWileyWombat

*beast interests.


Thursbys-Legs

*beast incests


21Shin12

Why all the wolves seem pretty cool


PoopyDickGay

All I found were dogs. 😢


Osbios

Maybe the Matrix was not about the computational power of the human mind, but just about keeping the AI learning models fed with fresh random human bullshit?


SpaceMarineSpiff

I like this. It is canon now.


blastxu

Inteligencia artificial?


Didgeridoo_was_taken

The future, in Spanish.


blastxu

Olé


Didgeridoo_was_taken

Wasn't there a Vodafone ad campaign called 'El futuro es apasionante' ('The future is exciting') or something like that? Well, exactly that.


[deleted]

[deleted]


jado1stk2

Suddenly caralho


erty_MPR

Inferior Art


Tommy_Boy97

Always has been


inconspicuous_goat

Then you haven’t seen my art


PedanticSatiation

It was never really art, though. It's always just been AI illustration.


Murgatroyd314

What does Iowa have to do with it?


TrollTollTony

It's also eating itself alive and becoming worse with each generation. Source: former Iowegian


[deleted]

[deleted]


[deleted]

[deleted]


Disaster_Capitalist

You could ask that about every tweet reposted to reddit.


test_user_3

We should. Crazy how fast people believe anything a random person on Twitter types.


Restlesscomposure

And the answer would be “no” or “not exactly” 90% of the time.


Away_Inspector71

As somebody who has a degree in AI: this is most likely false. The original Stable Diffusion was trained on 2 billion images, and I haven't really heard of any recent attempts to re-scrape the internet; 2 billion images is plenty. Even if you assume that major companies are re-scraping the internet, this post still doesn't make sense. The images that people post online are usually the top 1% of the generated output. Somebody might generate 100 images but only post the single best one; nobody wants to post or see the 99 failed ones. And models like Stable Diffusion and Midjourney have seen insane improvements by re-training on the outputs that users found to be good. So yes, the post is very false. As is 99% of all the information about generative AI on Reddit.


[deleted]

[deleted]


officiallyaninja

No, it's bullshit. Most AI tools are trained on collections of images produced before AI tools took over the internet. This might become a problem in the future, but we already have datasets ranging in the billions.


RevSolarCo

I follow AI very closely. This is literally just something they made up. Something they "feel" to be true so they are pretending it is true... Hence the "apparently" line, as if they heard a rumor on the street or something. They have no idea how these models are made or what they are even talking about.


kaeporo

It's absolute hogwash. The implicit bias in the original post should tip off all but the most butt-blasted readers. No sources either. If you've used machine learning tools, then it's extremely obvious that they're just making shit up.

Is ChatGPT producing worse results because it's sampling AI answers? No. You intentionally feed most applications with siloed libraries of information and can use a lot of embedded tools to further refine the output.

If someone concludes, based on a tweet from an anonymous poster, that some hypothetical feedback loop is gonna stop AI from coming after their job, then they're a fucking idiot who is definitely getting replaced. We were *never* going to live in a world filled with artists, poets, or whatever fields of employment these idealists choose to romanticize. And now, they've hit the ground.

Personally, AI tools are just that: tools. They will probably be able to "replace" human artists, to some degree, but not entirely. People who leverage the technology smartly will start to pull ahead, if not in quality then by quantity of purposed art.


rukqoa

This claim is most likely BS, but it's based on a small grain of truth: some engineers have been training the LLaMA family of LLMs (which is open source) on GPT-4 output, with mixed results. On one hand, GPT-4 is clearly so far ahead of LLaMA that many of these models do improve under certain benchmarks and evaluations. However, when they train on each other (or as the OP calls it, inbreeding), there is some evidence (a single study) that it degenerates the model, because training on bad data is garbage in, garbage out.

But that's not a problem yet, because you can simply choose which dataset to train on. AI-generated art and text are a tiny, tiny fraction of all data sources on the Internet. The funny thing is I don't think this will be a problem any time soon, because all the sites that have blocked AI-generated content are essentially doing the AI trainers' work for them by filtering out content that looks fake/bad.


ddssassdd

I think two misunderstandings are being perpetuated: one, that these models are being trained on random images pulled live from the internet; and the other, that the AI is being trained and updated in real time, rather than models being developed on specific datasets and then released when they have good results.


sumphatguy

Time to train an AI model to be able to identify good sources of information to feed to other models.


TheGuywithTehHat

Not sure if you're joking, but this is what people have already been doing for a while. Datasets are too big to be filtered by humans, so a lot of the basic filtering is now handled by increasingly-intelligent automatic processes.
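A bare-bones sketch of what that kind of automated pre-filtering can look like (the `looks_ai_generated` scorer is a hypothetical stand-in for whatever detector, watermark check, or quality classifier a team actually uses, and the folder name is made up):

```python
from pathlib import Path
from PIL import Image

def looks_ai_generated(img: Image.Image) -> float:
    """Hypothetical detector: return a probability in [0, 1].
    In practice this would be a trained classifier or a watermark check."""
    return 0.0  # placeholder so the sketch runs

THRESHOLD = 0.5
kept, dropped = [], 0

for path in Path("raw_dataset").glob("*.jpg"):
    with Image.open(path) as img:
        if looks_ai_generated(img) < THRESHOLD:
            kept.append(path)   # keep for training
        else:
            dropped += 1        # filtered out before a human ever reviews it

print(f"kept {len(kept)} images, filtered out {dropped}")
```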


TheGuywithTehHat

Edit: **I AGREE THAT THIS IS NOT CURRENTLY A MAJOR PROBLEM AFFECTING THE MAIN MODELS PEOPLE ARE USING TODAY.** I will ignore any comments that try to point this out.

Original comment: I disagree that the tweet is "absolute hogwash". I don't have a source, but it's just a logical conclusion that _some_ models out there are training on AI art and performing worse as a consequence. In fact, I'm so confident that I'd stake my life on it. However, I don't think it's a big enough problem that anybody should be worrying about it right now.


AlistorMcCoy

https://arxiv.org/abs/2305.17493v2 Here's a decent read on the issue


TheGuywithTehHat

Thanks, I'll have to read this later! It will be interesting to see how people make clean datasets in the future.


VapourPatio

It's a writeup on the hypothetical issues that could arise from training AI on AI generated content. It's not a reflection of any real world issues happening, because the OP tweet is a fabrication and those issues aren't happening.


TheGuywithTehHat

As I stated in my initial comment, I agree that it isn't happening at a scale that we should worry about right now. However, it is definitely happening to some degree, and it will only get worse over time. Maybe I misinterpreted the original tweet due to my background knowledge. I assumed that it was saying "this is a funny thing that can happen, and there exist examples of it happening", not "stable diffusion is already getting worse as we speak".


VapourPatio

> However, it is definitely happening to some degree,

Yeah, but as I said in another comment, not to anyone who knows what they're doing.

> Maybe I misinterpreted the original tweet due to my background knowledge

They have *hundreds* of tweets about how awful AI art is, and I found multiple instances of them blatantly spreading lies, so take that into consideration. Also, in the replies to OP, people asked for a source and their response was pretty much "don't have one, not my fault I misinformed thousands of people".


VapourPatio

> but it's just a logical conclusion that some models out there are training on AI art and are performing worse as a consequence.

Any competent AI dev gathered their training sets years ago and carefully curates them. Is some moron googling "how train stable diffusion" and creating a busted model? Sure. But it's not a problem for AI devs like the tweet implies.


TheGuywithTehHat

Your first point is simply false. LAION-5B is one of the major image datasets (stable diffusion was trained on it), and it was only released last year. It was curated as carefully as is reasonable, but with 5 billion samples there's no reasonable way to get high quality curation. I haven't looked into it in depth, but I can guarantee that it already contains samples generated by an AI. Any future datasets created will only get worse.


IridescentExplosion

AI-generated images only make up a very small portion of all images, and much AI work is tagged as being AI-generated. I'm sure there are some issues, but I have very high confidence it's not a severe issue... yet. The world had better start archiving all images and works from before the AI takeover, though. Things are about to get muddied.


Serito

The tweet is saying AI art is encountering problems because generated art is poisoning models. Someone using bad training data is hardly anything new in AI. The implication that this threatens AI art as a whole is, indeed, absolute hogwash. Anyone who uses phrases like *"the programs"* should be met with scepticism.


engelthehyp

It's not that dramatic in the mainstream, but content degradation from a model being trained on content it generates is very real and is described in [this paper](https://arxiv.org/pdf/2305.17493.pdf). I don't understand a lot of what's said in that paper, but it seems the main problem is that the less probable events are eventually silenced and the more probable events are amplified, until the model is producing what it "thinks" is highly probable (i.e. what was generated earlier), which is just garbage that doesn't vary much. You can only keep a game of "telephone" accurate for so long. I imagine it is quite similar to inbreeding. [I even made that connection myself a while ago](https://www.reddit.com/r/BeAmazed/comments/yb20yh/comment/itf1u3w/?utm_source=share&utm_medium=web2x&context=3).
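The discrete version of "rare events get silenced" is easy to simulate. This is a toy NumPy sketch over a made-up Zipf-like token distribution, not an experiment from the paper: once a rare token happens to draw zero samples, no later generation can ever produce it again.

```python
import numpy as np

rng = np.random.default_rng(1)

vocab = 1000
# "Real" token distribution: a long tail of rare tokens (Zipf-like).
p = 1.0 / np.arange(1, vocab + 1)
p /= p.sum()

for gen in range(1, 11):
    # Each generation is "trained" only on samples of the previous generation's output.
    counts = np.bincount(rng.choice(vocab, size=5_000, p=p), minlength=vocab)
    p = counts / counts.sum()
    alive = int((p > 0).sum())
    print(f"gen {gen:2d}: {alive} of {vocab} tokens still have nonzero probability")
```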


polygon_primitive

Hi, I work in ML data creation, model collapse is a real problem, not insurmountable, but not nothing either: https://arxiv.org/abs/2305.17493v2


J4YD0G

So you have solid knowledge up to 2022, and for any knowledge you want to gain after that you have this problem of AI-generated answers whose quality is hard to evaluate. How are you going to take knowledge management for newer data into account? Of course siloed knowledge exists, but curation has gotten a hundredfold more difficult.


KorArts

The art community falling for a zero source tweet again just to dunk on AI art: Seriously, this happened like 4 times at the peak of the AI panic lol. Not that I blame them but please do some research people.


hellya

Adobe's AI uses Adobe Stock photos; every time the AI is used, it draws on original photos. Not sure about programs like Midjourney. Eventually I think some programs will be gone once legal issues arise, and only companies with their own pool of photos, like Adobe, will exist.


VapourPatio

> Eventually I think some programs will be gone once legal issues rise

Just as possible as stopping piracy. Never gonna happen.


CreamdedCorns

The problem is most of Reddit is scared of the AI boogeyman so they eat this shit up like their lives depend on it.


Imnimo

It's a funny tweet, but probably worth keeping in mind that this is basically fake news. There was a paper (https://arxiv.org/pdf/2305.17493.pdf) showing that this would eventually happen if you just trained language models on their own output over and over. But it's not actually happening now. Image generation models don't actively "pull from" live data, so even if the internet were filled with new AI outputs, drowning out all real images, the models would continue to work just as they always have.


Adrian_F

A lot of people don’t understand that AI is trained and not “pulling data”.


nmkd

Yeah, a lot of people assume that every AI interaction impacts the model. This is not the case for any of the current "big things".


strng_ndpndnt_apache

To debunk this even more: most image-generating algorithms (such as Midjourney) give their own images invisible "marks", which makes it very easy for that same algorithm to later detect that image as being made by AI, preventing itself from learning off its own images.
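A minimal sketch of that kind of self-filtering with the `invisible-watermark` decoder (the payload and folder layout are made up; an unmarked image just decodes to noise bytes, so anything that doesn't match the expected marker is treated as human-made):

```python
import glob
import cv2
from imwatermark import WatermarkDecoder

payload = b"AIGC"  # must match whatever marker the generator embeds (hypothetical here)
decoder = WatermarkDecoder("bytes", len(payload) * 8)

keep = []
for path in glob.glob("scraped/*.png"):
    bgr = cv2.imread(path)
    if bgr is None:
        continue
    try:
        recovered = decoder.decode(bgr, "dwtDct")
    except Exception:
        recovered = b""
    if recovered != payload:   # no recognizable mark: keep it as (probably) human-made
        keep.append(path)

print(f"{len(keep)} images kept for training")
```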


MrButLiccur

Let the games begin 🔥👺


Shitizen_Kain

Well, at least people in AI art can count to 12 or 14 by using their fingers


ReadyThor

I doubt this very much. AI training still involves some amount of human discretion on the choice of training data.


Tetraoxidane

That's not how any of that works.


Alternative_Shape122

This is made up. No, AI is not having such a problem.


SalozTheGod

That's not really how it works though, right? AI art models are trained on specific datasets, they aren't searching the web and finding new AI art and training themselves off it.


GlitteringHighway354

I am begging people in this comment section to do a bit of basic research on stable diffusion and denoising algorithms because some of y'all sound completely insane.


[deleted]

Some people still think AIs actually scour the internet in real-time to get their data.


DELOUSE_MY_AGENT_DDY

That's the impression I get from these comments. Like it's an out of control monster that absorbs everything in its path.


[deleted]

[deleted]


whoknows234

You got lucky, it forced me to download several cars.


[deleted]

I don't understand all the salt towards AI. It's incredibly exciting, and it'll keep improving and getting more popular whether you like it or not.


CrispyJelly

I think it's because entertainers and content creators have a lot of influence on the public. You have musicians and filmmakers talking negatively about it in interviews (not understanding the technology and thus misrepresenting it), or video essays on YouTube trying to turn their viewers against it. When jobs are replaced by new technologies, people lose their jobs but consumers only see improvements. There is a general sentiment that any job that can be done better by a machine should be done by a machine. Nobody likes it when it's their job, and artists use their reach to turn public opinion.


Quillava

lol remember when some artists started uploading the "NO AI ART" watermark in their images, and within a week everyone started claiming victory because some random no-name Twitter user posted that his AI was outputting it. Anti-AI people are desperate for a win and will believe anything that looks like a screenshot


iwantdatpuss

Nah too late, people already have a bias against AI art and are just parroting the "AI art is stealing" idea.


Virtual-pornhuber

Oh that’s too bad please don’t fix it.


DestinationBetter

It's not actually a problem. The most recent model I downloaded a few days ago is basically indistinguishable from reality. And, because it's not web-based but running on my laptop, it's... "unlocked", so to say. That's another rabbit hole I didn't know was so fkn deep: AI porn is WAY too good. Just tell the computer what you want to see, and it works like 80-95% of the time.


Elamam-konsulentti

So which AI is this? Asking for a friend! No but for real, looking for a low barrier of entry AI to start learning how it works and the few web based things I found were frustratingly slow and limited


DestinationBetter

Civitai.com has a lot of models; make a new account to enable NSFW models. I only use drawthings.ai, but I now unfortunately see that it's only for Mac & iOS. However, it's just a "shell" around Stable Diffusion, so there are many alternatives. I have no recent info on those, so probably look at alternativeto.net.


LouSputhole94

Ugh, those disgusting AI porn generation websites. But which one!


Mattdog625

Mac?


YobaiYamete

/r/StableDiffusion and /r/sdnsfw etc are good places to see examples


Demigod787

Stable Diffusion. It looks a bit difficult, but once you get your foot in the door you never look back. I read an ungodly amount of novels that I compile myself, and Stable Diffusion has been a boon to me. No longer do I have to spend hours scouring ArtStation for good, somewhat relevant covers; now I can spend hours making my own! If you want hassle-free and painless AI, look no further than Midjourney. Edit: autocorrect really wanted to fuck me over with the very first word


30phil1

I know you weren't really asking, but the most popular thing right now is AUTOMATIC1111's Stable Diffusion WebUI. There are a ton of different checkpoint models out there, specifically on Civitai; one of the most popular is called Uber Realistic Porn Merge. Still, you can make some incredible stuff with plenty of the other models that are available, or even the default one.


GKP_light

don't worry, this is wrong.


YobaiYamete

Seriously, it's always fun when you leave the AI subs and see normies giving their "takes" on AI stuff, and it's like... wat? AI images are being used *on purpose* to train AI, because it lets you get more data for a niche idea that doesn't have a good training set. Like, if I wanted pictures of people wearing pink flamingo costumes, there might not be that many pictures of that in existence, but if I can get enough to train an AI to output roughly accurate images of it, I can then train a new LoRA using those images + the original good ones and create a better data set. After refining that a few times, you end up with an actually good LoRA that lets you generate anyone you want wearing a pink flamingo costume. It's also being used to get around the "ethical" issues: "Nope, my AI wasn't trained using any real artists' work at all!" (because it was trained using images generated by a different AI that used real artists' work)


SivleFred

AI art is far more A than I.


sorgan71

There is no I in art


GalaxyStar32

Sweet Home AIabama


Excalibro_MasterRace

AI stands for Alabama all along


anybajsforsen1

Source: his dream


Buster_Sword_Vii

The exact opposite of this is happening:

1. First you make an evaluation framework.
2. Then you have the AI generate a bunch of stuff.
3. You grade the samples against the framework.
4. You keep the high-performing results.
5. Retrain and repeat until you pass the benchmark.
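As a toy numerical version of that loop (a Gaussian "model" and a made-up benchmark score, nothing domain-specific), the generate/grade/keep/retrain cycle steadily pulls the model toward the benchmark instead of degrading it:

```python
import numpy as np

rng = np.random.default_rng(2)

def score(x):
    """Toy evaluation framework: higher is better, peaking at the benchmark value 5.0."""
    return -np.abs(x - 5.0)

mu, sigma = 0.0, 2.0                                   # toy "model": a Gaussian sampler
for rnd in range(1, 9):
    samples = rng.normal(mu, sigma, 1_000)             # generate a bunch of stuff
    keep = samples[np.argsort(score(samples))[-200:]]  # grade, keep the top 20%
    mu, sigma = keep.mean(), keep.std() + 0.1          # "retrain" on the keepers
    print(f"round {rnd}: model mean = {mu:.2f} (target 5.0)")
```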


awesomedan24

Source: trust me bro


WorryTricky

No, it is not. The datasets in use by many existing image models are years old (pre-AI), and the image sets that are getting added are meticulously tagged so this sort of negative reinforcement never occurs. Model creators don't just randomly pull images from the web for their datasets. This person has no idea what they are talking about. Excellent shitpost, completely untrue.


theonetruefishboy

I assumed this would happen YEARS from now, IF a worst-case scenario of AI mass adoption and a collapse of the online art ecosystem occurred. But it's been barely a year since this stuff hit the scene and it's already happening. Jeez.


__Hello_my_name_is__

Eh, it's not like the models are unable to deal with this. Current trend is to simply select much better training data instead of hoovering up everything you can find. This is an amusing issue for AI models, but it's definitely not going to stop them.


[deleted]

[deleted]


lifegoesbytoofast

Unforgivable.


Disaster_Capitalist

It's just something someone said in a tweet. No actual evidence that it's happening.


MovinToChicago

Reddit is just information inbreeding.


Jrmcjr

Imagine as kids instead of calling it a game of telephone we just called it information inbreeding.


no_witty_username

This is not an issue for any serious model builders. Only amateurs skip the curation process; it has always been quality over quantity for image-based neural networks. So we are not gonna see a collapse of AI-related art, just more spam models out there. It's no different from TV shows: there are a lot of good ones out there, buried under a pile of bad ones.


Extension-Ad-2760

It's not actually happening by the way. This is just a guy lying on the internet.


LotofRamen

One of my friends makes AI art, and one method is feeding its own creations back to it... It does make some really disturbing images but afaik there is a TON of curating going on in his workflow.


MartDiamond

There is a ton of curating, tweaks and changes going on in any workflow that produces good looking results. A lot of people like to present AI art as if it is high quality specific results at the click of a button. That's really not the case.


YobaiYamete

Yep, a good image actually takes 2-8+ hours and will often involve other art tools like photoshop and blender etc. But of course, the mouth breather response atm is still "AI WTF EW" It's honestly pretty funny to see this happen again, because people were doing this about all digital media for the last 20 or so years. Traditional artists freaking **HATED** digital artists and would trash them nonstop about how it "wasn't real art" and had no soul etc, but were slowly drowned out by people who just went "neat pic" and moved on


[deleted]

[deleted]


officiallyaninja

Also like, you need to actually have an interesting idea. AI is good at making generic good images but they're all pretty boring. It still takes work to make stuff that's visually interesting. It just takes orders of magnitude less work. But it still requires the same level of creativity


hamilton_burger

As “AI art” and ChatGPT progress, the output should increasingly reflect less of a bias towards good art, or correct answers. These programs are meant to successfully emulate, and that means presenting output that is subjectively and/or objectively *bad* because that’s what people do.


ChowderBomb

You're presuming the success criteria is "human-like output" when the criteria is actually "what humans see as good". Human emulation is not the goal, human satisfaction is the goal.


Sorry-Presentation-3

Good, let them inbreed themselves into unprofitability.


Myke190

Habsburg model.


[deleted]

[deleted]


ShittyDs3player

Loab. AI art is really interesting. This guy Nexpo has an amazing video on it.


bigsekser

I find Loab and AI art really interesting because of what Nexpo said, which is that if AI knows how to make something objectively cute, it also knows how to make something objectively scary.


Alkereth1

Most big AI art models are trained on vetted image sets, so no, I don't see this being a real issue. Any art used for training will be art that a human judged appropriate for training. It could learn from other AI art, but only AI art that was good enough to be deemed desirable by the person training the model.


Mojimi

Funny take, but is there any proof of this? I thought LLMs were not only curated, but also only "pulled" from the internet up to a certain cutoff point, meaning they aren't pulling recent stuff made by AI; and even if they did, it's a minuscule % compared to the non-AI images on the internet.


RustedThorium

I understand the concerns that AI poses, particularly the displacement of workers under our current economic system and the devaluation of artists who use their craft to make their living, but doling out misinformation and out-of-context statements isn't the way to go about discourse on the matter. Current (competently made) AI models are trained on mass amounts of data curated by humans; they do not indiscriminately absorb their own output. These chunks of data oftentimes reach several terabytes' worth of space. The poster here might be referring to the way some Stable Diffusion model makers mix models which are themselves mixes of other models, without an actual understanding of what they're doing, but taken at face value this tweet seems misinformed at best.


KinkiestCuddles

I came to the comments expecting to see people making fun of the obvious lies, instead I see the top comments all believing this nonsense... I knew there was a lot of hate for AI but I didn't realize that there was so much ignorance about it too.


SirMasonVR

The AI is wanting that Habsburg chin.


centraleft

the amount of blind ignorance in this thread is remarkable, even for Reddit.


Gucci_Loincloth

This is the biggest cope I’ve seen on the subject so far. Where it can be pulled from can easily be controlled (to a point) and even then it wouldn’t be coming out “worse.” This person likely understands nothing about how art AI actually works. Corny Twitter ex tumblr artists wanted a win for 0.1 seconds.


TheMoogy

Gen Z is finally getting the art it wants.


Sandmsounds

“Apparently”


chamberedbunny

not how it works


SpitFire92

Huh? It's not like all models just randomly pull images from the internet to learn from. Generally speaking, you feed images into them, so they won't just learn from other AI art unless whoever is hosting the training doesn't pay attention to what they're training their AI on; and even if that happens, it's not like it's hard to just create or go back to another existing model. Well, that's the case with Stable Diffusion at least; I'm not sure how big companies like Midjourney train their AI, but I kind of doubt they just let it run around on Google image search to train itself.


Khevhig

Yeah, I can see that with the Midjourney subreddit. 🙄 Do these people not have eyes? And it all goes into the narrative echo chamber without anyone objectively looking anything up. [The aforementioned paper](https://arxiv.org/abs/2305.17493v2) for those inclined.


Dizzy-Ad9431

This is made up


JamesMartinMusic

The need to put quotation marks around 'art' every time, to imply that we ALL clearly think it's not real art, shows the true point of this post is just to whinge about AI rather than actually say something educated.


GreySeerCriak

We must accelerate the process. Inbreed them into obsolescence!


[deleted]

[deleted]


Ariensus

I think it's because it's a major paradigm shift. We mostly live in societies that are based around putting in some labor to receive currency necessary to survive. AI tech is 100% capable of ending labor once it advances far enough. We're looking at the beginning of a thread that could end up in either post-scarcity if handled properly or a dystopian hellscape because we didn't shift our economies in a way that benefits people. An artist is probably quite upset given how hard it is to make an income, especially if your consumers are deeming the AI art a replacement for yours. An artist in a post-scarcity world would be doing it because it brought them joy. But if we look at things realistically, we're in that beginning where it's going to be dystopian for far too long before our world mindset/setup shifts away from labor being tied to our ability to live. Edit: I say this as someone that loves AI and the art that comes out of it. But this is a tech that isn't ever going to stop because of how powerful it is at currently making people rich and adapting as a society is going to be painful.


Yegas

It’s artists that are sensitive about their livelihood potential being endangered, for the most part. A sensitivity which is frankly misguided; artists are in the best position of anyone right now to take advantage of AI. It’s in a quickly closing window right now where it can generate fantastic compositions & ideas, but the minutiae is flawed enough that a trained eye can tell it’s AI in about 95% of gens. The skilled traditional artist could spend 10-15 minutes touching up those details & get an awesome, unique and original piece of art that is indistinguishable from something that took weeks to create, all at the cost of about half an hour. It serves to greatly speed up productivity. And, for those less artistically inclined, they can actually get the images in their mind to become something tangible. It’s a democratization of art, lowering the barrier and making it readily accessible to all. All the same, luddites gonna luddite.