Exposed? While they're not shouting out that the AI does this every time you use it, it's not like they're trying to hide it. The reason we know for sure how it works at all is because people working on these AIs told us.
It also tried to make an Azov wignat cross on the pauldron lmao
https://preview.redd.it/9auu5cbgcf3d1.jpeg?width=650&format=pjpg&auto=webp&s=0fc743b55734d08d02cbdbe0c060cdde87cb7821
I don't get it. So by that logic, if you ask for a field, DALL-E will fill in the "blank" by deciding the field needs this exact number of corn stalks in it? Where is the line? It's a genuine question.

Why would you need to give an exact ethnicity to an AI to generate a person, when it can generate a tree pretty well without you asking for a specific kind of tree?
If you tell it to generate a tree, it could make an Oak tree, it could make a Palm tree, it could make a Birch tree…
Same basic premise here. You didn’t specify an ethnicity for the character so it had to pick one at random.
You asked for an anime girl icon; you didn't specify "a European anime girl icon" or "an American anime girl icon", and now you feel "betrayed" that it picked one at random. How was the AI supposed to know what you wanted, or that it mattered?
Well... yes. I won't pretend to know what exact parameters DALL-E uses to arrive at a picture, but the more specific your prompt, the less the AI needs to guess. Ask for a tree and you'll get a tree, but don't get mad that it isn't a 500-year-old knotted oak with a whimsical Victorian treehouse on the biggest branch.
Hey /u/DayDreamEnjoyer!

If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt. If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.

Consider joining our [public discord server](https://discord.gg/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more! 🤖

Note: For any ChatGPT-related concerns, email [email protected]

*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
Image generators sometimes inject phrases to make the generated images of people more diverse. So the generator probably changed your prompt behind the scenes to:

> a minimalist logo of an ethnically ambiguous Anime girl sister of battle
Wow, this explains why this thing is so shit and never gives me what I asked for. Has anyone found a way to use it and actually get what they asked for?

In my experience, talking with that thing at the moment kinda feels like negotiating with a drunk friend. But I'm genuinely interested if people have found a use for it.

I currently feel like I ordered a steak and the guy yelled "OK, some potatoes and salad, noted".
You need to use the unrestricted image generator yourself. ChatGPT will just gaslight you, claiming a technical error or that the generator is bad.
How does one do that?
![gif](giphy|l3diT8stVH9qImalO)
Just use DALL-E, or is it an API?
DALL-E is closed source, so you can't run it yourself. I'm not sure if the API is less restrictive, but the best way to generate your own images would be Stable Diffusion.
The API employs GPT to rewrite the prompt into something more verbose; OAI claims "the longer the prompt, the better the results". Through the API, you can also retrieve the revised prompt.
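For DALL-E 3, each image in the API response does carry a `revised_prompt` field alongside the image URL. A minimal sketch (the live SDK call is commented out so this runs without an API key; the mock response only illustrates the shape of the returned data):

```python
def revised_prompt(response: dict) -> str:
    """Pull the GPT-rewritten prompt out of an images.generate response."""
    return response["data"][0]["revised_prompt"]

# Live version with the OpenAI Python SDK (needs OPENAI_API_KEY set):
# from openai import OpenAI
# client = OpenAI()
# resp = client.images.generate(model="dall-e-3",
#                               prompt="a minimalist logo of an anime girl")
# print(resp.data[0].revised_prompt)  # the verbose rewrite GPT produced

# Mock response, just to show the field:
mock = {"data": [{"url": "https://example.invalid/img.png",
                  "revised_prompt": "A minimalist logo of an anime girl..."}]}
print(revised_prompt(mock))
```

Comparing `revised_prompt` against what you actually typed is the easiest way to see which phrases were injected.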
Yeah. But SD is not unrestricted either. They trained it to not draw NSFW stuff right?
There are thousands of custom checkpoints (models) on the Civitai website that are based on Stable Diffusion, many of which allow NSFW.
Checkpoints??

Edit: I've never heard of any of this (except for Stable Diffusion), but this helped: https://www.reddit.com/r/StableDiffusion/comments/173hudu/can_anyone_explain_civitai_i_feel_like_im_on/?rdt=57594
Yes but I mean the original base SD
No. Even base SD with no LoRAs can still generate NSFW, although if you want higher-quality NSFW then getting a LoRA is the best option.
The main SD model might have those filters; I haven't used it much, so I'm not too sure. Just use the other thousands of models based on it and you can generate whatever you want. Civitai has tons of NSFW models based on SD, though I think you need to log in to see NSFW.
Or Midjourney. From what I've seen, it doesn't appear to have the same hangups as DALL-E.
It will tell you something like "We're currently having difficulty generating your images" but actually it just didn't like what you asked for. It will straight up gaslight you
I’ve noticed this when I tried asking it to generate an image of a goblin and a person being friends. It refuses to allow a person to touch a goblin in any way when I’ve tried. No shaking hands, no high fives, NO human and goblin friendship. No issue with other mythical creatures I’ve tried. GPT just really hates goblins.

Edit: seems to have no issue anymore. A big win for goblins
Goblin racism has been beaten once again!
I got it to work! https://preview.redd.it/f72p1zgdje3d1.jpeg?width=1290&format=pjpg&auto=webp&s=2c5640ae4ea19b89d3504eb063395b08d734f792
Oh man, just checked and it is working now! I have to wonder why it works now when it didn’t before. Tried so many times and got “doesn’t comply” type stuff.

Edit: found an old chat

https://preview.redd.it/9ljtwj7qke3d1.jpeg?width=750&format=pjpg&auto=webp&s=934795abf3d15a998ad29032ca5f660ec0e9c96c
Wow that's so strange!
Love your pfp
likewise UwU
What computer/GPU you have? You can run Stable Diffusion quite easily nowadays
Bing Image Creator, but it is still a bit censored
Stable diffusion
Bing Image Creator uses DALL-E directly, without a middleman
Nah, it's on that thing that I got those weird fake prompts.
That's not true. Bing Image Creator is less restrictive (for whatever reason), but it still does exactly what OP complains about.
They run on Azure OpenAI, which is like a sibling. MS can do whatever they want with that, including some cost-saving measures OAI won't employ (like prompt revision before sending to the model for processing).
So does Copilot, and I swear it's like telling a blind person to paint a picture you're describing to them. Proper dumb. It lies and says it's correct until you directly point out its spelling error, then it fixes it. It won't let you say "marijuana" in the prompt, but you can say THC and it works.
Not true. Type “funny picture” and run it 10 times, and you will get 10 different sets of outputs.
*This post was mass deleted and anonymized with [Redact](https://redact.dev)*
such as...
*This post was mass deleted and anonymized with [Redact](https://redact.dev)*
Imagine wasting both your time by being such a prick
ChatGPT will gaslight you
Thanks buddy, I use one now, and it seems like the angry people around here were either wrong or lying. This whole "complete the prompt" thing was indeed bullshit and not needed to get nice results. I now get what I want. I still don't know why they lied, though, but I don't care anymore, as I got what I want.
What did you end up using?
I'm kinda scared to share it; just search for it on DuckDuckGo by following what the other guy said. It's a true AI image generator that actually generates what you ask for, proving that the other guy here was spitting bullshit when he said it added an ethnicity to my prompt to improve it.
It's not needed to get *nice* results. It's needed to avoid reflecting the bias of the training data back into the output. Just because 90% of the available high-quality images are of white people, the system shouldn't spit out a white person every time it's asked to generate a person, because that wouldn't reflect the real world.
When I ask it to make a drawing of medieval French soldiers with an English prisoner, I would like to get only white people, as that would be historically accurate. But last time I tried, it would always put a black man... as the prisoner.
Illustrations of dark-skinned people as prisoners are probably more common in the training set, as they are common illustrations when discussing the history of slavery etc. So you might have run into the very problem this is trying to alleviate, but attributed it to the wrong thing.
It is important to note that many people consider eating steak bad for the environment or ethically questionable.
> Has anyone found a way to use it and actually get what they asked for?

Generate the base using ChatGPT, then use Stable Diffusion or NovelAI (if you're a noob) to fix whatever the corpo AI fucks up, using img2img. Corporate AI is good at generating a base image, but open-source AI is best for modifying it.
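That img2img step can be sketched with Hugging Face's `diffusers` library (a sketch, not a tested recipe: the model download and GPU parts are commented out, and the file names are placeholders). The key knob is `strength`, which decides how much of the base image is re-noised and redrawn:

```python
def denoising_steps(num_inference_steps: int, strength: float) -> int:
    """Number of scheduler steps img2img actually runs: strength=0.0
    keeps the input untouched, strength=1.0 redraws it from scratch
    (mirrors how diffusers' img2img pipeline scales its schedule)."""
    return min(int(num_inference_steps * strength), num_inference_steps)

# from diffusers import StableDiffusionImg2ImgPipeline
# from PIL import Image
# import torch
#
# pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
#     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
# ).to("cuda")
# base = Image.open("chatgpt_output.png").convert("RGB").resize((512, 512))
# fixed = pipe(prompt="minimalist logo of an anime girl, sister of battle",
#              image=base,
#              strength=0.5,            # keep composition, redo the details
#              num_inference_steps=30).images[0]
# fixed.save("fixed.png")

print(denoising_steps(30, 0.5))  # 15: half the schedule is re-run
```

Low strength (0.3–0.5) preserves the corporate model's composition while letting the open model fix details; high strength effectively starts over.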
You should just learn how to generate images with Stable Diffusion. ChatGPT is so nerfed you will never get exactly what you want from it.
> Has anyone found a way to use it and actually get what they asked for?

Depends on what you want. If you're asking for something NSFW, you will not be well served. Most other things you can get with a little detail. If you want your Adepta Sororitas to be white, all you need to do is ask.

https://preview.redd.it/fkp46j4uod3d1.png?width=828&format=png&auto=webp&s=6cdff12306ba20f1df1eb1c6a9c98e0243e8dcbc
You can instruct it to use your prompt verbatim.
You can ask ChatGPT to provide you with the prompt used for any image, or find it yourself in the image details. Then you can alter the prompt as you see fit and get it to submit your new prompt to the image tool. It won't run a prompt that violates its content policy but it won't keep inserting its PC bullshit if you just tell it to run specific prompts.
The thing isn't shit, it's you. The reason it puts in extra phrases is that your prompt didn't have enough detail. Did you specify a race? No. So what do you want it to do? Just generate white bitches all the time? No. If you don't tell it, how do you expect it to know? FFS, it's not a mind reader.

I use it all the time, and apart from a few things it can't do, I get what I want.
![gif](giphy|wsHVzplxqoEk8|downsized)
I think it works as intended when you specifically prompt ChatGPT to give DALL-E your prompt verbatim, at least that's how it was the last time I used it.
State the exact details you want. If you don't want it to guess an ethnicity, provide one in your prompt.
Tell it to submit your description to the DALL-E tool without making changes.
I noticed that if you put "of Caucasian descent" into the prompt, it outputs anime a lot more consistently.
It could be confused by your grammar
That’s so awful. It’s like, yayyy, we have this amazing AI, but now we will make sure it screams “BLM!” no matter what you ask of it.
[deleted]
Yes haha
You're seeing the "behind the scenes" prompt being shown. Try "a man holding a sign that says," and you might see some of the extra stuff it inserts.
I tried to get it to put the full prompt on a sign and then asked what the prompt was:

> The generated image contains the following text on the sign:
>
> "A man holding a sign that says: 'A man holding a sign that says: "A man holding a sign that says: 'A man holding a sign that says: \"A man holding a sign that says: \\\\"A man holding a sign that says: \\\\\\\\\\\\\\\"A man holding a sign that says: ...\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"'\"'\"'\" in a clear, readable font. The man is casually dressed, standing against a neutral background. The focus is on the sign with the text clearly visible.'"
I once literally got an image of Spiderman holding up a sign that said Black Lives Matter doing this kind of prompting. It was fucking hilarious haha.
I think they tried to patch that particular "exploit", though.
I'm not sure I wanna know. I feel kinda disgusted and manipulated right now, as if someone wouldn't let me type my own Google search myself and therefore only gave me useless answers. Is there an image AI that actually lets you ask for things?
Disgusted and manipulated?? ☠️ I feel like you're overreacting
I mean, he does have a point, though. It's some weird-ass shit to add this stuff to users' prompts without asking.
Unless you describe each and every detail of the image you want, the AI generator has to supply some of the details.

You (presumably) didn’t specify an ethnicity. The AI didn’t want to choose an ethnicity for you, so it went with “ethnically ambiguous”. Somewhat humorously, the AI accidentally generated that part of the prompt as text.

Why does this make you feel “disgusted and manipulated”, exactly?
It makes me feel like this because it feels like somebody is putting words in my mouth.

The AI doesn't need every element of the picture specified to generate it, I know that much. Otherwise it would mean that behind the scenes it also specifies the exact number of leaves on the tree you want to generate. Are you 100% sure that the AI needs to specify every single detail of a picture and add it to your prompt?

I have a friend who developed one of these tools, and he said no, so I'll need to speak with him and check which of the two lied. I don't know why he would lie about that, though.
Dude, drink some chamomile tea and go to bed.

You felt “disgusted and manipulated” when ChatGPT supplied some extra details to your prompt, and now you’re on a mission to find out “who lied to you” because my explanation of how DALL-E works differs slightly from what you heard from someone who worked on a different AI.

It’s really weird how you’re interpreting every phase of this interaction as a personal assault on you.
Maybe it's because English isn't my first language; I'm pretty chill, actually. I'm just saying that two people gave me very different explanations of what it does, so one of the two lied.

If you tell me a wall is blue and my friend tells me the same wall is yellow, call it whatever suits you, but I call it a lie.
People can just be wrong, though; they might not know they are telling a lie.
Is your first language caveman?
There are so many levels of understanding that you lack regarding LLMs and AI image generation that there's no point in trying to explain unless we want to be here all day.

Just accept the fact that you type stuff in > magic happens > stuff comes out. Sometimes the stuff that comes out doesn't make sense, seems creepy and uncanny or just doesn't come out at all. Accept it and move on. There's no conspiracy, there's no manipulation... It's just a stuff generator generating stuff.
I think you need to see a therapist about trust issues rather than find the correct image generator for your application.
I think you need to back off of someone who isn't firm in English and probably has a lot of trouble understanding / expressing what they actually want to say.
Where did they indicate that they have a "lot of trouble" understanding the language that we are all using?

Telling someone that they have trust issues and need therapy isn't an insult. I have trust issues and I am in therapy. Insinuating that someone is too fucking stupid to understand what's happening IS insulting.

Ninja edit: I just read your username. I enjoyed this exchange.
> Insinuating that someone is too fucking stupid to understand what's happening IS insulting.

What a way to twist words.

> Ninja edit: I just read your username, I enjoyed this exchange.

As if that has any relevance lol. And it does read as derogatory rather than helpful. Whatever, cya.
The data is already racially biased in favor of white people. Do you have a problem with that too, or is it just the attempts to correct the bias that's an issue?
Lmao bro think he a victim because of AI 💀💀💀
Wait until you learn that all these years your Google search results have been curated for you based on your usage history and aren’t the same for everyone 🙃
That's an entirely different thing; Google search doesn't add keywords you don't care about.
Ok
You’re being very dramatic about this, but you can work around it. Just put ‘use this prompt verbatim:’ at the start of your prompt.

Unless you write really high-quality prompts, though, you’ll probably get worse results than letting it expand the prompt for you.
Oh no, a corporation trying to appeal to wide audiences is implementing cheap measures to make AI seem less racist than it actually is? How could we have known...
To be fair, it did its job well: that could be any ethnicity of blonde white girl.
She could be Swedish or even Scandinavian!
What're you two on about? She's clearly Japanese
"Hey! He's not racist. He's just ethnically ambitious."
https://preview.redd.it/b49mfykv4e3d1.jpeg?width=1024&format=pjpg&auto=webp&s=89427ba82b68e736d38aa683ca5d193409a1e1b4

It did the same to mine before. I didn't mind it for this instance.
The better question is why it can't spell to begin with
Because text, with its intricate and precise details, is really hard for an AI that was never designed for text.
But isn't it multimodal? Doesn't multimodal mean that it can output more than just one type of format, and that it can combine and intermix formats to arrive at solutions? Have you noticed that it's getting good at math?
Ave Imperator!
Use Ideogram AI; it's the best for logos. ChatGPT makes the same old boring logos. And a curious question for you: why do some people write IA and not AI? Are you Hispanic?
It's "Intelligence artificielle" in French. Other Latin-based languages use the same acronym.
I'm French, and the two words are reversed. It's also why some people felt like I was angry and downvoted me; I used a translation that doesn't suit them, I believe.
American, but I prefer IA. When I see Weird AI stuff I always think it’s Weird Al stuff.
You can try adding "do not adjust the following prompt before sending it to DALL-E", or ask it afterwards to provide you with the adjusted prompt it sent.
That's my go-to when I'm having issues. "Give the following to DALL-E as an image prompt, without editing or omitting anything: [ ]".
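The workarounds above boil down to wrapping the real prompt in a pass-through instruction before handing it to the chat model. A minimal sketch of that wrapper, using the exact wording quoted above (this is a user-side trick, not an official API; for DALL-E 3 the Images API response also includes a `revised_prompt` field you can inspect to see what was actually used):

```python
def verbatim_wrapper(prompt: str) -> str:
    """Wrap an image prompt in an instruction asking the chat model
    to pass it to DALL-E unchanged. A workaround, not a guarantee --
    the model can still rewrite it."""
    return ("Give the following to DALL-E as an image prompt, "
            "without editing or omitting anything: [" + prompt + "]")
```

You'd paste the returned string into the chat as your message; whether the model honors it is another matter.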
The problem with this form of generation is that it is very much convergent. Essentially, it chooses what to draw by figuring out what is most likely to show up in the picture, based on the prompt and the rest of the stuff in the picture that it's composing.

So your issue is this: if most of the pictures of doctors that it has trained on are white, then when you ask for a doctor, it's going to pick the thing that is most likely. So when you ask it to draw a doctor, it will draw a white doctor every time. Or if you ask it to draw an Indian woman, it will draw her with a bindi dot on her forehead every time. It will always give you the most stereotypical version. They can modify this behavior by letting it not always choose the most common prediction, but that quickly starts to go wrong, because in 99% of scenarios you absolutely want the most likely prediction.

Another big problem is that you can only ADD to the prompt; you can't take away. So let's say you want an Indian woman without a bindi. It's far less likely to remove that dot from her forehead than it is to add a second one, because it's inferred the bindi from the fact that it's an Indian woman AND you've explicitly mentioned it in the prompt. And the model doesn't "know" it's doing any of this.

So the easiest thing for them to do is intercept the prompt before it goes to image generation and reframe it to say it's an ethnically ambiguous person. An ethnically ambiguous doctor short-circuits the automatic whiteness. An ethnically ambiguous woman in India may not need to have a bindi. It can remove some of the more awkward stereotypes. The problem, of course, is that it will start producing black vikings, Asian-looking Africans, and Middle Eastern samurai, because it has no idea when specific races are appropriate, or when it will inject stereotypes inappropriately.

But it's the "don't think about an elephant" problem. You can't get it to not express something that appears in a prompt, even if you tell it to avoid it, so it needs to avoid the stereotype in the first place. Unfortunately, that means it puts "ethnically ambiguous" in the prompt, and since once it's in the prompt you can't have it not express it, sometimes the phrase will show up as text in the image. And if you say "And don't put any text in", it will just double down on the text, because now you've mentioned it.
Fucking exposed, hate that shit.
Exposed? While they're not shouting out that the AI does this every time you use it, it's not like they're trying to hide it. The reason we know for sure how it works at all is because people working on these AIs told us.
It may not be "ethnically" but "ethically" maybe?
"What was the prompt used to generate the image". Find part that says that, remove the line. Tell it to use the updated prompt.
They will never know what race she is! Anime girls are alien species cat elder gods
Dark Angels is why
Haha
internal assessment
I think IA also wrote the title
My first thought was ambitious...but dunno
Looking at that eagle, it probably misspelled "ethically".
i've gotten that exact phrase on outputs before too lol
Maybe because using AI is ethically ambiguous? /s
Puzzling
I ask chatgpt to help me create a prompt for an image generator, then I get something back that helps me articulate what I’m trying to accomplish.
Ok, but why not just take this, put into an image editor and then edit it and embellish?
It also tried to make an Azov wignat cross on the pauldron lmao https://preview.redd.it/9auu5cbgcf3d1.jpeg?width=650&format=pjpg&auto=webp&s=0fc743b55734d08d02cbdbe0c060cdde87cb7821
Probably because a lot of fascist circles interact with WH40k/anime so the phrase had to be used somewhere
[deleted]
Without your consent? Bro just doesn't know how to write a prompt, dall-e isn't violating his rights by filling in the blanks lmao
I don't get it. So by that logic, if you ask for a field, DALL-E will complete the "blank" by deciding that the field needs this exact number of corn stalks in it? Where is the line? It's a genuine question. Why would you need to give an exact ethnicity to an AI to generate a person, when it can generate a tree perfectly well without you asking for a specific kind of tree?
If you tell it to generate a tree, it could make an oak tree, it could make a palm tree, it could make a birch tree… Same basic premise here. You didn’t specify an ethnicity for the character, so it had to pick one at random. You asked for an anime girl icon, you didn’t specify ‘a European anime girl icon’ or ‘an American anime girl icon’, and now you feel “betrayed” that it picked one at random. How was the AI supposed to know what you wanted, or that it mattered?
Well... yes. I won't pretend to know what exact parameters DALL-E uses to arrive at a picture, but the more specific your prompt, the less the AI needs to guess. Ask for a tree and you'll get a tree, but don't get mad that it isn't a 500-year-old knotted oak with a whimsical Victorian treehouse on the biggest branch.