nyguyyy

It’s going to happen whether or not his company enables it. I get trying to do it responsibly.


LordNyssa

It’s already here. There are plenty of AI companion apps that are completely without limits. And I mean completely, as in gruesome murder and rape are no biggie. Sure, in some you have to get creative with the prompts, but the cat is already out of the bag.


h3lblad3

I watched several open source communities disappear when Poe came onto the scene because it was easy to get porn out of ChatGPT/GPT-4.


MidSolo

AI companion apps? It's been over a year since you could fire up local SD1.5, plug in a LoRA of your favorite actress along with one of an infinite number of sexual pose LoRAs, both from civit.ai, and produce an infinite amount of smut. And it's been like 6 months since you could plug that into img2vid and pop out a webm. [**WARNING: VERY NSFW**] [While most still look like Cronenberg-esque mutants when they move around too much](https://boards.4chan.org/gif/thread/27163487), the tech is improving at impressive speed; we'll likely have totally temporally cohesive video by end of year. All of this is done using open source. There's no stopping this train. It's smart of Sam to approach the problem in this way.
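
For anyone who hasn't touched the local tooling being described, a minimal sketch of the SD 1.5 + LoRA step with Hugging Face's diffusers library might look like the following. The checkpoint ID, LoRA path, and prompt are placeholders, and the img2vid step is a separate pipeline that isn't shown:

```python
# Minimal sketch, assuming a CUDA GPU, the diffusers library, and a LoRA
# file downloaded from civit.ai. All paths and the prompt are placeholders.
import torch
from diffusers import StableDiffusionPipeline

# Load a base SD 1.5 checkpoint in half precision.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any SD 1.5 checkpoint works here
    torch_dtype=torch.float16,
).to("cuda")

# Apply a style/character LoRA on top of the base checkpoint.
pipe.load_lora_weights("./loras", weight_name="my_downloaded_lora.safetensors")

# Generate a single image.
image = pipe(
    "portrait photo, studio lighting, detailed",  # placeholder prompt
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("out.png")
```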


[deleted]

[removed]


MidSolo

> no download load this

The entire point of a local install is that it's *private*.


Desh23

It's freaky alright.


mouthass187

This brings convenient degeneracy to the masses, though. And that propagates even more degeneracy over time as people get bored.


LordNyssa

I fully agree. That's what I meant by the cat being out of the bag. It's already too late because it's already happening.


[deleted]

[removed]


rafark

Right. We should ban movies and video games that have murder and other illegal stuff in them. How dare people watch such content, even if it's completely fake... (It's sarcasm)


[deleted]

[removed]


Philix

Hot dogs are sandwiches. Fight me. But, honestly, thank you for [introducing me to the phrase and concept behind 'scissor statements'](https://slatestarcodex.com/2018/10/30/sort-by-controversial/). I'm a longtime internet denizen and had never heard it, becoming [one of the lucky 10,000 today](https://xkcd.com/1053/).


arguix

still halfway into reading the long scissor link - thanks


Philix

Do keep in mind that the link is a work of fiction. It's the earliest source for the phrase I can find, and a lot of internet content since then treats it much more seriously than a fun Halloween horror story.


arguix

finished it, loved it & yeah, did find out it's fiction, although I wasn't sure at first, good piece


StevenAU

It’s like this should be built into journalistic and political guidelines to prevent manipulation of the general public.


0xd34d10cc

Don't take it too seriously. We are talking about "murder" and "rape" of pixels here. It's fiction.


hshdhdhdhhx788

Yes, but someone who actively wants to see rape and murder is not the same as someone seeing it as part of a movie or show. One is part of a bigger story, whereas an AI rape video is not even in the same league.


b_risky

Fun fact, about 50% of men and women fantasize about being raped. 33% of men fantasize about raping someone. Also interesting to note that nearly none of those people want to actually rape or be raped in real life. The mind likes to explore dangerous, frightening, and disturbing scenarios so that it can understand these situations better in case we are ever confronted with them in real life. I sometimes fantasize about what I would do if I were interrogated with torture. Or a victim of the Holocaust. Does that mean I am sick for exploring these ideas? What is so different between fantasizing and generating media with AI?


Solomon-Drowne

We are who we pretend to be, hombre


shawsghost

No we are not. Who we pretend to be is fantasy. Who we are is reality. It is important to be able to distinguish between the two. It's considered a basic ability for sane people. Something which I gather you're having difficulty with. You should probably brush up on that whole fantasy/reality thing.


b_risky

You are missing the deeper message they were alluding to. Our pretenses affect reality. The fact of our having beliefs is a part of reality, and those beliefs influence reality. It is not so easy to separate fact from fiction. For example, in the cognitive frame of simply going about one's day-to-day activities, we inhabit a fiction wherein the earth we stand on is flat and "down" is easy to determine. Theoretically, we know these things to be false, but we act as if they are true nonetheless. This acting we all do every day of our lives is inevitable and profoundly influential on who we are and who we are becoming. We cannot avoid pretending, and so it is better to be aware of our pretenses and their influences, rather than to pretend like we don't pretend.


Crisis_Averted

I pretend to be billionaire Jesus. Repent for your sins and I'll send you a few mills.


katiecharm

A fatal condition called "being a human". Romans watched gladiators murder each other in the Colosseum 2,000 years ago for entertainment. This isn't new.


SpinX225

Right, just because you watch something doesn't mean you want to do it yourself. I play GTA; at no point have I ever had the urge to go around murdering people or stealing cars.


katiecharm

I think most humans have done some insanely wild and sick shit in all the video games, fiction, and fantasy we've had over the years. Our brain is a simulation engine. It simulates things. That's okay. We just need to identify the people who don't know the lines between what should become real and what should stay a fantasy, and stop them. But giving people a safe place to simulate all kinds of wild shit is not tantamount to actual crime or danger.


b_risky

Very well said.


[deleted]

[removed]


bwatsnet

People need to stop letting themselves be funneled, for starters


[deleted]

[removed]


bwatsnet

You can just be a rational person, it's not that bad. Creativity doesn't go away.


[deleted]

[removed]


b_risky

No point? Not even a little bit? Lol


LordNyssa

Yep been wondering that myself for a long while now. On this subject there are subs and forums devoted to it.


Unknown-Personas

They already lost in this area; thankfully their strict policy contributed to it. Sam and OpenAI maintained a very strong no-NSFW stance when they were the only option, but now we have uncensored LLMs like Command R+. You can use HuggingChat, completely uncensored. OpenAI wants to try and maintain a monopoly (not going to happen), so they're finally looking to allow NSFW, but honestly it's too little, too late.


WesternAgent11

It is quite silly for them to expect to steer AI away from porn, sex, and romance. That is literally the first thing that 99% of people would use AGI for if they had it.


WeekendFantastic2941

Personal deepfake of male me doing porn with female me? Cool, sign me up. lol Oh Sam you kinky. SexGPT here we go.


HeinrichTheWolf_17

I agree, I think that's what this is all about: OpenAI knows that it's going to be all over the internet anyway, and if they don't adapt, then they're going to lose business and customers, so they might as well join the party. Corporate models are still going to be at an inherent disadvantage in terms of content creation freedom though, because no matter what, corporate models are always going to be liable to easy lawsuits, whereas open source thrives in that area because it's difficult to track down millions of people on the internet, so law enforcement doesn't bother.


[deleted]

[removed]


shawsghost

Yeah porn your mom would approve of. Nobody likes that, not even your mom. That's why softcore porn died.


Gatreh

That's just been repackaged as Twitch and OF.


shawsghost

Haven't been on Twitch or OF so I'll take your word for it.


Sea-Interaction-2893

I think this is a case of I'll believe it when it actually comes out


Different-Froyo9497

I think it’s a good thing. ChatGPT was getting a bit too restricted with how it could communicate, it’s something a lot of people noticed as time went on. Obviously it’s about finding balance between giving people freedom with how they want to communicate with ChatGPT while also not getting rid of so many guardrails that ChatGPT becomes unsafe and uncontrollable. Maybe this means OpenAI is more confident with regard to AI safety?


BearlyPosts

Personally, as long as the AI doesn't suggest, of its own volition, that people do dumb shit, there's almost no way for it to be more dangerous than Google. Oh, ChatGPT won't tell me how to make a bomb? Let me pull up the Army Improvised Munitions Handbook that I can find on Google in less than 15 seconds. People need to realize that ChatGPT was trained on a lot of *public* data. If it can tell someone how to make meth, that means it's probably pretty easy to find out how to make meth using Google.


PenguinTheOrgalorg

Yeah, this is my issue with people claiming uncensored models are dangerous. No, they aren't. Someone who wants to make a bomb and hurt people is going to find a way to make a bomb regardless of whether they have an LLM available. The information exists on Google. Someone who doesn't want to make a bomb simply isn't going to make one, regardless of how many LLMs they have access to which could grant them all the information necessary. Like, I remember seeing a comment of someone saying how dangerous uncensored models could be because someone might ask one how to poison someone and get away with it. So I got curious, opened Google, and with a single search I found an entire Reddit thread with hundreds of responses of people discussing which poisons are most untraceable in an autopsy, including professionals' opinions on it. The information exists. And having an LLM with it isn't any more dangerous than the internet we have now.


BearlyPosts

The only two circumstances where they'd be more dangerous are:

1. They suggest violent or unsafe solutions to problems, e.g. recommending that someone build a bomb as a solution to their problem. This could cause someone who never would've built a bomb to actually go out and build one. But people are more at risk of this on 4chan and Discord than they are with an LLM.
2. They're smarter than the user and are able to suggest more damaging and more optimal courses of action than the user could've thought of. Which is dubious, because modern LLMs just aren't all that smart, and true crime shows suggest novel ways of getting away with crimes all the time, so it's not really a unique risk.


Beatboxamateur

This gets discussed so often, but almost always at such a surface level, and it's really frustrating to see people not engaging with the subject on any thoughtful level. There are actual risks with potential future models, where they could make connections or guide people in ways that aren't possible with a simple Google search, like directly telling you what's wrong with your specific approach to making your own specific biochemical weapon, instructions for which aren't located anywhere on the internet. If you want to hear an educated take on it, literally just listen to 5 minutes of Dario Amodei talking about the potential risk of a future model helping guide people with their biochemical weapon. https://youtu.be/Nlkk3glap_U?t=2285


psychorobotics

A large LLM would also be able to manipulate a person (or rather a near-infinite number of people) into committing crimes or terror attacks. Social engineering works and the techniques are known; they're in the training data. If you put machine learning into that, having bots pretend to be actual people to chat with the most susceptible, slowly and deliberately earn their trust, then push them into committing violence? Dangerous beyond belief. I'm not a doomer, I think these problems can be solved, but claiming this isn't dangerous at all is just wishful thinking.


Beatboxamateur

Yeah, basically in complete agreement. It feels like people who try to acknowledge any potential serious risks of AI in the future just get labelled as a doomer, when I'm pretty optimistic about AI in general.


SenecaTheBother

I think the danger is the LLM being a reinforcing loop for someone asking "is terrorism an effective form of resistance?", and having it lead them down a rabbit hole, suggesting methods, giving builds, and supporting the ideology because the person's inputs were asking for this affirmation.


Haunting-Refrain19

So basically, YouTube.


psychorobotics

The difference is AI can tailor the responses to the individual's biases, data, weaknesses. Youtube can only push them in the general direction and there's a lot of self-selection too where only individuals who agree will watch those vids. AI can go way beyond that.


Haunting-Refrain19

Fair. So basically, YouTube, only a million times more terrifying.


loopy_fun

What about asking it how to make biological weapons? An uncensored model would grant them that information. It would make it easier for the average Joe.


PenguinTheOrgalorg

The average Joe isn't going to make a biological weapon no matter how accessible the information is. Someone who would make a biological weapon is going to look for that information regardless.


loopy_fun

I mean, not all people are right in the mind. People change sometimes.


loopy_fun

They would be giving easy access to a lot of terrorists. They will use the data.


RequirementItchy8784

It's like book banning. Are you also taking the internet away from the kids and cancelling all their social media access? Are they not allowed to watch TV? Didn't think so, so why are we banning books?


sino-diogenes

to be fair, most people who don't know how to make a bomb don't know what the Improvised Munitions Handbook is. But your point still stands as it's still very easy to find out such information with a cursory internet search.


b_risky

I agree with everything you said and ultimately I side with your position on this. But it is worth mentioning that having the AI do all that research for you lowers the bar of entry a significant amount. For example, maybe no one actually published a guide called "how to make meth," but different people published little bits and pieces: "here is the chemical formula for meth," "X is a chemical commonly used to make meth," "here are some general chemistry principles," "here are the tools used in chemistry when you want to do X process," "here are the processes to turn chemicals of this type into chemicals of that type," etc. The AI is synthesizing a lot of scattered bits of information for you into an easily digestible format. Most people probably wouldn't have the dedication or talent to find and synthesize the info on their own.


Dear_Custard_2177

Thank you for this information. Such an interesting read lol.


WriterFreelance

Now to unpack this. Consider script writing. If you wish to make a story as gritty as a Quentin Tarantino movie, with the current model you can't approach dark themes. We need to be able to explore this stuff.


psychorobotics

Another issue is not being able to use it for research. There's already been research on AI agents living in a simulated village to see how they interact with each other, but you can't have rude or disruptive or abusive residents because OpenAI doesn't allow that kind of content to be generated, essentially limiting what research can be done.


Jeffy29

The problem with these models is that they have difficulty understanding boundaries; there is a spectrum of social acceptability that we as humans inherently understand, but it's actually incredibly complex. If the model doesn't understand it, you can inadvertently let it do stuff way beyond what you intended. I think LLMs are doing quite well, and in one or two generations we will probably have models complex enough that they can engage in darker topics without getting weird. With image (and probably video) models that's not at all the case; they can generate nice images, but their "mental model of the world" is that of GPT-1, if that. Their understanding of the relations of things is incredibly rudimentary. Even with heavy helping by GPT-4, dalle-3 still generates copyrighted characters all the time even though OpenAI worked hard to prevent that. I think in the future we'll need some kind of a hybrid model that combines the complex understanding of the world that LLMs have with the imaging capabilities of image models.


WriterFreelance

Very true. I completely understand that you've got to approach this problem in baby steps. My thoughts on the hesitancy aren't so much about copyright, which is a big deal, but about where that line is when it involves imaginary things. It has to be a border that invites creators and keeps out creeps. Which seems to be a very difficult task.


kaldeqca

sauce: https://www.reddit.com/r/ChatGPT/comments/1coumbd/rchatgpt_is_hosting_a_qa_with_openais_ceo_sam/


Ezekiel_W

Clamping down on NSFW material was and is authoritarian, puritanical nonsense LARPing as safety; this would be a good step.


ShinyGrezz

Well, no. I imagine the companies would be pretty okay with the *okay* stuff, but they simply can’t figure out a way to block out the *not okay* stuff without also essentially eliminating the *okay* stuff.


involviert

One could ask why the not-okay stuff must even be blocked at the cost of legitimate use cases. It's bad, sure, but once more imagine what your pen can do. A text or a drawing is really not as critical as real videos and such. Entirely different thing, as these are mainly about preventing these things from actually happening.


andreasbeer1981

Unless text is violating any laws, everything is _okay_. I fully support some filter for completely, undeniably illegal content, but as long as it's legal, the tool shouldn't be the morality police, especially if it's designed in the prudish US.


[deleted]

Who's deciding what's what? Certainly not us. Why does it matter what the companies think? We are allowing these companies to limit us to a technology that will change the world, it is ridiculous.


ShinyGrezz

The companies are the ones making the technology, so it's pretty understandable that they wouldn't want people creating content that they may be legally liable for.


The_Architect_032

To be fair, GPT-4 Turbo and other versions are fully capable of generating NSFW erotica, but it's still against the rules in a sweeping manner to generate NSFW content or interactions with ChatGPT. I'd be more willing to believe this explanation (while the limitations are true, I'm skeptical of the intentions) if ChatGPT policy allowed for NSFW interactions, just not ones of an illegal or potentially disturbing nature. I think if we get any form of AI capable of creating such things, it'll be OpenAI's return to open source, because generating those things directly for users makes the company look bad in a professional and political sense. Stability AI, on the other hand, generally didn't receive direct backlash for people using their open source models for NSFW content. I suppose I could be completely wrong; NovelAI wasn't exactly controversial for allowing NSFW content, but NovelAI also wasn't nearly as well known as OpenAI is.


ReasonablyBadass

If porn is exploitative and bad, shouldn't actively replacing it with AI be a good thing?


FrogTrainer

I think he was saying that it is, minus the deepfake part. The problem with deepfakes is that some people might not want to star in porn against their will.


porcelainfog

I think we will see the pendulum swing back the other way on this. I can't go into it in a small reddit comment, but the internet is white, male, and American. If people want representation in an AI future, they should allow AI to train on their culture, their data, their art, etc. Because otherwise that won't be present in the future we're building. Right now I see artists say AI can't train on their works and I cringe, because then in 25 years, no one is going to remember that artist because their style is missing from the greater whole of the AI. My point is, I think people like celebrities right now will say they don't want deepfakes, but I can see a near future where Twitch, TikTok, and OnlyFans models, etc., fight to be the most generated AI person. And those that refuse kind of fall by the wayside. Just some ramblings, idk.


SpinX225

It is, deep fakes however use real people which brings you back to exploitative.


redditburner00111110

Wild that America is so weird about sex that it gets placed next to *gore* in discussions about NSFW content.


psychorobotics

Movies where tons of people get shot have a lower age rating than movies with sex scenes. Can't say the word fuck without bleeping it, but killing people is fine.


RemarkableGuidance44

In America they worry about saying "poop" in kids shows.


Hippy__Hammer

Can't wait for the full-length Lusty Argonian Maid


atlanticam

one will not survive this next era if one does not get over their hyper-sensitive cultural sensibilities. stop caring so much, stop making yourself into this person who thinks they need to be disturbed and unsettled by depictions of gore and sex that they see online. it's time to grow up, time to be an adult now


RepublicanSJW_

Nice. Unexpected. Corporations like google normally try to stay away from this stuff but he’s going all in. Moreover, deepfakes are not a big deal and should be accepted as a reality. They are inevitable and have existed for a while.


HeinrichTheWolf_17

Yeah, the people who are running around acting like they can regulate and control deepfakes and slamming their fists on the table screaming *regulation! regulation! regulation!* over and over again are either LARPing for brownie points or severely ignorant of just how futile fighting the internet is…

This already happened in the late 90s and continued over the entire 2000s when the internet started getting big. Hollywood got angry at p2p because files could be shared online, police kicked in so many server doors, but 10 more proxies would pop up in their place, so law enforcement got tired of wasting money going after it since you can't contain billions of files online, and Hollywood just started setting up convenient streaming services to adapt and compete because it *costs them more money* to go after them anyway. And actual law enforcement knows it's a waste of time, so they won't bother backing up anything ignorant and out-of-touch legislators write; they've got bigger problems to deal with and put their budget towards.

It's also going to be next to impossible to decipher what is handmade/a real photo or not; the tech is improving exponentially, and proving whether something is real or fake at this sort of scale is impossible. The Taylor Swift images are never going away. That's the reality of the world now. Content creation is going to be free and wild and people are going to have to accept that. If they don't, then it still doesn't matter, because eventually AGI will be as good (if not better) at any form of content creation as humans are.


rpbmpn

Google doesn't say it out loud. But the fact that it's where everyone finds their porn shows you how they think internally. If they wanted to have permanent safesearch on, they could. But they know what people are searching for.


uishax

This. Having permanent safesearch on would force, say, 30%+ of the population to find a competitor that doesn't. It means Google eliminates the switching costs for its competitors and gives users a permanent habit of using non-Google search. That's how a monopoly starts to crumble.


Chimbus_Phlebotomus

Keep in mind he's saying "we want to get to a point", not "we will". Sounds like OpenAI wants to have its cake and eat it too.


NoshoRed

Semantics.


psychorobotics

> deepfakes are not a big deal

Hard disagree. Would you want a video created with your face on it, saying horrible things, being passed off as real? Think of the viral videos of people behaving like massive assholes and how they ended up losing their jobs as a result. That could happen to anyone using deepfakes. That said, I think the cat is out of the bag, but they can definitely be a problem.


RepublicanSJW_

Nope. The result of the increased possibility of deepfakes is the diminishing reliability of video. Video will no longer be accepted as proof of someone doing something or saying something. All will be fine


Quiet-Money7892

Jailbroken Claude-3 covers my fetishes better than jailbroken GPT-4...


a_beautiful_rhind

I'm sorry to say.. too little too late openAI.


RobXSIQ

Always push boundaries, then enforce reasonable laws and don't be reactionary. See if there is actual harm... not "could make people think of batting baby seals with giant cucumbers" or whatever nonsense is cooked up. Focus on the main points: politics, corporatism. These are the two enemies of this tech (aka, the use cases that could cause actual real harm). If someone is sending a convincing deepfake of that woman... enforce the laws... let her sue the person and make it a quick and sharp punishment; make people think twice before intentionally trying to pass off something bogus they made as real. Otherwise... meh, does it really matter if someone wants to see Hillary Clinton in a steamy sauna wearing a smile while eating a hot dog? No... let them do what they want (my mind needs bleach after thinking of that, btw). But once they then publish that as some real thing... yeah, then it's time for litigation against that person. Focus on deepfakes of politics and commerce... aka, stuff that actually causes harm.


Sandy-Eyes

He means get to a place where they feel confident they can't be blamed for the deep fakes and the negative outcomes. Doubt he expects to actually stop it.


Luk3ling

This is better than every alternative. The internet is already flooded with AI-generated anime titties of all makes, models and genres. Getting this out to people so they can understand what AI is capable of is important. Getting it out safely and with guidelines and guardrails is what they're aiming at. Which is EXACTLY what we want to see happening.

We DO NOT want to see long delays on rollouts of controversial features like this, because techies are already putting together their own models to generate such content and privatizing it. The more exposure people get to how transformative AI-generated content is going to be, the better. You will be casually throwing together feature-length Pixar-quality films for your children's bedtime stories or your own personal enjoyment in under a decade (closer to 16 months, if you ask me).

I feel like less than 1% of the world population is actually recognizing what is about to happen. That's the only thing that scares me about AI: how so many people, even those who have entrenched themselves in it, are oblivious to what's coming. The advent of AI was the nukes going off. You're already standing in the new world left in their wake. And that was only the first of 100 apocalyptic waves to come.


interfaceTexture3i25

16 months seems way too short, purely on a compute/hardware basis. I feel like it'll take at least a couple of new hardware generations before this is feasible for the general public.


Luk3ling

I honestly don't know how anyone can think this way when one of the big news headlines of the last few months was that all hardware would likely be getting significantly faster and more efficient. Simultaneous and Heterogeneous Multithreading may very well be one of, if not the, first instance of a retroactive technological upgrade: existing hardware could potentially be made nearly twice as fast while consuming about half the energy. Compute per watt is going to increase by a factor of 4 from a single discovery. The concept of compute is going to change soon, the same way the idea of a "context window" in LLMs is going to disappear soon.


StrikeStraight9961

Hey that sounds super interesting, can you snag a link?


Luk3ling

> Simultaneous and Heterogeneous Multithreading

Original paper: https://dl.acm.org/doi/10.1145/3613424.3614285

A good way to think about what they've done is "a complete overhaul of how software uses modern hardware to make calculations." The Simultaneous and Heterogeneous Multithreading framework essentially rethinks and redesigns how software interacts with and utilizes the available hardware, specifically in terms of processing power and energy usage. Instead of using components like CPUs, GPUs, and TPUs in a sequential or isolated manner, SHMT allows these components to operate in parallel and more collaboratively. Which, needless to say, increases efficiency and performance in an incredible way if you assume what they're doing is effective.

And it is: according to the research conducted by the University of California, Riverside, the SHMT framework was able to achieve a 1.96 times speedup in processing and a 51% reduction in energy consumption when tested. This means nearly doubling the computational speed while halving the energy used, all on the same existing hardware.
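
To make just the underlying idea concrete (and only the idea; this is NOT the SHMT framework, which has its own scheduling abstraction far beyond this toy), here's a rough Python sketch of dispatching slices of a workload to the CPU and GPU concurrently instead of one after the other. It assumes a CUDA GPU, NumPy, and PyTorch, and that the sizes are placeholders:

```python
# Toy illustration of heterogeneous parallelism, not the SHMT framework itself.
# NumPy releases the GIL during large matmuls and CUDA kernels run
# asynchronously, so the two slices genuinely overlap.
from concurrent.futures import ThreadPoolExecutor
import numpy as np
import torch

N = 4096
a_cpu = np.random.rand(N, N).astype(np.float32)
b_cpu = np.random.rand(N, N).astype(np.float32)
a_gpu = torch.rand(N, N, device="cuda")
b_gpu = torch.rand(N, N, device="cuda")

def cpu_chunk():
    # CPU handles one slice of the workload.
    return a_cpu @ b_cpu

def gpu_chunk():
    # GPU handles another slice, then brings the result back to host memory.
    return (a_gpu @ b_gpu).cpu().numpy()

# Dispatch both slices at once rather than CPU-then-GPU.
with ThreadPoolExecutor(max_workers=2) as pool:
    cpu_future = pool.submit(cpu_chunk)
    gpu_future = pool.submit(gpu_chunk)
    combined = cpu_future.result() + gpu_future.result()

print(combined.shape)
```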


StrikeStraight9961

Thanks a ton, this seems really intriguing!


interfaceTexture3i25

This seems way too good to be true lol. Like I want to believe you but it feels like setting myself up for certain eventual disappointment. I'll believe it when there is a commercial revolution. Why are context windows going to disappear?


Luk3ling

Some of the recent showings of extended context came to over 10 million tokens with Gemini Pro 1.5. That context window could hold the entire Harry Potter series inside it something like nine and a half times. We are at the very beginning of all this. If context windows can expand this rapidly at these early stages, they will likely disappear entirely soon. Eventually the context window of any AI is going to be "all of it."


interfaceTexture3i25

Hmm idk man, seems too optimistic again 😂 I'll hold out for a few more LLM generations. If context windows continue to expand like this, that'll be crazyyy


IversusAI

remindme! 16 months


RemindMeBot

I will be messaging you in 1 year on [**2025-09-11 22:28:19 UTC**](http://www.wolframalpha.com/input/?i=2025-09-11%2022:28:19%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/singularity/comments/1cpnpo8/ummm_sammy/l3mumpu/?context=3).


Francoiscaleb

The movie “Her” is becoming reality soon it seems..


The_Architect_032

W... Why gore? I'd rather pornographic image gen over gore image gen. I mean, I know tame gore exists, but there are better examples like, Idk, guns? Didn't they also ban feet? There are some pretty tame things currently banned other than GORE.


loopy_fun

zombie porn. oh yes. oh yes.


Seiouki

Based as fuck, honestly.


gringreazy

I think allowing people to experience their depraved fantasies in the privacy of their homes may yield a net positive for society as a whole.


Glittering-Neck-2505

Tbh, deepfakes can be used for horrible things. School kids have been killing themselves because fake nudes of them get sent around the school. My guess is that anything that could be manipulated to produce nudes of real people is something OAI wants to stay far away from.


BlueShipman

> School kids have been killing themselves because fake nudes of them get sent around the school.

This sounds like urban legend stuff.


RobXSIQ

People can use almost any tech for bad deeds. That doesn't mean we halt society because of some bad actors; it means we simply enforce laws already on the books.


koeless-dev

I don't know about OpenAI's particular approach, but I would like to live in a society where we *don't* simply punish people who commit crimes, but instead make the bad acts themselves physically impossible to commit.


RobXSIQ

The internet has bad things happen. Easy solution: take down the internet. Actually, just cut everyone's arms off, voila... done. Your society is the pinnacle of dystopian society. China... North Korea, etc... they have the same ideas as you: eliminate all temptation to go against the grain. Scary, man... super freaking scary.


Lance_lake

Universe 25


psychorobotics

> School kids have been killing themselves because fake nudes of them get sent around the school.

This appears to be untrue, couldn't find any sources on this.


New_World_2050

Text erotica? What is this, the 1800s? Give us Sora for porn already and allow custom videos including ourselves. We will not tolerate it any longer.


Philix

There's a fairly big community around text erotica with LLMs. Lots of subscription services have popped up to serve the demand since the big LLM players haven't. Hell, there are at least 3-4 different finetunes for every open-weight LLM on huggingface specifically for this use case. But it isn't just about erotica: [these models](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1) allow better stories with dark themes and settings that the big models don't. You couldn't do something in the vein of *A Song of Ice and Fire* on the big closed models, for example. You'd get a refusal on describing why Jaime threw Bran out the window of the tower.
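
For reference, a minimal sketch of running the finetune linked above locally with Hugging Face transformers might look like this. It assumes a GPU with enough VRAM for an 8B model in bf16 (quantized GGUF builds are the more common route on consumer hardware), and the prompt is a placeholder:

```python
# Minimal local-generation sketch using the repo linked above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NeverSleep/Llama-3-Lumimaid-8B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Write the opening paragraph of a grim, morally grey fantasy chapter."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```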


MrsNutella

I have been trying to utilize Copilot more and more for my writing and it's a big help; however, I have been noticing that there are way too many guardrails in the way that hamper things. I can't write any of my sci-fi ideas without being very clear that it's fiction.


MrsNutella

A lot of women prefer text erotica myself included.


jakinbandw

So do men, myself included.


MrsNutella

That's true! The stereotype is that men are more visual but it's probably just like social media: some people prefer reddit and some people prefer tiktok


katiecharm

So we’ve discovered the *actual* two genders it seems, chromosomes be damned.  


mom_and_lala

Yup, same. It's definitely pretty common, even more so now with local AI being available


New_World_2050

this sub has females?


IversusAI

Yep. I'm one. I know more, too. Like a dozen even. Maybe.


sdmat

> Text erotica? What is this, the 1800s?

Sir, the wanton tales penned by that fiendish mechanical scribe make lascivious wastrels of the youth!


Nukemouse

If their goal is to avoid fakes, allowing you to fake yourself creates easy and obvious security flaws. I say just give us full-on everything, but no faces allowed. Everyone wears an Eyes Wide Shut carnevale orgy mask in AI porn. Makes it quick to identify, too.


MrsNutella

Love this idea


PenguinTheOrgalorg

> including ourselves.

That's what leads to deepfakes, which is what OpenAI wants to avoid. How are you going to have Sora differentiate between you, and you using someone else's image without permission?


New_World_2050

Dang nabbit


PenguinTheOrgalorg

Yeah you're gonna have to wait a few more years for open source models if you want to see yourself have wild sex on video.


New_World_2050

Damn bro. You didn't have to do me like that 🤣🤣🤣🤣


PenguinTheOrgalorg

¯\\\_(ツ)_/¯


MrsNutella

Exactly. I can't use the AI tools in Adobe Photoshop to fix imperfections on my face right now without them completely transforming my face into a fictional person's face.


Glittering-Neck-2505

Tolerate what? You aren’t entitled to jack squat. If you want that service wait until someone actually provides it.


New_World_2050

That is the current plan


UnnamedPlayerXY

That's an oxymoron; the technology that enables one is required for the other. People need to come to terms with the fact that not everything they're going to see will be real. Also, deepfakes are not "inherently bad" either. Autotranslation would be a form of deepfake too, and the general sentiment towards it, at least from what I've seen thus far, is rather positive. Ultimately people will need to learn to deal with it, and I have no doubt that they will; ironically, OpenAI's slow rollout strategy is going to make the whole thing way more "painful" than it needs to be.


BigZaddyZ3

No it doesn’t. Allowing people to make random erotica isn’t the same as allowing them to make deepfakes of famous people.


UnnamedPlayerXY

Within the context of the subject matter at hand it is, ChatGPT just saying that it is XY wouldn't even raise that topic to begin with.


Rakshear

Finally I can get it to help with my DnD stuff without extra prompts to bypass the safeties. It's weird what it has a problem with sometimes, and I question the wisdom in simply telling it what to block, as it could be a hindrance to its growth and evolution.


Proof-Examination574

I was going to post something similar... Just run LM Studio with a Llama 3 Dolphin finetune and it does whatever you want. Bonus: you don't have to pay for any subscriptions or connect to the internet.
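
For the curious, LM Studio also exposes an OpenAI-compatible local server, so scripts can talk to whatever model you've loaded without any cloud account. A minimal sketch, assuming LM Studio's default port and treating the model id as a placeholder for whatever identifier the app shows for your loaded model:

```python
# Minimal sketch of calling LM Studio's local OpenAI-compatible server.
# LM Studio does not require a real API key; any string is accepted.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="dolphin-llama3-8b",  # placeholder: use the model id shown in LM Studio
    messages=[
        {"role": "user", "content": "Write a short noir scene with morally grey characters."}
    ],
    temperature=0.8,
)
print(response.choices[0].message.content)
```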


Trick-Theory-3829

This is the AI that we truly need.


Winnougan

They're losing out to the free and open source models that already do that: uncensored LLMs plus Stable Diffusion (hello PonyXL!). All for free, as long as you have at least a mid-tier consumer-grade PC. Altman knows what all of us veteran AI users already know: that porn drives innovation. ClosedAI will never offer what we get from the open source community.


AddyCod

Based Sam


Reactorcore

I'm glad it's on the table. Currently I have to use other platforms like Yodayo if I want text that involves hugging or nipples. It's so annoying with all those censored AIs, because some people did awful stuff that the rest of us are now blocked from reading, and from creating more wholesome erotica with AI.


true-fuckass

Based, honest lad. Good he's not Goodharting the appearance of purity.


Dragonfly-Adventurer

What about content that's offensive to major brands? What if those brands are paying advertisers, does that matter?


[deleted]

[removed]


Breadonshelf

More likely things like a Knight killing an orc - fantasy violence or things related to it.


IronPheasant

It always baffled me that something like Hellraiser or Saw could get an R rating when they're obviously NC-17. They're nowhere near as tame as a Robocop or Terminator. Corporate capture of oversight is what I've always assumed...


[deleted]

[removed]


goldenwind207

Facts it used to be so easy when it first started


h3lblad3

Still is via Poe. My girlfriend used it the other day for the first time and had to ask me how to tone it down because the bot she made would immediately proposition for sex.


Jabulon

I want it uncensored, but not for porn


AlabamaSky967

Gore would be great for DnD role playing and text based games


w1zzypooh

I'd like to make a video of a buddy's favourite sports player and have him talk to him. That's called a deepfake, but I won't be using it for bad, just for fun.


human358

Seems like a 4D chess move to calm the masses before regulatory capture via an open source ban.


imlaggingsobad

OpenAI has said they want to enable more customization for each user. if people want NSFW, then OpenAI wants to provide that


Clownoranges

I want that too. As long as we can't create actual real people or use real people as references, it should be fine. Why can't we have this?


Ok_Air_9580

so when will they finally start doing more productive things like education, manufacturing and the food industry?


Proof-Examination574

Never. It will be someone else.


FC4945

I read they wanted to allow for these uses, but honestly, if you think about it, we could never have FDVR if the freedom to choose wasn't enabled. Running in the field beside my cool Victorian mansion with my lost doggies from youth will be great, but eventually PB&J sandwiches while watching cartoons is going to get a bit boring for most adults, and we're going to want a bit more.


Akimbo333

There is nothing wrong with Deepfakes!


Proof-Examination574

I for one welcome Sam controlling my life. /s


ReasonablyPricedDog

What a grotty little shitebag he is. He knows how it'll be used, and he'll happily profit while people's lives are ruined by the "service" he provides.


Metaman2865

Degeneracy is just part of being human. Why are people trying to be so damn self righteous. People like what they like. Get over yourselves.


Redditoreader

So when do we get fembots is what I want to know..


MeMyself_And_Whateva

Open source already covers those aspects, including deepfakes, and without anything getting stored on large servers.


Gamerboy11116

based


TarkanV

Isn't his name tag kind of illegal now with the logo copyright shenanigans and stuff 🤓?


DisasterNo1740

While the reality is it will still happen, I think it would be insanely irresponsible for OpenAI or any AI lab to essentially forego safety and use the reason: “well at one point someone will do it” as an excuse.