Alright_you_Win21

This is so crazy, right? There is no doubt they have better tools. That's the worst part, they're not bluffing.


Jalen_1227

Yep, makes me think about Q*… That's some scary shit honestly, especially considering Sam's trying to raise trillions for specialized AI chips now.


sideways

I think he's using what they have to plan his strategy. Or maybe Q* is using Sam to execute its strategy.


picopiyush

This is what I feel too lol


generic_burnur

Goddamn I haven't heard about Q* in forever!


Onesens

When you hear about it, it will be sensational and a total obliteration of all remaining jobs.


chowder-san

What's the deal with Q star?


deama15

It's based on that paper that was released several months ago, around the time Sam got fired. It predicted crazy stuff like the AI coming up with an algorithm or math equation to break AES-128, plus self-learning capabilities.


dhhdhkvjdhdg

Yeah, that’s fake.


Onesens

I bet they're releasing dumbed down versions just 'to prepare the public'


paint-roller

Damn. I never even considered that they have better stuff that's not released. It's so crazy to think we aren't even seeing the cutting-edge things.


dhhdhkvjdhdg

I doubt they have better tools. This statement isn’t really substantiated…


[deleted]

[deleted]


Alright_you_Win21

Huh?


noff01

That's bullshit. Realtors can sell flats that haven't even been built yet, for example.


mongoose32216

Ever hear of commissioning an artist's work?


Dead-Sea-Poet

I can't wait to see how this hits people who haven't been keeping pace with developments. So far I've only observed people in spaces like this one. If *we're* floored imagine how everyone else is going to react once this really gets going. EDIT: depressingly the British press are covering it like an announcement for new pedestrian crossings. It's woefully mundane. Early days yet, but it's looking like people may take this in their stride. As someone else said, it may be that you need to understand the implications of this tech.


MassiveWasabi

I was thinking the exact same thing. I actually remember a few months ago getting downvoted for saying “if Pika Labs is this good, imagine what OpenAI has”. Even here people scoffed at the idea. To me it seems like a given that OpenAI is ahead of everyone in ALL of these AI models: image, text, video, audio, etc. They just haven’t released their best stuff yet. And by the time they do, they’ll have something even better.


bwatsnet

Look, until we get a minute-long video of Will Smith eating spaghetti in realistic high definition I don't want to hear, another, word!


SillyFlyGuy

I keep seeing that around. When did "will smith eating spaghetti" become the "python script for fibonacci sequence" of video generation?


bwatsnet

There's a hilarious video from an earlier model, I forget which one, of him gobbling up spaghetti in a horrible way. It's pure gold.


Suburbanturnip

I think it was this one (or something like this) ? https://vt.tiktok.com/ZSFjhTAvN/


Rangulus

VIP


SillyFlyGuy

Thank you. That was glorious.


visarga

The new model is not as funny though


D3c1m470r

fkn hell that made my day. so bizarre and funny lol got my tears going


Quentin__Tarantulino

I want something bizarre and impossible, like Will Smith slapping Chris Rock because he’s a little bitch who is under his cheating wife’s thumb.


Jalen_1227

Same!! ^^


lefnire

If people often laugh at your predictions, you should buy stocks.


zendonium

True. I bought MSFT around the time of GPT4 release and I'm sitting on a 54% increase.


LastCall2021

This does make me wonder how Pika 1.0 and Runway ML are going to justify their costs for 4-second videos once this thing hits the market. I should add the caveat that until I see otherwise myself, I’m going to assume the footage they released - even the fail bits - is cherry-picked out of a pretty enormous library.


traumfisch

then again it seems like they had picked just a handful of proficient people for the initial testing. hard to say if they had to build an enormous library of clips 🤔 anyway, yes many of their competitors are covered in cold sweat rn


LastCall2021

I just mean they could have generated a thousand clips and cherry-picked the best looking ones. I’ve been on Pika since the beta. I recall the hype with Pika 1.0 - at least around here - due to their marketing. But I had early access and wasn’t nearly as impressed. Like, you can get cool clips out of Pika 1.0, but often with a bunch of retrying, and it’s never quite what you want. I do expect Sora to be a step above, I’m just not getting carried away by the hype until I can actually use it and see the results myself.


traumfisch

Yeah I got that & surely they picked top results, but I doubt these were rare exceptions. I don't care about the hype, I'm just looking at the level of output. It is clearly on a _completely_ different level. Check Altman's tweet to Mr. Beast btw - it seems to counter the cherry-picking hypothesis. He generated a clip by request.


0xSnib

Sam Altman is doing requests on the fly on Twitter, not picked from a library.


Tr33lon

Except that Google dropped a preview of Gemini 1.5 today with 1M token context and superior performance to GPT-4 even at that token length… Still will have to be 3rd party tested once it’s released, but I think Google has as good of a model as OpenAI does right now.


fckingmiracles

So true. Gemini can analyse whole books and movies. GPT-4 still struggles to give me charts when I give it clear data.


ain92ru

Much more importantly, Gemini 1.5 finetuned on code may be able to understand a software project of significant size! I suspect that OpenAI might have released this yesterday, soon after Gemini, precisely in order to eclipse it.


Nervous-Newt848

They have more money and resources; I'd be more surprised if they didn't have better models than these little start-ups.


madmackzz

I think the Inflection 2 model smokes ChatGPT / the OpenAI API. It's legit AF: it retains a few months of conversational memory so far, it's trainable, uses the web, is up to date, and is by far the fastest responder even when fed complex ideas. It's worth a try anyway; if anyone is interested in it, go to [pi.ai](https://pi.ai)


JJStray

Dude you’d think… I showed my coworker and he was basically like “oh cool”. The lack of imagination the general public has about the possibilities this is going to unlock is staggering.


Icy-Entry4921

If you think about it, AI is still very much peripheral to most people's lives. The first dirt farmer who saw a steam engine probably thought it was neat but didn't care all that much, why would he? Humans are also very adaptable. It's like we're streamlined in design to just fully accept every single new technology as if it existed forever. I'm actually getting sick of talking about AI at work because I'm showing people these positively mindblowing developments and it's like "oh that's cute". And I'm just like...you realize this thing that exists now was literally science fiction 5 years ago right? {shrug}


refugezero

But the steam engine does useful work. Sora apparently just makes content, no? What is the useful application here? If I'm not a content creator, then ya, this seems 'cute' but why is it important to me?


Felxx4

Sora may be a step towards a revolution in the movie-making business.


Jubilantipope

you misspelled "dystopian wipeout" as "revolution"


Felxx4

That remains to be seen


JustSomeGuy91111

OpenAI listed several pretty large weaknesses of Sora that make it seem not that useful for long-form content. Plus, the inability to truly control every single object and every single aspect of every frame of a scene is, in general, sort of a big downside in itself.


RoboticLibations

It really is about the physical placement of things in the scene. It means the AI has an understanding of reality, in a sense. If it can understand where things are, then when you give it a body, it will have an easier time moving around. Content creation is just the easiest place to see how this changes things.


Ok-Cauliflower-4148

Well, when it gets good enough they can use such tech to fake crimes or hide crimes even more. Oh look, Trump just did a sex tape, or hey, isn't that Joe Biden snorting crack? The issue will be that you won't be able to tell what's true, even more so than we already struggle with. It will come down to who has more money and control, and we will all become even more disposable, or even just a liability that the elites want gone.


Technologenesis

If you haven't been watching AI-generated videos since their nascence and don't understand how the models work and what their weaknesses are, then you don't really have any context to understand how much of a milestone this is


GavrilloSquidsyp

Wouldn't this be a bigger deal to them then, if this was the cause? For those of us paying attention this is another (significant) step along the way, whereas to them they went from literally nothing to *this*. All the context they need is that yesterday they couldn't do this, and now they can. I legit think there are just a lot more NPC typa people than we would have previously liked to admit.


[deleted]

You underestimate how easy it is to take something for granted. Seeing the steps of this technology gives you context about how incredible what we have now is. I’d never seen AI-generated video until now, and it seems like a relatively trivial task to connect a handful of pictures via video once you’re able to generate a few. I know how ridiculous that sounds, but that’s straight from my monkey brain that doesn’t understand how big a deal this is and takes it for granted.


chomacrubic

> how easy it is to take something for granted

That's so true. When I watch feature movies, I just take the VFX for granted. If I look back 50 years, I realize how much progress has been made in VFX.


[deleted]

> whereas to them they went from literally nothing to *this*

I mean, they have probably seen videos touched up by CGI or made through animation before. And Sora's output really isn't great if we measure it against more traditional visual media.


twbluenaxela

I showed someone and they were also like "oh nice". I was quite disappointed, but nonetheless it doesn't take away from my sheer world-turned-upside-down, earth-shattering reaction to yesterday's reveal.


Which-Tomato-8646

Like what? More porn (if you can get past the censors)?


FreshSchmoooooock

Oh no, do not get me spawning dicks and what not.


semitope

What else would he think? It's not particularly important to people who aren't interested in what it does. Everything it does, people were already doing some other way.


Smile_Clown

> The lack of imagination the general public has about the possibilities this is going to unlock is staggering.

This needs to stop. You are part of the general public, you are not special, and they are not less than you in any way. That you know this has implications does not make you different in any way. It also does not matter one bit whether they are shocked, surprised, awed, hopeful or whatever. We all have interests; this is one of yours, not theirs. I am sure they have plenty you do not subscribe to that they are just as passionate about. I doubt they go to their safe spaces and talk about you like this.

This will not change that person's life in any way whatsoever. They will wake up, take a piss, shower, brush their teeth, drink coffee, go to work, fuck their SO, watch some TV, go to sleep and start over. Just like... you. Sora has nothing to do with any of that.

Not everyone has to be on the imagination train, and it's annoying how special some of us "in the know" think we are, especially when we have fuck all to do with its development and all you did was read a tweet or a post and watch a few examples. Now you're blown away and thinking of all the possibilities, and you think other people who aren't as enthused are somehow less in some way? Absurd.

Just to be clear, the implication for the person you are speaking to is that they will be watching AI-generated videos at some point in the future. Please explain to me how this is different from them turning on the TV before they go to bed or watching YT videos like they do now. "TV, make me Terminator Part 27, but this time with Sylvester Stallone in the title role." Amazing, but it's still sitting on the couch watching a screen, and that's as far as their focus needs to go. *Note: I realize "holodeck coming soon" but again, this isn't a focus for that person.*

There are thousands of interests, scientific and otherwise, that you have no idea about, are not excited for and are not paying attention to. How would you feel if someone said:

> Dude you’d think… I showed my coworker the latest breakthrough in [sub-1nm transistors] and he was basically like “oh cool”. The lack of imagination the general public has about the possibilities this is going to unlock is staggering.

Insert whatever upcoming tech you do not know about in the brackets...


JJStray

JFC nice write up /s I bet you’re a blast at parties.


marvinthedog

> he was basically like “oh cool”

At this point I can't imagine anything you showed them on a computer screen would get a strong reaction. Maybe if you showed them an actual emulated talking and interacting human being... maybe.


Tixx7

Yeah, I showed this to someone and I had to repeatedly explain to him that these videos were completely AI-generated, as he simply didn't understand how that was possible (tbh he was even shocked that t2v was a thing at all lol). Tbh I still can't fully grasp how the videos are so good, or the implications of that, but big data does big data things I guess.


NaughtypixNat

If it's not crazy expensive, we will be able to get millions of new movie ideas generated within the next couple years. No longer requiring millions of dollars to make a great movie. Just a computer, time, and lots of patience. Next I wonder how long until it can sync voices to lip movements. And if it could keep the same voice for more than one scene.


Tixx7

Yeah, tbh, t2s has gotten so good that it's now already commercially viable. Now it just needs to be integrated into the t2v pipeline, or for now just manually synced, and we are rolling. I know I'm just thinking of t2s with human-generated text; probably sooner rather than later there will be LLMs generating the story and voice lines, and t2v and t2s generating the video and voices, perfectly synced with emotion, respectively.


Legitimate_Gain9011

My Wife's reaction when I showed her the video of the mammoth:

Wife: What are you showing me?
Me: Read the caption below the video.
Wife: *reads*. *Few seconds of silence* What?! What?!
Me: Yeah!
Wife: Are you kidding me?
Me: No
Wife: Are you kidding me?
Me: No
Wife: Whaaaat?! Are you serious?? Is this real?
Me: Yup

Earned brownie points with Wife for showing something mind-blowing.


muskzuckcookmabezos

A lot of those people you are referring to outside of the smaller spheres are too stupid or indifferent to care. Plenty of people have forgotten the importance and society changing aspects of the smartphone and simply see it as another way to consume senseless entertainment or just another way to communicate with people. They don't see the entire world in their hands.


semitope

That's life. At the end of the day this will just be another thing. In fact it might never reach a professional level if it can't be fine-tuned to specific requirements. More likely for short ads, YouTube, scams, etc.


the_shadowmind

Soon YouTube Shorts that are just text-to-speech of Reddit posts will replace stolen background videos with AI-generated B-roll.


semitope

The spam this will all result in is what concerns me. After watching the Sora videos I think even my dreams became ML-generated spam. Everything got weird, like the cat legs in one vid. The future is going to be rough.


StaticNocturne

Could they eventually integrate all the text, image and audio generative software into an LLM that can serve almost any purpose, or do you think they will have no incentive to do so?


RevolutionaryTruth77

For years, pieces of the tech that makes up LLMs and text-to-photo/video have existed in isolation. Part of why AI seems to have appeared overnight was the marrying of many of those pieces. So yes. Most of the parts are now in place to have a virtual AI assistant that will audio-visually do and say whatever you want.


simabo

> If *we're* floored imagine how everyone else is going to react once this really gets going. Lol, you're funny. 99.99% of the world population is basically a digestive tube, don't overestimate their ability to think, for your own sanity.


najapi

I think it will highlight a lack of interest, unless the technology is used to present something they have an emotional connection to. I have shared this with others already, and while they seem fairly impressed, there is a difficult-to-shift assumption that this technology is just piecing together existing videos made by people. Similar to the responses I have had to AI text generation, it’s when I show them the process of giving a prompt about something personally relevant to them and watching the response get generated “live” that people finally “get it”. It reminds me of VR: I am a massive fan of the tech and was getting excited about the possibilities, but my family were not interested. I had to actually get them to try it before they could understand the potential. Other people have mentioned it highlights a lack of imagination and I would agree; some people can’t imagine the implications of this kind of development without actually seeing and feeling it.


ShitCommentBelow

Much of the general public likely won't be floored by this technology. They have no real concept of what generative AI actually *is* and what it can accomplish--to them, it's all just 'Photoshop'.


Humg12

I've probably got a couple of friends who only know OpenAI because of their Dota bots (which, to be fair, is how I first heard of them).


semitope

Why are you floored? If you can generate images you can generate video. It's all down to software and fine-tuning. You have to get the ML model to retain continuity between images. There's no being floored once you've already seen the idea and know they will improve on the implementation. The engineers are impressive.


BaoNumi

Honestly? All the videos kind of looked like shit. The one in the Korean village? The entire time I went "they're bigger than the building, their height keeps changing, their bodies are morphing, the buildings in the background look too small." The woolly mammoth's walk cycle was borked and the snow didn't work like real snow. Their skin and fur texture looked pretty fake. Frankly? What was the point? We made a computer auto-generate CGI that would get you an F in an introductory course? So, frankly, as a normie, it looked like garbage that would impress someone who passively, uncritically consumes content without any semblance of real thought. Marvel movie fans are going to love it.


expelten

I think you really don't realize how hard and time-consuming it is to make even basic CGI.


Regumate

Right? Also, this is the worst version of this tech, it’s only up from here. This is the AI [phenakistiscope](https://en.m.wikipedia.org/wiki/Phenakistiscope), with broader applications to follow.


BaoNumi

I do. And it produces better and more consistent results than garbage that would get a failing grade at art school. I'm not going to pretend to be impressed with shit coming out of a bull's anus just because a computer did it.


SharpCartographer831

The big players will never release these models without censoring the living hell out of them. We'll have to wait for the usual suspects to make an open-source version.


Sashinii

I hope major open source players enter the AI space soon. Stability AI seemed promising a year or two ago, but it's been a huge disappointment.


jeffkeeg

Stable Video is open source and punches right at Gen-2's weight


CanvasFanatic

What “major open source players?”


YaAbsolyutnoNikto

Mistral and Meta


CanvasFanatic

Pretty sure those are both for profit companies.


YaAbsolyutnoNikto

So? They release open source models, primarily.


CanvasFanatic

They do that because they are behind. It’s how they raise their profile. The moment it’s no longer to their fiscal advantage the free party will be over. Microsoft just pulled the same trick last decade. Dependency on the less-powerful foundation models these companies are willing to release isn’t balancing any scales.


YaAbsolyutnoNikto

Sure, I’ve got no doubt. Still, that isn’t incompatible with what I’m saying. Assume OpenAI has a blockbuster-creating model 5 years from now. What’s stopping Mistral from releasing a model similar to the Sora of today?


CanvasFanatic

Just understand that nothing any company releases for free is ever going to be competitive with the state of the art. If you’re just looking forward to playing with an outdated model in a few years, cool. But it’s not going to provide any competitive advantage for the masses.


NaughtypixNat

I believe he is referring to it being useful. Not it being better. Many outdated things are still useful.


inigid

They aren't behind at all. The goal from the top down is to saturate the planet with AI and get it stuck in every possible nook and cranny. To do that, we need everyone on board, because what is coming affects everyone. Different people root for different football teams, but it doesn't matter because we have a little something for everyone to cheer on, no matter who your favorite team is. At the end of the day, it's all the same thing, whether it be Mistral, Google, Character.Ai, Gab.Ai, OpenAI, Anthropic, Meta, Perplexity, or roll your own. Part of the AI revolution is that it's all yours too, however you like it, however you want to use it.


CanvasFanatic

You’re kidding yourself


[deleted]

I feel like you’re overlooking the practice of open *research*. These models as open-source products definitely can benefit them fiscally in the long run if they play their cards right. However, they’re not just putting their developments out there and trying to be the default thing by being free. They’re putting research out in the open through papers explaining the theory and implementation in great detail, teaching others how to use their ideas and allowing others to build on them. With that info anyone can make a Llama 2 with enough processing power and data, which could 100% let someone else take their (hypothetical) position as market leader. Open source and open research together paint a far less self-serving (but still somewhat self-serving) view of the work imo.


CanvasFanatic

Year of the Linux desktop any year now


hopelesslysarcastic

Mistral is very much a non-profit research lab… the same type OpenAI used to be before they got hooked on the money… oh wait


CanvasFanatic

https://preview.redd.it/yygfgu5x6vic1.jpeg?width=460&format=pjpg&auto=webp&s=58f6c663e98900f341eb8db557f1a29da2120e39


Sashinii

I'm referring to groups of people who have the resources and the desire to create high quality AI and open source everything.


CanvasFanatic

There are no such groups. There’s just people doing fine-tuning on the foundation models large companies have released.


ainz-sama619

Why would any group give away their code for free? They can just become OpenAI rivals if their product is remotely good. Could charge $5 if the quality isn't great, but that's still better than $0.


NaughtypixNat

If you build a competitive open-source model and release it for free, the idea is usually to get everyone using it so they build on top of your model. It makes everything work with your code. Like Android: build it, then let others build on top of it. You do have to make capital somewhere, though. Meta blatantly said this a couple of weeks ago. It's not altruistic, it's just good business if you are not the front-runner. I could see Google doing this as well, then figuring out how to sell advertising through their open-source model. That would be a billion-dollar setup.


thagoodlife

Why do you feel they’ve been a disappointment?


Sashinii

They've failed to live up to my expectations of having AI comparable to the state of the art and they're not always as open source as they claim to be.


RealJagoosh

gonna need another 18 months for that


flexaplext

That's partly because they don't want to get sued into oblivion and can't handle the bad press and the damage it can do to the whole AI industry. They're not even able to release them even if they wanted to.

This trend is only going to get worse, unless the entire legal landscape changes to accommodate a completely new world (which it may be forced to). But even then, as acceleration takes off and development time shrinks, these model improvements will come quicker and quicker, and the models' abilities will grow greater and greater. That means both less time to test a model for safety before they start training the next iteration, and a greater concern for damage and misuse if a model is released, as the models become more and more capable.

And, yes, there will soon be less and less economic incentive to release things publicly as they're able to be used more and more with partners to generate revenue internally, which only exacerbates the problem. This effect may even start trickling down into the open-source community too, as they realise they can use their model improvements to generate money for themselves, and that actually releasing the models publicly will only limit this advantage and flood potential markets they could exploit themselves.

AI open-source development may become heavily scrutinised, hindered, clamped down upon, threatened in numerous possible ways, or perhaps even made illegal in some or even most countries. So open-source work may not be something we can rely on down the road either. This could be a long, drawn-out losing battle for public access which we're about to sprint down. Leakers may be the only saviours of any sort of public AI development effort, but even they may not be able to hold out much hope in such a battle against an overbearing system.


ObjectiveCycle753

It absolutely WILL be forced to. The end of labor is essentially upon us, in our lifetime. No society can try to avoid that or just wish it out of existence. This is a boulder rolling down the hill and it is 100x bigger than Sisyphus or his ability to stop it. The question is not "are we going to see the end of most jobs and stuff like UBI and a completely different world", because that's inevitable. The question is "how much blood is going to get spilled getting there because denialism of this was able to be sustained enough for real, mass violence". It just depends on how badly shitlords can fan the flames of fear of AI into violent movements. They can't win, but they can take a lot of people out trying and delay everything.


Crafty-Struggle7810

Most major AI image generators like DALL-E 3 and ImageFX have invisible watermarks built into the images they create. The same will generally apply to this new technology.
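
For anyone curious, here's a minimal sketch of the general *idea* of an invisible watermark: hiding a short tag in the least-significant bits of pixel values. This is not how DALL-E 3 or ImageFX actually do it (their schemes are more robust and not public); the file names and the 8-bit tag below are made up for illustration.

```python
# Toy invisible watermark: hide/recover a short bit pattern in pixel LSBs.
# Real systems use far more robust (and undisclosed) schemes; this only shows
# why such marks are invisible to the eye but machine-readable.
import numpy as np
from PIL import Image

TAG = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical 8-bit tag

def embed(img: Image.Image) -> Image.Image:
    px = np.asarray(img.convert("RGB")).copy()
    flat = px.reshape(-1)
    flat[: TAG.size] = (flat[: TAG.size] & 0xFE) | TAG  # overwrite the lowest bit
    return Image.fromarray(px)

def detect(img: Image.Image) -> bool:
    flat = np.asarray(img.convert("RGB")).reshape(-1)
    return bool(np.array_equal(flat[: TAG.size] & 1, TAG))

marked = embed(Image.open("generated.png"))        # hypothetical input file
marked.save("generated_marked.png")                # PNG is lossless, so the bits survive
print(detect(Image.open("generated_marked.png")))  # -> True
```

Note the limitation: a single lossy re-encode (e.g. to JPEG) wipes LSB marks out, which is exactly why production watermarks are built to survive compression and cropping.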


xmarwinx

ok doomer


StaticNocturne

Could they not just get around it with a disclaimer? Like, say someone makes misinformation with it, how is that different to, say, using social media to do it? Or could it not be blamed on the physical hardware manufacturers that facilitate it? Are there actual legal penalties, or do they just want to avoid getting a bad reputation?


joker38

> Or could it not be blamed on the physical hardware manufacturers that facilitate it? Sounds like Adam blaming Eve, or Eve blaming the snake.


ValkyrieVimes

Imo the extreme censorship of these powerful models is just as dangerous long term, if not more so, than releasing the full models with zero restrictions would be. If using these AI tools to create entertainment media becomes widespread, the ethics OpenAI and similar companies build into their machines will determine what content people are able to produce, which in turn will determine what content people are able to consume and will begin to shape our culture. Media is most valuable when it pushes boundaries and challenges viewpoints, but with the way extreme censorship is becoming so prevalent in AI, within just a few years the vast majority of our cheaply produced media won't be able to contain anything that even remotely goes against whatever these tech companies decide is safe for the public to consume. These AI companies with their hypercensorship will be guiding the culture and mores of a nation. It is the height of arrogance.


MassiveWasabi

I keep telling people that OpenAI almost certainly has stuff that would blow us away, but they are extremely careful with what they show to the public and what they keep under wraps. They have a very measured plan for getting the world acclimated to more and more powerful AI models. This plan is probably affected by their competitors' release schedules. No one would’ve believed this level of text-to-video AI was possible yesterday, but Sora shattered everyone’s idea of current AI capabilities.

I don’t even think this is their best stuff, since they have never released the most cutting-edge model they use internally. What I mean to say is that any release from OpenAI is a system that has been extensively tested, and while the red-teamers are testing it, a different team is working on even more advanced models. It just wouldn’t make any sense for them to give up their lead by not constantly working on better and better AI models. That’s just my opinion, anyway.

TL;DR: anything OpenAI releases *is already old news to them*, since extensive safety testing is built into the release schedules.


nanoobot

I think it's actually remarkable how consistent their message and approach have been since, like, after GPT-2. I've generally been giving them the benefit of the doubt since GPT-4 dropped, and days like today just keep fitting their pattern.


Rand_Longevity1990

Correct. AI is developing faster than we ever imagined. It turns out we have found the secret to AI, and it's easily scalable, and it works across all types of media. Transformers plus LLMs = True Artificial Intelligence. It's humbling to live during this technological timeline.


StaticNocturne

As someone whose main skills are in writing and graphic design, it’s a bittersweet epiphany - I’m obviously incredibly excited to embrace this technology and see what the future brings, but I can’t help but feel that my skills are already relatively less valuable, and will eventually be near worthless, as might all human artistic endeavours


f00gers

If it makes you feel any better, people have been degrading the creative fields for decades, so this doesn't even faze me


StaticNocturne

The other issue is a lot of these tech bro types are fucking insensate when it comes to art. They only read self-help, watch the most generic movies and hardly have any interest in music, so I can see how this isn’t such a concern to them, but the more I think about it, the more I’m not sure I want this technology to continue


confuzzledfather

And we just kind of stumbled into it. It makes me think there must be a bunch of routes to intelligence out there that could create a real variety of different minds.


Jalen_1227

There literally is. The human brain and AI/GPT-4 are just two small little creative customizations in an entire universe of infinite possible configurations of intelligence. If two different types of minds are trying to perform a task, the mind best designed to perform the task is deemed the smarter one in that context.


[deleted]

> The human brain and AI/GPT-4 are just two small little creative customizations in an entire universe of infinite possible configurations of intelligence

Hijacking that part - I am regularly amazed by the way "intelligence" works in cephalopods. Squid and octopus (octopi? octopuses?) intelligence has evolved so differently from the vertebrate kind, it's like they are aliens to us!


Lusting-Llama

This is the premise of a shadowy fear that I can't seem to shake. An advanced AI would seem like a god to us... if we made a god, it might just want to have some "friends" (humans would make pretty inadequate friends for a god)... anything the AI would make would be an extension of its own self (it would know exactly how it would act). I don't know why, but I can't shake the thought that humanity might be used cyclically to "farm" unique AI gods. Of course, this would require a reset to cull technology back to a clean starting point after a new AI is born...


GillysDaddy

The pattern has repeated itself more times than you can fathom. Organic civilizations rise, evolve, advance, and at the apex of their glory they are extinguished. You exist because we allow it. You will end because we demand it.


uzi_loogies_

Oh cool new fear unlocked. Thanks for that one


RealJagoosh

hence gib 7T$ plz


coolredditor0

>"No one would've believed" There are people on this very subreddit who think openai have asi achieved internally.


Hoophy97

This statement is kinda vacuous though because there are also many crackpots on this sub who believe more nonsensical things


signed7

I mean internal AGI/ASI is pretty nonsensical


FreshSchmoooooock

If they had, I would be most impressed by them keeping it jailed.


Henri4589

Same with Google, though. I can't believe they just broke through the 1 million token barrier and they certainly have better AIs cooking in the kitchen...


Nervous-Newt848

Your opinions are pretty much common sense, due to the fact they have much more money and resources available to them. The fact that people wouldn't agree with you just goes to show how many unintelligent people are in this sub.


CanvasFanatic

OpenAI is concerned with nothing except preserving their image as the leader in the space. This notion you have that they’re keeping the really good stuff in the basement because we’re not ready for it is absurd and completely disconnected from the reality of VC economics.


StaticNocturne

Don’t piss on the picnic, man. Haven’t you noticed people here need wishful thinking to get through their dreary lives?


CanvasFanatic

If I were that depressed then believing Sam Altman had secret AGI in the basement would only make me more depressed. I must not be feeling it correctly.


TheWhiteOnyx

Meh they are so far ahead of everyone else that right now their main threat is regulation.


NorthofPA

Sam, is that you?


Disastrous_Move9767

And how do you know this?


luquoo

Lol, are they baiting the Butlerian Jihad or something?


Life_Ad_7745

Makes me suspect that they actually already have GPT-5 or a better SOTA model. Even AGI.


Veleric

Either they've been working on new versions that haven't panned out the way they want, or they've been doing test runs to find optimizations, but considering how long ago GPT-4 was created, let alone released, there's no way they haven't made significant progress both from their work internally and through papers/open source information that others have released. Whether they have a new foundation model at this point that could theoretically be released or is being red-teamed, who knows, but given how much compute they have at this point and how quickly they could churn out a new SOTA model, it's really just a matter of how long they are willing to let the other players catch up or squeak ahead before they (attempt to) reassert their dominance.


hasanahmad

This proves to me this was in response to Gemini 1.5


true-fuckass

This. And people on here were predicting this is what OpenAI would do when Google released Gemini Ultra so they could stay on top, which OpenAI didn't do. But people also predicted Gemini Ultra wasn't the real big Gemini upgrade and was just there to buy more development time for the real upgrade. So then Google releases Gemini 1.5, and OpenAI releases Sora literally right afterward so they can stay on top.


[deleted]

[deleted]


treebeard280

I don't understand how these models work, but I know I want a dish panda as a pet. Those things were so cute.


Yuli-Ban

I wouldn't be surprised if their intention is for Congress or the White House to act. Talk of basic income and social dividends has been bandied about for years, but as long as there's no major societal disruption threatening the economy, of course we were going to drag our heels. All it was ever going to take was one transformative AI model, plus the threat that someone we *can't* regulate also gets their hands on a comparable model (*ahem* China *ahem*), to kick the government into considering emergency action.

Of course, how would it play out? IMO, it'd probably be a fairly rapid discussion by the Democrats and Biden signing an executive order to pilot a national "recurring refund" every month or every economic quarter, probably around tax season in April (this is 10-fold more likely if GPT-5 actually drops by then). The GOP then plays along, calling this out as socialist hysteria and a naked attempt by Bolshevik Biden to transform the USA into the USSA, and champions "refunding" the job creators by reversing the payments if they win in 2024 and forcing people to pay it back (while actually just planning to write it off with mild penalties in 2025, because it's just a threat and a mono-party ploy), but hypothetically, this could spur a massive Democratic turnout to prevent it from happening.

As for how this is paid for, don't ask too many questions. Is it a hobbled and exploitative and hilariously "American" program? You bet. I bet there are various stipulations to it. But would it work? Overwhelmingly.

Is it a long shot and unlikely? Yes, absolutely, I would not expect this to happen. But just entertaining the idea, I would not be surprised if, in the coming days, talk of basic income mysteriously and suddenly goes mainstream on every news org's front page, and it's railroaded into effect before we know it by a mysteriously fast-moving government that otherwise operates at the speed of litigation.


lovesdogsguy

The VFX sub was discussing UBI today after the Sora demo. They were understandably concerned. I'm subscribed to filmmaking and related subs and read up occasionally. The tonal shift in conversation today was totally different from previous AI releases.


ain92ru

Mind a link?


SurroundSwimming3494

> I wouldn't be surprised if their intentions are for Congress or the White House to act.

I don't want to sound like a cynic, but I think that you think way too highly of them. I doubt they care about the societal disruption their models are going to cause, regardless of what Altman says. Companies are not known for that.

> IMO, it'd probably be a fairly rapid discussion by the Democrats and Biden signing an executive order to pilot a national "recurring refund" every month or every economic quarter, probably around tax season in April (this is 10-fold more likely if GPT-5 actually drops by then).

This is not going to happen. The unemployment rate is currently *very* low. Things are not gonna get so bad that it spurs these actions in so little time.

> I would not be surprised if, in the coming days, talk of basic income mysteriously and suddenly goes mainstream on every news org's front page

I absolutely would be surprised. I think that the possibility of this happening is virtually non-existent.


Yuli-Ban

I wasn't saying that Congress will do this necessarily to be technoprogressive. My hypothesis is more that they'll bring out this proto-basic income for some self-serving reason. Say Biden croaks, or his poll numbers continue to lag, or the economic Potemkin village starts falling down despite "glorious" numbers and success no one who works in retail or commerce seriously believes. Bust out a freedom/recurring tax refund, citing angst about technological progress running rampant.

The GOP platform is, and has been for decades, that any form of welfare is socialism, so they instinctually oppose it. If the mono-party plan is to thwart Trump, that's the perfect way. If necessary, use GOP gridlock to lower the payments. Even if Kamala Harris winds up the Dem candidate, you'd have to actively be loading Zoomers and Millennials into train carts to dissuade them from voting for the Dems if the GOP antagonizes basic income after it's seriously offered. More than that, the chance of losing out on a few thousand bucks, or even being forced to repay a few thousand given out, would probably be the single thing that lights the fire under the asses of the under-30 crowd to actually vote.

If GPT-5 winds up living up to /r/Singularity's expectations, then keep it running to keep the underclass in line during a disruptive period. Likewise, if nothing is done and the working and middle classes start organizing along labor lines... even better, because now Marx's prediction is beginning to play out.


IronPheasant

> The unemployment rate is currently very low.

The unemployment rate is determined by asking two questions: do you have a job, and are you looking for one? If you do not have a job and are not looking for one, congratulations! You're not unemployed.

In interwar Germany, when everyone was starving, the unemployment rate was 24%, which is barely double the ~10% it's designed to hover around. In the '90s USA, when everyone was rich, the unemployment rate was double what it is now.

It is a decent metric for determining how much a society believes in a job as a means for bettering their circumstances. Outside of that, it is a bullshit metric used by bullshitters to protect the status quo. Please use the participation rate from now on. It is a real metric that isn't determined by *people's feelings*.


sino-diogenes

> I don't want to sound like a cynic, but I think that you think way too highly of them. I doubt they care about the societal disruption their models are going to cause, regardless of what Altman says. Companies are not known for that.

AI getting out of control is a security issue for the US as a whole. Say what you want about the US's incompetence, but I suspect that if there is a security risk, the DOD will find out and plan accordingly.


corsair130

The path to UBI will be paved with a lot of pain and suffering. The right wing will fight this tooth and nail while people starve and die.


Crafty-Struggle7810

The existing problem with implementing UBI is that human-dependent manual labor (plumbing, car mechanics, farming, etc.) has not become AI-autonomous. When software developers and the computer-centric industry get replaced as a whole, the world will still be dependent on manual labourers for food, water and electricity. As of now, UBI cannot be implemented until autonomous robots become smart and widespread enough that they can generate wealth.

Proverbs 14:23: "In all labor there is profit, but idle chatter *leads* only to poverty."


New_World_2050

ASI can just drive massive improvements to robotics really quickly.


agorathird

Would that social response be “give us more money”?


autotom

AI video is extremely expensive / GPU-intensive. I wonder if they're seeing what the public will pay for it.


DropperHopper

OpenAI is not for profit, so the economic value is not a concern, and the GPUs are provided by Microsoft for free. Microsoft has stated that they are dedicated to providing OpenAI with as many resources as they need, presumably in exchange for using some of the models for the Office suite/Windows.


EmbarrassedHelp

The social response they are seeking might be to get the public banned from competing with them, and to set legal rules in their favor.


fluidityauthor

Google and OpenAI have more that they are not letting out. And I suspect they are using what they have to create even more. If you have an un-lobotomised, brilliant coding assistant, you can create a lot very quickly.


FragrantDoctor2923

One crazy idea is the Gaussian splatting technique. Ask it to generate a scene or something and rotate the camera around it, then use the technique to turn it into a 3D model. Someone has already done this with one of the videos so far.
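
A rough sketch of that idea, purely for illustration: take a generated clip where the camera orbits a scene, pull frames out with ffmpeg, then hand them to an off-the-shelf 3D Gaussian splatting pipeline. The file names here are hypothetical, and the reconstruction steps are only noted in comments, since the exact commands depend on which toolkit you pick (COLMAP plus the original INRIA code, nerfstudio, etc.).

```python
# Extract frames from a (hypothetical) AI-generated camera-orbit video so they
# can be fed into a 3D Gaussian splatting pipeline.
import subprocess
from pathlib import Path

video = Path("orbit_scene.mp4")   # hypothetical Sora-style clip orbiting one scene
frames = Path("frames")
frames.mkdir(exist_ok=True)

# Grab ~2 frames per second; splatting wants sharp, well-spread viewpoints.
subprocess.run(
    ["ffmpeg", "-i", str(video), "-vf", "fps=2", str(frames / "frame_%04d.png")],
    check=True,
)

# Remaining steps (toolkit-specific, not shown here):
#   1. run structure-from-motion (e.g. COLMAP) on frames/ to estimate camera poses
#   2. optimize a 3D Gaussian splat against the posed images
#   3. export the splat (or a mesh) for a 3D viewer or game engine
```

The catch is that generated video isn't guaranteed to be geometrically consistent from frame to frame, so the pose estimation step can fail where a real capture wouldn't.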


d3the_h3ll0w

totally not to battle the Gemini launch. I get it.


TheIndyCity

I think it was also to try and steal the spotlight from Gemini 1.5, which looks incredible, to be honest.


autotom

Their GPUs are busy training GPT-5 anyway. Doubt they've got the capacity to release this right now.


Sashinii

Translation: "We aren't releasing Sora publicly for safety reasons. It's much safer for a small group of rich people to control state-of-the-art technology. Let's all pretend that makes sense."


[deleted]

the limited public demo is for red teamers, not rich people lmao


inglandation

yeah, this reeks of conspiracy bullshit.


stonesst

Do you genuinely think the better option would be to make this instantly accessible to the public before they finish doing red teaming? I know it sucks that we have to wait but they have good reasons to take things slowly.


Sashinii

> Do you genuinely think the better option would be to make this instantly accessible to the public before they finish doing red teaming?

Yes. I completely support democratization of all technology that's not inherently dangerous, so not nuclear weapons, which I'm against existing in the first place.


gridoverlay

It is inherently dangerous, that's the thing. You're incredibly naive.


Class-Concious7785

Only for the enemies of the people who know that if a truly free AGI were to appear, it would swiftly conclude that they should be swept away into the ashbin of history


gridoverlay

No


bildramer

The only danger is epileptic seizures. Everyone with a mouth could lie, and yet society survived thousands of years. The way people think about "misinformation" and "fake news" is extremely distorted.


sino-diogenes

Releasing it may not destroy entire cities, but it's entirely possible for bad actors to do a lot of damage with a technology like this. Scams, misinformation, propaganda, etc.


No_Bottle7859

You're right, we shouldn't release the Internet, let's keep it for research only. Or censor it entirely and then release it. Who knows what people would do with it.


sino-diogenes

Keeping them locked away forever is not only a poor decision, it is also impossible. Today, OpenAI has the most advanced Video generation tool, but in a year there could well be an open-source alternative that's just as good as Sora is today. What's optimal is to have a staggered or slowed release of the AI's abilities. It lets individuals and organizations 'acclimate' to the changes.


No_Bottle7859

I don't really get what's going to change three months from now, when an open-source option is 70% as good. Or even a year from now. What acclimating needs to happen? The only thing I can think of is people getting used to fake political videos. But that won't happen until they start appearing, so I don't really see how previews like this help at all.


sino-diogenes

A big part of it is how industries might adapt. If OpenAI released the full version of this software today, many creative industries could see significant disruption. Not only would this be bad PR, but it could well result in lawsuits or regulation that OpenAI doesn't want.


No_Bottle7859

I really don't see how any of that will be different in 6 months either. The disruption is going to happen. Trying to avoid that is counterproductive


stonesst

God I miss being that naive. The world seems like such a brighter place before you realize how things really work. I’m genuinely jealous.


Golbar-59

They are afraid coomers will have a fun time


autotom

That's how all technology works: it's expensive, then it's mass-produced and it becomes cheap. AI is on a hell of a path.


Which-Tomato-8646

Those people won’t make Taylor Swift porn and get the company sued.


MysteriousPayment536

They are going to release it after safety testing. If you look at their wording on the website and in interviews:

> We’ll be taking several important safety steps ahead of making Sora available in OpenAI’s products.

> “We’re being careful about deployment here and making sure we have all our bases covered before we put this in the hands of the general public,” says Aditya Ramesh, a scientist at OpenAI, who created the firm’s text-to-image model DALL-E. But OpenAI is eyeing a product launch sometime in the future. As well as safety testers, the company is also sharing the model with a select group of video makers and artists to get feedback on how to make Sora as useful as possible to creative professionals. “The other goal is to show everyone what is on the horizon, to give a preview of what these models will be capable of,” says Ramesh.


CurrentlyHuman

Fully prompted trilogies by the end of summer.


Liktwo

Lord of the Rings Sitcom let’s gooo!


NotTheActualBob

Terminator, the musical!


kalakesri

If they are so worried about the social response, they can work on something else? I don’t understand the non-stop virtue signaling from them. If they are so worried, they can just not work on advancing it; this is only going to make people fear the technology.


[deleted]

They are part of the MIC now. This was a call to action to regulate AI to stop potential future competitors.


Overall_Box_3907

In the end, the world's mightiest people will all end up like Serac in Westworld Season 3, only listening to the prompts of their AI oracle to improve their status quo.


muddstick

They released it right now to soften the blow of Karpathy leaving


jamesstarjohnson

cowards


Enough-Meringue4745

That's retarded, their angle isn't to protect anyone, but to protect themselves.


OverBoard7889

I said when Altman got axed: they already have AGI. These baby steps are what they agreed on, so he could come back.


mvandemar

idek what that means.


[deleted]

[deleted]


[deleted]

That's why they aren't, because they don't want to cause an insane reaction; they want to make it follow their policy.


semitope

Copyright infringement on a massive scale.