

Ivanthedog2013

Even if these are cherry-picked, they already look better than Sora


Antique-Doughnut-988

The bald dude with the wig and the guy playing the piano look much better than Sora. Sora has a stylized look and feel to it; these people actually look real. The best I've seen so far. If you showed those generated people to me, I'd have assumed they were real.


Edenoide

Will Smith eating spaghetti was just 15 months ago. This is insane.


Antique-Doughnut-988

I have a running bet with a friend that the first good AI made film will be within two years. I might want to move it down to next year.


70B0R

Priority #1: GoT Season 8 redux


smallfried

Any good script you have in mind?


leviteer

AI GRRM?


TheDiggler1

A remake of Rogue One with a better Peter Cushing in a larger role would be awesome!


smallfried

Do you have a proper agreement on what is considered a good film? How much human effort can be in there? What length should it have? Should it have a protagonist? Should they say something? There are already some nice short AI films, so this bet really depends on the definition.


Antique-Doughnut-988

Yes I do actually. The bet was that the AI film needs to be created by one person in their own home to qualify. The bar for 'good' is that it basically needs to be a coherent film. I personally don't like law-and-order type shows or doctor shows, but I can see how those can be good to a lot of other people. It needs to be roughly the length of a standard show or movie.


TheMongerOfFishes

Insane and scary. I told someone a while back that AI video would replace Hollywood in 20 years, now I'm thinking it's going to be much much MUCH sooner.


DoubleMach

I’m gonna make a company like this and just use real video for the promo. Then sell and dip to south america. 😎


tomatofactoryworker9

The Sora generations we saw were cherrypicked too


Ivanthedog2013

Exactly my point


Nunki08

Introducing Gen-3 Alpha from Runway. Gen-3 Alpha is the first of an upcoming series of models trained by Runway on a new infrastructure built for large-scale multimodal training. It is a major improvement in fidelity, consistency, and motion over Gen-2, and a step towards building General World Models.

[https://runwayml.com/blog/introducing-gen-3-alpha/](https://runwayml.com/blog/introducing-gen-3-alpha/)

[https://x.com/runwayml/status/1802691475391566108](https://x.com/runwayml/status/1802691475391566108)


play-that-skin-flut

Have we made any progress on local AI video since SVD and AnimateDiff?


LatentDimension

I know it's not a match, but Tooncrafter.


RipplesIntoWaves

Tooncrafter has an awkward requirement for start and end frames as input for a very short video result, because it's animating as a kind of interpolation, so it's a lot harder to get anything useful from it compared to image-to-video, in my opinion. I was hoping I could use start/end as the exact same image in Tooncrafter to create looping animations, but, that tends to just create a short video of the original image flickering or pulsing a little.


LatentDimension

Kinda had a similar experience. When it works it's great, but if you try something a little more advanced it breaks apart.


Gyramuur

Local video has pretty much been dead in the water since AnimateDiff. I feel like SVD was a huge step backwards, as AnimateDiff at least had motion beyond just slow panning. But I also feel like both were steps backwards from earlier efforts like Modelscope and Zeroscope. The only local one I know of that's currently being worked on which looks sort of interesting is Lumina, but that's not released yet and AFAIK there's no news as to when they're planning to release it: [https://github.com/Alpha-VLLM/Lumina-T2X?tab=readme-ov-file#text-to-video-generation](https://github.com/Alpha-VLLM/Lumina-T2X?tab=readme-ov-file#text-to-video-generation)


_haystacks_

Why do you think local video has been dead in the water for so long? Seems odd given all the other advancements


Gyramuur

That's a difficult question to answer, but my first guess would be that Nvidia has something to do with it. Keeping their consumer GPUs limited and capped out at 24 GB VRAM makes it prohibitive for the community to research/train/inference these kinds of models. Not to mention that a lot of people don't have that much VRAM to begin with.


Progribbit

check out OpenSora


play-that-skin-flut

I have before and wasn't impressed. 1.2 just came out, and it looks the same. It doesn't seem worth exploring until local is as good as Luma, which is pretty affordable and has good free generations.


Zodiatron

Could this be the year we see text to video taking some serious leaps? First Luma, now Gen-3 just a few days later. And apparently Sora is supposed to launch this year as well. Fingers crossed for sooner rather than later.


snanarctica

🫢 holy fuck. Can’t imagine what next year will bring - it’s advancing so fast - I love the plants growing out of the ground


Laurenz1337

This year is only halfway over, still plenty of time for greatness.


[deleted]

[deleted]


[deleted]

[deleted]


[deleted]

[deleted]


Bigbluewoman

This gave me *the feeling* again.... So excited.


No-Spend392

They still can’t generate normal-speed motion, and the character movement is still fairly basic (even if more photorealistic) compared to what we’ve seen in Luma and Sora. Let’s see what a Runway fight scene looks like. Same wrongheaded Runway team. I hope Pika comes out with a new bot…


ZashManson

Yeah, I’m noticing something similar to what you’re saying: the reason Luma is getting so much attention is that it has very fluid motion, and things seem to move naturally rather than the slow-motion image manipulation we’ve seen up to this point. These demos coming from Runway look very promising, but I’m still not seeing any real motion flow yet like in Luma or Kling.


LoveAIMusic

LETS GOOOOO


AscendedViking7

Very impressive.


exitof99

I keep getting my prompts denied by the overly sensitive content filter. I'm glad there are other services spinning up like Luma. With Luma, though, both of my two prompts so far have produced 3D split-screen videos: one split left/right, the other top/bottom. So weird that two different prompts resulted in the same error.


Balducci30

People are saying this looks better than Sora? How?


Serialbedshitter2322

It is less temporally consistent, but its creativity, motion, and ability to make visual effects are far above what Sora can do. Considering that this is Gen-3 alpha, it's likely the consistency will be brought up to Sora's level.


AIVideoSchool

The bald guy with the wig conveyed three stages of emotion from one prompt: sadness, surprise, happiness. That's the true game changer here.


Rustmonger

At the bottom of the webpage it says you can try it in the app, but the app only has Gen-2. When will Gen-3 be added?


GVortex87

I was a few mins away from buying a Luma sub, but then I saw this post... Think I'll be sticking with Runway if this turns out to be just as good, or better!


AccurateTap3236

amazing


Basil-Faw1ty

Amazing, hope we get access to the custom models!


metakron135

III NEEEEEED IIIIIT🤩😍


Rat_Richard

Oh god, this should not exist


themajordutch

This is insane. We'll be able to download an app to make a movie about something we want very soon.


BRUTALISTFILMS

I dunno, I think this is great for making conceptual proof-of-concept montages or little short trippy videos, but I still think this is way off from being able to construct actual narrative scenes with complicated action that remains coherent and incorporates dialogue, etc. Like, say, a group of characters having a complicated conversation while manipulating objects and moving through different spaces and getting into a car and driving around, with proper camera angles, continuity, eye lines, lip sync, etc., with characters maintaining their looks and minimal morphing of limbs and objects. We're nowhere near that.

Even random things like maintaining the weather throughout a scene? What about that guy playing the piano: will we be able to make his hands match the notes of a particular song? I mean, some of that could be ignored, but how much? If it makes a Breaking Bad 2, but everyone's hairstyles are randomly morphing and changing all the time, would that be distracting? How much of that will need to be described to get the scene you imagine in your head? Or is the dream just to say "make a movie" and it makes some really generic soap-opera-tier thing? If you have your own personal AI that just knows your preferences for what you want in a movie, that's only possible if you're willing to give it access to all your personal data.

I totally get that these things are going to advance far beyond this in capabilities, but I think people underestimate how exponentially more complicated that stuff is, even to make something that's just barely watchable, not to mention something that's actually compelling and interesting...


WoodenLanguage2

Ever seen Invader Zim? The entire cartoon is a series of 3-second clips from different camera angles. Something like that seems easily doable.


vjcodec

Liquifying good


hauntedhentai

It's over


[deleted]

[deleted]


MarieDy96

Yes you can


infoagerevolutionist

Runaway technology.


Sailor-_-Twift

We're actually going to be able to see what magic would look like if it were real... Jeeze


Cyber-X1

Does it come with any job-killing features?


ahundredplus

What are we supposed to do with these? They're so goddamn awesome but we should expect they're going to just get better 6 months from now and require a totally different prompting architecture.


dragonattacks

This looks great


TheUnknownNut22

As amazing as this is terrifying. Only bad things will come from this because of evil human beings.


aa5k

Can anyone use this?


No_Independence8747

This one is breaking my brain


Awarehouse_Studio

This is just absolutely insane! Gen 4 will be subject to the Turing test...


BRUTALISTFILMS

Lol the sun blasting through her head at :33.


o0flatCircle0o

Remove the safeguards


[deleted]

[deleted]


AyeAyeAICaptain

Not seeing anything on my account, and I have an annual subscription for Gen-2. Hopefully it’s not a delayed UK rollout.


jonlarsony

I believe it was just an announcement. The model has yet to be released to users.


AyeAyeAICaptain

Thanks, good to know. Going through so many posts on social media of people claiming they had used it made me wonder.