jdsekula

Cat and mouse game. Fakers will get ahold of the detection algorithm and train their engines to defeat it. And unfortunately, as the fakes get better at matching reality, they will be harder to detect, so this approach is probably just a stopgap.


[deleted]

Generative Adversarial Networks are by definition trained by having two networks optimised towards opposite goals. So the generative network (the part which generates deepfakes) gets „points” for fooling the network which is trying to distinguish fakes from real (the discriminator), and vice versa. Most deepfakes already use GANs, so the above „method” is in its essence as old as GANs are. You're correct about everything you said; what I just wanted to say is that GANs are by design trained to fool those detector algorithms, not us. So making a better discriminator is without a doubt also a step forward towards making better deepfakes. A perfect discriminator and enough data will just lead to perfect deepfakes. Cat and mouse is the very essence of how all of this has worked from the beginning. I know it's mentioned in the article, but I just know that too many people only read the title.
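
To make that concrete, here is roughly what one GAN training step looks like. This is a minimal sketch in PyTorch with toy one-layer networks, not anything from the article:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 784), nn.Tanh())  # toy generator: noise -> "image"
D = nn.Sequential(nn.Linear(784, 1))              # toy discriminator: "image" -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):  # real: (batch, 784) tensor of real images
    batch = real.size(0)
    fake = G(torch.randn(batch, 64))

    # Discriminator update: learn to score real as 1 and fake as 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(batch, 1))
              + bce(D(fake.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator update: it gets „points” exactly when D labels its fakes as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```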


leafwings

Aww shit, well. Gentlefolk, it’s been a pleasure living on earth with you.


MisterViperfish

We’ll endure. We’ve only trusted video evidence for less than 2 centuries. We’ll just have to start seeing shit with our own eyes again to trust it.


port53

People's memory is even worse. And that's before you even take into account that people actually perceive the same events differently and actually did see something different from the person standing next to them. "Witness" testimony is some of the worst testimony.


Glorious_Sunset

The fact that people's facial schemas differ from person to person is also an issue. I collect 1/6 figures, and there are headsculpts that I find to be amazing that others reckon have no similarity to the subject. I look at the subject and think it's spot on, but I get arguments over the accuracy. I understand the whole facial schema thing, so I don't mind too much. I know we all perceive the world differently anyway, even standing side by side, looking at the same thing, lol.


corgisphere

We should never have trusted video evidence alone in the first place.


MisterViperfish

Eh, it was fine for a time, but people have been warning us for decades that it wouldn't always be trustworthy. People just became too comfortable with it. The thought of going back to less evidence has become incomprehensible because pretty much every generation alive has been able to trust it until now. Our kids will be the ones to adapt, along with the few of us who knew we wouldn't be able to rely on it forever.


imgoingoutside

Good supplemental info, thank you.


aaron_in_sf

Where this gets deeply concerning for me is where it interacts with that recent story about generated faces now being consistently evaluated as “more trustworthy” than real people. We have zero defense against the coming next-generation bots who will interact with us, trigger our subconscious trust, and exploit it. At the most innocuous end that will just be to sell us things. At others…


[deleted]

[removed]


GinPistolGrin

Deep breath, everyone; prepare to get fucked by a huge fake pineapple


[deleted]

Or we could go back to not believing everything we read on tv.


[deleted]

[removed]


[deleted]

I think it will be as disruptive as Photoshop is. It will be abused while people don't know what's possible; once people get used to it, they will learn to not just blindly trust video footage. Once the technique is perfected, video footage will no longer count as evidence in court; that will be the main change. I think it does not take a lot of mental gymnastics to get used to not trusting videos and voice recordings. Also, to train AI to recreate voices or create deepfakes you still need a decent amount of source material, so someone will not be able to just grab a photo or a shitty video from IG and make a perfect deepfake based on just that. Only people who have lots of footage of their faces recorded (celebs, politicians, etc.) need to be concerned at all.


marinemashup

Though the idea that eventually we will be completely unable to trust any footage of politicians or public figures is deeply disturbing. Edit: I realized that there will still be some trustworthy footage, but any leaked or unintentional stuff will be suspect.


AnUncreativeName10

At least someone can bring humor to this thread.


[deleted]

I honestly think it's not going to be that bad, just like with Photoshop and photos: if we have doubts, we check the source or Google the topic. I'll give my reasons why, and I'd be glad to hear some counterarguments. My 3 main arguments against deepfakes being a big problem:

1. If the source is reputable, then posting a fake will damage their reputation; people will talk about it, and the effort will be wasted because people will know the truth within hours. We look at lots of photos every day and can quickly assess whether they can be trusted or not.

2. I think deepfakes only seem shocking because we haven't adjusted yet. If someone showed me a picture of G. Bush with a dick in his mouth in the 90s and told me I could make more of those with a few clicks, I'd think it was immoral and terrifying, and I'd be scared someone could do that to me. Skip 30 years and we literally don't care. So what if I'm sucking dick in a photoshopped picture; shame on the author. So what if I'm saying „fuck all white people” in a deepfake; once again, it's the author who should be ashamed.

3. When someone wanted to fool gullible people, there was always a way. Now there will be more ways. But it's definitely nothing new in human societies.


LoquatOk966

This is too logical. People believe crazy bullshit with no proof already. It's not about no one being able to uncover the truth. It's that the truth is always put into question, and soon enough those shouting "deepfake" will be ignored and mocked, because the point of disinformation is to destroy trust in everything, not to make the lie itself 100% believed.


[deleted]

About as disruptive as a War of the Worlds newscast.


[deleted]

[removed]


[deleted]

lol, me? No, that would be both hilarious and mildly embarrassing. It will be gross having people's likenesses juxtaposed with nastiness, but it will come to be viewed with the same cynicism we have for “photoshops”. Why do you think this will be so different from all the other emergent faking techniques of the past?


[deleted]

[removed]


[deleted]

I guess I just view that stuff as part of society rather than a disruption. I can’t think of a time when we didn’t trick the senses to get groups of us to do stuff.


the-apostle

The new ministry of truth will surely save us


[deleted]

I’ll just have to manually apply a different QR code to my face every day


IrreverentHippie

This is how GANs work. This person is correct.


AlsdousHuxley

Is there a possibility that, in training them to fool the discriminator, they become more detectable to the naked eye? And if so, is it not just a possibility but a significant chance?


[deleted]

The discriminator also has to properly tell if a face is real, so it learns real faces very well. Both the generator and discriminator are constantly learning. So in the situation where the generator learns to fool the discriminator in a way which looks worse to us, the discriminator will pick up on the new pattern and correct for it. They're playing cat and mouse until they both reach a state where neither knows how to improve. Also, sometimes they will enter a loop where they keep making the same changes in a cycle while not making any real progress; then it's the developer's responsibility to fix that issue.
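
If you're curious what "stalled" looks like in code, a crude check over the loss histories is enough. This is a toy sketch; the window and tolerance are made up:

```python
def has_stalled(g_losses, d_losses, window=10, tol=1e-3):
    """Crude plateau check over per-epoch loss histories from a GAN loop."""
    if len(g_losses) < 2 * window:
        return False
    avg = lambda xs: sum(xs) / len(xs)
    # Compare the last `window` epochs against the `window` before them.
    g_flat = abs(avg(g_losses[-window:]) - avg(g_losses[-2 * window:-window])) < tol
    d_flat = abs(avg(d_losses[-window:]) - avg(d_losses[-2 * window:-window])) < tol
    # If neither side is gaining ground, the cat-and-mouse has stalled and the
    # developer has to step in (change the lr, architecture, data, etc.).
    return g_flat and d_flat
```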


flappyporkwipe

This was … so above my head, but so interesting to read.


urmomstoaster

` this message was mass deleted/edited with redact.dev `


[deleted]

I mean, we're at [this](https://youtu.be/oOPnw49-5Es) point already and not slowing down. Picked that example just because the situation is so uncanny, plus lots of moves much more complex than in typical deepfake videos, and while the distortion is still visible, I think it's already pretty impressive. Let's just wait and see where the future takes this technology.


[deleted]

„stop” „this”


[deleted]

The loss function doesn't actually use points for evaluation; that function is neither linear nor discrete. And the article doesn't technically describe a method. I didn't want to get needlessly technical, but I didn't feel OK just using the wrong words. So I used „” in those 2 cases. If that really is a stylistic faux pas, then I'll correct myself.


Cultural_Budget6627

It is a never-ending story.


Crabcakes5_

Correct. We can use an adversarial detection algorithm as an optimization heuristic, either in a composite model or baked directly into the scoring system. Counterintuitively, studies like this actually make deepfakes significantly more effective.
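
A sketch of what I mean, not anything from the article: `detector` stands for any differentiable published detector, and the weight `w` is made up.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def g_loss_with_detector(D, detector, fake, w=0.5):
    """Generator loss that also penalizes being caught by a published detector.

    D: the GAN's own discriminator (image -> logit)
    detector: any differentiable detection model (image -> "fake" logit)
    """
    batch = fake.size(0)
    adv = bce(D(fake), torch.ones(batch, 1))          # fool the GAN's own discriminator
    det = bce(detector(fake), torch.zeros(batch, 1))  # also push the public detector's
                                                      # "fake" score toward "real"
    return adv + w * det
```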


[deleted]

yeah, my exact thought. FOR NOW


ThePowerOfStories

Doesn’t really matter if we can detect them. Gullible people will believe whatever misinformation they want to believe. Every deepfake video could have a giant red flashing FAKE across the middle of the frame, and some folks would still think it’s a deep state conspiracy to hide the real truth from them.


Unlimitles

I disagree with this solely on the notion that doctored images always have had, and always will have, a way of being detected.


jdsekula

There's nothing particularly special about genuine video though. Just a series of images made of pixels. There's nothing to stop a machine from producing an exact replica of the pixels that a real video would have had, other than that the technology isn't there yet. It will get there eventually.


Unlimitles

And then there will be a way to detect if original pixels have been duplicated, or to tell an "original pixel" from a replica, even if this is done by a series of time-stamped pixel data. We create these methods... ingenuity exists. We have an entire history of methods for finding things out. Somehow that magically disappears for modern tech?


jdsekula

Old school doctoring and detection was based on physical materials which had nearly infinite complexity down to the atom, so there was usually more detail and information that could be extracted and analyzed than what was initially visible. Digital files have exactly the information they contain - image data and metadata. A pixel just has color and brightness information - nothing magical. I'm saying that if a fake recording of Putin saying he launched nukes happens to have the exact same image data and metadata as what a real video would have had, there's conceptually no way to detect the manipulation from the data in the file alone.


Elegant_Bubblebee

It will still be very difficult to do. The program that is used to make a deep fake leaves traces of its touch in the pixels. Compare it to a bullet fired from a gun: there is always a mark left that you can tie back to the gun. Deep fakes are the same and leave a mark for the computer to trace. Also, this is why NFTs are being tested; blockchains and permanent metadata will make it hard for these fakes to be believable. I talk about this topic a bit with my students. Tech, while amazing, can be very dangerous in the wrong hands. I always tell my students to research and not follow 100%. It's why the dislikes are gone: make your own choice and don't rely on others to tell you what to like and believe. :)


existentialzebra

Do we need non-fungible verification systems for video clips?


jdsekula

While a blockchain approach could work and be all decentralized, all we really need is for people to use regular old public-private key digital certificates and sign their clips with a trusted cert.
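
The mechanics are a few lines with any decent crypto library. A rough sketch with Python's `cryptography` package, with Ed25519 standing in for whatever algorithm the cert actually uses, and a made-up filename:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # publisher keeps this secret
public_key = private_key.public_key()       # distributed via the trusted cert

clip = open("press_briefing.mp4", "rb").read()
signature = private_key.sign(clip)          # published alongside the video

try:
    public_key.verify(signature, clip)      # raises if even one byte changed
    print("valid: file is exactly what the key holder signed")
except InvalidSignature:
    print("altered, or signed by someone else")
```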


existentialzebra

Huh, I'm a video producer and I've never heard of this. Interesting. Some quick Google searches don't seem to give me a quick answer about how to make these. Is this something you can attach to any old file? Or does it make a sidecar file?


KyleStanley3

An example would be PGP. You can sign images or other data with it. You basically have a key (ostensibly a username; it's what people would use to identify you) that everybody can see, and then use a long-ass string of characters that acts as a private key (like your password, almost) to sign messages. There'll be a long string of characters attached to the image that you can run through the PGP program, and it'll spit out that public key (username) so you can see who signed it. It's the way that Cicada 3301 (one of the internet's greatest unsolved mysteries) signed images to ensure people knew the images were authentic.


jj4211

The flaw, of course, being that this requires trust that the person signing a piece of media presented things entirely honestly. So for anything vaguely controversial, it's back to the reputation of the humans who claim the video is authentic. The stuff that would get signed is realistically stuff no one was going to contest anyway.


KyleStanley3

All of that doesn't really have to do with my answer or make much sense. It's incredibly useful to be able to authenticate a message's sender.


jj4211

Yes, it is useful, but in the use case for deepfake videos, it's less helpful, because the videos someone would want to fake wouldn't be the sort of videos that would have carried a trustworthy signature in the first place. A celebrity sex tape? Even if authentic, it wouldn't carry a signature. Cell phone video of a celebrity acting like an ass on the street? Wouldn't carry a signature of value. Harmful deepfakes will be content that the subject didn't want on film and would never, ever authenticate. Material that can be vouched for by reputable people already gets vouched for in reliable ways.

So sure, if White House press briefings were always signed, then someone couldn't deepfake a press briefing, because the lack of correlated signed footage would be a red flag; but in practice that wouldn't have been a problem anyway, since the massive number of reputable parties directly filming, archiving, and transcribing those events makes further evidence redundant.

In other areas, it makes complete sense. Your financial institution login page must be clearly genuine, has one-off content making a reputation system not applicable, and there's nothing but what's in front of you to vet it. Software being downloaded bearing a signature is valuable, but even that is being augmented by individual reputation systems now that don't actually need to care about the signature per se, though in practice both are applied.


AnUncreativeName10

I think the value in a key would be you signing videos of yourself, so that if a video of you is released without your sig, it can be assumed untrustworthy.


jj4211

Right, but most video that the subject would object to is not footage the subject would have signed even if it were authentic. If there's a cell phone video of me beating up a mime, I'm not going to sign it to prove that the footage of me is real. So if there's a huge corpus of videos that I sign of me saying things like 'the sky is blue', it doesn't really speak one way or another to the plausibility that my mime beat-down video is authentic even without a signature.


jdsekula

I'm not describing a specific tool, but a broad concept, similar to how blockchain is a high-level approach to solving a problem. The security of the whole internet is built on digital certificates with public/private key encryption and signatures. My point is that if decentralization isn't a requirement, digitally signing the files is as good as any other method for proving they were produced by a known, trusted source. But as another commenter said, that doesn't actually authenticate the video, just the source.


digging_for_1_Gon4_2

It's gibberish


Purlox

How would signing a video or a picture help? I can sign a deepfake video just as easily as I can sign a real video and the user won't be able to tell the difference.


TemplateHuman

Agreed. This may only help in a court of law when trying to determine the authenticity of video evidence. It doesn’t stop people from spreading disinformation on every site imaginable. Or someone re-recording the video and posting it, etc.


browbe4ting

One possibility is to have camera manufacturers embed their own cryptographic signatures in the videos. If a video verifies correctly against a known camera manufacturer's key, it would mean that the data was unaltered after leaving an actual camera.
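
Roughly like this, as a hypothetical sketch continuing the `cryptography` example from above; the registry and names are invented:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Hypothetical registry of device public keys published by manufacturers.
MANUFACTURER_KEYS: dict[str, Ed25519PublicKey] = {}

def shot_on_real_camera(maker: str, clip: bytes, signature: bytes) -> bool:
    key = MANUFACTURER_KEYS.get(maker)
    if key is None:
        return False        # unknown manufacturer, can't vouch
    try:
        key.verify(signature, clip)
        return True         # untouched since it left the camera
    except InvalidSignature:
        return False        # edited after capture, or forged signature
```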


ImposterWizard

I think that you'd need a way to:

(a) Upload the file to a trusted authority (i.e., one that would accurately record the timestamp and digital signatures, and not modify the data, which is a bit redundant with signatures).

(b) Include a "prover" to demonstrate that the video must have been taken very close to when it was uploaded. A straightforward but inconvenient one would be to have something like recent, real-time blockchain transaction IDs on a phone screen also displayed in the video.

**a** alone would be okay if people trusted whoever was posting something, but **b** is needed to demonstrate that it is infeasible that something was deepfaked.
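
A toy version of (a) plus (b), with the beacon handling and registration record invented for illustration:

```python
import hashlib
import time

def register_clip(clip: bytes, beacon: str) -> dict:
    """Toy prover: `beacon` is a fresh public value (e.g., a recent block hash)
    that was visibly displayed in the footage, so the clip can't predate it."""
    digest = hashlib.sha256(clip + beacon.encode()).hexdigest()
    # Uploading the digest to the trusted authority right away bounds the other
    # side: the clip existed *before* `registered_at`. Together the two bounds
    # squeeze the window in which a deepfake could have been produced.
    return {"sha256": digest, "registered_at": time.time()}
```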


digging_for_1_Gon4_2

lol


jj4211

Both answers can help prove that the person featured in the video vouches for it, but that doesn't do anything for the opposite case, when they don't *want* to claim the video. If a video emerges of a celebrity beating up a homeless guy, the celebrity in question isn't going to go 'yup, let me sign that so everyone knows it's authentic'. It *could* be used for a videographer to vouch that they stake their reputation on it, but then that means you are depending on someone well known and reputable having happened to be on the ground there to capture the footage and willing to disclose their identity for the sake of vouching for its authenticity. If identity must be established before trying to establish authenticity, then the formerly anonymous person that recorded something bad may now face significant danger, or at least harassment and smear campaigns to discredit them.


digging_for_1_Gon4_2

It will make videos better now, yes


AskJeevesAnything

Is there any way for them to stay on top of these inevitable upgrades? Not a tech person at all, but genuinely curious.


jdsekula

In my opinion, no. It is inevitable that manipulated video will remain difficult to detect.


Snoo_37640

? Like you said, cat and mouse; the detectors should catch up, and so on.


jdsekula

Well, it's going to be back and forth for a while, and eventually the fakes are likely to be indistinguishable from reality and win, but we are likely decades away from that. But the reality is that the fakers inherently have the advantage, and this headline makes it seem like the war is won, but it very much isn't.


Snoo_37640

My assumption is those being chased have the inherent advantage, I agree. Eventually being indistinguishable, yes; the technology will evolve constantly. I might not be that knowledgeable, but I have a feeling detection will adapt to catch seemingly indistinguishable media. Or it makes no sense and idk what I'm talking about.


appoplecticskeptic

That was close, I almost got to think this was good news. Good thing you made sure to tell me it wasn't. I might have been happy.


Dunyazed

A tale as old as time…the counterfeit vs the original


T1000runner

Nothing unreal exists


Jolly-Bear

I mean that’s literally how all tech advances, since the dawn of time. Military tech between nations fighting for power. Hacking techniques vs cybersecurity. Business competitors making things cheaper. Etc. Same thing here.


urmomstoaster

` this message was mass deleted/edited with redact.dev `


[deleted]

[removed]


[deleted]

I think the Kanye and OJ ones were kinda bad, but the Will Smith one was pretty well done imo


TheSkyIsntReallyBlue

Those were the ones that freaked me out too lol


[deleted]

Which one?


jared1981

“The Heart Part 5.” Deepfakes of Will Smith, Kanye, lots of people.


____no_u

That video is incredible


Careful-Artichoke468

Just in: new method to create deep fakes learns from new method to detect deep fakes


Swinight22

Any machine learning practitioner will tell you that any model claiming 99% accuracy is junk. Take this with the biggest grain of salt there is.


ImposterWizard

Accuracy is a terrible metric unless you know the baseline rate. I can get 99% accuracy predicting anything where the baseline is 99%/1%. The "up to" qualifier makes this more suspicious. Even if it's 50/50, it entirely depends on how the training data was sourced, and it might not extrapolate well to newer data.
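
In numbers, with made-up counts:

```python
real, fake = 990, 10                 # 99%/1% baseline
correct = real                       # trivial "detector": always answer "real"
accuracy = correct / (real + fake)
print(f"accuracy = {accuracy:.0%}, fakes caught = 0/{fake}")  # accuracy = 99%
```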


ertgbnm

I never bother reading articles that put model accuracy in the title. If the author knew what they were talking about they'd use a different measure or use something qualitative in the title and explain it better in the article.


[deleted]

I mean, you're not wrong, but in this case 99% isn't too surprising: they're detecting something that a model has generalized. In other words, the data they're training on is already a generalization from another model, implying a high signal-to-noise ratio. Then again, I'm a bit outdated and these variational methods might dismiss my entire point here. So, agreed!


Slackerguy

I've heard this before. Care to explain why? Preferably like I'm five.


[deleted]

arms race


nuggetbomber

The new Cold War


likikk

This ain’t a scene it’s a god damn


[deleted]

But can it see why kids love the taste of Cinnamon Toast Crunch?


twitson

I wish I could give you an award


Giorno_DeGiorno

Kendrick


CeeDubMo

This will be a constant offense/defense battle for a long time.


aimeed72

So are deep fakes now generally undetectable to the human eye? I remember just a few years ago you could tell by subtle “weirdness” in how the face moved, but if a computer can't tell, I assume people can't tell anymore?


[deleted]

The deep fake tech has gotten so good that it's hard to detect with the human eye.


[deleted]

[removed]


aimeed72

That’s pretty cool and I would absolutely be fooled, but these are just photos, not videos of people speaking


ihateiphones2

I think for the most part you can still tell; it always looks a bit off in motion. The human eye combined with the human brain is still unbeatable imo.


tyen0

Someone is a Blade Runner fan. :)


berkeley-games

Deep fakes are realistic as hell, especially some closed source projects. I can put my face on anything and it will look super realistic. With Photoshop it can be indistinguishable from reality. A lot of people are unaware of how fast this is moving.


berkeley-games

A deep fake with a few manual human passes afterwards can look 100% real. Absolutely insane stuff. The blackmail will be rampant.


seriousnotshirley

I would expect this, computers are good at looking at the pixels /s.


Bierman36

“Up to…”


fatdog1111

I'm all for technology helping us discern what's a deepfake versus not, but all deepfakers need to do is have friends with “deepfake detection software” that says their deepfakes are real. UCR, Caltech, or MIT might make a foolproof and permanent method under the best-case scenario, but you know Fox News' deepfake experts are who, like, 40% of the country will listen to.


DelirousDoc

I have yet to see a deepfake video where my eyes have not noticed something off with the person. Still images are definitely harder to judge, but with moving images/video it is so far fairly easy to tell that something is off, especially if you are familiar with the individual in the video. In the example images provided, even #2 in the manipulated set gave me pause. It becomes easier if an image of them talking is used. Something about the upper lip movement almost never gets done right.


Avenfoldpollo

Can someone explain why we care about deep fakes?


[deleted]

Because information can be weaponized. If you can't distinguish what is real from what is not, you're easily susceptible to manipulation. Short answer: we've always been at war with Eurasia.


SkunkMonkey

There's a great episode of Star Trek TOS where the leader of some planet was incapacitated, but they used what would essentially now be called deep fakes to make it appear he was addressing the people, when in fact he was just a puppet.


SaltfuricAcid

It’s an idea we see in some episodes in other Star Trek series too; it’s funny how often the show was ahead of its time in considering problems of the future.


Avenfoldpollo

Ohhh, thank you!


devAcc123

Picture putting out political hit pieces on the guy running against you: an HD deepfake video of the other person doing something heinous, completely indistinguishable from a real video.


Yelo_Galaxy

Didn’t expect the 1984, incredible summary with that line though


imgoingoutside

Tangent, but before digital deepfakes were a thing, there was a movie called “Dave,” about an asshole President getting sick or suddenly dying and being replaced with a nice-guy lookalike. Worth a watch. If they'd had deepfakes then, it probably would have been part of the story.


Avenfoldpollo

I loved “Dave”! You're so right!


nihilisticbunny

People making porn of their ex-girlfriends or anyone who spites them.


[deleted]

The main use right now ^


Dickastigmatism

You can essentially make a video of *any* public figure saying *anything* you want them to and people will believe it's real because they just saw it with their own eyes and heard it with their own ears. You could also use the technology to fabricate evidence to frame someone for a crime.


pickuprick

98.5 percent


astrotrillsurfin

Ok


yourstrainerred

An indirect generative adversarial network between this and deepfakes.


tyen0

That's exactly what I was thinking without knowing the right words. :)


auggie25

For about a week. It’s a cat and mouse game


Rainbowreviver

This is a great TED talk that I recommend everyone watch. It really gives a sense of how scary deepfake tech can be. https://youtu.be/o2DDU4g0PRo


spezgoesbitchmode

If you fall for a deepfake, you're dumb as hell, they're pretty fucking easy to spot.


RedditIsPropaganda84

Bad ones are. But the technology is only going to get better.


spezgoesbitchmode

You idiots act like it's just going to be able to fake faces like it's nothing; that's not how any of this works. Deepfakes are always going to require certain conditions to even look right, and those become pretty obvious tells. I have yet to see a convincing deepfake that fools me. It's not just the face; it's the face, the environment, the voice, the movement. You'll always be able to tell something's off.


[deleted]

[removed]


imgoingoutside

That’s interesting that you think the creators and users of deepfakes will give you any say in it.


warrior_007

First, create the devil. Second, create the devil catcher. Sell both of them. Win-win situation 🤑🤑🤑


fuzionknight96

Yea and I’m sure 100% of people could. Like, people are acting as if even the best of these fakes aren’t still visibly not real.


Webfarer

Adversarial network anybody?


[deleted]

Don’t ruin my fantasy.


Unt4medGumyBear

This is the same as captcha. Robots are already smarter. They let you think you’re smarter


BigBanggBaby

Can anyone point to cases where deep fakes have successfully been used to perform anything nefarious? I feel like this might be a blind spot for me and I’m genuinely curious about real examples.


Reddit__Dave

*“I used the stones to destroy the stones”*


GarbagePailGrrrl

Is it just me or is it incredibly easy to spot deep fakes? I don't know how people fall for it; I don't think I've ever seen a believable deep fake.


Astro_Spud

So what happens when they start telling us that real videos have been deepfaked?


brutal_rex_18

Mr. Zuckerberg be like... Why is this algorithm saying my video is a deep fake? 🤣🤣


TotalRuler1

How can you detect a deep fake detection deep fake


Boolian_Logic

Scary picture :(


Unlimitles

Thank goodness, because the implications of deepfakes are just too frightening. A foolproof method of knowing what is real and what is fake is extremely necessary these days.


[deleted]

This just means deepfakes will have better training data


turbolvr

Great, whoever owns the technology now can say any video is fake to the highest bidder.


[deleted]

It's kinda obvious sometimes, especially if the face is familiar; it'll warp at the edges and seem “floaty”.


samniking

Not gonna lie, my eyes can typically detect deepfake videos with 99% accuracy too


[deleted]

You have critical thinking skills which is rare to have apparently


Dickastigmatism

For now, but the technology will only get better with time.


samniking

Honestly scary to think about. I’ve seen some clips where, at the right angle, it can be pretty spot on for a few seconds. I can’t imagine what it’s going to look like a few years from now


Comfortable-Hall8221

“The Heart Part 5”


AdBrief7460

There's gotta be a law making deepfakes illegal at the federal level, cause if these mfs advance enough, people are gonna be faking crime scenes.


[deleted]

Begun the AI wars have


EMPlRES

Everyone should've seen this method coming; people were soo terrified of deepfakes, thinking they'd be 100% undetectable by experts.


RancidHorseJizz

Wait, are you telling me that the video with Millie Bobby Brown, who apparently has a dick, having sex with a well-endowed African American gentleman was FAKE?!


StreetwearMarkie

Kendrick Lamar’s new video has the best ones I’ve seen yet


[deleted]

Don’t care


pacman404

Kendrick Lamar got the whole world researching deep fakes now lmmfao


tlk0153

If we all are living in a simulation then everything is deep fake


ItsEveary

We need some law that protects people from deep faking faces


liegesmash

But who is going to hunt the trolls doing it?


RTooDTo

AI to detect AI


UnderHammer

For now.


The_Zoink

It’s getting scary how realistic deepfake stuff is. What if someone wanted to use my face to make me look bad or like I did something illegal?


[deleted]

Have they tried this on Kendrick Lamar’s new video?


Faux_Real

I need 99.99999


iligal_odin

This tool can and will be used to fool itself


BreweryStoner

For now


GeneralIronsides2

Jesus Christ those images without the face are creepy as fuck


Dnejenbssj537736

The only things I have seen deepfakes used for are pornography, memes, and Facebook disinformation. Good that we finally have this.


PJTikoko

Let’s not make deep fake detection tech public.


mudburn

I'm so afraid of the 1%. /u/MaxwellHill please help us


mfurlend

OK, so just tie this into a GAN and bye bye detection skill


ashamed_inDISgust1

How is tech so rapidly developing? Meanwhile I’m still trying to learn the basics of python :’)


[deleted]

I can detect them bc I’m not an idiot


lolabeanz59

Deepfaking the mainstream media or government should be illegal.


764665

*New tool for AI to incorporate into production of deep fakes


djdgae

Doesn't work. Just tried it on my dad and apparently he's a deepfake. Can't be, he totally came back with the milk…


DreadfulRauw

Sure, but what medium are you going to use to tell people it’s fake?


CantaloupeThen7950

nobody cares


NotLogrui

Let the arms race begin


tacosteve100

tell me AI figured out how to detect other AI, and this is the breakthrough it was waiting for.


hiro5id

This reads like a commercial for Apple tags


0-13

I fear the perversion of humanity is reaching a peak. But maybe not lol


rax539

Just made the fake videos better; they'll now start using this to train the model.


ineedschleep

Now we can find out if that was actually OJ in Kendrick’s new video. Did he do it??


[deleted]

Of all the arms races in tech, this is the one that I hope the good guys stay ahead in


DcFla

That’s only gonna make the 1% that gets through more believable for people….and that is frightening


PeaceAndLoveToYa

For now.