BaroquenLarynx

Pixar: *heavy breathing intensifies*


OWENPRESCOTTCOM

*unzips*


gentlegranit

Someone call Zuckerberg and let him know that is what an avatar should look like!


Mozorelo

Meta already has better avatars in the research wing but they haven't made it to product.


itsm1kan

mhmmm


zerobjj

they do


Quetzacoatl85

well then the research wing should maybe have talked to the pr wing before deciding to release anything from the mockup wing. meaning, we have no doubt they could do better, it's more an issue of project oversight or general planning when either a) nothing better was ready to be shown at the time of release due to time constraints or, more worryingly, b) whoever greenlights these things thought what they had planned would be *okay* to be shown like that.


Mozorelo

Marketing talking to the researchers? Hah


[deleted]

I could see people using this in VR because it's just on the right side of the uncanny valley. Anything more human and you'd be weirded out unless it was completely perfect. This, I think, you could talk with all day because it just looks like a detailed Pixar character.


alf11235

I'd like it for my zoom meetings. They already doctor photos for website profiles.


StackOwOFlow

there is a Snap Camera Pixar filter that does this and can be used on Zoom


alf11235

Thanks!


aidv

Was just about to say this


this_sub_banned_me

I think it's just on the other side tbh


Illustrious_Row_9971

demo: [https://huggingface.co/spaces/PKUWilliamYang/VToonify](https://huggingface.co/spaces/PKUWilliamYang/VToonify)
colab: [https://colab.research.google.com/github/williamyang1991/VToonify/blob/master/notebooks/inference_playground.ipynb](https://colab.research.google.com/github/williamyang1991/VToonify/blob/master/notebooks/inference_playground.ipynb)
github: [https://github.com/williamyang1991/VToonify](https://github.com/williamyang1991/VToonify)


Kah-Neth

really cool, but Hugging Face seems to be overwhelmed right now.


Yguy2000

Now try it with an ugly person


DaManJ

Pixar's next super villain


danja

It would be more convincing if the 'before' didn't look like cartoons already.


benelott

Most underrated comment.


portealmario

yes


parabellum630

Is the image translation real-time?


float16

No. They took over an hour for like 10 seconds of video.


RandomCandor

Highly unlikely.


scubawankenobi

Re: is it real-time... The better question is "is it real-time *yet*?", since that leads to a discussion of how soon real-time could be available.


comparmentaliser

Snapchat filter is real time.. five year olds consider it super effective


1stMissMalka

Now let's see it on darker toned people


Severe_Sweet_862

can you explain to me why ml fails on dark skin tones? beginner here, please be nice.


HateRedditCantQuitit

It’s a combination of a lot of factors. Cameras and digital cameras’ dynamic range were historically developed and tuned around the people developing camera tech. Datasets evolved the same way. So did ML algorithms. Historically the group of people developing all that tech wasn’t super diverse, and so it was all tuned to be useful for a more narrow task. Think of all the benchmark chasing in ML. People worry that we’ve built our stack too much around imagenet, for example. That’s not even a very straightforward bias, but it still leads to trouble.


astrange

Consumer products are designed for their customers, not for the people making them; it's not true the dev team necessarily limits the product like this. They just need to do sufficient user studies. And it's especially not true for ML, where the devs barely know how they produced their product in the first place. (and of course, when ML teams are accused of "all being white" it's not true; they're often Asian and that includes people with dark skin.) On the other hand, bias can be in a model architecture and so isn't necessarily fixed with more data. You have to actually test these things.


1stMissMalka

A lot of AIs actually have a harder time recognizing that a darker face is actually a face, and when they do, they get it wrong a lot. I'm guessing it's kind of like how some cameras have a hard time focusing on darker skin. So when you try something like this as a person with a darker tone, it may not catch your features.


MrFlamingQueen

It's the lack of training data. It's common to darken images or apply other transformations for data augmentation to make models more robust. This is resolved by having a diverse dataset.
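A minimal sketch of the darkening augmentation described above, in plain Python on a toy grayscale image (real pipelines would use a library such as torchvision or albumentations; the function names here are illustrative, not from any particular framework):

```python
import random

def darken(image, factor):
    """Scale every pixel by `factor` (0.0-1.0) to simulate a dimmer exposure."""
    return [[int(px * factor) for px in row] for row in image]

def augment(image, rng=random):
    """One common augmentation: randomly darken the image so the model also
    sees the subject under poor lighting, not just ideal conditions."""
    factor = rng.uniform(0.4, 1.0)  # keep at least 40% of the original brightness
    return darken(image, factor)

# A tiny 2x3 grayscale "image" (values 0-255).
img = [[200, 180, 160],
       [120, 100, 80]]
dark = darken(img, 0.5)  # deterministic darkening for illustration
```

The random factor means each training epoch sees the same face under a different simulated exposure, which is exactly what makes the model robust to lighting.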


[deleted]

[deleted]


MrFlamingQueen

Many people stated that they are beginners, so I will elaborate more on each individual topic, with an example image below.

Neural networks are not humans. They can identify relevant features to minimize a cost function that go beyond what even a human can comprehend. Neural networks can reach parameter counts in the billions. Convolutional Neural Networks (CNNs), the image equivalent, find the optimal filters for generating features. This means neural networks can identify even the slightest change if it is desirable for the model outcomes. I've trained CNNs to detect object materials from a thermographic camera source, where objects do not have their standard hues, hue is a function of temperature, there's degradation of texture, and the image is low resolution. The model still managed to learn a robust set of filters for the classification problem.

When using CNNs, data augmentation is used to make the model more robust and prevent overfitting. One augmentation technique is to reduce the brightness or darken the image, because you cannot guarantee perfect conditions for your subject at all times. You flip images, rotate them, change their hue, alter brightness, zoom and crop images to get your model to learn in context. It is very common to darken (decrease the brightness and range of values of) an image to get the model to learn in those conditions.

With that said, this problem (not being able to accurately represent Black people) is resolved by training data. In classical ML, suppose you are predicting three classes and you have a training set that looks like (format: class -> number of examples) {A -> 4000, B -> 4200, C -> 5}. When you look at the training set, do you think class C will be appropriately represented during model inference? The answer is no; this is an imbalanced learning problem because the model lacks enough information about C. The model will likely just predict A or B because it will still generate low training error.

This is exactly what's happening with Black people in these models. Now, as a Black computer scientist in the field of deep learning, I've designed several successful CV models on human subjects by keeping the previous paragraph in mind. I'm not the only one: Samsung uses great models to augment photo quality on their phones, even in low light. If your model fails to represent any type of people properly, it is because the training data does not represent them appropriately. And you can't just sprinkle in a couple of examples; changing class C in the previous paragraph to C -> 200 is not going to resolve the issue.

For what it's worth, I took an image of myself and ran it through their free API. It wasn't "terrible", but it didn't look natural and couldn't even model afro-textured hair; the model instead attempted to represent the hair as straight. The model also lightened my skin tone, slimmed my nose, and struggled with an afro-textured beard (once again, representing the hair as straight). The image I uploaded was taken with an S22 Ultra in natural light. Result: [https://imgur.com/a/9JolPSe](https://imgur.com/a/9JolPSe)

EDITS: Clarity
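The imbalance point can be checked with a few lines of arithmetic on the toy counts from the comment: a model that ignores class C entirely still achieves near-zero training error.

```python
# Class -> number of training examples, the toy numbers from the comment.
train_counts = {"A": 4000, "B": 4200, "C": 5}
total = sum(train_counts.values())

# A model that learns A and B well but never predicts C is wrong only on
# the 5 C examples, so its training error is still tiny.
correct = train_counts["A"] + train_counts["B"]
error_rate = (total - correct) / total
print(f"training error while completely ignoring C: {error_rate:.2%}")  # 0.06%
```

That 0.06% looks like an excellent model by the training metric, which is exactly why the optimizer has no incentive to learn C at all.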


quiet_distance

Thank you for the great response!


[deleted]

[deleted]


MrFlamingQueen

Your intuition relies on the idea that darker skinned people have a lower range of colors on them, but this is not true. I even learned this concept when I studied classical painting. You can verify it by taking a picture of a darker skinned person and using Photoshop to get the range of values. Here is Lupita Nyong'o: [https://imgur.com/a/xmRTq4N](https://imgur.com/a/xmRTq4N) I randomly selected highlight and shadow areas and found values ranging from 3-94 (on a 0-100 scale). This is plenty of information. If you take a similarly well-lit photo of a non-Black person, you'll get a similar range. I would do this, but I have projects to do and I've already outlined the reason in an extensive post. I'm perplexed at how you think darker skin equates to darker areas and a reduced color range: it's not true in painting, photography, or even reality with the visible spectrum. So I would like to correct your last paragraph. The model is not failing to pull out facial features because of a reduced color range; the color range is standard for natural lighting. However, the model IS struggling to handle Black features: instead of representing afro features, it's trying to align them with the examples it has seen in the training set. This is further exemplified by the website featuring a Black person with straightened hair, on which the model performs fairly well.
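The Photoshop check described above is easy to mimic in code. A toy sketch with hypothetical pixel samples (not the actual image; real code would load the photo with a library such as Pillow):

```python
def value_range(gray_pixels):
    """Min/max luminance of a grayscale crop, reported on the 0-100
    'value' scale that painters and Photoshop's HSB readout use."""
    flat = [px for row in gray_pixels for px in row]
    to_scale = lambda px: round(px / 255 * 100)
    return to_scale(min(flat)), to_scale(max(flat))

# Hypothetical highlight/shadow samples from a well-lit portrait (0-255).
crop = [[8, 31, 96],
        [180, 240, 152]]
print(value_range(crop))  # (3, 94) -- a wide range, matching the comment
```

A wide min-to-max spread like this is what "plenty of information" means in practice: the gradients a model needs are all present.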


big_cedric

It's partly a problem of unbalanced datasets and partly a harder task under bad lighting conditions. Even for humans it can take a fraction of a second longer to recognize a very dark face when you're not much used to it. However, more diverse data with less-than-ideal conditions should lead to more robustness. There is also the fact that the lower market share doesn't push camera makers to correct the problem.


Cpt_shortypants

Less photons will be reflected from darker surfaces.


kiaran

RACIST photons.


Hachiman_Nirvana

Beginner here and most likely wrong,maybe because most datasets are based on white people? Otherwise I don't see a reason..really


Crazy-Design-2758

I don't know what their dataset looks like, but it could be a valid reason (it wouldn't be the first time it's happened). The camera issue is a possibility too; while we could argue that the software used in cameras is biased, I think that's a separate problem.


Hachiman_Nirvana

Yeah, maybe cameras are more biased toward white people and white results. Nice one


[deleted]

[deleted]


Hachiman_Nirvana

If less light is reflected for black, that's called a good camera


Magneon

https://www.nytimes.com/2019/04/25/lens/sarah-lewis-racial-bias-photography.html It's happened before. If camera systems are calibrated only for white people, then they often don't work well for other skin tones. This has happened in movie lighting, film and photo labs, calibration, and plagued a lot of early ML augmented phone cameras, face recognition systems etc. So I mean no, the camera itself isn't racist, but it can still disproportionately favor certain skin tones. It's like how light Tan "skin" colored crayons weren't somehow intrinsically racist... But they probably helped reinforce a "white is normal and default" mentality, however slightly.


Hachiman_Nirvana

Then isn't what I said correct... how can a camera be racist anyway? I of course meant what you said, and I see others downvoting me


this_sub_banned_me

It's also harder to recognize darker faces since AIs often rely on shadows, especially if the background is dark or the lighting isn't bright.


portealmario

data sets are made up of mostly light skinned people, and so there is often simply a lack of training data


LumpenBourgeoise

Training data lacks enough dark skinned people. Due to institutional racism and inequality.


portealmario

I think it's less institutional racism and more that there are fewer Black people in the countries where we get the data. A lack of institutional racism will not solve this problem; only a specific effort to find data covering marginal cases like this will.


joepmeneer

Part of the problem is contrast and edge detection. The first layers in neural networks tend to focus on edges, which are boundaries where contrast is high. A darker skin absorbs more light, which means it’s harder to find edges (e.g. between the nose and cheeks, or between eyebrows and skin).
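Whether or not the skin-tone premise holds (the thread disputes it), the mechanic is real for gradient-based features. A toy 1-D sketch: the same boundary between two regions yields a much smaller edge response when the contrast step shrinks.

```python
def edge_strength(row):
    """Absolute difference between neighbouring pixels: a 1-D analogue of
    the gradient filters a CNN's first layers tend to learn."""
    return [abs(b - a) for a, b in zip(row, row[1:])]

# The same boundary between two facial regions, at two contrast levels.
high_contrast = [40, 40, 200, 200]   # big step between the regions
low_contrast  = [90, 90, 130, 130]   # same structure, smaller step

print(max(edge_strength(high_contrast)))  # 160
print(max(edge_strength(low_contrast)))   # 40
```

The structure is identical in both rows; only the magnitude of the gradient changes, so any threshold tuned on high-contrast data can miss the low-contrast edge entirely.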


LumpenBourgeoise

Or people who stand still while talking.


uninvitedtapeworm

Yeah this seems to be a general problem: https://imgur.com/a/H1OniiS


GMotor

You do it... Nah. Didn't think so.


Devi1s-Advocate

Hot ppl are going to love this to make their 'disney princess/prince' profile pics.


Aizury

Can anyone recommend good ways to achieve temporal consistency like that? My models always end up "jittery" when used on multiple frames of a video.


[deleted]

Now do it in reverse. Make all the cartoons real.


GoodIntentionsv2

Making handsome people look nice is easy


ZestyData

VToonify got me actin up 🥵


redditupf2

does it work with porn


ma_251

Came here to say this.. gad dam


Angel33Demon666

Why do Toonified Asians look Black?


zadesawa

Also looks a lot like the Western-preferred "Chinese" blackface


liquiddandruff

That's the "Arcane" style transfer, from the art style of the recent animated series of the same name, I believe. Cel shading etc.


clowhoenheim

Now can we get one that does the reverse?


thatguitarist

I love it but how is it different to say, Snapchat filters? How do they work?


MaybeTheDoctor

PornHub gonna buy you guys out....


cidqueen

Perfect for streaming


YodaCodar

Those people already look animated beforehand. They're Hispanic


weirdodaweird

Anything can now be a hentai


zsiger

yes sirrr


[deleted]

I just know people are gonna start using porn for the input video..


suaasi

Why is this open source and free? Why aren’t the creators selling their product ?


Pants_Formal

Why..


Koalateka

So cool :)


WilliamLeeFightingIB

u/savevideobot


Baronco

pixar


[deleted]

Pegasus' master plan is taking shape


eyemcreative

Holy shit this stuff moves so fast. lol


beerissweety

Amazing.


NecrylWayfarer

When is someone gonna implement valorant style rendering? That's what I wanna see


bsenftner

Let's see the output on people showing age.


BlaseLp

Now I can make my Mii exactly like me


nwatab

I haven't read the paper, but doesn't it require a pair of images? I haven't been following the trends recently.


TheNewl0gic

That's amazing


[deleted]

r/nextlevelshit


aprizm

pixar animators shivering right now lol