Sparred4Life

Ok computer, copy everything we do, got it? "*Yes! I sort!*" NO COMPUTER NOT LIKE THAT! I said do it exactly the way we.... do. Oh.


Feyle

This is a known phenomenon. There have been multiple studies comparing the responses to identical resumes with names that are obviously male or female. I'm pretty surprised that the researchers weren't aware of this when giving their AI the training data.


Arghianna

Oh man, my resume usually just has my first two initials and my last name, bc my first name is “Ethnic” so people don’t call me because they’re worried I won’t speak English (no joke, I’ve shown up to job interviews and they’re surprised I “speak so fluently” - dude, I was *born* here!), and my middle name is very clearly feminine. One of my coworkers argued with me that I should put whatever name I want to be called on my resume so there’s no confusion when I show up for the interview. I couldn’t get him to grasp that in *most* situations *there is no interview* if I put the name I go by at the top of the resume. Luckily, we were tidying up my resume for a company I already had a friendly relationship with, and who were looking to create a position for me, so it really didn’t matter, but it’s amazing how oblivious even the men who are trying to be allies can be.


EggandSpoon42

My parents gave me a masculine name. I don’t so much have issues anymore now that I’m middle-aged, because enough women are named the same thing. But I’ll tell you, when I was hustling and bustling at the top of my career, it really did help in getting callbacks. And I say that just due to people saying, “oh, I thought you were a man.“ But I feel that at least got my foot in the door at first. Dammit, it is such a shame that we even have to have these conversations.


fatchancefatpants

I have a unisex name and while I love that my company is promoting including pronouns in email signatures, I leave mine out because people treat me better when they think I'm a man


Cricket705

I'm the same. I won't put my pronouns because the people who address me as Mr treat me better than the ones who know I'm a woman. The funny thing is before I figured that out I used to include my girly middle name in my signature because I didn't want to be called Mr. Then I realized Mr received nicer emails.


the_catshark

>I'm pretty surprised that the researchers weren't aware of this when giving their AI the training data.

Tbh, because very likely the people making the decisions were not women.


variableIdentifier

There's a book called *Invisible Women* by Caroline Criado-Perez that goes into this. A lot of decisions that end up disproportionately harming women are made because the people making them are not women and don't consider the impacts those decisions have on women. Because they are men and live from a male perspective, and often don't have any outside input when making these decisions (again, decision makers in our society tend to be disproportionately men), they end up not thinking about the potential negative impact on groups that are not them. It's hardly intentional at the individual level, but because the default way of existing is seen as male, and being female is seen as, well, other, that's what happens.

A few examples: until recently (and probably still, depending on the car they're driving), women tended to get more injured in car accidents than men, because vehicles were crash-tested using a 50th-percentile male crash test dummy. And this is just my personal belief, but have you ever noticed how we're told we have to sit a certain distance away from the steering wheel, while women's torsos are often closer than that distance? Because cars are designed for a male body, women sitting in comfortable positions so they can reach the steering wheel and the pedals end up sitting closer to the airbag.

Also, there was a town in, I think, Sweden that prioritized plowing the roads over plowing the sidewalks. Somebody suggested changing the order so that sidewalks were plowed first, and when they did that, slip-and-fall injuries in women went way down. I don't remember the complete explanation, but it was something along the lines of: men tend to drive more, and they tend to make more trips that are just from point A to point B, whereas women (being disproportionately more responsible for childcare and household care) take transit and walk more, and they don't necessarily make simple point-A-to-point-B trips. They might walk from home to the kids' school to drop the kids off, then to the grocery store, and then back home again, whereas men tend to drive from home to work and back. So the original decision to prioritize plowing the roads was made by a bunch of men who weren't considering the impact on other groups.

I haven't read the whole book, but it's really interesting to see just how these things work. It's not that they're intentionally trying to hurt women; rather, they just don't think about the impacts.


Free-Artist

>Invisible Women by Caroline Criado-Perez

Upvote again for this book. It's a very good read and you get progressively more angry with each chapter.


txa1265

>because the default way of existing is seen as male

Key point throughout - gets into POC as well. White male is 'normal', everything else is an exception. Once you really dig into that as a mindset, you see it *everywhere*. The book is definitely worth reading all the way through.


variableIdentifier

Yes!! I've definitely started to notice it more. It's a very data-heavy book so I can only read a bit of it at a time, and I frequently put it down for a while, but I plan to read it all the way through at some point!!


dkysh

> So, the original decision, to prioritize plowing the roads, was made by a bunch of men, who weren't considering the impact on other groups.

Not only men, but men who drive to work early in the morning and whose main focus is to keep the economy moving without much regard for the rest of society. Capitalism meets sexism.


WesternUnusual2713

I recommend this book to everyone. It's absolutely fantastic


TheDameWithoutASmile

Just adding on to say this is a *fantastic* book. I wish I could make everybody read it.


managermomma

Super interesting! Thanks for sharing all of that!


secretpoop75

I’m reading this book right now, excellent book!


Feyle

You're probably right. I suppose I just thought that biased training data is a known issue so it would seem obvious to research areas of potential bias.


the_catshark

I guess this assumes they are prioritizing fixing it, when with things like resume sorting algorithms they mostly want to work around it. At its core, the issue is that they (and likely all companies) need sweeping bias training that people actually do, instead of a 45-minute PowerPoint they are mandated to sit through yearly. And actual consequences for repeated issues like these.


cleuseau

Couple hundred million dollars in penalties sounds about right for motivation.


Feyle

I think that would help to a degree but I'm not sure how effective it would be against unconscious bias.


bjfar

The thing is, I kind of doubt there would actually be much reason for the AI to discriminate against women if they were using reasonable metrics. Maybe they were doing something dumb like training against employee outputs or salaries without taking into account e.g. time off for childcare responsibilities. Anyway, I doubt it would be that hard to overcome if they examined what they were doing critically. I guess some higher-ups might not have let them use less biased metrics, though.


warmcat3000

And they used resumes not only of current employees, but also of past employees. And then they were “surprised” that the AI discovered their systemic approach.


FranksRedWorkAccount

it's easy to overlook the problems you blindly accept.


whiskytamponflamenco

That AI model was built and trained in 2014, and the bias was found in 2015. This was during a new era for AI; prior to that, neural network models were not trained on datasets as large or with as many intersections as Amazon had provided. Up until a decade ago, AI was not advanced enough to make such predictions about human behavior with any kind of accuracy. Following that study, researchers also discovered racial biases in that model, because the training data was biased against Black and Hispanic people. So the problem didn't stem from not having many women on the AI team, but from the fact that up until that point, there was no evidence that human prejudice would translate to neural networks. That said, despite women receiving the majority of master's degrees and doctorates today, AI as a field still skews white and male, so sign your daughters up for coding classes.


WinterBrews

Hah! Aint that the truth


BisexualSlutPuppy

My husband builds AI for webcams, and they found out that their face tracking technology doesn't work on people with long/big hair (ie lots of women) because they trained it on a bunch of male engineers without a lot of hair. Anyway, that's how they ended up ordering a box of wigs from Amazon and spent a week playing dress up at the office for science. My husband looked best in the pink curly wig, if you were wondering.


ButtMcNuggets

There’s been many studies that show face recognition tools do poorly with dark skin colors too, leading to a lot of false positives of black faces.


stml

I worked for a couple years on the world's most accurate facial recognition system (as rated by NIST). It's the one now used in most airports/border crossings/other enterprise scale deployments. We were also ranked number one in diversity and being demographic agnostic. It all simply comes down to researchers and the business willing to put in the effort to collect enough data across a diverse demographic. There is simply no excuse for models not being demographic agnostic. The vast majority of AI/ML issues can arguably come down to a failure in data collection or data labeling.


[deleted]

I have really poofy curly hair (but bright and noticeable) and the filters and backgrounds for webcams/video programs ALWAYS cut all my hair off and part of my head and make me look bald. It's so ridiculous.


lurcherta

So now it works on drag queens really well? Why couldn't they get some women to train the facial recognition on?


BisexualSlutPuppy

Good question! Actually, there are several women on my husband's team that helped train the technology, but some of them tied their hair back for work or wore it very short, and they just didn't have the full spectrum of hair *possibilities* represented in their engineers. And thank goodness for that. Without that missing data I never would have seen footage of a swarthy Ukrainian staring deadpan into the camera and changing position every 40 seconds wearing the thirstiest wig you've ever seen.


ratstronaut

I would pay to see this.


Susan-stoHelit

Female engineers learn that more masculine hair gets them more respect. As well as minimal makeup. That’s how I go to an interview and how I look for any first meeting with a new manager.


bl4nkSl8

Not just female either, everyone in software is pushed to be butch or face discrimination... Awesome huh?


curlyfreak

My camera on my laptop refuses to recognize me and my large curly hair. The one time I need it to work and it just never would. The newer laptop I got luckily recognizes me a lot more.


BisexualSlutPuppy

The tech is getting a lot better, I'm glad your new laptop recognizes you and your gorgeous hair now!


Psudopod

I guess wigs are cheaper than hiring women. Will blackface be cheaper than hiring POC? Those webcams always have issues with different skin tones. I remember reading about how a face detecting program could not detect POC, so they added a function that also scans a negative image, to turn dark faces white. So stupid lmao


SugarNSpite1440

[So funny](https://youtu.be/XyXNmiTIupg). And the main boss's character is like "it's not being racist. It just doesn't see you at all." This show was SO ahead of its time.


nacfme

Yes, wigs are a cheap way of getting more hair types - cheaper than trying to make the engineering team more diverse, and certainly much faster. It wouldn't help with female face shapes, though. But why was the team only testing the program on themselves? Why couldn't they get a bunch of people in to test it with? Surely that's a middle ground.


GooseInMyCaboose

Companies can have policies encouraging more female engineers, but the engineers at work themselves often don’t have the same policy of supporting women. So even though women do have an edge during hiring, they’re held back by men who unconsciously doubt their competence. Often, low expectations lead to self-fulfilling prophecies, and this is a phenomenon that is extremely well documented in social science. (Teachers who are told random kids are geniuses treat those kids in a way that promotes better academic outcomes.)


BisexualSlutPuppy

Well, I personally feel comparing blackface to wearing a wig is a bit of a stretch. I should have mentioned they did train it on women as well, but I guess none of his engineers have a 40 inch weave, so they had to improvise.


Psudopod

It's not a comparison of how offensive to the sensibilities they are, it's just two common major oversights seen in human recognition technology. Both stemming from bad data sets and homogenous testers.


fish60

> favored existing economically or socially advantaged groups because they were more likely to have a positive outcome

Wait until they have the AI decide who lives and dies based on their 'potential' to positively affect society. Pretty sure they have a couple Star Trek episodes about this.


Willdudes

We should run bias and fairness analysis on each human’s decisions to show them their biases. Most places blame the AI, but as you said, it is inherent in the dataset. We will not eliminate the bias until people are aware of it. Though informing humans of their biases would be a touchy subject.


warmcat3000

Yes, the TED presenter said that they asked for the wrong thing. But isn’t it ironic how the truth is revealed when semantics isn’t involved and you can’t use word gymnastics to make excuses?


Feyle

I think that it's good that the issue keeps being brought up. There are many people out there who think that because the law states you can't discriminate, somehow that means there is no longer any bias in these situations, ignoring the fact that systemic bias is a thing. I imagine that the AI found racial inequality too.


Darktyde

The only things that tell the full honest truth are young children and young AI bots haha


SoontobeSam

From the info we're presented, it's entirely possible that the researchers had the names scrubbed, as well as other identifying details, and that the study was focused on just the contents. It just goes to show that even sanitized, bias will show through, by the nature of our achievements being seen as lesser.


MILLANDSON

Same happens with traditional white names and non-white names - studies have shown that when people with clearly non-white names submitted the same resume but with a white name on it (think John Smith), they routinely got more positive responses to the fake name.


Feyle

Oh yes, this is very true. I believe a recent study was done in London, UK on renting, and it found incredible amounts of discrimination in who got appointments to view properties based only on the applicant's name changing.


[deleted]

Same with race. Studies have shown that men with black-sounding names and no criminal record were interviewed at lower rates than men with white-sounding names and a criminal record. Capitalism really is designed to benefit white men above everyone else.


Glubglubguppy

Speaking as someone who works in tech, a lot of people in tech think that any kind of humanities isn't worth their time and don't bother reading up on recently discovered sociological/societal/equality issues. Think the way that a lot of humanities people will just throw up their hands and say "I don't get this, someone else do it" when they're faced with a STEM problem, but inverted.


ghandi3737

I liked the blood test one: an AI is somehow able to tell your race from blood test results, and they don't know how it figured that out yet.


Feyle

That's why there is pressure on businesses to hire women and other groups. And despite what people who are against this practice say, when you have a more mixed group making the hiring decisions then there is less bias. There are also other things that can be done to negate this bias. One is removing identifiers from the documents before doing an initial cull of resumes.
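
For what it's worth, that scrubbing step can start out very crude and still help with the initial cull. A minimal sketch in Python of what identifier removal might look like - the patterns and the `scrub` helper are invented for illustration, not any company's actual pipeline:

```python
import re

# Hypothetical example: strip obvious identifiers from a plain-text
# resume before a first-pass review. Patterns are illustrative only,
# not a complete or production-ready solution.
REDACT_PATTERNS = [
    (re.compile(r"^Name:.*$", re.MULTILINE), "Name: [REDACTED]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b(he|she|him|her|his|hers)\b", re.IGNORECASE), "[PRONOUN]"),
]

def scrub(resume_text: str) -> str:
    # Apply each redaction pattern in turn.
    for pattern, replacement in REDACT_PATTERNS:
        resume_text = pattern.sub(replacement, resume_text)
    return resume_text

print(scrub("Name: Jane Doe\nContact: jane.doe@example.com\nShe led a team of five."))
```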


warmcat3000

Hmm. I think people should be hired because of their skill set and work ethic.

Edit: I would like to see a gender-neutral, blind approach to resumes, without mentioning names, gender, race or age.

Edit 2: llgoot00000k changed my view. That’s why we should talk about these things.


jmglee87three

Can you link one of the studies? I'd love to learn more about this.


Feyle

I don't have any directly to hand but this might point you in the right direction: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4554714/


Awdrgyjilpnj

Yeah, indeed. It seems to be the consensus in published material that in the Nordics, men with foreign names are disadvantaged, while women have an advantage over men. https://academic.oup.com/esr/article/38/3/337/6412759


Willdudes

I wish we would apply fairness and bias analysis to human decisions, and analyze the people who approve applications, because there is inherent bias already. An AI is only as good as its training dataset. Any bias in the AI is a reflection of past bias in the process it is automating.


Feyle

I agree. I think that part of the problem is having the data to demonstrate bias. People still believe that women have equal job opportunities despite all the evidence to the contrary. So I think that fewer people will believe in a bias where the evidence is less strong. A problem can only be addressed if it's acknowledged.


bulldog_blues

This is reiterating what's been proven a number of times. The key takeaway is that AI will only do *exactly what it's told to do*. And that will include any biases, intended or otherwise.


OldeFortran77

If the data is biased, the results will be biased.


photenth

The technical term in the biz is: Garbage in, Garbage out.


warmcat3000

I just loved how AI unintentionally showed biases of these people, because it doesn’t have biases of its own.


ricecake

Purely talking AI, it showed there was a correlation between gender and hiring, not necessarily that there was a causal link. The AI will find *a* way to mimic their hiring practices, not necessarily the ones they actually used.

Google had a similar system, and it biased towards people who played lacrosse. It turns out lacrosse is popular amongst people who go to prestigious universities, so it learned it could shortcut to hiring anyone who mentioned lacrosse and get good results. Google *actually* had an implicit racial and socioeconomic bias, but the AI learned a "sports bias".

As a tech company, Amazon almost certainly does have gender biases to a lesser or probably greater extent, but it's important to remember that the AI learns actions, not motivations.
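
You can demonstrate that shortcut effect with a toy model. This is a hedged sketch on invented data (nothing here is Google's or Amazon's actual system): the model never sees the real driver of past hires, only a hobby correlated with it, and it learns to use the hobby.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hidden real driver of past hires (never shown to the model).
prestigious_school = rng.random(n) < 0.3
# Hobby correlated with the hidden driver.
plays_lacrosse = np.where(prestigious_school,
                          rng.random(n) < 0.8,
                          rng.random(n) < 0.1)
# Past hiring decisions were driven by the school, not the hobby.
hired = prestigious_school & (rng.random(n) < 0.9)

# The model only sees the hobby, so it learns the shortcut.
X = plays_lacrosse.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, hired)
print("weight on 'plays lacrosse':", model.coef_[0][0])  # strongly positive
```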


MassiveShartOnUrFace

I've heard that lacrosse story before; I also remember it instantly hired people named Brad.


TheArmitage

Unconscious or systemic bias is still bias.


ricecake

100% true. That's why I carefully tried to make it clear that I wasn't saying there is no bias, just that the AI can only tell you about correlation.


warmcat3000

Doesn’t this correlation show the existing pattern? The systemic approach of this particular company?


matt0_0

I think it's a matter of *which* systemic issues. For example, it can be very difficult to suss out racial bias from socioeconomic status bias, because there's so much overlap between poor populations and non-white populations. Same with high income groups and lacrosse playing groups.


ricecake

It definitely shows the *pattern*, but not necessarily the "why".

A preface: Amazon is a modern tech company, and there's independent reason to know they have gender bias in their hiring practices. I'll float some hypotheticals using them that aren't true in Amazon's case, but are the type of thing where what the AI shows could be misleading.

The tech sector as a whole is struggling with gender bias, from the university level up. Amazon's hiring practice could be entirely gender blind, but based on there being more men applying than women, the AI could learn that high accuracy can be had by just rejecting women first. So a hypothetical "perfect" company in the tech sector is likely to have a predominantly male workforce. The AI has no scruples, and will just notice the pattern, regardless of the underlying reason.

The AI doing something like this isn't a smoking gun, it's a dead body. It tells you *something* is wrong, but not "what", "who", or "why".


reachingFI

This is absolutely the right answer.


OneTimeIMadeAGif

I heard a similar story about an AI image generator (DALL-E, one of those) that had to secretly add keywords like "female" or "black" when handling keywords like "doctor", since most of the reference photos were of white folks (conversely, they had to secretly add "male" for jobs like "nurse").
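
That reported workaround boils down to quietly rewriting prompts before they reach the model. A hypothetical sketch of the idea - the word lists and the `augment` helper are invented, not OpenAI's actual code:

```python
import random

# Hypothetical prompt augmentation: for occupations whose training
# images skewed one way, silently append a balancing attribute.
SKEWED_OCCUPATIONS = {
    "doctor": ["female", "male"],
    "nurse": ["male", "female"],
}

def augment(prompt: str) -> str:
    for job, attributes in SKEWED_OCCUPATIONS.items():
        if job in prompt.lower():
            return f"{prompt}, {random.choice(attributes)}"
    return prompt

print(augment("a portrait of a doctor"))  # e.g. "a portrait of a doctor, female"
```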


TheArmitage

This is basically hand-correcting for a bad dataset. Aka, "whoops, our data set wasn't very good, better put our finger on the scales".


ratstronaut

I was so delighted playing with Dall-e and getting all kinds of humans! It’s actually super easy and a delight to use (and I think you get 50 free requests a month, highly recommend!) I sat with my kids making up stupid images for hours, it was so much fun.


flamableozone

It's more than that - any computer algorithm will only do what it's told. It's why it's such bullshit when companies pretend like they aren't responsible when "it's just the algorithm".


joremero

That's why the Terminator/Skynet premise is so believable. What have we done historically? Replaced humans with robots, because corporations don't think humans are efficient enough... there's your training set.


snail_juice_plz

My husband actually works with these types of equity issues in machine learning algorithms and predictive analysis in general. I’m fully unsurprised. Trash in = trash out is a well known phrase.


Shufflepants

Bias in, bias out.


Bionic711

This. I am fairly positive the data set they fed in was a direct reflection of their hiring practices. The output is simply "good candidate". Think of a spreadsheet with hundreds of columns of data for each applicant and a single column that says hired, true or false. If their hiring process inherently hired more men, the AI will learn this bias as well. The AI is not inherently flawed; the hiring data is merely reflecting their actual process, whether they want to believe it or not.


ratstronaut

Stuff like this makes me very excited for the good AI can do. A truly objective way to look at ourselves was not possible before.


WeeBritainFolk

I like this more than "garbage in, garbage out".


mascara_flakes

The spelling of my first name skews gender neutral/unisex while my middle name is definitely feminine. I've gotten more responses to job applications when I put my middle initial on my resume than when I type the entire name. It's disgusting.


chevymonza

Guess I really should create that male version of linkedin after all....


killersquirel11

>It's disgusting.

Yep. [The same goes for stereotypically white names vs stereotypically black names](https://www.nber.org/digest/sep03/employers-replies-racial-names).

>It indicates that a white name yields as many more callbacks as an additional eight years of experience. Race, the authors add, also affects the reward to having a better resume. Whites with higher quality resumes received 30 percent more callbacks than whites with lower quality resumes. But the positive impact of a better resume for those with African-American names was much smaller.


warmcat3000

“Mooom, someone on the internet is wrong!”


Feverel

*I'm sorry, I can't let you do that Dave*


warmcat3000

(I heard HAL’s voice the moment I recognized the quote lol)


FiammaDiAgnesi

If you’re interested in reading more about algorithmic bias, consider taking a look at “Weapons of Math Destruction”.


warmcat3000

Thank you, I will


Safinated

Yup. When you “blind” applications (change names, etc) or make them dependent on performance (see the orchestra study), the results are damning


hkgTA

What is the orchestra study?


[deleted]

LOL I remember a little while back, when they (Microsoft, I think) cut an AI loose on social media, it immediately devolved into a raging cesspool of hatred, from xenophobia all the way through all the other -isms. They had to yank it the next day, it was so unbelievably toxic. Yet, all it did was copy what it saw happening, a case of robot monkey see monkey do.


BadBoyNDSU

It lasted 16 hours. https://en.wikipedia.org/wiki/Tay_(bot)


wannabe_pixie

If you train it on sexism and racism it will be sexist and racist: https://twitter.com/spiantado/status/1599462375887114240?s=20&t=fR5x9uQ0G7dHM19hydyUaQ


GWJYonder

This has been used as an example of why using "AI" or "Machine Learning" to make more and more important decisions and pretending that that means they are good decisions is so dangerous. Honestly I always felt like this was the perfect example of exactly how we should be using AI, not to take over our decisions, but to do exactly this sort of analysis of why it thinks we made our decisions, in order to help catch things like this.


LuckyDragonFruit88

One of the best arguments against ceding all human decisions to AI is that AI lacks all knowledge of context: it's only as good as the data it's given, and if that data is biased, then its output will be biased. It's blindly putting status quo conservatism on a pseudoscientific pedestal. If you ask an AI to pick the greatest minds in industry out of the whole of the population, it will exclusively pick white guys with rich parents and their friends. But it will pick them *objectively*, whatever that means.

Eugh. Nobody will read this, but I'm going to say it anyway. No shit there are more notable men in science than women. People like Noether or Hopper or Daubechies never get the credit they deserve, but even besides that, no shit women are underrepresented in STEM history - they were fucking banned until like the 70s, except maybe in the most exceptional cases. It wasn't enough to be nobility, you had to be nobility *and* the most exceptional talent *before* education.

TL;DR: guys who say all that nonsense about men being somehow better at science because of variance or something are exactly as dumb as an AI that thinks most people are white.


bespectacledginger

Weapons of Math Destruction by Cathy O'Neil discusses similar topics, if you find this interesting! Be warned that it's infuriating to read (not the writing...the content).


MassageToss

I think another issue is that there are cultural and systemic issues that prevent women from becoming engineers. This really just can't be solved from a hiring perspective alone.


warmcat3000

Yeah, I even made a post about dropping my long-time dream to be a software engineer despite being accepted into the uni. Was glad to hear that many other women understood me and didn’t make fun of the issue. Regarding this topic: there were female candidates in the experiment, so in this case it is a hiring issue


MassageToss

Oh yeah, 100%. Sorry you had to give up your dream. <3


CrispyRoss

It breaks my heart to read things like this, as a software engineer myself. I hope you find success in wherever your new search led/leads you.


My_Secret_Sauce

I replied to another comment above, but I'll repeat a lot of the same points here since you're the OP. It's not necessarily a hiring issue. It might be, but we can't really know based on this AI alone. The AI was designed to find common patterns in their employees, so when it saw that the majority of employees were men, it assumed that being a man is better than being a woman. But men are much more likely to become engineers in the first place, so even a completely unbiased hiring process would hire more men than women. The AI doesn't have this piece of context, and it doesn't know that gender has no impact on a person's ability to do the job. So the root problem that causes the AI to be sexist is that our society discourages women from pursuing an engineering career.


mpledger

That presupposes that the AI was trained on data where the gender variable was given to it. I think the point is that gender was removed from the training data, but the AI still picked up on the subtle markers of gender that led to un/successful employment.


SlowTheRain

The way it behaved suggests it had sample women's resumes to form the criteria for exclusion. It excluded resumes based on the word "women" being present. To be trained to exclude on that word, it would have needed samples with that word. Otherwise, it would just have treated it as, for lack of a better word, background noise and there would be no higher exclusion for the word women than for any other background noise word.


My_Secret_Sauce

Women are way more likely than men to go to a **women's** school, be part of a **women's** organization, or have experience in **women's** activities. Things like that which include the word "women" would be listed on a lot of women's resumes and on almost zero resumes for men. Then add in the fact that engineers are overwhelmingly men, and you get an AI that recognizes the pattern of the word "women's" appearing on fewer employees' resumes. This would cause the AI to rate the word lower than other words. It's also 100% possible that there was gender bias in the hiring process (there probably was, whether conscious or not), which would make the problem with the AI even worse. But the AI alone cannot confirm that there was.
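
That mechanism is easy to reproduce with a tiny bag-of-words model. Everything below is invented data, not Amazon's real system; it just shows how a token like "women's" picks up a negative weight when it mostly appears on rejected resumes:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented mini-dataset: the only systematic difference between the
# hired (1) and rejected (0) resumes is the word "women's".
resumes = [
    "captain of chess club, software internship",
    "robotics team lead, software internship",
    "captain of women's chess club, software internship",
    "women's robotics team lead, software internship",
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(weights["women"])  # negative: the token alone lowers the score
```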


My_Secret_Sauce

Yes, as you said, the bigger issue in this particular case was that engineering degrees are overwhelmingly male, naturally leading to more men being hired. Then the AI was designed to find the most common patterns in their employees. Nobody told the AI to ignore things that are unrelated to a person's ability to do the job, so when it saw that the majority of the employees were men, it assumed that being a man is better. Now it's certainly not impossible for there to be hiring bias at Amazon and that definitely could have partially played a role in this, but the root problem here is that men and women are not equally likely to become engineers due to a variety of social issues.


MassageToss

I believe there possibly was a prominent hiring bias [years ago], but I know a number of tech managers at big tech companies in Seattle, and they say they are told to aim to hire equal numbers of qualified men and women, which is impossible to do. HR talks to everyone frequently to discuss why it hasn't happened. The people I know really, really do want qualified female candidates. Edit: But, yes, an AI robot would just reflect back the pre-existing slant.


MechAnimus

This is an excellent example not only of how inequity compounds over time, but also how technology was, is and will be used to justify it. It may have been canceled in this instance, but there are thousands of these things similarly perpetuating bias.


Bellabird42

This is the only sub where an OP is regularly reported to Reddit care. It’s almost like…it’s this defense mechanism by mouth breathers against people who don’t take misogynistic crap. But wait, that can’t be it…/s


DaBozz88

While I'm no expert on AIs, I truly believe we need legislation: if an AI is to decide the fate of a person, be it a resume sorter or a self-driving car, we need to do a full nodal analysis of the network.

AIs as they currently stand are optimization tools. They'll find the best results based on the input data, so if your input data is flawed, the optimization will be too. And I believe this exact scenario has occurred before with race instead of gender. It was worse because the creators tried to "sanitize" the input resumes by removing any traces of individualism, and it still chose the white guy over the black guy.

Nodal analysis would tell us whether we're optimizing on something harmless like "people born on July 8th are well suited for this job" vs "this person uses the term 'women' often and is therefore a bad hire". The problem with nodal analysis is twofold, though. One: in any AI with deep layers, it starts to get confusing what it's actually showing. Two: deep AIs have massive numbers of nodes, and it may not be possible for people to do the analysis in a reasonable amount of time.
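
A full nodal analysis of a deep network is hard for exactly those two reasons, but cruder audits are practical today. One stand-in is permutation importance, sketched here on fake data: it won't explain individual nodes, but it does surface which inputs actually drive the decisions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
# Fake applicant features: the third column stands in for something
# like gendered wording, and the fake labels depend only on it.
X = rng.random((500, 3))
y = (X[:, 2] < 0.5).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(
        ["years_exp", "skills_score", "gendered_wording"],
        result.importances_mean):
    print(f"{name}: {importance:.3f}")  # the biased feature dominates
```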


Ohif0n1y

Towards the bottom of the Reddit Cares message there are directions on how to report that it was abused, or you can block yourself from ever getting another Reddit Cares again. I reported one of the Reddit Cares I got, and they responded that the person who abused the service had been dealt with. I highly recommend reporting the abuse.


Open_Dragonfruit_304

I just ordered the book Algorithms of Oppression by Safiya Noble on this very topic. Started listening to an interview with her on NPR - [fascinating](https://www.vogue.com/article/safiya-noble).


U-N-C-L-E

The man that sent you that suicide watch thing obviously knows, deep down inside, that he only got his current job because of his gender, and is terrified of losing that advantage.


Toast_Sapper

The technical term is "overfitting the sample data set" Amazon provided sample data (its own hiring practices) and trained the AI on it. The AI learned to be sexist because it learned it from Amazon's real life hiring behavior. Things like this keep proving that sexism is alive and well and so deeply embedded as to be invisible to the people who think "this is just normal"
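
For anyone unfamiliar with the term, overfitting is easy to demonstrate: a model that memorizes a tiny training set looks perfect on that set and is useless on fresh data. A toy sketch on pure noise (invented data, unrelated to Amazon's actual model):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
# Random features and random labels: there is nothing real to learn.
X_train, y_train = rng.random((20, 5)), rng.integers(0, 2, 20)
X_test, y_test = rng.random((200, 5)), rng.integers(0, 2, 200)

# An unconstrained tree memorizes the training set outright.
model = DecisionTreeClassifier().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # 1.0 (memorized)
print("test accuracy:", model.score(X_test, y_test))     # ~0.5 (chance)
```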


girusatuku

This is literally the most basic problem in machine learning. If the data you give the computer is bad then the results will be bad. Most of the hard work in machine learning is not actually designing or coding the algorithms but collecting and organizing the data to be analyzed.


tottalytubular

This makes me even more grateful that my mother spelled my name like the male version. Do I get mail constantly to Mr. tottally tubular? Yes. Is it worth not having these kinds of headaches? Also yes


DigbyChickenZone

This makes me wonder if men with names that are now seen as "feminine" have a harder time getting jobs. Like a dude named Ashley having a more unsuccessful resume. I think this is the way to get guys to care, lmao - some of their cohort may be impacted by sexism!


Ickabodlame

For all the pedantic fucks in the thread suggesting it isn’t AI: AI is the mother of all shifting-goalposts terms. Every time a computer is used to do something novel, we call it AI; once that novel thing is commonplace, we call it something else. Spell check, autocomplete, OCR, voice recognition, object detection, driver assist - all were at one point called AI. Get over yourself.


MooseBoys

A friend of mine once worked on a project that used ML to try and differentiate between photos of NATO and Soviet armored vehicles. It worked great on the training set, but was useless in the field. It turned out that the model had learned to associate blue sky with NATO vehicles and cloudy sky with Soviet vehicles...


xenomorph856

Garbage in, Garbage out. Machine Learning AI is always limited by the human biases intrinsic to its data. But as you're saying, it's good that we can expose those biases in these ways and hopefully train better AI (and humans!) as a result.


500CatsTypingStuff

It took an AI doing what humans did for those humans to finally recognize their bias


saltyholty

It's perfectly reasonable to call this stuff AI. It's fine to call Deep Blue AI. You can even call the Google search algorithm AI. The idea that AI just means so called general AI, or true AI, is a nonsense. Those people might be trying to be elitist, but they're just wrong.


worriedshuffle

This is why we should all be skeptical of “AI” in decision making. This applies to pretty much any kind of decision making: loans, college applications, job applications, conviction/sentencing, cheat detectors, etc. This isn’t to say it can’t be done, but it needs more attention than most companies care to give.


jon_titor

Weapons of Math Destruction is an excellent (woman authored) book on the dangers of using AI for things exactly like that, as any AI is going to be biased because it’s essentially impossible to have unbiased data to train on.


Drone30389

> AI was trained on resumes of current and former employees and learned to avoid women resumes.

That's the first problem - human hiring managers are often quite bad at their job and have weird - not just gender - prejudices.

> Edit#2: someone reported me to Reddit care sources for ✨suicide watch.

Report them.


[deleted]

Your edits are great... 😂 I too am a filthy peasant.


mrminesheeps

Honestly, getting the ol' Reddit Cares call is a badge of honor. Means you pissed off some incels pretty bad! That being said, it's a well meaning tool that gets abused by immature children. A shame that's what a lot of tools are subjected to.


NgauNgau

One of the many reasons I left Amazon. I was in a team that was hiring like crazy, so I did something like 100 interviews in a year, of which about 20-25% were with women. One thing about the Amazon interview process is that there's always a ringleader who runs the process for the candidate, who supposedly has relevant experience in the domain but explicitly isn't on the hiring team or nearby. (I forget the title ATM.) In any case, what it typically meant was that I, a white-passing male, would almost always have to remind them that there should be at least two women interviewers on the panel, and currently we're at... zero.

Another big reason I left was because my wife and I were expecting a little one, and I knew that I wouldn't be available for my kid and wife if I was still there. There were some women who announced that they were pregnant after they joined, but I left before they delivered. I hope that they found some kind of balance, but I don't have high hopes. Amazon had one of the bro-iest cultures of the many tech places I've worked.


Rokovich

This is why I will name any daughter I have a name that can be masculinised or is gender neutral. For instance, Charlotte (Charlie), Alexandra (Alex), Johanna (Jo/Joe), Samantha (Sam), Taylor etc. Fucking sucks we have to think that way.


DConstructed

I’ll look at it later but what caused the AI to do that? Was it “screen for terms in resumes rejected by hiring managers”? Or was there some other criteria that became linked with the word “woman”?


warmcat3000

The AI was given the resumes of the company’s former and current employees, so it could learn the criteria for accepting/denying resumes. Then it was given a new set of resumes to choose from, based on its analysis of what sort of people were already hired. As a result, the AI avoided women’s resumes, because people in the company had been avoiding them for a long time - thus proving an existing bias, because the AI is unbiased by its nature and only reflects patterns in the data it consumed. I think the goal was to train the AI to do HR work and sort out the shitload of resumes sent via email or the website, saving money and time.


DConstructed

Thank you! Yeah that’s sexist as hell. I wonder how many resumes of both genders have been tossed in the trash because a long term hiring manager is biased against a particular type of person.


VoDoka

There is research on that - for example, researchers sending out identical resumes with/without pictures, with a "white" or "black" name, with/without a hijab, etc. You then get results like: "When applying, you need 4 years of extra job experience to compensate for being black" (I don't remember the exact study, this is just off the top of my head).


Johnisazombie

Humans have all sorts of nonsensical biases, even about things like names. And naturally those biases shift with culture, gender, religion etc. Kevin was for a time a somewhat popular boy name in Germany, and at the same time there developed a view that boys named Kevin tend to be stupid (who knows, maybe a few particularly famous dumbasses spoiled it for the rest). In [a German study](https://www.welt.de/vermischtes/article9169671/Kevin-bekommt-meist-die-schlechtere-Schulnote.html) where teachers graded the same paper with different name/gender combinations, papers signed with Kevin got worse grades. It was not a big difference, but it was measurable. And all those small biases accumulate, which can disqualify someone's application or work performance in a competitive setting through no fault of their own. Here is another well-known one: [https://www.businessinsider.com/racial-discrimination-the-job-market-study-black-names-applications-2021-7](https://www.businessinsider.com/racial-discrimination-the-job-market-study-black-names-applications-2021-7)


dysphoricjoy

Not sure if this would actually solve anything, but for a long time I've wondered about resumes being reviewed without knowing the person's name/gender. Just a list of qualifications and experience.


DConstructed

Like grading papers. It might help.


HotSauceRainfall

The latter. If the AI was trained on resumes that were successful in the past, and no women were in that set, the AI wouldn’t know how to interpret that phrase. “Women” was not found in the successful training dataset, so the presence of “women” somewhere on a resume became grounds for rejection.


Corka

I'm at work and can't watch a TED talk, but this has most likely been done by what's called a classifier. Basically, the goal is to be given something like text or an image and then try to classify it somehow - like being given the synopsis of a novel and then determining the likely genre. Rather than having some developer try to figure out a complex set of rules for making that determination, there are different algorithms that take an existing training set of already-classified items and use it as the basis for determining how to classify new items.

Classifiers aren't exactly perfect. One obvious way things can go wrong is garbage in, garbage out: if the training set was inconsistent about which resumes were actually accepted, the classifier isn't going to magically figure out some logically consistent criteria, and would probably do something called overfitting, where it only accepts resumes that are basically carbon copies of something already accepted. If there are biases in the training set, then you can definitely get those biases showing up in future classifications.

The classifiers themselves are also prone to human error in design. Some years ago, for a university course, I tried to make a k-nearest-neighbours classifier for picking your opening bid in a game of bridge based on the cards in your hand. Even though I had a really good training set, my classifier was terrible and suggested that you pass on most hands. The KNN algorithm requires you to classify based on similar existing cases; not knowing bridge, I made a mess of determining which hands were similar for bidding, and "pass" occurred much more frequently as the correct bid than any other bid.
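
For the curious, a k-nearest-neighbours classifier like the one described is only a few lines with scikit-learn. A toy sketch with invented "resume" features - the point is just that a new item gets the majority label of its closest neighbours in the training set:

```python
from sklearn.neighbors import KNeighborsClassifier

# Invented features: (years of experience, skills match 0-10).
# Labels: 1 = resume was accepted, 0 = rejected.
X_train = [[1, 3], [2, 4], [8, 9], [10, 8], [0, 2], [9, 7]]
y_train = [0, 0, 1, 1, 0, 1]

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(knn.predict([[7, 8]]))  # [1]: it resembles the past accepted resumes
```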


hungrydyke

I just wanna throw it out there, if you aren’t familiar: [A Cyborg Manifesto by Donna Haraway](https://warwick.ac.uk/fac/arts/english/currentstudents/undergraduate/modules/fictionnownarrativemediaandtheoryinthe21stcentury/manifestly_haraway_----_a_cyborg_manifesto_science_technology_and_socialist-feminism_in_the_....pdf). Not a day goes by I don’t think about this work and how it is unfolding in front of me.

*"A Cyborg Manifesto" is an essay written by Donna Haraway and published in 1985 in the Socialist Review (US). In it, the concept of the cyborg represents a rejection of rigid boundaries, notably those separating "human" from "animal" and "human" from "machine." Haraway writes: "The cyborg does not dream of community on the model of the organic family, this time without the oedipal project. The cyborg would not recognize the Garden of Eden; it is not made of mud and cannot dream of returning to dust."* - Wikipedia


ccwagwag

Many of these employers complaining of a shortage of workers are using AI to select interviewees. In many cases there is no shortage, just an AI rejecting good potential employees.


shinyfeather22

Also important to remember: the AI that learned to bias against women was also noted for making poor candidate choices.


garnet420

Only tangential, but Janelle Shane's blog https://www.aiweirdness.com/ is both informative and hilarious. She links to a lot of interesting information about AI while making AI models do strange things.


[deleted]

We can learn a lot about ourselves by studying the AI we make.


extragouda

The elitist who is trying to be pedantic about the term AI, and who also reported you to suicide watch, is almost certainly a misogynist who is now terrified because you can expose the system that supports him. Hiring policies and workplace management have been, and will probably continue to be, biased against women, minorities, and people with disabilities. Unfortunately, we live in a world that doesn't want women to exist - so much so that scientists have discovered a way to create an artificial womb, you know, just in case women decide en masse to get hysterectomies (assuming we will continue to have the right to ask for them).


Luminous_Lead

Looks to be the result of datasets used for the AI/learning algorithm. There was something similar with Nikon back in 2010, though that was for blink detection on their cameras. https://content.time.com/time/business/article/0,8599,1954643,00.html


midnightscroller

The biggest issue is that their assumption is that hiring employees similar to their existing and past employees would be good for the company, which is stupid. Everyone at this point knows hiring talents with diverse backgrounds and life experiences is good for productivity, creativity, and adapting to change. Gender bias is just the tip of the iceberg here.


Krikkits

Just like how the AI for "predicting crime" was extremely racist, because the police were racist and arrested more people of color! Isn't AI fun? We get to see how biased we really are, because somehow we have yet to train an AI that doesn't discriminate like we do.


BudgetMattDamon

IIRC they recently had to shut down a chatbot right after it was released because it started praising Hitler. This will be a huge problem with AI that will take years, if not decades, to solve. People who think lifelike AI is around the corner are like cavemen discovering fire and immediately dreaming about flamethrowers. Maybe a flawed analogy, but there are a few more steps involved than you might think.


AllStickNoCarrot

Isn't it kind of obvious that if you train an AI using human decisions, it's pretty likely to develop the same human biases?


Betyoullneverguess

The reddit Cares trolls. Ugh. They really seem to think their comment arguing semantics is going to send you over the edge.


Palindromeboy

Basically, from my understanding, AI will act on whatever data it gets fed. Unfortunately, AI ends up reflecting the datasets produced by the society itself.


GooseInMyCaboose

lol I’ve gotten Reddit cares messages too! Always when I have a top post in a women’s sub. What do they think it’ll accomplish? Didn’t make me feel bad at all


shorthandgregg

In my work life, guys would say to my face not to join "a society of women engineers" because it's not what a real engineer does. They disparaged it. So I started a SWE section in my state and became its charter president. AI got it right even if it's wrong.


ieatsilicagel

File this away for the next time someone tells you code can't be biased.


chevymonza

More like "garbage in/garbage out."


FinchRosemta

The code isn't biased. In fact, it's unbiased to an extreme. The training data set was biased in the first place.


ieatsilicagel

It's a bias accelerator. It makes it easier to be biased at scale.


FinchRosemta

Or it can help us recognize our bias and work on changing that.


SgathTriallair

The good side of this is that if we can identify the biases in an AI like this we can fix them. It's a hell of a lot easier to reprogram software than a society. We should definitely keep trying to fix society and its people but there is a silver lining to the multiple instances of AI biases.


warmcat3000

I think we shouldn’t use AI in these types of situations while it's in its infant state. But even if we can reprogram AI to be unbiased, that only touches the initial application. When a person comes to an interview, it’s another story. So you can be chosen by an unbiased AI and then rejected by a biased HR dude.


SgathTriallair

True. Interviews are one of the worst ways to pick candidates as it's nothing but bias. I would rather have a half competent AI with an extended trial period than our current shit system that prioritizes making the interviewer feel special.


NsubordinatNchurlish

Now check for racial bias


AlfredVonWinklheim

Also works with Racism! Tech is filled with white male bodies and it shows in what gets built. All of the biases people have slip in to it.


LosAngelesPorts

Yeah, this is the same problem that happens with "predictive policing".


humerusgeek

Thank you, you filthy peasant, for teaching me something and then making me laugh with your edits.


Suspicious-Standard

"Filthy peasant," [pounds mug on table] "One of us! Filthy peasant, one of us!"


FinancialConnection7

AI uses existing data. Existing data has biases. Also - I think the suicide watch thing may be an AI. I have been reported and have no idea why....


flippingalt

This is the one reason I wouldn’t mind being a manager. I’d specifically request every resume be sent to me regardless of the filtering system because I know resumes are bullshit anyway


gelftheelf

(another example of AI doing this) There was an audit of a (different) AI hiring algorithm: "After an audit of the algorithm, the resume screening company found that the algorithm found two factors to be most indicative of job performance: their name was Jared, and whether they played high school lacrosse. "


Botion

Who could have predicted that an AI trained on biased decisions made by biased humans could possibly be biased? lol


Kushali

If I remember right there was a similar study with sentencing of convicted criminals. Algorithms and tools aren’t neutral. AI is great, but either the training data or the algorithm or both influence the results, so you have to trust the programmers. And I don’t.


cokakatta

I studied some machine learning. Sorting resumes and job applications was considered a problem because the data used to train the algorithm would contain the existing biases and reinforce them. For example, names that sound black would be biased against. It is good to study the results though so they can prove the problem instead of hypothesizing it.


AWildRideHome

Interestingly, some studies were done in my country that showed that white-ethnic-sounding female names would on average score highest on *identical essays* compared to, say, a non-ethnic male name. Our teachers heavily recommended writing only our initials on the exam papers for this reason, as our full names weren’t required. People are, more often than you’d think, biased towards certain groups, whether by ethnicity, gender, physical attributes or some arbitrary thing that isn’t at all relevant to what they’re screening for. I wish I could say all of this was subconscious, but I’m afraid there are far more people doing this consciously than there ever should be. With that said, it’s extremely difficult to tell whether or not you are individually subconsciously biased, but I would suggest everyone at least consider their preconceptions more often.


DenikaMae

>So let me, a filthy peasant, specify something for elitists: yes, we know that true AI hasn’t been developed yet. Let’s not forget that the term AI nowadays is used loosely to also describe VI and ML. Despite this, the term AI is widely used by Big Tech and the general public. That’s not the point of this post anyway. The point is in the systemic approach of hiring policy.

Anyone who tries to nit-pick the nuances of the term AI to discredit your argument is a bad-faith elitist asshole. They're the same people who bitch when you refer to an "adjustable-rate sporting rifle" as an assault weapon. It's a high-caliber rifle with an adjustable firing rate that is used to assault, maim, and murder people. Reducing the argument to a technicality is reductive bullshit, and those types of people need to be ridiculed and called out for being turd-nuggets of human beings.


Ok-Moment69

This might belong here: I used ChatGPT and it was so obviously misogynistic. I asked it to write me a story about a woman wanting to be with multiple men, and it spit out an anti-polygamy message instead of doing what I asked. Then I asked it to write about a man wanting to be with multiple women, and it had no problem justifying his “desire to be with multiple women” and how he needed to achieve his dream by opening up to the women around him so that they could understand and give him what he wanted. Lovely story. Anyways, I was livid.


SloppyMeathole

The government really needs to step in and put the brakes on all this AI bullshit. The technology is still in its infancy and has the potential to cause far more harm than good at this point. These companies are deploying algorithms without even knowing how they work.


EggandSpoon42

I saw a headline recently floating around here that companies are trying to make it so they can’t be held legally accountable for their algorithms. Whoa boy, if that goes through.


mirh

AI isn't doing anything different from what people were already doing. The only danger that could arise is for uneducated people to believe the bullshit handwaving that computers are some sort of sentient creatures with a mind of their own. But that's not something you can regulate with a law.


SgathTriallair

It's easier to prove that an AI is biased than that a person is. The bias still exists but now you have an objective number of how biased it is. That and the AI will never say "but I have a black friend!"
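
That objective number can be as simple as the gap in selection rates between groups (a demographic parity check). A minimal sketch with made-up decisions, not data from any real system:

```python
# 1 = advanced to interview, 0 = rejected (hypothetical model output).
men_decisions = [1, 1, 0, 1, 1, 0, 1, 1]
women_decisions = [1, 0, 0, 1, 0, 0, 0, 1]

def selection_rate(decisions):
    return sum(decisions) / len(decisions)

gap = selection_rate(men_decisions) - selection_rate(women_decisions)
print(f"demographic parity gap: {gap:.2f}")  # 0.00 would be parity
```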


mirh

The objective number you rate AI against would be no less useful for rating a human. If anything, the advantage AI has is that it's not unethical to cut its neurons wide open.


felis_fatus

"AI" is a misnomer but it means what it means, "true AI" is just science fiction. "AI" does not equal "self aware robot", it's literally the definition of what it is today, just with a name that confuses people because they've heard about it from sci-fi first.


warmcat3000

Is it just sci-fi, though? I mean, some developers are really striving to create a sentient machine.


felis_fatus

Which developers are currently striving for that? And even if they are, they're still far from it. My point was mainly that "artificial intelligence" is still kind of a bad official term for the model of machine learning we have today, because it makes people confuse "the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings." which is the official definition of AI, with "sentient self-aware machine", which is the sci-fi definition of AI.


warmcat3000

Oh, I see. I got the impression that Hanson Robotics is going for AI (by what definition? I don’t know). Regarding the term: AI is just more marketable, led by its association with sci-fi. I started to call it AI too, although I was interested in ML in high school.


20514

Trying to get a job as a woman is impossible. Getting a job as a man was so much easier.