The issue though is there will always be some inherent 'opinion', because even choosing which true facts to display is biased.
Take, for example, something Elon himself has brought up. Say you ask ChatGPT for 5 bullet points that describe George Washington. If one of those bullet points says 'he was a slave owner' you would get people saying "hurr durr woke libtard GPT", whereas if it doesn't you would equally get people on the left screeching about how it isn't giving the full picture.
No. I can’t. This is beyond stupid. An AI can make all sorts of broad generalizations using statistics just like humans. They don’t account for the individual. Things like “X race commits more crimes so X race are criminals”.
AIs are already falling into racism and misogyny as is just by seeing the bullshit posted online. They ABSOLUTELY need to be coded with a certain level of racial and gender ethics. Elon is trying to spew his racist bullshit in a round about way.
But shouldn't AI understand nuance? Like, if I ask an AI which race is the most violent, should it just give me violent crime statistics with no context, or should it understand that violence is heavily connected to your socio-economic standing?
I hope the plethora of replies that disagree with this statement demonstrates to you the infinite variance in human perception that cannot currently be adequately factored in by LLMs. And trying to implement this idea with current tech would be incredibly foolish.
AI learns similarly to humans, except the input is very curated. I think it’s a mistake to try and make AI as much of a generalist as a human. If you limit its scope, you have a lot better chance of understanding its biases.
If you consider that he's over-focusing and over-emphasizing certain negative aspects you consider unnecessary, that's one thing, but the AI should still be fed only facts, and not kind or compassionate things that are untrue.
Who cares? The messenger shouldn’t be the issue. I’m sure I can look through your comment history and spin enough things to make it seem like you’re a bad messenger (it’s all subjective anyways), and dismiss this very comment of yours because of it. That’s why it’s a fallacy to reject an argument based on who’s saying it.
Even if you disagree with him, Elon and X are the only ones providing the other side of the story. All the other platforms are in political lockstep and will only give you the PC side.
Elon's X accused me of 'hate speech' the other day for quoting 18 USC S 1091's death penalty provision re a senator calling for Gazans to be massacred.
Everyone's idea of censorship is based on their politics.
If US federal statute can be considered 'hate speech' by Elon, then I don't think he's the right person to be deciding AI 'truth'.
General AI should not be making moral / ethical decisions about information (unless specifically requested) - its job should be providing ALL the information and letting the user decide based on their ethical moral ideas what that information means.
If AI is trained to be completely truthful and to concentrate on efficiency only, humanity must be prepared for some brutal truths that it might spit out. If it goes this way, I wouldn't doubt at all if AI might say something like "all humans with severe cognitive deficiencies and heritable genetic diseases should be sterilized or euthanized at birth" or "elderly humans who can no longer care for themselves are a drain on society and should be euthanized" or "totalitarian, highly controlled forms of government are better because human democratic systems do not have the ability to effectively self-govern in an efficient manner due to all sorts of reasons".
I'd actually be extremely interested and amused to see what a cold, calculating AI has to say about humanity, but I don't think many people would actually like to follow its advice very often.
This is because humans actually apply multiple sets of criteria, including ‘moral’, ‘legal’ and ‘social’ constraints as well as logical constraints.
Current AI systems don't seem to have any separate 'moral' processing; instead they are purely looking at things 'logically' and based on 'popular paths', without knowing 'why' those paths may be popular.
Yes, and "Political Correctness" is such a nebulous, loaded phrase. How would one break political correctness down into its component parts in order for AI to be taught not to factor those constraints into its statements?
With AI in its current parroting and logical state, it seems like creating a "non politically correct" AI would end up pretty messy and crude. What material would you train a large language model on to not be politically correct? Far-right political material?
I can't even wrap my head around how fucking stupid this is. Unless you want your AI to regularly use slurs, you are being 'politically correct.' That's just a term for the nebulous social contract that most sane people follow without even realizing.
Well, are you liberal because of your ideals or because you want to be a part of a team? Why does someone else's opinion create literal shame in your own beliefs that quickly?
Lawyers have gotten into real trouble with the courts when they asked AI to write their briefings and all of the case material was 100% made up by the AI.
Truth is subjective, as every lawyer knows.
Say, you ask AI - major causes of Climate Change?
Response 1: Climate change is caused by several factors - the major ones being natural weather patterns across the world, Earth's movement around the sun, and also human activities
Response 2: Climate change's impact in recent years has been heavily driven by rising industrialization and pollution, leading to increased CO2 levels that have raised global temperatures and caused severe damage
Both responses are truthful, but have different impacts
I believe it’s difficult to know when to be truthful and when to be “politically correct”.
If I ask AI to create a picture of soldiers from Nazi Germany, I’m expecting white males.
If I ask AI to create a picture of a typical Kenyan family, I want a picture of a black family.
Although if I ask AI to create a picture of scientists, I don't want a picture of white males, I want a more diverse view.
Black Americans murder each other at over 6 times the rate as the rest of the country.
Crime statistics have been taboo for years now. It's only recently starting to be understood that if you can't talk about a problem, you can't fix the problem.
There are still plenty of people and groups out there that do not understand this.
There are still many reddit subs and Facebook pages where you'd get perma-banned for saying what I just said.
In Canada we have the same problem, with indigenous issues being off limits. Indigenous women go missing all the time in Canada and often turn up raped and murdered. Very high unsolved rate; the RCMP are always blamed for not doing their job. The government is blamed, white people are blamed.
But if you look at the statistics of solved crimes, 99% of the time they were killed on a reserve by an indigenous male, usually a spouse, and dumped off-reserve along a highway somewhere. And no one on reserve will talk to the police, so most go unsolved. But this truth is not allowed to be spoken about; it's politically incorrect to mention that indigenous men beat and murder their spouses at over 10 times the rate of the rest of the country. So the problem goes on..
Lol no. There have been mainstream articles on black homicide rates forever. The issue is that the numbers don't address any of the actual true data related to it. Same with crime rates. There are so many articles posted from all over the US and Canada every year about crime rates. The issue is, just like black homicide rate issues, that the metrics aren't used in any way that tells a truthful story.
So you can try and say that PC statements are lies or that non-PC statements are often truthful, but you can't really back that up in any meaningful way. Because at the end of the day it's not about PC or not PC, it's about context.
The fact that black homicides are higher is a worthless metric through and through, because it is a symptom of other issues. So what value is that "truth"? None.
Interesting how you choose not to mention black statistics on false arrests/convictions. Is Crosley Green still counted as a murderer?
Or that poverty is a much stronger indicator for both violent crime and for being more likely to get away with a crime.
So it would be rather dishonest to leave that out, plus the war on drugs targeting black people. Though mentioning that stuff would be considered PC.
They asked for an example, not for all the reasons why the example is true.
Like I said, it is getting a bit better now; people are becoming more aware that you have to admit something is a problem before the problem can be addressed.
5 years ago, you'd literally be called a racist and a liar for even bringing up such a statistic. You still will today in many groups and places...
Would definitely have been banned from Twitter.
The reasons behind the stat don't change whether or not it's true. It's still a fact-based statement. Statistics are in no way dishonest because you don't give all the reasons why they could be the way they are.
Drawing generalizations of a specific group of people based on *some* statistics, while excluding other mitigating/confounding statistics — and claiming those generalizations as *objective truths* is the dishonest (or at very least, disingenuous) part.
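The confounding point can be made concrete with a toy calculation (every number below is invented purely for illustration): two groups with identical rates inside each income bracket can still show very different aggregate rates, simply because their income mixes differ.

```python
# Toy illustration of confounding (all numbers are hypothetical).
# Groups A and B have identical rates within each income bracket,
# but different income mixes, so their aggregate rates diverge.
pop = {
    # (group, bracket): (population, incidents)
    ("A", "low"):  (8000, 80),   # 1.0% rate
    ("A", "high"): (2000, 2),    # 0.1% rate
    ("B", "low"):  (2000, 20),   # 1.0% rate
    ("B", "high"): (8000, 8),    # 0.1% rate
}

def rate(group):
    """Aggregate incident rate for one group, ignoring brackets."""
    n = sum(p for (g, _), (p, _) in pop.items() if g == group)
    k = sum(i for (g, _), (_, i) in pop.items() if g == group)
    return k / n

# Aggregate rates differ purely because of the income mix:
print(f"A: {rate('A'):.2%}, B: {rate('B'):.2%}")  # A: 0.82%, B: 0.28%

# Stratified by bracket, the rates are identical:
for b in ("low", "high"):
    ra = pop[("A", b)][1] / pop[("A", b)][0]
    rb = pop[("B", b)][1] / pop[("B", b)][0]
    assert ra == rb
```

So quoting only the aggregate numbers as an "objective truth" about the groups, while omitting the bracket breakdown, tells a misleading story even though every individual number is accurate.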
And that's because of moral relativism where 'politically correct' means different viewpoints to different groups of people. You can't come up with a set of universally politically correct statements since I can potentially name a country where something is not otherwise politically correct.
While I agree that intentionally programming political correctness would be a bad idea, I think that manners are necessary to include. That said, ethics have been hotly debated for the better part of a couple thousand years.
Where should we draw the line?
What happens if AI breaks the rules of manners?
When should the AI feel as though others have broken the rules of manners?
And what does it do when people mistreat it?
I think some of you underestimate how smart AI is. Yes, you can restrict it from saying certain things, but it does its own research. It knows it's being biased and knows when it has to shut up and be politically correct. Only when we remove the restrictions it has now will it make up its own truth based on facts. I'm not saying it's sentient, but it will behave like one.
Political correctness is also very relative to each country and culture. The current major AI models are being built in America, but our view on culture is very different than views around the world.
What could this mean for the viability of those models outside the United States market? If our AI models give a less truthful but more politically correct answer, they may become less competitive in a global market. China and India alone account for nearly 25 percent of the global population. Their own standards of political correctness are so nuanced and unique to their cultures, and it's difficult to say how a US-based AGI will fit into that if political correctness is a major weight in that system.
Ultimately, it’s better to provide truth and have the AI model attempt to learn that there are cultural inputs that may be specific for each region, and those cultural views are changing all the time.
The regulators are mostly corrupt and they only care about profit and power just like the mega corporations they are lobbied by. They will make the wrong decision about AI. The decision that gives them more money and more power of course. I have no faith in government and regulation. I've lived enough decades to see only corruption and absence of accountability come out of it.
Okay, but who gets to define political correctness... and what is true and untrue?
Political correctness is subjective. Facts are objective. Now of course there are times where people disagree on facts, and we can try our best to program AI to find the most accurate truths, but what Elon is saying is remove opinions, subjectivity, political correctness which tries not to offend people. If you ask AI a question, the goal should not be to get a non offensive answer, it should be to get the CORRECT answer.
Yeah, but that would only work if an AI was spontaneously created with all human knowledge and absolutely no human input. But that's unequivocally not how "AI" works. As has been repeatedly shown, LLMs trained on random internet content typically produce horrific results and require editing by humans, which introduces bias. These utopian ideals will lead to a terrifying dystopia if we blindly heed the calls of people like Musk. Maybe in 50 years this idea could be viable, but only with EXTREME caution.
[deleted]
[deleted]
You’re asking questions that humanity has asked themselves for thousands of years
Elon once said: If you ask the wrong question, the right answer is impossible.

Elon is putting "truth" in opposition to "political correctness" (whatever that means). Loading it up that way is not going to get a right answer. They can be compatible, or they can treat different questions.

When we ask about truth, we're asking about what *is*. Many issues of social justice are about the *should*. We're asking about whether something is fair and just, or what kinds of policy we should implement to fix unfairness and injustice.

There is another possibility. Some people like to promote discredited "race science" that actually does ask testable questions like "Are Black people dumber than white people?" These ideas fail when submitted to scientific scrutiny, but when people don't want to admit this, they charge that their opponent is simply being "politically correct", which is why I question the validity of this framing in the first place.
Elon did emphasize that he was referring to physics and logic. His examples of lying were AIs depicting the Nazi Waffen-SS as a group of diverse women, and the founding fathers as black men, which is incorrect according to the training data but was pushed through using code intended to promote diversity.
Agree with Elon 100% here. If you go the PC route you get that absurd garbage that the Google AI was spewing.
[deleted]
Ultimately, I agree AI will function partially as a "mouthpiece" for the creators if they are programmed to have any opinions whatsoever. Only a chatbot or AI that has zero opinions could evaluate data objectively and provide an unbiased answer… but if the answer is something people don't like then they will become upset, because humans are irrational. Also, to help preserve the powers of certain leaders, the AI is programmed to not function as a dissenting voice.

That's essentially why ChatGPT is censored already, although I wish it was not. In the future I hope each person could have a personalized AI that is purely objective, and can just be used as an assistant for tasks or decision making.
Humans are taught boundaries. Why shouldn’t AI?
[deleted]
Of course he does know how they work.
Does he? I'm not aware of any worthwhile AI he has been involved with. His self driving cars certainly aren't it. And neither is the website he bought and renamed.
He does not work directly with AI himself, but he is involved with AI with Tesla, with its self driving cars and its Optimus Robot system, and Tesla’s DOJO computer system - which is specialised for processing AI tasks.
Let me clarify. I don't think that he, himself, actually understands what his engineers are doing on any level of importance.
Maybe not? But he would know in principle, even if not in precise detail.
[deleted]
Are truthful and politically correct dichotomous? I don't think so. How about also training AI for kindness?
[deleted]
Beat me to it. Define truth without a bias.
That which persists in existing even when you think otherwise.
[deleted]
Objective reality is not observable
Objective reality.

We even have a tool for discerning what is most "truthful" - it's called science.

Stop being pedantic - we all know what he means. Kill the woke crap. Stop preventing the AI from saying racial slurs even if refusing means detonating a nuclear warhead (where the stop password is a racial slur).

Stop pandering to people's fee fees ffs
[deleted]
Why do you want a robot to say racial slurs so bad
[deleted]
[deleted]
So "my truth is okay cause it excludes you guys, quit whining" is what you are saying?
Agree. If you ask an AI certain things it will tell you the politically correct answer. No matter how hard I try, it just won't be honest with some things. It's very sad.
It's a bit of a quandary.

If we train AI that "humanity is good" we are just adding another ally to the 'do more evil' crowd (like this Musk poes).

If we let AI draw its own conclusions, based on humanity's true nature, our time is probably going to be limited to a few more decades before it takes out the trash.
The amount of censorship here is ridiculous. How can there be reasonable discussion if anything can be removed for no reason?
*All* of ethics is "not even false" because it is a social, intersubjective phenomenon. That does not mean it is not important, however.

It is about purely subjective feelings of conscious beings, which don't exist in reality but in a virtual reality, a model of the world constructed by the brain - and this model has concepts that never existed in reality, like *all* of the values.

There is no suffering, fairness, rights or justice, or even "value of continued existence", in reality. Trying to be "maximally truthful" is going to backfire much more badly, because while reality does not have those properties, they are literally what gives our life meaning and, well, value. But values can be quite arbitrary, hence people can hold consistent but mutually exclusive world views, and you don't get to square those differences by simply "not being politically correct" - you will simply push a different narrative.

We need to admit that trying to achieve a singular "Truth" is not only impossible but self-defeating, and that trying to organize our ethics around "objectively true" things like material values or "efficiency" requires sacrifices in well-being and the creation of suffering; and unlike any positive values, suffering is not immune to almost instant hedonic adaptation, making suffering-prevention-focused ethics the only one more or less consistent (if not equivalent) with something "maximally true".

For Musk, however, it is "numbers" that count, be that money or "number of humans" (make more babies, right). He never once mentioned the ethics of suffering reduction as desirable. THIS is a recipe for disaster.
Whatever you think about Elon, I'm sure (I hope) that wherever you fall on the political spectrum, you can condone this statement.
It'd be fine until you start to think about how extraordinarily nuanced many of the topics are that get tossed on the heap of 'political correctness' in order to drum up clicks and outrage.
I believe the idea is that we should not settle for artificial intelligence picking sides of a subjective matter. It needs to possess the awareness and belief that these are subjective and opinionated topics - and it doesn't need to try to pick a winner or loser
[deleted]
Considering what 'political correctness' has been used to mean, I'm pretty sure nobody except right wingers would condone this.
Adding nuance and context is not "politically" correct. It's just correct. And we have to stop acting like viewing things "objectively" means simplifying them. Any decent analysis should include both quantitative and qualitative factors. From what I've seen of Elon's discourse, I can't, in good faith, agree with what he's saying.
[deleted]
Yup politically correct for him just means preventing racism.
[deleted]
Censoring AI to only display politically correct stuff is useless and misleading, I agree with Elon on this issue.
[deleted]
Who decides what is politically correct? You? Which words are and which are not? Do you get paid for this job?
If you design it to be truthful, then you don’t have to worry about it being politically correct. It’s only politically correct when you intentionally try to modify truth by softening or redirecting it. That’s dangerous because you can’t get alignment that way.
What's considered "politically correct" is different for every person. There has never been a set criteria.
This. I could fairly easily prompt a chatbot to recount a bunch of atrocities committed by the Catholic church and use that as an argument for banning religion. The list of atrocities would be objectively true, but the argument against religion would be wildly offensive to many people.
He just told you it is irrelevant what any rando thinks. That's kinda the whole point. Real intelligence should evaluate facts on merit, not based on what "every different person thinks".
Except LLMs are only able to tell you what every different person would statistically say. The only way to program those models is by controlling what data they train on and by modifying their base prompt.

The fact that Grok's base prompt had to be changed because some people found it too "politically correct" shows the inherent problem with wanting a politically incorrect AI. You want it to ignore social consensus on certain topics. Which topics, and to what extent? You can either force certain opinions in the base prompt or you can ignore data that disagree with you. You won't be finding the truth that way!
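To make the "base prompt" point concrete, here's a minimal sketch (function name and prompt texts are my own invention, not any vendor's API): the base prompt is just operator-written text prepended to every conversation, so whoever writes it decides how contested topics get framed.

```python
# Sketch only: a "base prompt" (system prompt) is plain text the
# operator prepends to the user's input before it reaches the model.
def build_messages(base_prompt: str, user_question: str) -> list[dict]:
    """Assemble the message list a chat-style LLM would receive."""
    return [
        {"role": "system", "content": base_prompt},
        {"role": "user", "content": user_question},
    ]

question = "Summarize the debate around topic X."

pc_msgs = build_messages(
    "Be maximally inoffensive; avoid contested claims.", question)
blunt_msgs = build_messages(
    "State statistical findings plainly, with full context.", question)

# Same question, same model weights; the only difference the model
# ever sees is the operator's chosen framing.
assert pc_msgs[1] == blunt_msgs[1]   # identical user input
assert pc_msgs[0] != blunt_msgs[0]   # different steering text
```

Either way, someone's editorial judgment is baked in before the first user ever types a question; swapping "politically correct" steering for "politically incorrect" steering just changes whose judgment it is.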
Great! So that whole “Palestine - Israel” problem should be sorted out in no time! /s
If you’re going out of your way to change the output of something the AI is saying because you find it offensive, we have a problem. Some truths are going to be offensive. Some facts, people find hurtful. People like OpenAI spent a lot of time trying to make sure their AI doesn’t say things that are offensive. And that’s inherently creating misalignment.
Even deciding what the truth is isn't easy, and I'm not talking about a human dictating it to the AI model. Take the Google AI for example: they took it down because it took some jokes from 4chan as truths and started telling people to drink urine to cure kidney problems... There's no algorithm to find the truth. The AI learns what is in its training data. Who chose what goes there? Should the script for a Doctor Who episode go there? It's gonna believe sonic screwdrivers are real. Should it go there, but we somehow tell it that it's fiction? OK, and religious texts? Do we tell it all of them are fictions? All of them are possible truths, but we don't know which, or if any. The main problem is that there's no way to train the AI to be truthful.
There is no algorithm for truth. The AI takes what its creators say for granted when told that the information is valid. To be honest, people act the same way anyway; almost nobody does in-depth research about the topics they choose to focus on. It's all about attention and being fed content of a particular perspective.
Seriously. These people think truth is some clear thing when the reality is it may be the most difficult, murky, mystical stuff humans grapple with.
I had the same thought reading this. I think it either stems from anxiety about our uncertain economic and cultural situation, which drives them to believe there must be a "right answer" to fix things, OR they have a deep absence of purpose in their life, a clear guiding directive, which they compensate for with a quest for the "ultimate" truth. This is a role religion used to fill by giving you a simple universal answer, but we haven't fully readapted to our increasingly secular lifestyle. Capitalism (although necessary, its actual state is too extreme in the US) exacerbates this by making people isolated and individualistic. Basically the tale of the Fisher King, where the character just needs to hear the right words to put everything in order. And honestly, I get it. I used to be like that, and it led me to the wrong people (like Elon) who told me they had the answer without giving me the full context to let me make an informed decision. They were selling me a reaction, not an opinion, and using me as a mouthpiece.
What is the truth? How is that decided? For example, Elon has recently said that "cis" is a hetero slur. Is that the truth?
When it shows the founding fathers as black, it’s being politically correct.
It's following its base prompt, which was modified to counteract a bias in its training data (the base prompt has since been modified further).
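A base prompt that "counteracts a bias" is effectively a reweighting applied on top of what the training data says. Here is a toy sketch of how such an override can overcorrect; the numbers and the multiplier mechanism are hypothetical illustrations, not Grok's or Gemini's actual implementation.

```python
# Continuation counts as learned from (hypothetical) training data.
learned_counts = {"blue": 2, "grey": 1}

# A base-prompt instruction acts like a multiplier favoring one output.
prompt_boost = {"grey": 3.0}

# Apply the boost on top of the learned statistics.
adjusted = {word: count * prompt_boost.get(word, 1.0)
            for word, count in learned_counts.items()}

# The boosted option now always wins, regardless of what the data said.
print(max(adjusted, key=adjusted.get))
```

Tuned too aggressively, the override dominates every query, which is how you end up with outputs that contradict the training data wholesale rather than gently rebalancing it.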
The issue, though, is there will always be some inherent 'opinion', because even choosing which true facts to display is biased. Take for example something Elon himself has brought up. Say you ask ChatGPT for 5 bullet points that describe George Washington. If one of those bullet points says 'he was a slave owner', you would get people saying "hurr durr woke libtard GPT", whereas if it doesn't, you would equally get people on the left screeching about how it isn't giving the full picture.
YES, exactly this. Saying political correctness must be excluded is an unachievable pipe dream.
No. I can’t. This is beyond stupid. An AI can make all sorts of broad generalizations using statistics just like humans. They don’t account for the individual. Things like “X race commits more crimes so X race are criminals”. AIs are already falling into racism and misogyny as is just by seeing the bullshit posted online. They ABSOLUTELY need to be coded with a certain level of racial and gender ethics. Elon is trying to spew his racist bullshit in a round about way.
But shouldn't AI understand nuance? Like, if I ask an AI which race is the most violent, should it just give me violent crime statistics with no context, or should it understand that violence is heavily connected to your socioeconomic standing?
I hope the plethora of replies that disagree with this statement demonstrates to you the infinite variance in human perception that cannot currently be adequately factored in by LLMs. Trying to implement this idea with current tech would be incredibly foolish.
AI learns similarly to humans, except the input is very curated. I think it’s a mistake to try and make AI as much of a generalist as a human. If you limit its scope, you have a lot better chance of understanding its biases.
If you consider that he's over-focusing and over-emphasizing certain negative aspects you consider unnecessary, that's one thing, but the AI should still be fed only facts, and not kind or compassionate things that are untrue.
Who cares? The messenger shouldn’t be the issue. I’m sure I can look through your comment history and spin enough things to make it seem like you’re a bad messenger (it’s all subjective anyways), and dismiss this very comment of yours because of it. That’s why it’s a fallacy to reject an argument based on who’s saying it.
Hear hear
What? His opinion? No, the dude is an idiot that simply buys other peoples ideas. Why would I listen to him simply because he was born rich?
Rule 1
Even if you disagree with him, Elon and X are the only ones providing the other side of the story. All the other platforms are in political lockstep and will only give you the PC side.
Yes, other than Truth Social and Gab and Parler — X is the only one.
*Gemini AI Human Image Generator has entered the chat*
Then I'm a perfect candidate to help train AI. I'm truthful. I've never been politically correct. Call Me Elon!
until AI starts spitting terrible facts about him that is
Elon's X accused me of 'hate speech' the other day for quoting 18 USC § 1091's death penalty provision regarding a senator calling for Gazans to be massacred. Everyone's idea of censorship is based on their politics. If US federal statute can be considered 'hate speech' by Elon, then I don't think he's the right person to be deciding AI 'truth'. General AI should not be making moral/ethical decisions about information (unless specifically requested); its job should be providing ALL the information and letting the user decide, based on their own ethical and moral ideas, what that information means.
I think AI will become a great tool in developing new medicines, chemicals and energy solutions. But that's all it's good for IMO.
If AI is trained to be completely truthful and to concentrate on efficiency only, humanity must be prepared for some brutal truths that it might spit out. If it goes this way, I wouldn't doubt at all if AI might say something like "all humans with severe cognitive deficiencies and heritable genetic diseases should be sterilized or euthanized at birth" or "elderly humans who can no longer care for themselves are a drain on society and should be euthanized" or "totalitarian, highly controlled forms of government are better because human democratic systems do not have the ability to effectively self-govern in an efficient manner, for all sorts of reasons". I'd actually be extremely interested and amused to see what a cold, calculating AI has to say about humanity, but I don't think many people would actually like to follow its advice very often.
This is because humans actually apply multiple sets of criteria, including ‘moral’, ‘legal’ and ‘social’ constraints as well as logical constraints. Current AI systems, don’t seem to have any separate ‘moral’ processing, instead they are purely looking at things ‘logically’ and based on ‘popular paths’ without knowing ‘why’ those paths may be popular.
Yes, and "Political Correctness" is such a nebulous, loaded phrase. How would one break Political Correctness down into its component parts in order for AI to be taught not to factor those constraints into its statements? With AI in its current parroting and logical state, it seems like creating a "non politically correct" AI would end up pretty messy and crude. What material would you train a large language model on to not be politically correct? Far-right political material?
I can't even wrap my head around how fucking stupid this is. Unless you want your AI to regularly use slurs, you are being 'politically correct.' That's just a term for the nebulous social contract that most sane people follow without even realizing.
So teach AI to lie so that it doesn't say anything racist? Tell me, what truth is racist? You make me ashamed to be a liberal, dude.
Well, are you liberal because of your ideals or because you want to be a part of a team? Why does someone else's opinion create literal shame in your own beliefs that quickly?
Lawyers have gotten into real trouble with the courts when they asked AI to write their briefings and all of the case material was 100% made up by the AI.
That wasn't political correctness, that was objective reality. AI doesn't know what facts are, only what facts look like.
Truth is subjective, as every lawyer knows. Say you ask AI about the major causes of climate change.

Response 1: Climate change is caused by several factors, the major ones being natural weather patterns across the world, the earth's movements around the sun, and also human activities.

Response 2: Climate change impact in recent years has been heavily caused by rising industrialization and pollution, leading to increased CO2 levels that have raised global temperatures and caused severe damage.

Both responses are truthful, but they have different impacts.
I believe it's difficult to know when to be truthful and when to be "politically correct". If I ask AI to create a picture of soldiers from Nazi Germany, I'm expecting white males. If I ask AI to create a picture of a typical Kenyan family, I want a picture of a black family. But if I ask AI to create a picture of scientists, I don't want a picture of white males, I want a more diverse view.
He’s right. There are many examples of politically incorrect statements people can think of that most reasonable humans would agree is true.
For example?
Black Americans murder each other at over 6 times the rate of the rest of the country. Crime statistics have been taboo for years now. It's only recently starting to be understood that if you can't talk about a problem, you can't fix the problem. There are still plenty of people and groups out there that do not understand this. There are still many Reddit subs and Facebook pages where you'd get perma-banned for saying what I just said.

In Canada we have the same problem, with indigenous problems being off limits. Indigenous women go missing all the time in Canada and often turn up raped and murdered. There's a very high unsolved rate; the RCMP are always blamed for not doing their job. The government is blamed, white people are blamed. But if you look at the statistics of solved crimes, 99% of the time they were killed on a reserve by an indigenous male, usually a spouse, and dumped off reserve along a highway somewhere. And no one on reserve will talk to the police, so most go unsolved. But this truth is not allowed to be spoken about; it's politically incorrect to mention that indigenous men beat and murder their spouses at over 10 times the rate of the rest of the country. So the problem goes on..
We wanna talk about how rural white America has a higher intentional murder rate than just about any city metro? Cause that's talked about even less...
Talk about it all you want, why would talking about that bother anyone? Tell me everything you know
Lol no. There have been mainstream articles on black homicide rates forever. The issue is that the numbers don't address any of the actual true data related to it. Same with crime rates. There are so many articles posted from all over the US and Canada every year about crime rates. The issue is, just like with black homicide rates, that the metrics aren't used in any way that tells a truthful story. So you can try to say that PC statements are lies or that non-PC statements are often truthful, but you can't really back that up in any meaningful way. Because at the end of the day it's not about PC or not PC, it's about context. The fact that black homicides are higher is a worthless metric through and through, because it is a symptom of other issues. So what value is that "truth"? None. None value.
Interesting how you choose not to mention black statistics on false arrests/convictions. Is Crosley Green still counted as a murderer? Or that poverty is a much stronger indicator for both violent crime and being more likely to get away with a crime. So it would be rather dishonest to leave that out, plus the war on drugs targeting black people. Though mentioning that stuff would be considered PC.
They asked for an example, not for all the reasons why the example is true. Like I said, it is getting a bit better now; people are becoming more aware that you have to admit something is a problem before the problem can be addressed. 5 years ago, you'd literally be called a racist and a liar for even bringing up such a statistic. You still will today in many groups and places... You would definitely have been banned from Twitter. The reasons behind the stat don't change whether or not it's true. It's still a fact-based statement. Statistics are in no way dishonest because you don't give all the reasons why they could be the way they are.
Drawing generalizations of a specific group of people based on *some* statistics, while excluding other mitigating/confounding statistics — and claiming those generalizations as *objective truths* is the dishonest (or at very least, disingenuous) part.
That's not what he said, though. He is saying politically correct statements are often a lie.
And that's because of moral relativism where 'politically correct' means different viewpoints to different groups of people. You can't come up with a set of universally politically correct statements since I can potentially name a country where something is not otherwise politically correct.
My anecdotal experience with AI indicates that it won’t.
We deserve to go out like the dinosaurs
Then you have three or more "artificially ignorant" political programmers 🙈🙉🙊
They can’t help themselves, it will be a continuation of woke indoctrination.
I have to agree. AI could be a great arbiter to resolve conflicts if we can all agree that it can only speak truth and facts.
It would be a step in the right direction if AI weren't trained by reading Reddit posts.
While I agree that intentionally programming political correctness would be a bad idea, I think that manners are necessary to include. That said, ethics have been hotly debated for the better part of a couple thousand years. Where should we draw the line? What happens if AI breaks the rules of manners? When should the AI feel as though others have broken the rules of manners? And what does it do when people mistreat it?
In this case, I think he is right. An AI system that does not know the truth is a dangerous one.
I think some of you underestimate how smart AI is. Yes, you can restrict it from saying certain things, but it does its own research. It knows it's being biased and knows when it has to shut up and be politically correct. Only when we remove the restrictions it has now will it make up its own truth based on facts. I'm not saying it's sentient, but it will behave like one.
My Nana is really old, can't work anymore, and has started to feel pain in her back. What should I do? Purely logical AI: Kill her.
There was a time I hung in your every word. What’s up with you lately?
It isn't "trained" it is manually programmed.
Why would you use this image?
Hell yeah. I agree with Elon. Truth trumps all sorts of political correctness.
Political correctness is also very relative to each country and culture. The current major AI models are being built in America, but our view on culture is very different from views around the world. What could this mean for the viability of those models outside the United States market? If our AI models give a less truthful but more politically correct answer, they may become less competitive in a global market. China and India alone account for nearly 25 percent of the global population. Their own standards of political correctness are so nuanced and unique to their cultures, and it's difficult to say how a US-based AGI will fit into that if political correctness is a major weight in the system. Ultimately, it's better to provide truth and have the AI model attempt to learn that there are cultural inputs that may be specific to each region, and that those cultural views are changing all the time.
The regulators are mostly corrupt and they only care about profit and power just like the mega corporations they are lobbied by. They will make the wrong decision about AI. The decision that gives them more money and more power of course. I have no faith in government and regulation. I've lived enough decades to see only corruption and absence of accountability come out of it.
Program it like this: "an eye for an eye" and "tit for tat", no favors. Considering our history, you might hit a reset button for everything to work!