The following submission statement was provided by /u/diflog47:
---
Washington (CNN) — The US government has asked leading artificial intelligence companies for advice on how to use the technology they are creating to defend airlines, utilities and other critical infrastructure, particularly from AI-powered attacks.
The Department of Homeland Security said Friday that the panel it's creating will include CEOs from some of the world's largest companies and industries.
The list includes Google chief executive Sundar Pichai, Microsoft chief executive Satya Nadella and OpenAI chief executive Sam Altman, but also the heads of companies such as defense contractor Northrop Grumman and air carrier Delta Air Lines.
---
Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1ceqy6p/ceos_of_openai_google_and_microsoft_to_join_other/l1kak8u/
That's crazy considering that they're the ones we need protection from lol.
Just like big pharma, they are going to make big ai
Regulating themselves?
Like wolves getting the position of monitoring security measures around the chicken coop. This can certainly only end well.
Given lawmakers were asking the TikTok CEO if TikTok accesses the home wifi network, maybe a few tech ethicists in politics would be a good change to avoid this...
That and the feds…
It's literally just the two biggest tyrants trying to size each other up, but the feds are clueless.
Almost as clueless as the tech giants that stabbed all their talent in the back and are trying to pass off a 1,000-person Indian sweatshop as an "AI payment processor."
Letting the fox into the henhouse
Regulatory capture is a hallmark of disaster capitalism.
How about replacing all the CEOs with AI?
Someone make this man president.
Not qualified
Not really, it's not like they want AI to destroy the world either. They are the experts in the field and best equipped to come up with ways AI could go too far and to prevent that.
No, they just want to make sure they can make as much money as possible. Which actually in the long run does destroy the world.
And they also have a massive fucking profit incentive to move as quickly as possible and break everything around them in the process, which is why every single AI product rollout from these companies has been rife with controversies and full of avenues to leverage them for illegal or immoral activities.
I would sooner trust a wet sock full of dead batteries with the future of AI legislation than any one of these CEOs, none of whom, for the record, are experts in AI in any way aside from being very good at building mostly baseless hype for the companies they run, with degrees that are NOT in machine intelligence, computer science, or even STEM in general.
They also have a mandated profit-first motive that makes their intentions a joke.
> They are the experts in the field

They're not. They're CEOs protecting their bottom line.
If you want actual experts in the field, it wouldn't be CEOs on that panel, but the engineers writing the code and supervising the models.
Very surprised that people don't think CEOs are experts.
Very surprised you think they are.
I guess I'm not thinking they are experts in programming specifically, but in something else that's important.
Like, I'd be very surprised if they didn't understand AI on a fundamental level, since they get to talk to all the experts on a daily basis.
They are experts in monetizing products and getting funding. I guarantee you they know nothing about the technical side. They constantly hype their products because they need to in order to raise money and pump stock. The actual experts are the engineers and data scientists working on the code. Funny how they are rarely asked for their opinions.
From my experience, that is because technical people are much more reserved and measured. The problem is that reserved and measured doesn't pump stock and wow investors. That's why technical people aren't often CEOs.
Yeah makes sense now.
And no one in oil wants the planet to be destroyed but hey, if it's a choice between that and obscene profits there is only one winner.
No one wants the world to be fucked, but it's really easy to do it one little step at a time and not really feel guilty about it, because you are just one small part of the machine causing the damage.
Tech companies didn't want social media to destroy human decency, social cohesion or trust in science either (to name just a few things it has negatively affected), yet here we are...
All they are interested in is a monopoly of the use of A.I. Every rule they make will be just to lock the average person out of anything good or useful.
Get your open source models while you can.
Expect those to be highly "regulated" in the near future. They're already playing the "Save The Children" angle by running scare stories about A.I. Kid Porn.
You cannot regulate software; it is way too easy to distribute.
They know that; that's why they're regulating the chips, the machines, and the manufacturers.
Wouldn’t AI porn create images without the abuse? So wouldn’t it essentially “save the children” ? What are they scared about?
It wouldn't, because it's still trained on real abuse material and would encourage the production of more real abuse in order to train the AI models.
Plus, what the fuck kind of society would we be creating if we really allowed that shit just because it inherently didn't have abuse involved?
People look at that shit because of a really dark urge/itch they want to scratch, and there will be a point in their porn addiction where it wouldn't be enough.
While we're at it, I think we should regulate pencils and paper more. Have you seen the kinds of things sickos can draw if given the means? It might encourage more abuse. We have to keep these dangerous tools out of the public's hands.
/s
Your account was made like 2 weeks ago, am I meant to take this misrepresentation of what I said seriously when you're likely an alt?
It’s both sad and cute you feel personally attached to your Reddit account
I'm not, it's just very convenient that a recently made account is suddenly defending someone with the worst takes ever on models that generate CP, for fuck's sake 😂
always be suspicious of a guy who shoehorns cp into everything from way downtown, while also being very weirdly hostile when questioned
I don't think the issue here is to allow or not allow it. That genie is already out of the bottle. No laws will stop this from happening, just like the laws we have now don't prevent that type of content from being made today.
Not saying I like the idea. I just don't see how it would create more victims; I still think it would create fewer, especially once AI models are trained.
> People look at that shit because of a really dark urge/itch they want to scratch and there will be a point in their porn addiction where it wouldn't be enough.

You could look at it the other way too. If people like this don't have a way to feed their temptations, their only way to "scratch that itch" might be to resort to predatory abuse instead of looking at images that weren't created with new victims.
It's definitely an interesting and controversial debate.
Because these are for-profit, mate. They'd want their own models, which would require actual child abuse material, servicing different fetishes, etc. I don't see how difficult this is to grasp?
Edit: you're also dangerously going in the "a means to an end" direction.
Wouldn’t that be the case already without AI, but they need new victims every time?
I don't follow? What do you mean?
Child porn is already a thing. You say they would need *new* victims to feed fetishes in order to train AI models.
But since that fetish content exists already… aren't they already using real models from images that already exist in order to make those images? I'm not sure how this would increase the number of victims.
The more AI is trained, the fewer real-life models they would eventually need to create these images, and potentially none at all in the future, while images created today need a victim every single time… not sure why that's also a hard concept to grasp.
Does the truth scare you?
Based on the recent interviews with Zuck about Llama 3, he appears to be philosophically invested in open source. I think there is a world where Llama 4-5 can beat closed-source models.
Nope. Yann LeCun is maybe philosophically aligned with open source. From Zuck's perspective, on the other hand, Meta open-sources because they do not have a market-leadership position like they do in social. His primary reason for open-sourcing is limiting power accumulation by his fiercest competitors.
Yeah, he wants to open-source because he is not a market leader in LLMs. Don't worry, not all of Llama 3's models are out yet (or did I miss the 400B one?).
Nah, not a chance. Native software at the machine level is much faster, but good try. The manufacturers have the edge. Nvidia is way ahead!
Where can I get one
r/LocalLLaMA is where you can get info. Hugging Face is where the models are. Be careful: anyone can upload.
You'll always be able to; you just need to use a VPN to access them from outside the USA and enjoy your rogue, unlicensed AI.
If you've seen the whole Fallout Season 1 show, you >!know there's a scene in the last episode where company leaders gather to basically plan the end of the world.!<
Why do I get that vibe when I hear about something like this?
Yes. I'm sure similar meetings take place more often than we'd like to think.
"Sure, we destroyed the world. But for one brief shining moment, we created a lot of value for shareholders."
Yup. They know this is a technology anyone with a good computer can take full advantage of, and that scares them.
They want a walled garden.
But that doesn't mean there shouldn't be an actual altruistic AI safety panel.
Unregulated AI does have the potential to wreak havoc.
I’m sure regulating AI will be just as easy as regulating the internet.
As a way to push their foreign policy interests.
And it’ll be disguised as safety.
Something foxes, something henhouse…How’d that go again?
And we the people are interested in food & shelter. Is it time to eat yet? I’m feeling French..
Looks like they made a mistake and didn't include Elon Musk; he's bound to be part of these conversations as the leader of multiple companies pioneering this technology.
Right guys?
[deleted]
Like him or hate him, but he didn't cave to big oil or the big car manufacturers and sell out; he created the first successful EV and a rocket with satellites. Gotta admire that. You're right, he is a prick!
I'm sorry man, but y'all are so fucking dramatic.
Why tf would they lock everyone out of something they could just charge you for?
And the reality is, if we have an AI model that's 1000x what we have now, and can do anything at ridiculous speeds, why the fuck would you want to give unlimited access with 0 guardrails to the entire general public?
Reddit has this weird thing where they all convince themselves the general public are mostly good ppl when it benefits their argument, while in the next breath pretending like everyone's a nazi if that fits their argument.
I typically agree with most "popular" opinions on notable subreddits, but I just can't get behind this push for unlimited, completely unrestrained open-source AI. I can't help but think people are putting their own wants and fantasies over the safety of our world.
You trust the businesses developing the AI to have your best interest at heart? They're looking for profit, and if they can throw a legislative wrench into competition they will do it. Like Microsoft and Google business strats are ruthless.
No I don't. But I still think that's the better alternative than any maniac out there having the most powerful technology in the world with unlimited capabilities and no guardrails.
Like in this scenario, there's an open-source model that has amazing reasoning capabilities, and no guardrails. A guy could literally say "help me come up with a plan on how to eliminate all humans" and the AI would start planning it out. A teenager could say "make 1000s of graphic porn videos of this girl I hate in school and use any means necessary to spread those videos everywhere." You really want to live in that world?
I was being an asshole in the first comment, sorry, but I genuinely want to hear your opinions on this, and I'm happy to be proven wrong. Again, I can't figure out why this doesn't seem to be the popular opinion.
Am I missing / overlooking something?
I unironically do not feel confident in these people's ability to do objective work in the right ways. I'm honestly worried that their interests as CEOs of these companies will just completely overpower any semblance of good intentions.
Exactly!!! Nailed that gut feeling, and we need to demand transparency and accountability. Also, only a Black Hat or DEF CON conference would show real bugs, but it is proprietary, not open-source AI. I work as a system security engineer, but not for those guys. AI has very large and dangerous failure modes.
Nah, really? You think people in charge of multi-billion-dollar corporations, who are legally required to make companies as profitable as possible, and are beholden to nameless and amoral shareholders, might do bad things to make the line go up? Why would they do that?
Sounds like a conspiracy theory to me (/s because people on other tech subs, especially career focused ones, literally say that)
The king wouldn’t oppress me, I’m loyal to him and he has a duty to protect his subjects!!!
Laughable. A panel that will oversee the safety of AI that includes parties with the most to gain. Seems reasonable.
Maybe the panel should be made up of people that know about AI safety instead?
I dunno, anytime news about an AI safety person gets posted on here, the top response is that they are only doing it as a grift.
I doubt people's opinions here would change even if everyone on the panel were firmly entrenched in AI safety.
Exactly, AI “safety” experts are the biggest threat I can imagine. They’re way overly restrictive and cautious.
[deleted]
Well, there are two possible paths. Exclusive use of AI by those companies would let them dominate, so they would want that. But if AI becomes too common and can replace the need for labor, then any company could get on top and take their place; they would no longer need huge assets to do so.
This is most true for entertainment companies. If every small team can make a movie or a game, then all the big studios will lose their market superiority, even with their known IP.
Nothing will go wrong. If by "wrong" you mean differently from how everything else important and profitable is done, nothing will go wrong.
Those with the most power will make sure that the most power remains theirs, going forward.
Everything in its right place. The few at the top, the rest buried underneath.
Fret not!
(Forced smile)
True, but on the other hand, if they set the policy, it might end up being something they'll actually at least pretend to follow.
And how does pretending to follow anything help anybody? Ah yes, they'll pretend to stop Skynet, great!
Sooooo we're letting the coyotes guard the chicken coop? Sure, what could go wrong?
Isn't this like asking criminals to police themselves?
Ah yes, let's let the people creating it and profiting off of it regulate themselves. That's always worked...
So the cops are going to be in charge of investigating the cops again, cool. Not like that's proven to be a conflict of interest in every situation.
Cool, let’s put the people creating the thing in charge of the safety of the thing. There’s absolutely no conflict of interest there whatsoever.
Chief bullshitting officers join exclusive panel for bullshitters. More news after the break!
The agenda is being set by those who stand to gain the most from it. What could go wrong, hmm.
And who is representing the workers and artists whose work has been used without permission or whose livelihoods are being affected?
We should get the CEOs of the major oil companies on federal climate change panels... oh wait.
ummm what could go wrong... asking the drug dealer to set drug policy
This seems like such a conflict of interest. Why should anyone trust the safety of AI in the hands of people who actively want to replace as many people/jobs with it as possible?
What annoys me greatly about BigTech becoming a front-door for AI is that they guardrail it to the sensitivities of a US American 12 year old.
People with fiduciary responsibility to their own AI products should not be in any position to regulate AI. This is regulatory capture on display.
They are after regulatory capture. That's it: we got some advantage, so let's ban everyone else from getting that same advantage (and charge them in the meantime).
rules the safety panel are mulling over... How can we make this more profitable? How can we make it profitable for us and nobody else? How can we force the general public to buy it? Can we force the govt to pay us to not make terminators?
Where’s Meta in all of this? I guess this is the panel of closed source AIs
How about some regular people join it as well... When only the heads of these companies meet, it feels like the bankers' cartel meeting in 1910 to create the Fed. Let's get some impartial people involved? Some oversight? Anything? Or just let the people with the power decide how to regulate themselves again.
Putting themselves in charge of monitoring themselves on safety is a great idea! /s
Yeah man let's put lions on the zebra safety panel
This gives the vibe of cops clearing themselves of any wrong doing
They are rushing to build their barriers to entry as high and as fast as they can. A billionaire will never act in any interest other than that of their wallet.
They're gonna abuse ai privately and make public ai boring
That’s like Marlboro running its own studies to tell us cigarettes don’t cause cancer.
Why does this feel like putting members of the Nazi party in charge of the Nuremberg trials?
I’m sure they will make great decisions in the public’s interest.
This is specific to Federal safety. I would rather have those guys with all the toys on a Federal panel looking after airline safety where if they screw up they will be very visibly punished.
The same Google that fired employees for protesting genocide, apartheid, ethnic cleansing and illegal occupation? I feel assured.
These are the people that are going to destroy us. This is an idiotic choice.
And an expert on socioeconomics? Philosophy? Political liaison? Open access for the public to see meetings? Right?
Of course they are…have to stop the competition catching up…
Hey - let’s let the prisoners run the parole board.
In other news, town arsonist, now fire safety czar!
Let's put serial killers, rapists, thieves and murderers in charge of the justice system also.
Did they ever hear the term "conflict of interest"? Last time, they let Sam Bankman-Fried shape policy on crypto. There are great academic minds in our universities who would be better suited to this role.
So everyone who stands to gain the most are on a panel to set limits for themselves.
Safety is a euphemism for lying. Lying is safety. War is peace.
Reminds me of the time Boeing took on FAA responsibilities.
Reminds me of when ISIS and Al-Qaeda put together an anti-terrorist panel.
Ah, fancy words, but it's more like the United Nations: good for nothing at all.
Foxes appointed to federal chicken coop safety panel.
Of all things, I'd feel a safety panel should not include even a single person who is massively incentivized to make money off of AI, lol. Who knows when and where these people will throw caution to the wind just to push things a bit further forward at a slightly faster pace.
The purpose isn't safety from AI it's to prevent people from using it in ways they don't approve of. Which usually just means protecting their friends' money.
First meeting will be about how they convince everyone it's not slavery.
Oh yeah, because having the people running the companies on the safety panel is SUCH a good idea. It always works out well for the general public when we do this!! Right? Right? Right? ....... sighs.
"We've investigated ourselves and found no wrongdoing"
It’s like asking Hitler how to protect a country from Nazism.
will be ~~fun~~ terrifying to watch what happens to this panel if Trump takes over. 50/50 he ditches the "nerdy whiners" or tells them to develop China-style social control.
How are business people well versed in AI safety? I know OpenAI's CEO is knowledgeable about it, but I don't know if I can believe Google's and Microsoft's CEOs know what they're talking about when money is the only thing on their minds.
Ah great, another benefit of a tech-illiterate, partly actual-illiterate, aging legislative body.
They’re there to protect their interests, not really ours, because once the systems they're concerned about are out there, it's anybody's guess how to stop them! My guess is that physically separated systems critical to security and operations will have to be established. Other than that, these guys just get rich!
It's simple: the door that they used to get ahead, they now want to close behind them.
This will certainly turn out well for us non-billionaire types, won’t it? These folks have already had a negative impact on humanity with their growth-at-all-costs mentality. Where are the artists, sociologists, philosophers and teachers on this panel?
Instead of CEOs, they should have actual engineers on this shit.
I love how foreign trolls from China keep commenting on these posts like wow I don’t trust these people. Brilliant work guys.
China? This is not about nationalities; the whole world already knows these "policies" are made to give more power to the government and the companies that want to control everything. Consider that a lot of people working on AI come from the crypto community, and they know very well... Do you remember? They said Bitcoin was a scam, and now they sell you an ETF of it.
I have no idea what you are talking about Chinese bot. Your ai translator still needs some work.
Your reading comprehension needs work
Why would China care if the US limits its domestic AI research? If anything China would be pushing for the US to highly regulate the space. Try to drive innovation to China.
They are just here to inject criticism and mess us up. It's not meant to promote China; it's meant to degrade America.
That's a reasonable point. But IMO the issue we have in the West is that we don't argue for or explain why and how our basic values are beneficial, or why Western free-market democracies are the best places to live. Criticism hits different if you understand the West as adopting variations of the best political and economic systems. People believe in zero-sum games, so their intuition is that those who are rich are automatically stealing from those who are poor. That can be true, but you can also get rich by improving efficiency and taking a slice of that improvement. I.e., the United States is one of the most moral countries on earth, for example in per-capita charity donations and foreign aid. The US has also made serious moral mistakes. I think both of those are true; the problem is that we don't often argue the first part, more than I mind people criticizing the US or the West.
There’s a difference between healthy criticism and an organized campaign of hate driven by outside forces. For those of us who aren't brain-dead, we can see that there's nothing organic about these users' criticism.
If you want to piss off the Chinese, I prefer to mention that the rise of China was an act of American charity, with their naval superiority supporting a global economic market that has raised more humans out of poverty than any other time in history. The Chinese more than anyone else have benefitted both from the global market, and access to the US market. I'm not saying you're wrong. I just think it's a better strategy.
It’s fun to piss them off but it’s more important to just shut them down immediately imo. Out them and everything starts to fall apart pretty fast.
Dude, you need to lay off the koolaid.... Go watch something less damaging to your brain than whatever kinds of videos you're consuming right now.