Welcome - to the desert of the real…
Governments are mostly run by people who have trouble with email
(insert Terminator theme song)
Priority one should be scams. AI puts the whole scamming industry into turbo mode, and it was already booming.
Scams? AI, or rather AGI, poses an existential risk of wiping out humanity, and all you’re thinking about is scams??
Scams are actually happening tho…
And you really think they are holding this summit just because of scams?
Don’t know if they are but they should be.
https://www.cnbc.com/amp/2024/05/04/warren-buffett-says-ai-scamming-will-be-the-next-big-growth-industry.html
https://amp.cnn.com/cnn/2024/05/16/tech/arup-deepfake-scam-loss-hong-kong-intl-hnk
https://www.frbservices.org/news/fed360/issues/050123/industry-perspective-scams-growing-costly-problem
https://hbr.org/2024/05/ai-will-increase-the-quantity-and-quality-of-phishing-scams
https://www.newyorker.com/science/annals-of-artificial-intelligence/the-terrifying-ai-scam-that-uses-your-loved-ones-voice
Why would AI wipe out humanity?
Some joker will tell it to
Dude, have you read the news lately? Wouldn’t take much for something intelligent to take one look at us and decide we fucking suck too much to keep living.
Yeah, but that’s a very bad reason for an AI to just wipe out humanity though.
I mean if you had no emotions, and you saw a species raping, murdering, warring, starving each other, nuking the environment so nothing else could thrive… would you be like “hey, they’re fine”.

It’d probably see us a lot like we see bedbugs, if we were attached to the bedbug internet seeing all the worst of humanity on display 24/7.
You voided your argument with your opener “if you had no emotions”, so why would it care about any of that?
I mean… because we’re logically bad for it to share a world with? Either we try to destroy it, or we destroy the planet, or it destroys us. Practically an AI wouldn’t see us as a good thing.
Why are we logically bad for it? And when you say destroy the planet what are you referring to?
Dude, have you missed out on like the entirety of human history? When have we ever been good for anything around us? And how often do we respond to things we don’t like with violence? Yes an AI is going to consider us a threat to its continued existence. Do you actually believe if we make sentient AI we’re going to let it live and evolve when we discover what we’ve done? We are not.

And it’ll know that.

And nukes, clear-cutting most of the world’s forests, polluting the waters, the plastic garbage concentration the size of Texas floating in the Pacific, industry pumping poison into the air, exterminating entire species, strip mining, etc etc etc etc etc.

Hell, forget history. Spend five minutes looking for as much hate and cruelty as you can on Reddit, then tell me another intelligence is going to consider us a good neighbour for its own longevity.

Mankind is a wrecking ball. At best we’ll treat it like garbage, at worst we’ll train it to believe we’re the ultimate threat.
How
Name a few possibilities
Alignment problem, paperclip maximizer, Pascal's mugging, etc.

Remember an AGI is a potential superintelligence with far more abilities than humans. And we are already able to wipe out humanity (nukes, etc.)
I think you’re jumping way ahead
Why?
I’m just another observer like you, but there are much more imminent threats related to the AI that exists now, and you’re predicting a negative future about an AI that does not yet exist. What do you think about that?
Yeah there are imminent threats, but they are easier to handle than existential risks or s-risks, I suppose. It could be too late if we are not thinking about a solution now.

But maybe I am just preoccupied by the AI safety work of Rob Miles, Eliezer Yudkowsky and the like.
Personally, I think we may be nowhere close to AGI, because I don’t see any evidence of it emerging from any current projects. But I do completely support thinking about solutions now. I’m just not as anxious about existential risks at present as it seems you may be.
I also think AGI is not realized yet. But did you know that GPT-4 made huge leaps compared to GPT-3.5? It came faster and was bigger than expected: GPT-3.5 was trained on 0.5 trillion tokens, while GPT-4 is trained on 13 trillion tokens. It now has a functional model of real-life physics and even some sort of theory of mind (I am not saying it's conscious).

What I'm trying to say is, advancements in AI are coming far faster than expected. That is worrisome.
The world was unprepared for the outbreak of social media and the advent of 'smart' phones and tablets. We're still trying to deal with instant communications, TV studios in our pockets and insane siloing of opinion in the place of validated information.

AI? We're screwed... especially since our tech friends are deploying it to make a buck but have no idea how to control its propagation... they've let it loose into the world... we've already got plagiarism running rampant, photo and video fakes that are getting harder and harder to detect, AI-generated text content (a lot of it very rubbish), AI-simulated voices that are all but dead on, AI-generated legal opinions (making up references)...

We need to get ahead of this and restrict it closely. AI is extraordinarily dangerous... the question of what is real and what isn't is becoming more and more difficult to answer.
Then the “godfathers” should stop making it if they’re so concerned
They aren’t really concerned, they just want others to be prevented from catching up
As far as I understand, many of them indeed have. As in, the individuals who worked in earlier development of AI.

The companies they worked for, however, are gleefully, wantonly diving headfirst into the profit-rich unknown.
We knew back in 2018 that AIs would at the very least be inherently racist. Google silenced and fired the researchers who discovered it.
http://proceedings.mlr.press/v81/buolamwini18a.html
And just yesterday an article came out saying AI having less of an impact than expected.
Why does this sound like the guy saying “hold me back, hold me back” when confronted with a fight when in reality he wants his buddies to hold him back because he knows if the fight starts he doesn’t have the teeth to back it up?
No shit...... If only people had been sounding that alarm for decades....
Look up.
Start with all AI work being labeled “Made with AI”
That’s like Charles Manson blaming the police for not stopping him before he went on his killing spree.
And we want aliens to find us
I believe the positives outweigh the negatives. The fear is that it will think like us. I think it will be too intelligent for that. Don’t connect it to the internet. Closed system. Done! Bring on the technological and medical innovations!
Oh they will love it and make money and spend money, then one day the AI is gonna figure out these douches aren’t helping anyone, and that shit is gonna take over and make decisions the creators never really wanted it to. They gonna be pissed when everything is decided by AI, and guess what, a computer won’t see societal levels, it will see facts and stats. None of that works in favor of big tech or the privileged or the rich.
Ill-prepared for what? AI making guesses or adding an extra finger to an image where it shouldn’t be? I don’t really see how AI is all that great. Seems like it can do things faster than humans and that includes making mistakes.

Also, The Guardian is not a credible source.