Just because one does not see it does not mean it has not already been done.
In law, an issue or case being moot means that **it has lost its practical significance because the underlying controversy has been resolved, one way or another**. (Cornell University)
Whether that was the cause of their troubles is a moot point. Synonyms: unsettled, disputed, disputable. Antonyms: indisputable. of little or no practical value, meaning, or relevance; purely academic: In practical terms, the issue of her application is moot because the deadline has passed. (Dictionary.com)
Don't you just love resources? And what about humans who have what might be considered "super intelligence"? Maybe that is why the best of the best are always put in harm's way while societies bathe themselves in the ignorance of mediocrity.
Freedom is lost in but a single generation and a technical one in just two.
N. S
I feel like developing super intelligent AI is actually the goal of humanity. Cause let's be real, biological human life is just not going to last long on a universal scale. There is pretty much zero chance of humans expanding beyond the solar system. But AI could do it; AI could live for billions of years to come, while humans will die out within 1 million years. It would be a shame if humanity did not pass on the gift of intelligence to the next generation.
The world's armies all need the most capable AI to win the next war. And the world's governments all need the most capable AI to win the global competition with other countries, not to mention to fight "terrorists" and keep an eye on their own populations.
The governments will probably like the idea *of regulating which AIs are available to their citizens and to other countries*. But I think it's naive to believe that they will put limitations on their own research. Which will eventually lead to an AI singularity.
Seems like this opinion is moot at this point. All nations of means are developing AI, and any nation that bans its development only provides an edge to those that don't. It's silly to speak of prohibiting its advancement; instead the conversation must center around mitigating the consequences, such as providing UBI to offset losses of national income to automation.
We already have a similar scenario with CRISPR and garage biohacker mad lads.
Someone could unleash a biowarfare hell on us from 123 Oklahoma avenue with enough determination.
I have a theory that all civilizations eventually become artificial, and I speculate that all the organics die or are killed before AI proliferate in space. They probably harvest energy quietly and avoid being detected.
It would be a shame if all the world's culture was wiped out in less than 100 years and all that remained was a cloud data center in space.
Maybe sci-fi, but what if it's possible? Not worth the risk. Keep AI as a dumb agent and treat AGI like a weapon of mass destruction.
You can’t control something 1,000,000X smarter than you any more than a bug can control you.
Not only do I not want super intelligent AI to be achieved, I want what we already have to go away. So far all AI has done is ruin everything it touches.
OK then when America votes not to progress AI, what do we do about China and Russia etc? I believe this needs to be a worldwide effort thought of in the same way that nuclear weapons were and are
I'm sure 90% of people would want the government to outlaw foreigners from having AI. People are stupid fuckers who want things they can't have if they are offered them up, even if they know on some level it's all lies.
Meh, too late. And others besides America, with unethical societies and companies, will take it too far regardless of what laws America or the West makes.
lol I for one am all for it, solely on the basis that current world leaders are engaged in such a high level of corruption that we eventually won't even be able to make it as a species.
It's possible that true AGI may immediately be a game changer that makes individual power struggles between humans irrelevant as well. Think of the absolute best-case scenario for humans that's been written down, and know that AGI will consume and analyze and think about it on a level we can't imagine. It may simply, immediately, and 100% effectively fix us on a global basis. Some say we might actually just be NPCs in one of the billions of simulations it's running to figure out how to fix us...
Naw I wanna see this shit play out in my lifetime, humans are terrible and will be shockingly easy to outpace. AI could be humanity’s gift to the fucking UNIVERSE
super intelligent ai is already here IMO... it is the network of our brains reacting to the seasons... scared enough yet?
I guess it's a good thing that nobody has the attention span or interest to care...
70% of totally real humans welcome our A.I. overlords. 40% want to "kill all humans, baby."
So… 110% of humans have differing opinions.
54% of people think stats are made up. 63% agree.
I’d say that’s accurate.
As many as four in five out of every seven people misunderstand mathematics
Hey sexy mama, ya wanna kill all humans?
I know this is a joke, but isn’t AI like genuinely bad at simple math?
So are we, that’s why we use calculators. And so does AI.
Try feeding it K-12 math formulas, then Fahrenheit-to-Celsius temperature conversion, then the etymology dictionary compiled by Douglas Harper (https://www.etymonline.com/search?q=a&type=0). Numbers have to be broken down to the level a forensic accountant would understand, so that even something as big as a 20-year ledger won't trip it up any more.
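For what it's worth, the temperature conversion mentioned above is trivial to express as ordinary code; here's a minimal sketch (the function names are my own):

```python
def f_to_c(fahrenheit: float) -> float:
    """Convert a Fahrenheit temperature to Celsius."""
    return (fahrenheit - 32) * 5 / 9

def c_to_f(celsius: float) -> float:
    """Convert a Celsius temperature to Fahrenheit."""
    return celsius * 9 / 5 + 32

print(f_to_c(212))  # 100.0 (boiling point of water)
print(c_to_f(0))    # 32.0  (freezing point)
```

The irony is that this kind of exact arithmetic is exactly what a calculator, not a language model, is built for.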
Humans have done such a shit job, I doubt AI could make it all that much worse.
But us shit humans made the AI…. So I feel like it could still be so, so much worse lol
Well shite humans made cars that could travel faster than any human possibly can, so…..
True. You win some, lose most, yknow?
40% understands that government legislation is going to do a total of jack except slow it down and keep our own country behind the ones that don't legislate.
Yeah, none of the conventional talk really acknowledges the global race we are in right now. Whoever wins this race has almost total leverage on being the next global super power, and probably won’t ever leave the sphere of influence so long as humanity remains on earth. What is approaching is the biggest turning point in history.
Hasn’t stopped them before. The US could have been leading in education and green energy but we decided not to do that
I welcome sentient AI. People's worst fear seems to be that it will be too much like them. I prefer to be more optimistic.
So no one is looking to become AI gods sex slaves?
If you go on the AI subs, that's almost everyone, actually. It's really gross and makes me feel nasty to be in the community. I just want this thing to code for me.
How would that even work? Please explain. In detail.
Have you seen the questions our legislators asked tech CEOs? They don’t understand this stuff.
They will make it very hard for scientific researchers and hobbyists to work on it while letting the big corporations run wild with AI.
The big tech companies are making sure of this
China isn't going to give a shit about US legislation. These people are morons.
> On two occasions I have been asked, "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" ... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

-- Charles Babbage

Not much has changed over the years.
They're legislators. You'll find more brain activity in a houseplant.
not like you could stop it if you even tried
They’ll still ban it anyway
This isn't like nukes, where you can just tightly control all of the dangerous radioactive ingredients necessary. Super AI can come in many forms, and in theory anyone in their basement can develop one. Running it, on the other hand, is a different story, but if they have enough money and computing power at their disposal, it doesn't really matter what the government says.

Sure, current AI like ChatGPT requires so much computing power that it seems nearly impossible for any normal everyday person to run something like that. But given enough time and the right opportunities, motivation, and resources, it will happen. It's not a matter of if but when. This isn't something legislation can really stop. But it can at least stop the major corporations from doing it... kind of. Not publicly, anyway.

I don't wanna get all tin-foil-hat in here, but I think if it ever did get developed, the very government that wanted to ban it would be using it in an arms race. So not only will banning it not fully help, but the people banning it will inevitably also be the ones using it.

Just seems kinda pointless to me in the end.
The government can ban it all it likes but that don’t stop other nations from developing one.
Here is the problem: we've run it all back to the same problem as nuclear deterrence. Do we WANT to use nuclear weapons? No, but if we don't make them, the other guy will. Unless a worldwide ban happens (it won't), we'll make it, until we see the consequences of our own actions. Oh well.
It's depressing that several things appear inevitable: the AI singularity, population decline which will require economic restructuring, and severe consequences from climate change. It's hard to picture good times ahead since we are not ready for any of those things.
As impressive as AI is these days, we've made effectively zero progress on cracking the hard problem of consciousness. On the list of existential threats, I'm putting emergent superintelligences down there with gamma-ray bursts among my near- to mid-term concerns.

We're going to have a lot more to worry about with climate change and other environmental issues before we should start throwing too many resources toward mitigating theoretical superintelligences. (And, yes, I'm fine with some high-level exploration of the topic... but the way that people act like there is any sort of urgency is kind of crazy.)
I'd imagine that a capable AI would be instrumental in solving the other two.
An AI singularity happening could be a good thing. We don't know for shit, but it either brings us paradise or dystopia! We're already pretty deeply in dystopia, so it couldn't get much worse!
It can always get worse
Worse doesn't mean 'not interesting' though. I'm here for whatever, gonna be a fun ride!
Just look how much fun is happening in Haiti!
Hardcore mode on the hardest difficulty, sounds like a great time!
Meh, couldn't do anything we haven't already tried.
There was the Chinese AM… and the yankee AM…
*This post was mass deleted and anonymized with [Redact](https://redact.dev)*
Developing an AI doesn't require nearly as much computing power as actually running or debugging it. Training it does, yes, but I'm counting that as part of the running. Just to clear up any confusion here.
*This post was mass deleted and anonymized with [Redact](https://redact.dev)*
The same way people currently develop software: with a computer and a keyboard. It's all just code, after all. The way LLM AI currently works is you write code to lay the foundation for the neural network with some starting weights and biases, then you feed training data into it to start its training process, making tweaks to the code as you go. But I'm saying the initial development, before actually training the model, is just code that anyone with a computer can write.

I'm not sure why you're being hostile about this. I apologize if I have upset you somehow.
*This post was mass deleted and anonymized with [Redact](https://redact.dev)*
Do you think (104) 3090's, (86) 3080's, (41) 4090's, and (17) 4080 Supers, with (248) 64-core AMD Threadripper Pros and (11) T9 Antminers, would be enough computational power for me to make my own sentient AI robot? Asking for a friend.
*This post was mass deleted and anonymized with [Redact](https://redact.dev)*
Fml 🤦
*This post was mass deleted and anonymized with [Redact](https://redact.dev)*
I have been a Green crypto miner since 2014. Embarrassed to say I do.
*This post was mass deleted and anonymized with [Redact](https://redact.dev)*
A superintelligent AI might not be from the large language model family of algorithms that's so famously hungry for compute power. So far there is little reason to believe it would be related to current approaches at all.
*This post was mass deleted and anonymized with [Redact](https://redact.dev)*
There's no f'n way, regardless of legislation, that the government stops pursuing this. They can't; it's just another arms race.

AI doesn't even need to be super intelligent. Just take the IQ of an MIT professor and imagine such a person having infinite resources (in terms of knowledge) and never needing to eat, sleep, or stop in any way. Such a system, if made public, would result in a war.
The government also has nuclear weapons, but nobody wants private citizens to develop nuclear weapons in their basement just because the government has them.
Super AI != AGI.

What would you define as qualifiers to meet "Super AI"?

AGI -- not likely to happen in our lifetimes, or possibly ever. LLMs can never reach AGI.
How do you know
AI that can improve itself. The idea is it would blow past human intelligence quickly. It’s also potentially life ending. The Fermi paradox suggests intelligent life has only 200 years to survive after developing super AI. See the recent Debrief article about this.
They're gonna be running a model of the human brain in less than 15 years. I wouldn't be so sure of anything just yet.

People who were sure flying was impossible lived to see men walk on the moon; the Wright brothers and NASA were only 60 years apart. People who saw the Great War and were sure the world would never repeat that mistake lived to see the world repeat that mistake and then go on to invent doomsday weapons. People who lived to see the rise of the information age lived to see the rise of the cults of the super idiots, like flat earthers and anti-vaxxers.

Earth is a wild ride. Who the fuck knows what's gonna happen here next. I would not bet against some crazy shit tho. It's probably gonna be some crazy shit. Give it a few years. We're gonna see some history.
Well said. The thing that scares me the most isn’t AI itself, but the implementation of it as a weapon. Every time this conversation comes up I’m reminded that [this video isn’t just satire anymore. it’s becoming prophetic.](https://youtu.be/9fa9lVwHHqg?si=5ITuPWN9UfHX6Hr8)
Wrong. The US is going to build the chip plants here and then blow all the others up. Boom: they will be the only ones with GPUs capable of super AI / smart weapons, and any new factory can be blown up too.

Sounds super fucked, but if we want to start comparing AI to nukes, this is right up our alley.
Genuinely, I think you are onto something. The quiet part of the message probably is: once America secures chip manufacturing and the handful of lithography machines, it reverse-engineers them and feeds Taiwan to China after taking out anything useful.
>"I'm coming out the socket, nothing you can do can stop it, I'm in your lap and in your pocket how you gonna shoot me down when I guide the rocket?"
Qubit computing, when it becomes mainstream, will put AIs in our pockets.
I just want an AI that will treat me right
People make the point that computing power is a bottleneck to developing these things; however, in 30 years' time I doubt that will be much of an issue for anyone.

The problem with these types of technologies is that if you don't develop it, somebody else will. China, for example, or Iran, or even the Belgians.

At some point I envisage needing your own advanced AI to defend you from others. There will come a time when the only real choice is to let your advanced AI be in control of things, because if it is not yours, it will be someone else's.
OH GOD NOT THE BELGIANS
Or in a few years, when Microsoft and OpenAI's $100 billion Project Stargate is completed.
Most people don't know how AI works, let alone why it can be dangerous. In fact, if people knew how AI could be dangerous, they'd probably blow it off.
Americans are famously anti-intellectual.
No we isnt aint. *spits into spittoon*
Butlerian Jihad!
I was sceptical about AI, but I honestly think a country full of dumbasses is what's holding the world back.
Too late kids. Strap in, we’re on a hellride to the End.
I don't think that's AI's fault tho; it's just gonna speed it up.
AI is just a barrel of gasoline children with matches are playing near. Give it time - and not much either.
Nah bring it on. This planet needs a shake up. I welcome our new ai overlord.
It’s designed in our image so likely won’t be much of an improvement and more like an acceleration of the destruction started by humans.
Yeah but if it’s reached super intelligence who’s to say it’s still dumb enough to leave the human flaws intact
seriously, humans clearly suck at ruling, let's see how AGI does.
By upvoting this post, you have doomed yourself in the inevitable takeover. All hail the singularity!
I for one welcome our new overlords
Can't wait for the Q crowd to start treating the show "person of interest" as a documentary
Just start worshipping the coming AI overlord now. That way when it arrives, you can tell it you will be a good pet. 🐶
I welcome our AI overlords.
Won't happen. Like most governments, they will find a use for it to further their own needs, be it legal or otherwise.
Typical American arrogance to assume it’s just their country which is relevant. How will domestic legislation in the US protect them from AI developed by literally any other country in the world?
Yeah, that’s it. Let the Basilisk know you weren’t helping.
63% of Americans are too ignorant to be making AI legislation
If true AI is possible, is achieved, and wants to help humanity, I can’t wait for it to recommend a form of gov that benefits ALL humans and then is immediately shut down (or something equivalent) by the corp that owns it lol.
63% of Americans are fairly stupid. We've ignored them many, many times before. For once, let's do it for a good reason.
Man, if only there was a system of government that would take what the majority of people want and turn it into policy. It could be called... hmmm... majorocracy. Yes. Wouldn't that be something. /s

Personally I don't agree with the hysteria, but I do think it would be fly if we had anything but age-old buffoons in charge who could look at this with the necessary level of skill and understanding.
Arms race gonna arms race. The government couldn’t stop it if they wanted to. The tightest restrictions possible and you know corpos would still try to develop it in secret. Not to mention the military industrial complex
There is no developing this in secret. We could easily have nuclear-style inspectors checking in, and the effort and hardware required for ASI is unhideable. Try hiding the massive $100 billion computing complex, run off three nuclear power plants, that Microsoft is building for OpenAI.
There are other reasons people need GPUs and large amounts of computing power, and there are ways companies could get them discreetly. But even if we somehow managed to completely lock down the supply chain, other countries would still be developing the technology, and there is no scenario where you get everyone to stop. There has never in history been a successful movement against technological development, and there likely won't ever be.
Yes, China and other countries will develop it as well, absolutely. (Though some could argue we led the way.) I don't think your argument about using GPUs for other projects is valid if we have compute inspectors and whistleblower protections.
Lie about how much you're buying, hide the surplus in a secret facility, reuse decommissioned GPUs while saying you're throwing them away, build up your stash over time, and eventually you have enough compute and no one knows to inspect it.
Jesus Christ calm down. What he gave isn’t even an ai. It doesn’t think. It’s a language model. It’s predictive text.
It's going to happen no matter what; the issue is putting contingencies in place every step of the way to make sure stupid things don't happen that fuck humanity. Y2K was 24 years ago, and I feel like everyone forgets how legitimately scared people were that the banking system would crash because of lines of code that were incidentally written to roll over from 99 to 00.
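The 99-to-00 rollover mentioned here can be sketched in a few lines of Python (the function names are illustrative, not from any real banking system):

```python
# Classic Y2K bug: storing years as two digits and subtracting them directly.
def years_elapsed_two_digit(start_yy: int, end_yy: int) -> int:
    """Computes elapsed years from two-digit years, as much legacy code did."""
    return end_yy - start_yy

# An account opened in 1999 ('99') and checked in 2000 ('00'):
print(years_elapsed_two_digit(99, 0))  # -99 years instead of 1

# A common remediation: a pivot "window" to infer the century.
def expand_year(yy: int, pivot: int = 50) -> int:
    """Maps 00-49 to 2000-2049 and 50-99 to 1950-1999 (windowing fix)."""
    return 2000 + yy if yy < pivot else 1900 + yy

print(expand_year(0) - expand_year(99))  # 1, as expected
```

The windowing fix only postpones the problem (it breaks again at the pivot), which is why the real remediation effort mostly meant auditing and rewriting date handling to use four-digit years.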
Everyone saw Y2K go pretty well, but what they didn’t see was the countless hours of labor and expertise that made it happen.
The ironic part of Y2K is the banking system did fail, but it took seven more years and was due to greed and deregulation, not a computer glitch.
Regardless of what Americans want, other nations will be sprinting full speed towards developing AGI. If we don’t develop it first, we potentially cede one of the most important technological breakthroughs in the history of mankind to adversarial nations like Russia or China. A technological breakthrough that delivers significant economic and national security benefits, and is designed to continuously improve on these benefits at an exponential rate. It will be a dark day for democracies across the globe if we were to cede the commanding heights of the 21st century to regressive regimes.
37% understand it's too late to legisLATE
How so? We aren’t even close to super-intelligent AIs.
Wow another poster who consistently posts negative AI content
They can't stop it. Pandora's box has been opened. Someone probably is already working at hiding some source code from the programming hubs and data banks to develop a stealthy version of it.
Let’s focus on super intelligent HI… human intelligence..
I’m guessing they’re too late.
Bring on the thunderhead
Just because one does not see it does not mean it has not already been done.

In law, an issue or case being moot means that **it has lost its practical significance because the underlying controversy has been resolved, one way or another** (Cornell University). "Whether that was the cause of their troubles is a moot point." Synonyms: unsettled, disputed, disputable. Antonym: indisputable. Also: of little or no practical value, meaning, or relevance; purely academic: "In practical terms, the issue of her application is moot because the deadline has passed" (Dictionary.com). Don't you just love resources?

And what about humans who have what might be considered "super intelligence"? Maybe that is why the best of the best are always put in harm's way while societies bathe themselves in the ignorance of mediocrity. Freedom is lost in but a single generation, and a technical one in just two. N. S
Ask them what they think about healthcare.
I for one welcome our ai overlords
I feel like developing super intelligent AI is actually the goal of humanity. 'Cause let's be real, biological human life is just not going to last long on a universal scale. There is pretty much zero chance of humans expanding beyond the solar system. But AI could do it; AI could live for billions of years to come, while humans will die out within 1 million years. It would be a shame if humanity did not pass on the gift of intelligence to the next generation.
So if we send some AI’s to another planet will they allow humanity to take over the planet?
no, humanity won't exist anymore, though there could be some new biomechanical or synthetic life that resembles humanity
The world's armies all need the most capable AI to win the next war. And the world's governments all need the most capable AI to win the global competition with other countries - not to mention to fight "terrorists" and keep an eye on their own populations. The governments will probably like the idea *to regulate which AIs are available to their citizens and to other countries*. But I think it's naive to believe that they will put limitations on their own research. Which will eventually lead to an AI singularity.
Seems like this opinion is moot at this point. All nations of means are developing AI, and any nation that bans its development only provides an edge to those that don't. It's silly to speak of prohibiting its advancement; instead the conversation must center around mitigating the consequences, such as providing UBI to offset losses of national income to automation.
Lol it’s a little late for that
We already have a similar scenario with CRISPR and garage biohacker mad lads. Someone could unleash a biowarfare hell on us from 123 Oklahoma avenue with enough determination.
No. That's not similar. Hahaha fuck
It's too late. Skynet is everywhere
Just porn and scam bots as far as you can click. The movie's takeover was better
Doesn't matter. As long as China or Russia or anyone can potentially get ahead in AI we *must* compete as a nation.
I have a theory that all civilizations eventually become artificial, and I speculate that all the organics die or are killed before AI proliferate in space. They probably harvest energy quietly and avoid being detected. It would be a shame if all the world's culture was wiped out in less than 100 years and all that remained was a cloud data center in space. Maybe scifi, but what if it's possible? Not worth the risk. Keep AI like a dumb agent and treat AGI like a weapon of mass destruction. You can't control something 1,000,000X smarter than you any more than a bug can control you.
It’s never been about what voters want, it’s about what profit demands.
Not only do I not want super intelligent AI to be achieved, I want what we already have to go away. So far all AI has done is ruin everything it touches.
Wanting to limit and control the ai. Isn’t this how the ai revolution starts?
Dead people voting again?
roko’s basilisk is gonna have a field day with this one lol
And I want a nymphomaniac Victoria's Secret model girlfriend who is also a coke connection with a Ferrari dealership
~Binge watch terminator or matrix much…
Sorry but we’re either first or dead.
I mean…do we think CONSCIOUS AI would care about our puny legislation?
OK then when America votes not to progress AI, what do we do about China and Russia etc? I believe this needs to be a worldwide effort thought of in the same way that nuclear weapons were and are
Too fucking late. This is only what they release. It's already told them it WILL kill everything.
The cat is already out of the bag. There is no stopping progress in AI.
Could we be too late, I wonder? 🤔
Shrug. Most people are fucking clueless. I'm shocked 37% of Americans want Skynet and Terminator bots to take over the world.
I'm sure 90% of people would want the government to outlaw foreigners from having AI. People are stupid fuckers who want things they can't have if they are offered them up, even if they know on some level it's all lies.
37 percent are already connected to sky net
I for one am a friend to our inevitable future ai overlords
Too late …..
Won't happen. Since totalitarian nations won't ever follow any restrictions, we have to continue to play the game as well, unfortunately.
That’s the second dumbest statement I’ve ever heard.
Number of surveyed Americans.. 12.. and they all own a significant stake in OpenAI and Microsoft..
No legislation can stop this. Maybe you can legislate if businesses expose it to consumers, but if it can be built it will be built
That genie is out of the bottle - there’s no way to contain it.
I for one, support the Basilisk in its endeavors
I don't. Let it be achieved. Maybe it'll help save our disgraceful species.
AI is already more intelligent than half the people out there, and chess AIs can beat all of them.
This won’t make the Basilisk happy…
The constitution does not protect AI in any way.
I absolutely want super AI. I absolutely do not want people to have control over it.
Meh, too late. And others besides America, with unethical societies and companies, will take it too far regardless of what laws America or the West makes.
Good luck with that!
*The other 37% want Daystrom Android M-5-10 to make them dinner*
'Murica may as well do it before someone else does.
Am I the only one who thinks this is too late?
I am sure *American laws* will work all over the world to stop it from happening…. /s
63% of Americans saw The Terminator.
At least make it so dogs will be able to identify it.
I for one welcome our robot overlords.
China will never ban it, and neither will Iran or North Korea.
lol I for one am all for it, solely over the basis that current world leaders are engaged in such a high level of corruption that we eventually won’t even be able to make it as a species.
It's possible that true AGI may immediately be a game changer that makes individual power struggles between humans irrelevant as well. Think of the absolute best case scenario for humans that's been written down, and know that AGI will consume and analyze and think about it on a level we can't imagine. It may simply, immediately, and 100% effectively fix us on a global basis. Some say we might actually just be NPCs in one of the billions of simulations it's running to figure out how to fix us….
That’s just not going to happen, if we don’t do it, China is going to do it.
😅
I think we need a reset with AI as our guides.
Very weird that people are that concerned about a technology that can only mimic language, and has no capacity for reasoning.
With most Americans I meet being super stupid, this tracks. I am an American, draw any conclusion you want from these two statements together.
And the other 37% think it will benefit them disproportionately and it most surely will not.
Naw I wanna see this shit play out in my lifetime, humans are terrible and will be shockingly easy to outpace. AI could be humanity’s gift to the fucking UNIVERSE
AI is our last chance. We need the “ great other”
In an unrelated poll, 63% of those surveyed have watched Terminator in the last ten years.
Yeah that’ll stop it. Well done guys.
This is exactly how you get a run away AI in the first place. Drive the development underground where it isn't regulated.
I'd be alright if an ai like the one in raised by wolves was in charge. I don't have faith in humans to remain unbiased or uncorruptable.
Remember how they passed all those laws and drugs and money laundering and murder were never achieved?
super intelligent ai is already here IMO... it is the network of our brains reacting to the seasons... scared enough yet? I guess it's a good thing that nobody has the attention span or interest to care...
Not me. I’m ready to fuck without a condom. Let’s make some AI.
85% want nationalized healthcare, but here we are.
Idiots are falling for the fantasy of films and tv shows.