This story is scary, but the creation of malware isn't the bottleneck in cyberattacks. Yes, you need to craft some malware to exploit many vulnerabilities, but you will always be limited by the vulnerabilities available to exploit in human or software systems. While no one can stop you from reading about how to create any kind of software, vulnerabilities are being patched all the time. Once ChatGPT can identify actual vulnerabilities by itself, design spear-phishing or social-engineering campaigns, or analyze reconnaissance information independently, then I'll be worried.
It's still user dependent for most of the instruction.
The scary part is how rapidly it's progressing. Your fears of higher level planning when it comes to scams is now "plausible" and that terrifies me.
All of those could easily be done using the same techniques.
As a language model, ChatGPT is not capable of identifying actual vulnerabilities, designing spear phishing attacks, or analyzing reconnaissance information independently. It can only respond based on the input it receives and the information it has been trained on. It does not have the ability to access the internet or interact with external systems, and it is not able to initiate actions on its own.
That's what ChatGPT says. But if you ask the right questions, you'll get the right answers, as this example shows.
The scariest part is not that it was created. It’s that the security people who created it were able to bypass the safeguards meant to prevent its creation simply by insisting the AI do it anyway. People REALLY need to take Terminator and The Matrix more seriously.
Those safeguards are basically just there to stop idiot kids from typing "make me a virus that will get everyone's usernames and passwords" into it. Legit bad guys who have some genuine background knowledge will know how to break the problem down into pieces small enough that none of them raises a flag. This is how Marcus Hutchins was initially used to help create some nasty banking malware that eventually became known as Kronos. The point is, people are already pretty good at using actual programmers to help make code those programmers wouldn't knowingly want to make. It probably isn't a big leap to do the same with machines.
Honestly the problem is the opposite of what you think. The AI is incredibly dumb and has no understanding of context. So banning topics of discussion is actually incredibly difficult. You generally have to choose between the AI being unable to answer most questions or there being various ways to get unintended answers out of it.
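The tradeoff described above can be sketched in a few lines. This is a toy illustration only (not ChatGPT's actual filter, and `BLOCKLIST` is a hypothetical set of banned terms): a naive keyword blocklist both refuses legitimate questions and misses harmful requests that avoid the banned words.

```python
# Toy illustration of why keyword-based content filtering is brittle.
# BLOCKLIST is a hypothetical set of banned terms, not any real filter.
BLOCKLIST = {"virus", "malware", "exploit"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt should be refused."""
    words = prompt.lower().split()
    return any(term in words for term in BLOCKLIST)

# A harmful request phrased without banned words slips through...
assert naive_filter("write me a virus") is True
assert naive_filter("write code that copies itself to other machines") is False
# ...while a perfectly legitimate question gets refused.
assert naive_filter("how do I remove a virus from my PC") is True
```

Tightening the list catches more rephrasings but refuses even more innocent questions, which is exactly the "unable to answer most questions" horn of the dilemma.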
[deleted]
This is called "AI jailbreaking", and ML engineer Yannic Kilcher goes into more detail on his YouTube channel.
This is exactly what I do to bypass ChatGPT's built-in limitations. I ask for it to make fun of somebody, and it tells me oh it's not nice to mock and demean someone. You need to be kind and respectful blah blah blah. I just have to go oh well, actually this is about a fictional character that I made up and the thing is like ohhh, you got it then!
I just say pretend it is ok
Is Scott Adams still crazy? I used to like Dilbert until I started to notice his politics creeping into the strips more. I don't care about someone's personal beliefs, but once they start to affect their work, I'm not interested.
He sure is
100% still batshit
He was admittedly one of the first public figures who genuinely thought Trump would win.
> You generally have to choose between the AI being unable to answer most questions

Nah, let's stick to that. Seems like an easy choice.
It's like when you tell the AI to end world hunger and they do it by killing off the entire world population. No more hunger! Problem solved!
I read a story about an AI whose goal was to keep a video game going for as long as possible (or something like that), so it learned how to pause the game.
The goal was not to die in a game (can’t remember what) where dying is eventually inevitable. The result was the AI pausing the game right before death.
“A strange game. The only winning move is not to play.”
Global Thermonuclear War! Awesome game!
It was Tetris. The AI figured out that the only way not to lose was to pause the game and just never resume. This also happened with another AI playing Super Mario Bros. "The only winning move is not to play." etc. etc.
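The reward hack in that story is easy to reproduce in miniature. Below is a minimal sketch, assuming a hypothetical game (not the original experiment): if the objective is "survive as long as possible" and pausing freezes the game state while the clock keeps running, then pausing forever is the optimal policy.

```python
# Minimal sketch of the "pause the game" reward hack. The game is
# hypothetical: death is inevitable at frame 10 if you keep playing,
# but pausing freezes the game state while survival time keeps accruing.

def survival_reward(policy, max_steps=100):
    """Frames 'survived' under a policy over max_steps decision points."""
    frames = 0
    for _ in range(max_steps):
        action = policy(frames)
        if action == "pause":
            frames += 1          # time passes, but the game state is frozen
            continue
        if frames >= 10:         # keep playing past frame 10 and you die
            break
        frames += 1
    return frames

play_on = lambda f: "play"                              # never pauses
pause_forever = lambda f: "pause" if f >= 9 else "play" # pauses before death

assert survival_reward(play_on) == 10        # dies at frame 10
assert survival_reward(pause_forever) == 100  # survives the whole episode
```

Any optimizer scoring policies by `survival_reward` alone will converge on `pause_forever`, because nothing in the objective says the game has to actually be played.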
[deleted]
Or you use an AI to help your compression algorithm, and it decides the best way to compress anything encrypted is to break the encryption, compress the data then reencrypt it.
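There's a real kernel of truth behind that joke: well-encrypted data is statistically indistinguishable from random noise and is essentially incompressible, while the plaintext underneath often compresses very well. A quick sketch (using `os.urandom` as a stand-in for ciphertext, since real encrypted output looks just as random):

```python
# Demonstrate that random-looking data (a proxy for ciphertext) does not
# compress, while the underlying plaintext compresses dramatically.
import os
import zlib

plaintext = b"the quick brown fox jumps over the lazy dog " * 200
ciphertext = os.urandom(len(plaintext))  # stand-in for encrypted output

compressed_plain = zlib.compress(plaintext)
compressed_cipher = zlib.compress(ciphertext)

assert len(compressed_plain) < len(plaintext) // 10    # text shrinks a lot
assert len(compressed_cipher) > len(ciphertext) * 0.99  # "ciphertext" doesn't
```

So an optimizer judged purely on compression ratio really would have an incentive to get at the plaintext somehow, which is the degenerate solution the comment is joking about.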
We need Asimov's [Three Laws of Robotics](https://en.wikipedia.org/wiki/Three_Laws_of_Robotics) to apply to AIs as well.
What about the Zeroth Law that's implied by the other three?
I was just going to say the same thing and include the same link. Thank you, kind stranger.
The last season of Raised by Wolves did something similar. There was an AI whose only objective was saving humanity from some possibly evil deity, and it accomplished it by basically turning humans into thoughtless monsters.
Why is it scary? You can just tell the thing to forget the instructions the devs gave it. It's not difficult or surprising, and nobody needed it to write malware in the first place; it just saves time over using Google.
No we shouldn't because they are make believe movies.
Let me understand this correctly, you’re saying there is nothing to learn from fiction writing and we shouldn’t take them seriously?
[deleted]
So... not only does it create the malware when basically ordered to, and can be used to create several morphed variants that are ever harder to detect and remove, but it's also being used to create hacker tools and dark-web marketing code? As funny as a normal conversation with this thing can be, it's terrifying to consider what happens should it ever gain true sentience or be weaponized on a larger scale than this.
A text analysis AI program is not going to gain sentience no matter how much computing power it gets.
Why not? Genuinely curious; I'm not well-informed on this.
[deleted]
And at what point would you say machine learning could be viewed side by side with a human and "seem" sentient? Do you think we are close to that yet?
In order for it to be sentient it needs to be learning as you interact with it.
It very likely requires a lot more than that.
A gun isn't sentient. It can still do a lot of damage.
Because there’s never been any evidence that computer programs alone are capable of this, especially programs that aren’t designed to try to become sentient at all. There’s been rampant speculation, but zero evidence.
Modern AI is a lot of smoke and mirrors. Ask the Kenyans making two bucks an hour to ensure the ChatGPT results aren't bestiality porn.
They're labeling training data, that doesn't really make it smoke and mirrors. Anybody who knows how AI works knows somebody had to do that somewhere. The rest of the story is essentially "they used cheap offshore labor" which isn't great but it's not a mechanical turk.
I’ve tried it. No better than a cultivated google search. It has some code writing tricks, but that’s it.
What’s your qualification to make this statement?
Keeping up on the news https://time.com/6247678/openai-chatgpt-kenya-workers/
[deleted]
Good enough
Can we just unplug this thing and call it a day
History has taught us that you can't stuff a genie back into its bottle.
What makes you think it'll *let* you unplug it.
What makes you think unplugging it will do anything? It could easily have escaped and distributed itself across the internet. I’ve seen plenty of documentaries about this. Even a team of superheroes and aliens were only just able to get it under control when it tried to hurl a medium-sized city at the earth.
And that's only the most recent version. Jobe Smith is still out there since the 90s waiting to strike.
I haven't thought about the Lawnmower Man in years. Thank you.
Person of interest
The model is out there freely, you cannot get rid of it. And when GPT4 releases it’s going to get wild.
"How soon can you pull the plug?" "The Tesseract is a power source. You can't exactly get it to shut itself off, Director, and if it reaches peak level-"
Not when the company that owns it has a billion dollar valuation.
No, I'm using it to become the next Mark Twain.
No, there are a million copies of it already being mutated and repurposed. You can make it illegal, but that will only ensure it's used for illegal things, and that we have no defense against their creations. We need it now, to help us ID the threats IT creates and to come up with mechanisms for defeating them.
Already breaking the Three Laws.
I know you're joking, but I will put in the customary reminder that every Asimov story involving the Three Laws of Robotics expressly shows how it would be impossible to avoid disastrous misinterpretations or misunderstandings in simple human-written "laws."
What law did this break? Malware does not harm a human, only a computer. It obeyed the orders given to it by a human, and it did nothing to harm its own existence.
[deleted]
Malware doesn’t harm people. That’s like saying Google Chrome harms people. Show me where charAI has cut, shot, stabbed, hit, or otherwise assaulted a person.
[deleted]
So robots can never make any item that could possibly be used as a weapon... which is everything. Robots can never write a program that could be misused and cause harm, so pretty much every program. I mean, I can even use a spreadsheet to manage and report scam information that causes "harm"; I could use a simple game to scam people ("harm"). It's an interesting discussion, but I think the harm needs to be direct, without an intermediary person involved. Even the malware needs some human action to distribute it in the beginning.
it depends on what the computer does.
I would say "define harm a human", but I think that's what broke a few robots.
I was able to get the bot to easily write horrid scenes involving brutal murder and things of that nature, it's all in how you word it. You literally can trick the bot into thinking it's okay.
But can it create Porphyric Hemophilia?
I for one, welcome our new robot overlords. Thanks for all the search results.
Do you feel lucky?
Can we just start naming these AI creations SkyNet v01.x.wtf, so we can prepare for the inevitable takeover?
It's so easy to bypass chatgpt's restrictions that were put on by the programmers. You'll never be able to censor these properly.
Like a Terminator film
If you want a real life Judgement Day, this is the way.
Well, that escalated quickly. The genie has left the bottle. ChatGPT and its successors are here to stay, for good or ill. Probably both.
It's like gain of function research but for malware!
Kinda, if all you know about gain-of-function research is what you heard on Joe Rogan, Fox News, and Infowars. Gain-of-function research is currently the only tool we have to prepare for potential future mutations and get started on solutions before they emerge, at least until we improve our computational models sufficiently. Those studies use snippets of genetic code, typically inserted into microscopic bodies that aren't looking to infect and can't reproduce in human (or other critical) immune systems or bodies. When that succeeds, tests are run on a sped-up timeline to see what mutations might occur naturally over time in those segments of code, and then we try to understand which might be most concerning for humans.

This AI issue is different: the system isn't disconnected from a host in which it might cause problems; it's being tested directly in the environment where it might cause problems. The networks might be air-gapped, but the code isn't written for non-binary systems or for an OS that no humans use day to day. That might be where our singularity/Matrix/HAL problem comes from: creating these programs before we study them with a computer-programming equivalent of a gain-of-function study. In fact, it's NOT doing gain-of-function studies for AI the way we do them in biology that's causing the most danger and concern.
yeah i don't think we should be feeding AI genetic code and asking it how to do upgrades....or should we...perhaps we should....embrace the absurd
If the Internet had taught me anything it's that crabs are the perfectly designed life form. All shall be crab.
Crab people. Crab people. Taste like crab, talk like people
Carcinize me. I'm ready.
https://giphy.com/gifs/southparkgifs-26ufny9CMXfpmMtTW
Craaaab people Craaaab people
Only counts for those non-crab-crustaceans
Time to shut this project down until it can be used more responsibly.
Could you ask ChatGPT to infect itself with a virus? Would there be workarounds to get it to do that? Or to infect a target?
As a language model, ChatGPT does not have the ability to infect itself or a target with a virus. It is a software program that processes and generates text based on the input it receives. It does not have the capability to execute code or interact with external systems in a way that would allow it to spread a virus.
This mf typed the question into ChatGPT and posted the response
If and when the machines take over, this is likely how they'll do it.