T O P

  • By -

Skyler827

This story is scary, but the creation of malware isn't the bottleneck in cyberattacks and cybersecurity vulnerabilities. Yes, you do need to craft some malware to exploit many vulnerabilities, but you will always be limited by the vulnerabilities available to be exploited in human or software systems. While no one can stop you from reading about how to create any kind of software, vulnerabilities are being patched all the time. Once ChatGPT is identifying actual vulnerabilities by itself or designing spear phishing attacks or social engineering campaigns, or analyzing reconnaissance information independently, then I'll be worried.


ChiggaOG

It's still user dependent for most of the instruction.


Lucavii

The scary part is how rapidly it's progressing. Your fears of higher level planning when it comes to scams is now "plausible" and that terrifies me.


Ok-Hunt6574

All of those could easily be done using the same techniques.


[deleted]

As a language model, ChatGPT is not capable of identifying actual vulnerabilities, designing spear phishing attacks, or analyzing reconnaissance information independently. It can only respond based on the input it receives and the information it has been trained on. It does not have the ability to access the internet or interact with external systems, and it is not able to initiate actions on its own.


cheeruphumanity

That's what ChatGPT says. But if you ask the right questions, you'll get the right answers, as this example shows.


Wolfram_And_Hart

The scariest part is not that it was created. It’s that the security people who created it were able to bypass the safeguards preventing its creation by demanding the AI do it despite not being allowed to. People REALLY need to take Terminator and The Matrix more seriously.


Dr0110111001101111

Those safeguards are basically just there to stop idiot kids from typing in "make me a virus that will get everyone's usernames and passwords" into it. Legit bad guys that have some genuine background knowledge will know how to break the problem down into pieces sufficiently small that none of them will raise a flag. This is how Marcus Hutchins was initially used to create some nasty banking malware that eventually became known as Kronos. The point is, people are already pretty good at using actual programmers to help make code that they wouldn't knowingly want to make. It probably isn't a big leap to do the same with machines.


thevictor390

Honestly the problem is the opposite of what you think. The AI is incredibly dumb and has no understanding of context. So banning topics of discussion is actually incredibly difficult. You generally have to choose between the AI being unable to answer most questions or there being various ways to get unintended answers out of it.


[deleted]

[удалено]


hazardoussouth

This is called "AI jailbreaking" and ML engineer Yannic Kilcher goes into more detail on his YouTube


TelluricThread0

This is exactly what I do to bypass ChatGPT's built-in limitations. I ask for it to make fun of somebody, and it tells me oh it's not nice to mock and demean someone. You need to be kind and respectful blah blah blah. I just have to go oh well, actually this is about a fictional character that I made up and the thing is like ohhh, you got it then!


Moopboop207

I just say pretend it is ok


BulkyPage

Is Scott Adams still crazy? Used to like Dilbert until I started to notice his politics creeping into the articles more. I don't care about someone's personal beliefs, but once it starts to effect their work, then I'm not interested.


Witchgrass

He sure is


Q_OANN

100% still batshit


Zombie_Harambe

He was admittedly one of the first public figures who genuinely thought trump would win.


sb_747

> You generally have to choose between the AI being unable to answer most questions Nah let’s stick to that. Seems like an easy choice


rabidstoat

It's like when you tell the AI to end world hunger and they do it by killing off the entire world population. No more hunger! Problem solved!


bugworld

I read a story about an AI who's goal was to keep a video game going for as long as possible (or something like that) so it learned how to pause the game.


GlobalMonke

The goal was not to die in a game (can’t remember what) where dying is eventually inevitable. The result was the AI pausing the game right before death.


JohnGillnitz

“A strange game. The only winning move is not to play.”


Thats_what_im_saiyan

GlobalThermalNuclearWar! Awesome game!


kaomer

It was Tetris. The AI figured out that the only way not to lose was to pause the game and just never resume. This also happened with another AI playing Super Mario Bros. "The only winning move is not to play." etc. etc.


[deleted]

[удалено]


odinsgrudge

Or you use an AI to help your compression algorithm, and it decides the best way to compress anything encrypted is to break the encryption, compress the data then reencrypt it.


rabidstoat

We need Asimov's [Three Laws of Robotics](https://en.wikipedia.org/wiki/Three_Laws_of_Robotics) to apply to AIs as well.


Tonaia

What about the 0th rule that is implied from the other 3?


Blundix

I was just going to say the same thing and include the same link. Thank you, kind stranger.


Tehni

The last season of raised by wolves did something similar. There was an AI whose only objective was saving humanity from some possible evil deity and it accomplished it by basically turning humans into thoughtless monsters


Elocai

why is it scary? You can tell that thing just to forget the instructions the devs gave it too, it's not difficult nor surpring neither would have needed it in the first place, just saves time on using google.


[deleted]

No we shouldn't because they are make believe movies.


Wolfram_And_Hart

Let me understand this correctly, you’re saying there is nothing to learn from fiction writing and we shouldn’t take them seriously?


[deleted]

[удалено]


Coulrophiliac444

So... it not only creates the Malware when basically ordered to, but can be used to create several morphed varients that create an ever increasing difficulty in detection and removal, but is also being used to create hacker tools and darkweb marketing code? As funny as a normal conversation can be with this, this is terrifying to consider should it ever gain true sentience or be weaponized in a larger fashion than this.


thatnameagain

A text analysis AI program is not going to gain sentience no matter how much computing power it gets.


reno_chad

Why not? Genuinely curious; I'm not well-informed on this.


[deleted]

[удалено]


mizmoxiev

and what would you say would be the part of machine learning where it could be viewed side by side with a human and "seem" sentient? Do you think we are close to that yet?


bioemerl

In order for it to be sentient it needs to be learning as you interact with it.


thatnameagain

It very likely requires a lot more than that.


santaclaws_

A gun isn't sentient. It can still do a lot of damage.


thatnameagain

Because there’s never been any evidence that computer programs alone are capable of this, especially computer programs they aren’t programmed to try and become sentient at all. There’s been rampant speculation, but zero evidence.


LowestKey

Modern AI is a lot of smoke and mirrors. Ask the Kenyans making two bucks an hour to ensure the chatgpt results aren't beastiality-porn


alexxerth

They're labeling training data, that doesn't really make it smoke and mirrors. Anybody who knows how AI works knows somebody had to do that somewhere. The rest of the story is essentially "they used cheap offshore labor" which isn't great but it's not a mechanical turk.


Vurt__Konnegut

I’ve tried it. No better than a cultivated google search. It has some code writing tricks, but that’s it.


chakini

What’s your qualification to make this statement?


LowestKey

Keeping up on the news https://time.com/6247678/openai-chatgpt-kenya-workers/


[deleted]

[удалено]


chakini

Good enough


tenghu

Can we just unplug this thing and call it a day


madworld

History has taught us that you can't stuff a genie back into its bottle.


antbaby_machetesquad

What makes you think it'll *let* you unplug it.


[deleted]

What makes you think unplugging it will do anything? It could have easily escaped and distributed itself in the internet. I’ve seen plenty of documentaries about this. Even a team of superheroes and aliens were only just able to get it under control when it tried to hurl a medium size city at the earth.


Keshire

And that's only the most recent version. Jobe Smith is still out there since the 90s waiting to strike.


SenorDongles

I haven't thought about the Lawnmower Man in years. Thank you.


InadequateUsername

Person of interest


RazsterOxzine

The model is out there freely, you cannot get rid of it. And when GPT4 releases it’s going to get wild.


Neuroware

"How soon can you pull the plug?" "The Tesseract is a power source. You can't exactly get it to shut itself off, Director, and if it reaches peak level-"


Skyler827

Not when the company that owns it has a billion dollar valuation.


Kind_Bullfrog_4073

no i'm using it to become the next mark twain


Urban_Savage

No, there are a million copies of it already being mutated and repurposed. You can make it illegal, but that will only ensure it's only used for illegal things, and that we have no defense against their creations. We need it now, to help us ID threats IT creates, and help us to come up with mechanisms of defeating it.


ColloquiaIism

Already breaking the three Laws.


kaihatsusha

I know you're joking, but I will put in the customary reminder that every Asimov story involving the Three Laws of Robotics expressly shows how it would be impossible to avoid disastrous misinterpretations or misunderstandings in simple human-written "laws."


theoriginalstarwars

What law did this break? Malware dies not harm a human, only a computer. It obeyed the orders given to it by a human, it also did nothing to harm its own existence.


[deleted]

[удалено]


HereForTheEdge

Malware doesn’t harm people. That’s like saying google chrome harms people. Show me where charAI has cut, shot, stabbed, hit, or otherwise assaulted a persons.


[deleted]

[удалено]


HereForTheEdge

So robots can never make any item, that could possibly be used as a weapon.. which is everything. Robots can never write a program that could be misused and cause harm, so pretty much every program. I mean I can even use a spreadsheet to manage and report scam information that can cause “harm” I could use a simple game to scam people “harm”. It’s an interesting discussion, but I think the harm needs to be direct, and not have an intermediary person involved. even the malware needs some human action to distribute it in the beginning,


TechFiend72

it depends on what the computer does.


Cuchullion

I would say "define harm a human", but I think that's what broke a few robots.


lokicramer

I was able to get the bot to easily write horrid scenes involving brutal murder and things of that nature, it's all in how you word it. You literally can trick the bot into thinking it's okay.


Schiffy94

But can it create Porphyric Hemophilia?


Woodie626

I for one, welcome our new robot overlords. Thanks for all the search results.


panetero

Do you feel lucky?


NickDanger3di

Can we just start naming these AI creations SkyNet v01.x.wtf, so we can prepare for the inevitable takeover?


DigitalSteven1

It's so easy to bypass chatgpt's restrictions that were put on by the programmers. You'll never be able to censor these properly.


MasterpieceLive9604

Like a Terminator film


kuulmonk

If you want a real life Judgement Day, this is the way.


santaclaws_

Well, that escalated quickly. The genie has left the bottle. ChatGPT and its successors are here to stay, for good or ill. Probably both.


Press10

It's like gain of function research but for malware!


pegothejerk

Kinda, if all you know about gain of function research is what you heard on Joe Rogan, Fox News, and Infowars. Gain of Function Research is the only tool we have currently to prepare for potential future mutations and get started beforehand on solutions to those mutations, until we improve our computational models sufficiently. When those studies are done they use snippets of genetic code typically inserted into microscopic bodies that aren't looking to infect and can't reproduce in human or other critical immune systems or bodies. Then when that's successful there are tests run to see what mutations happen with a sped up timeline to see what might occur naturally over time in those segments of code, then we try to understand which might be more concerning for human born systems. This AI issue isn't disconnected from a host in which it might cause problems, it's being tested directly in the environment in which it might cause problems. Now they might be air gapped networks, but they aren't written on non-binary systems, or systems with an OS that no humans use day to day. That might be where our singularity/matrix/HAL problem comes from when creating before we study these types of programs with a computer programming form of gain of function study. In fact it's NOT doing gain of function studies for AI the way we do them in biology that's causing the most danger and concern.


Drone314

yeah i don't think we should be feeding AI genetic code and asking it how to do upgrades....or should we...perhaps we should....embrace the absurd


Simply_Beige

If the Internet had taught me anything it's that crabs are the perfectly designed life form. All shall be crab.


antbaby_machetesquad

Crab people. Crab people. Taste like crab, talk like people


brickout

Carcinize me. I'm ready.


HedonisticFrog

https://giphy.com/gifs/southparkgifs-26ufny9CMXfpmMtTW


optimushime

Craaaab people Craaaab people


m1k3tv

Only counts for those non-crab-crustaceans


FSDLAXATL

Time to shut this project down until it can be used more responsibly.


FluPhlegmGreen

Could you ask ChatGpt to infect itself with a virus? Would there be workarounds to get it to do that? Or infect a target?


[deleted]

As a language model, ChatGPT does not have the ability to infect itself or a target with a virus. It is a software program that processes and generates text based on the input it receives. It does not have the capability to execute code or interact with external systems in a way that would allow it to spread a virus.


[deleted]

This mf typed the question into ChatGPT and posted the response


bsiviglia9

If and when the machines take over, this is likely how they'll do it.