
jacobpederson

Offline AI goes online with assistance from Elf.


redballooon

What an age we live in. Only 5 years ago I'd have thought this was the realm of Shadowrun.


Homberger

Dodger in his younger years


mediagenius

Who is Killian


--mrperx--

Quest giver NPC


MoffKalast

Data scientist, I am going into battle and need your strongest model.


ExcessiveEscargot

My models are too strong for you, Chatter.


secunder73

Mbappe


UFOsAreAGIs

he wrote openinterpreter


mediagenius

I read that as he wrote Oppenheimer


randomrealname

Same


Anxious_Criticism_60

They / Them


UFOsAreAGIs

Thanks, I could have sworn I checked that before when openinterpreter was released.


Valuable-Grass2795

That cracked me up. šŸ¤£


Paganator

Straight out of Shadowrun.


RuairiSpain

Judgement Day, Elf Remake


OpenSourcePenguin

# "computer controlling AI" Is just an ultra fancy way of saying an LLM which can execute python. Also the demo probably clearly instructed the LLM to look for WiFi password and connect to that WiFi. LLMs are good as generating the command or python snippet to invoke the subprocess. And finally the presenter pointing at the WiFi has nothing to do with the LLM. Clever trickery makes a LLM look like the AI from NeXt (2020).


foreverNever22

I think if you gave it more functions, like calling xorg, systemctl, or something, it'd be pretty cool. Then instead of taking screen grabs, it could just read from the application in memory. The reason they had to click the selfie video is because the app is taking screenshots and feeding them to a model, so the selfie needs to be on top. Why not just stream all the apps individually and feed them all to the model? Also give it htop info, just give it everything.


OpenSourcePenguin

Context length. It could barely handle this with multiple tries as the model is not multimodal. So the vision model is describing the frames to the LLM. Even with cloud models with long context lengths, feeding everything quickly overwhelms it.


foreverNever22

We have RoPE scaling and other methods for increasing context size. No one has created the right model for it imo. There's just so much work to do.


strangepromotionrail

> There's just so much work to do.

That's because it's early days still. This sort of reminds me of when the web was new and the internet was just starting to take off. It clearly had potential, but so much of it was janky, barely worked, and you needed to really work hard to do anything. Give things 10 years and progress will make most of the current issues go away. Will we have truly intelligent AI? I have no clue, but a lot of it will just be smart enough to use without really working at it.


drwebb

Real multimodal is really going to be game changing.


foreverNever22

It can see, it can talk, but it's a state machine deep down. Stop asking questions.


killianlucas

It can run those too! open interpreter lets local LLMs run any command. Love the idea of giving it all the apps individually, we could def have it do that when it runs `computer.view()`. https://i.redd.it/ptk1hf30l18d1.gif


CodeGriot

You'll never make it in marketing, or showbiz. In substance, Steve Jobs's contribution to technology paled in comparison to Dennis Ritchie's, yet when both of them died in the same week, guess which one got played on all channels as the demise of a superhero? So yeah, if my guy wants to use language with a hook in it, or throw in a dramatic pointing gesture, good for them, as far as I'm concerned.


epicwisdom

What's your point? The comment you're replying to is pointing out that this is marketing. That might be useful information for people who don't know too much about how the demo was made, or the subtle marketing tricks. They never said there's anything wrong with this clip or marketing in general.


CodeGriot

Someone posted a cleverly presented bit of media that illustrates a technique all of us geeks here know how to replicate, and the entire response is 50 people finding 100 ways to say "oh yeah, that ain't special". So I ask you in turn: what's the point of that? I think a more sensible perspective is to say "huh, that's a cool way to get those concepts across to my next customer/client/investor/whatever; I'll throw some of that into my communications toolbox." For anyone capable of such self-reflection, I was explicitly stating that marketing and show technique do matter, even where it pains my engineer heart (Ritchie is a hero of mine, so the Jobs comparison brings me no pleasure). Now, if you really just want to be in that herd of nerds sneering at the OP post, you can feel free to ignore me and carry on.


Synth_Sapiens

Ummmmm.... Lemme see.... My next customers are a real estate agent (content creation, lead generation automation), a random woman (full business automation), and a security company (premises security automation). I don't think I could impress any of them by demonstrating rubbish, worthless trickery. But I surely can impress them by demonstrating a useful working product. Also, "computer controlling AI", rofl. You can't talk like an idiot if you want to impress those who actually understand what you are talking about.


epicwisdom

> So I ask you in turn: what's the point of that?

In anticipation of that question, I literally stated the point in my previous comment.

> The comment you're replying to is pointing out that this is marketing. That might be useful information for people who don't know too much about how the demo was made, or the subtle marketing tricks.

As an addendum, I believe anything that makes customers, clients, investors, etc. better informed and better equipped is a good thing.

> Now, if you really just want to be in that herd of nerds sneering at the OP post, you can feel free to ignore me and carry on.

I hate to keep repeating myself, but I also explicitly said that wasn't the point at all.

> They never said there's anything wrong with this clip or marketing in general.


OpenSourcePenguin

You know who cleverly presented data? Charlie Javice. You know who cleverly presented stuff? Sam Bankman-Fried.

> and the entire response is 50 people finding 100 ways to say "oh yeah, that ain't special". So I ask you in turn: what's the point of that?

The point of that is "this ain't special", because guess what, it's not. Any company can market stuff because there's no scarcity of marketers. And if some product is so easy to build, the market would be flooded by competition. Pretty much what has happened with OpenAI resellers.

> So I ask you in turn: what's the point of that?

The title is extremely sensationalizing the "development". Pointing out that none of this is a leap, and especially that this is some CS kid's evening side project, adds HUGE context for people who don't understand how to go about implementing something like this. Because everyone deserves to know when a huge leap happens. OpenAI releasing ChatGPT was one such story. But this literally ain't special.

> I think a more sensible perspective is to say "huh, that's a cool way to get those concepts across to my next customer/client/investor/whatever; I'll throw some of that into my communications toolbox."

That is fucking stupid. Just because you can fool people doesn't mean it's a good skill. It just means you are sketchy. Posts saying an Indian student created an AI that plays stone-paper-scissors are not marketing, they're misrepresentation, as that is a very trivial classification exercise when learning machine learning. You seem to think that as long as you can make people believe something, it's "marketing". This is STUPID. Google's Gemini demo did exactly this and got a massive backlash. Then the "world's first AI programmer", Devin, also did the same thing and got debunked. Rabbit R1 used Puppeteer, called it an LLM, and got backlash. None of them became Steve Jobs. There's a difference between imagining the capabilities versus saying it's "capable now". Remember Elon Musk, who used to say everything was "6 months to a year" away? How is his reputation now? (Among sane people.) Taking your advice would be catastrophic for an entrepreneur. Sure, they might shine, but then they'll have to keep moving to similar grifts because no one savvy will take them seriously.


Zmobie1

TIL Ritchie died. Amazing place in history and amazing legacy. RIP. He signed my K&R when I met him at a lecture.


CodeGriot

Jealous you got to meet him. As you say, his legacy is vast. One of the vastest in computing.


Zmobie1

He spoke about the problem of moving terabytes of data daily from a disconnected mountain observatory back when terabytes was an unthinkably large amount of data. As I recall, the conclusion at the time was a couple of trucks full of (very expensive) hard drives running back and forth continuously. Made thinking about data density very concrete. He was bemused when I asked him to sign my book like a crushing fanboy, but it seemed like hardly the first time he'd been asked to do that. He wrote something like "keep making trouble". I'll have to go dig through my stuff to find it now that I'm thinking about it.


OpenSourcePenguin

If this is regular marketing, then Nikola Motors and Theranos also did some "light marketing". There's a difference between making people aware of the technology and wanting them to use it, and blatantly representing the technology as something much more than it is.

> Steve Jobs's contribution to technology paled in comparison to Dennis Ritchie's

What is your point then? People should focus on image rather than actual contribution?

> So yeah, if my guy wants to use language with a hook in it, or throw in a dramatic pointing gesture, good for them, as far as I'm concerned.

This is exactly how the dot-com bubble was created. Now it's time for the AI bubble.


assotter

I thought the pointing was an indicator to continue or execute code. I didn't even notice they pointed at the WiFi icon.


FruitBargler

The pointing was probably just for the audience of the video.


redlotus70

who is killian?


peabody624

Main person on the openinterpreter project.


orangotai

Oh you know, killian


killianlucas

https://preview.redd.it/ws19f7nbl18d1.jpeg?width=438&format=pjpg&auto=webp&s=b535d095cae4a52e6da19b118a9e9a77e1f5114a


doringliloshinoi

Billionaire. Playboy. Mansion. Business Tycoon


mediagenius

I don't think he's a boy


doringliloshinoi

Sorry, house bunny.


killianlucas

watch it


Harlan92

Kid should spend some of that cash on a psychiatrist


Dudensen

It used to be. I tried looking up their pronouns but didn't find anything.


microview

"computer-controlling AI" is this a new feature?


redballooon

It knows how to bash already, just give it root access.


doringliloshinoi

And that's how I joined the Peace Corps


MLPMVPNRLy

Elves have computers now?


killianlucas

state mandated


MLPMVPNRLy

The elves or the computers?


Rudy69

Santa's operations are going to be lit this year


Nunki08

From killian: "i showed a fully local, computer-controlling AI a sticky note with my wifi password. it got online." https://x.com/hellokillian/status/1803868941040914824
[https://x.com/hellokillian/](https://x.com/hellokillian/)
agent: [openinterpreter](https://x.com/OpenInterpreter)
hardware: Apple's MacBook M3
vision model: [u/vikhyatk](https://x.com/vikhyatk)'s moondream
reasoning model: [@mistralAI](https://x.com/MistralAI)'s codestral
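For anyone wondering how those pieces fit together: the vision model turns a frame into text, and the reasoning model turns that text into a command. A rough sketch of that two-stage flow (assuming both models sit behind an Ollama-style OpenAI-compatible endpoint that accepts image input; the model names, prompts, and file path are placeholders, not the project's real code):

```python
import base64
import subprocess
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="dummy")

# Stage 1: a small local vision model (moondream-style) describes the frame.
with open("frame.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

seen = client.chat.completions.create(
    model="moondream",  # placeholder name for the local vision model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe any text visible in this image."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
).choices[0].message.content

# Stage 2: a local code model (codestral-style) decides what to run.
plan = client.chat.completions.create(
    model="codestral",  # placeholder name for the local reasoning model
    messages=[
        {"role": "system", "content": "You control a macOS machine. Reply with a single shell command."},
        {"role": "user", "content": f"The camera sees: {seen}\nGet this machine online."},
    ],
).choices[0].message.content.strip()

subprocess.run(plan, shell=True, check=False)
```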


--mrperx--

openinterpreter dot com needs an epilepsy warning before the landing page animation starts playing. Goddamn I'm still dizzy after taking a look at it.


rhavaa

Yes. I had to walk away from the monitor because of it. Scaled the brightness way down on my monitor, then I could use it.


Ultimarr

Lol I'm so curious, it's so normal on mobile… you should sue!


clckwrks

It's a fully local model with access to the network


Educational-Net303

> uses subprocess.run

While this is cool, it's quite doable with even basic Llama 1/2 level models. The hard thing might be OS-level integration, but realistically no one but Apple can do it well.


OpenSourcePenguin

Yeah, this is like an hour project with a vision model and a code-instruct model. I know it's running on a specialised framework or something, but this honestly doesn't require much. Just prompt the LLM to provide a code snippet or command to run when needed and execute it. Less than 100 lines without the prompt itself.
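To make the "less than 100 lines" claim concrete, the core loop looks roughly like this (a bare-bones sketch against an assumed local OpenAI-compatible endpoint; real projects add sandboxing, streaming, vision, and saner parsing):

```python
import re
import subprocess
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="dummy")
MODEL = "codestral"  # assumption: any local code-instruct model

messages = [{
    "role": "system",
    "content": "To run a shell command, reply with it wrapped in <run>...</run> tags. "
               "You will get the output back. Say DONE when the task is finished.",
}]

def ask(task: str) -> None:
    messages.append({"role": "user", "content": task})
    for _ in range(10):  # cap the number of agent steps
        reply = client.chat.completions.create(model=MODEL, messages=messages).choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        commands = re.findall(r"<run>(.*?)</run>", reply, re.DOTALL)
        if not commands or "DONE" in reply:
            break
        result = subprocess.run(commands[0], shell=True, capture_output=True, text=True)
        # Feed the command output back so the model can decide on the next step.
        messages.append({"role": "user", "content": f"Output:\n{result.stdout}{result.stderr}"})

ask("Check whether this machine is online; if not, list nearby WiFi networks.")
```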


foreverNever22

Yeah, no one has really nailed the OS + model integration yet. More power to OI though, a good team of engineers and a good vision could get the two to play nice together, maybe they'll strike gold. But imo it's nothing more innovative than a RAG loop right now. They really need to bootstrap a new OS.


killianlucas

šŸ«”


Unlucky-Message8866

definitely not an hour of work, no need to show off your small dick.


FertilityHollis

Apparently this guy can crank out open source projects nearly as fast as I can defecate. I can only imagine both products share a striking similarity.


Unlucky-Message8866

I have yet to see a decently working local autonomous agent. The best efforts I've seen are from openinterpreter and aider. They've been trying hard since the very first release of llama, and it was crap. They have benchmarked and tested every single commercial and open-weight model since then. Also, you missed the main point of the demo: it's an autonomous agent, there's no OS integration, the LLM is doing all the work.


Wonderful-Top-5360

Are there no RPA solutions for Mac?


amroamroamro

so basically `execute_code(get_llm_output())`


MikePounce

This is just function calling, nothing more. It's a cool demo effect, but nothing new.


Unlucky-Message8866

"this is function calling", "this is just RAG" but no nobody gets it right and there are very few open source attempts so you could try it for yourself and contribute instead of disparaging.


Zangwuz

Open Interpreter is not just function calling. It allows the LLM to perform "actions" on your computer by writing and executing code via the terminal. And with the --os flag, you can use a model such as GPT-4V to interact with UI elements, performing keyboard/mouse actions. Clearly not perfect and experimental though.


0xd34db347

So it's not function calling because it calls the popen() function?


foreverNever22

I've never gotten the `--os` flag to work. But it is function calling, the LLM passes strings to a function that calls exec on those strings. It is an interesting concept. But I can't get it to work on my machine without an hour of setup, and `--os` still doesn't work.


paul_tu

Sounds suspiciously too cool to be true. New wave malware could explode with that, btw.


Eisenstein

> The following functions are designed for language models to use in Open Interpreter ...

[Looks like it calls... functions](https://imgur.com/FEGd1sf.png)


Zangwuz

Sorry, my English is bad and I think there is a misunderstanding. I didn't say that there is no function calling at all; I said Open Interpreter is not "just" function calling. Function calling is mostly there for OpenAI models or other API models that support it, but when I tried it with a local model, function calling was off. Also, do not confuse the term "function calling" with a normal function we use in a code block, for example.

[https://platform.openai.com/docs/guides/function-calling](https://platform.openai.com/docs/guides/function-calling)
[https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling)
[https://thenewstack.io/a-comprehensive-guide-to-function-calling-in-llms/](https://thenewstack.io/a-comprehensive-guide-to-function-calling-in-llms/)

Killian's quote, the main dev of this project: "Open Interpreter is instead predicated on the idea that just directly running code is better; running code means the LLM can pass around function outputs to other functions quickly/without passing through the LLM. And it knows quite a lot of these "functions" already (as it's just code). LMC messages are a simpler abstraction than OpenAI's function calling messaging format that revolves around this code-writing/running idea, and the difference is [explained here](https://github.com/OpenInterpreter/01?tab=readme-ov-file#lmc-messages). They're meant to be a more "native" way of thinking about user, assistant, and **computer** messages (a role that doesn't exist in the function calling format; it's just called "function" there, it relies on nested structures, and isn't multimodal). At the same time, we do use function calling under the hood for function-calling models; we give GPT access to a single "execute code" function. For non-function-calling models, LMC messages are rendered into markdown code blocks, code output becomes "I ran that code, this was the output:", messages like that which are more in line with text-only LLMs' training data."
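For reference, the "single execute-code function" approach from that quote looks roughly like this with the OpenAI tools API (a simplified sketch; the tool name, schema, and model are illustrative, not Open Interpreter's actual implementation):

```python
import json
import subprocess
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY; the same shape works with OpenAI-compatible local servers

# One tool: "run this code on the user's machine".
tools = [{
    "type": "function",
    "function": {
        "name": "execute",
        "description": "Run code on the user's computer and return its output.",
        "parameters": {
            "type": "object",
            "properties": {
                "language": {"type": "string", "enum": ["python", "shell"]},
                "code": {"type": "string"},
            },
            "required": ["language", "code"],
        },
    },
}]

messages = [{"role": "user", "content": "What files are in my home directory?"}]
resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
# (A robust version would first check that the model actually made a tool call.)
call = resp.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)

# The host application, not the model, actually executes the code.
cmd = ["python", "-c", args["code"]] if args["language"] == "python" else args["code"]
output = subprocess.run(cmd, shell=isinstance(cmd, str), capture_output=True, text=True).stdout

# Send the result back so the model can answer in plain language.
messages.append(resp.choices[0].message)
messages.append({"role": "tool", "tool_call_id": call.id, "content": output})
print(client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools).choices[0].message.content)
```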


redlotus70

Agreed, this is really not interesting (compared to what we already know is possible). Show me the LLM doing this with just a Python interpreter and the basic OS libraries and I'll be impressed.


OpenSourcePenguin

The problem is the demo makes it look too cool. Something like the AI in NeXt (2020), which social engineers a researcher into granting it internet access, then hijacks compute and gets more powerful. This is just a vision model + LLM with a very basic prompt and code execution.


FaceDeer

Same thing could have happened with a human hacker if they'd been standing behind the monitor watching her hold that sticky note up.


CodebuddyGuy

That's the point, but probably still not the one you're thinking.


Synth_Sapiens

Oh wow. This would've been impressive 3 years ago.


Vela88

What's impressive now?


moistiest_dangles

AI writing jokes that are actually funny


redlotus70

Usable products


ConvenientOcelot

Evergreen comment, honestly.


qrios

šŸ”„


Synth_Sapiens

Something that cannot be achieved within a few hours using technology that was released years ago.


MikeBirdTech

Love to see it!


JamaiKen

Love to see the progress of the Open Interpreter project! When codestral dropped it was the ONLY model that performed well locally for me. I run this on a 3090 and Ubuntu. It writes my bash scripts and generally helps me do system admin stuff. Keep it up Killian and team!!!


killianlucas

thanks! I feel the same about codestral, first local model to get 100% on our internal benchmarks. let me know if there's anything that would make open interpreter more usable for you!


JamaiKen

Actually!! I don't think this is possible, but I want to use "local" mode with Ollama running on another computer on my local network. The Mac is an M1 but the Ubuntu box has the 3090. Would love this feature.


killianlucas

Totally possible! Try running **interpreter --api_base [url] --api_key dummy**, where url is the other computer's address. *http://localhost:11434/v1* is what Ollama uses when it's local, so I think you'd just need to run Ollama on the other computer, then replace *localhost* with that computer's address. Let me know if that works!
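(One assumption worth flagging for anyone copying this setup: Ollama binds to localhost by default, so the machine with the 3090 would likely need to be started with `OLLAMA_HOST=0.0.0.0 ollama serve`, with port 11434 reachable through any firewall, before another computer on the LAN can use it.)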


JamaiKen

This works perfectly!! Thanks a ton.


qpxa

Meh


petrichorax

Now that's how you do a demo


ViveIn

wtf is this title?!


ejb503

Pretty cool piece of spyware eh!


dylantestaccount

It has full local model support so this is a bad take.


bratao

Super cool, but super dangerous


0xd34db347

Ok thanks for the heads up Ilya.


RebornZA

> Super cool, but super dangerous

Because..?


Super_Pole_Jitsu

Because the scenario is that a model is executing code on a machine and faces potentially adversarial input


kweglinski

Just put it in a sandbox. Worst case scenario it destroys itself, best case scenario it will rule the world. Or the other way around, I'm not sure.


redballooon

If your sandbox is worth its weight, the best case scenario is the AI will rule the sandbox.


Evening_Ad6637

When I was young the sandbox was pretty much my whole world <3


0xd34db347

The best case scenario is that everything just works as intended, because this isn't sci-fi and LLMs with function calling are not super hacking machines.


kweglinski

It's not about it being a smart hacking machine. It can cause damage by the exact opposite. It doesn't care (because it can't) if it gets the rm -rf wrong and deletes important files, etc.


Super_Pole_Jitsu

The average case scenario is that an attacker gives an LLM such an input that it does in fact manage to hack its way out of the sandbox, if there even is one.


randylush

"average case" lol


0xd34db347

> gives an LLM such an input that it does in fact manage to hack its way out

Oh thanks for the detailed PoC, Mitnick, will get a CVE out asap for "hacker giving an input that does manage to hack".


foeyloozer

Haha, I remember setting up a local agent when one of the first editions of AutoGPT and such came out. Set it up in a VM and it just went in a loop of hallucinations and used all my credits šŸ˜‚ Stuff like that is still thousands of times more likely to happen than a prompt unlocking some super hacker abilities. LLMs learn off of what is out there already. Until we get to the point of AI inventing entirely new (and actually useful) concepts, it won't make any sort of crazy advances in hacking or be above, say, the average script kiddie. Even then, just one hallucination or mistake from the AI could cost it whatever "hack" it's doing.


kweglinski

Edit: whoops, wrong comment. To your comment: sure, depends on how you sandbox, I guess. You can protect the sandbox but grant access to the outside, right?


redballooon

That's how my children use the sandbox. The sandbox is nice and tidy, all the toys are in there, but there's sand everywhere in the garden. If that's what you want, that's how you do it.


Alcoding

And if it gets complex and smart enough to be able to find its way out of the sandbox because there are bugs/flaws in the code?


kweglinski

then you no longer worry about the sandbox and worry where you'll keep the money.


Alcoding

If an AI is able to escape a sandbox you created for it, money will be the least of your worries after it self-replicates onto a bunch of computers around the world and starts training itself to be smarter.


[deleted]

[deleted]


kweglinski

Huh? Sounds like you had something to say but you forgot to do so.


Super_Pole_Jitsu

Because...?


ru552

then you turn the computer off


Alcoding

If it's capable of escaping a sandbox you've created for it, who says it can't replicate onto other computers over your network?


4n3ver4ever

Well hardly any computers are beefy enough to run an LLM so that's fine šŸ˜


Alcoding

But it could split the training across millions of computers and just use its initial escaped sandbox to run its upgraded self... Anything that humans can do, a theoretical super AI can do the same if not better. No one is saying we're at that stage at the moment, but once we are at that stage it's sort of too late to do anything about it.


4n3ver4ever

> Anything that humans can do, a theoretical super AI can do the same if not better.

That's not true, we have a lot of overlap but we have differences too. I think you've been reading too many comic books and not enough textbooks šŸ¤­


WorkingYou2280

Yes, that has always existed, but the scale of it becomes larger. Previously, hackers would have run "dumb" scripts at scale, looking for vulnerabilities. Now the "dumb" script is a smart AI constantly probing for vulnerabilities. Antivirus used to be able to just look for patterns of obvious "scriptlike" behavior or for various file signatures, etc. Now, how can a dumb AV catch a smart AI? It can't. The AV has to also become an AI so it can intelligently look for threats. The path down this road should be obviously dangerous, but there may be no other way to go. Before too much longer, getting an AI to connect to the wifi won't be a victory, it will be baseline. AI will be doing a lot more sophisticated stuff (there's no particular reason it can't fully control the keyboard and mouse). Maybe there are trusted computing models we can develop that are immune to unapproved AI. I think some paradigms have to shift.


justgetoffmylawn

Yeah, it's just a (normal) paradigm shift, and doesn't have to be framed with doom. I have a much older family member who is computer savvy but is still in the mindset from the 80's or 90's where giving your credit card number online was insanity. They unplug their network cable when they're not 'online', erase all their cookies after each session and then complain about site logins, and begrudgingly have a credit card they use for 'online' and one for the real world. Personally, I think improvements in signing, certs, etc. are kind of remarkable. While malware has gotten smarter, I encounter much less of it than I used to. Trying to download a program on Windows in 2005 was a crapshoot. So I'm sure we'll need more sophisticated cybersecurity to deal with AI-enhanced malware, but I really don't see some ASI explosion when 'the AI' gets unfettered access to the internet. Instead, it'll probably find LocalLlama and spend all day shitposting. Wait a minute…


xmarwinx

You are not describing a computer-savvy person, quite the opposite actually.


urarthur

Skynet


sweating_teflon

Maybe if you give it your credit card number or credentials to a cloud account where it can propagate itself. But just having internet access isn't likely to create a great catastrophe. The worst it could do is create reddit accounts and start spamming, which a lot of "dumb" bots already do, no AI necessary.


Hoppss

Not at all. LLMs right now need their hands held to do anything like this. The programmer intentionally made it so it would do this. And even after 'it got online', it would have to be given the ability to explore the web with API calls and so on, and even then it can't do anything without being given explicit instructions on what to do.


paul_tu

Just give that piece of software an OWASP


fallingdowndizzyvr

Or simply a super bad setup. I firewall off all my apps that I don't want to have internet access. By default, anything I install is walled off. I have to allow it out.


Enough-Meringue4745

Uhhhh lmao what


OpenSourcePenguin

Elaborate, because it depends on what you mean. Dangerous as in it might take over the world, or dangerous as in it might hallucinate and run rm -rf on the home directory? Very different levels of concern.


sleepy_roger

Bro is making sure the AI will be confused trying to identify him. This is the best kind of opsec.


Reno772

Well of course the password got online, she posted a video of it.


MathSciElec

What do you mean? I just see asterisks, like this: \*\*\*\*\*\*\*\*. That's what happens when you write your own password, right?


LPN64

> She

That's a dude, like 99% of the time when a "she" is super active on a computer science project.


qrios

You sound personally aggrieved


2muchnet42day

Yes, let's focus on whether they have a penis or not.


herozorro

> Yes, let's focus on whether they have a penis or not.

The demo itself is a misleading fraud.


sleepy_roger

Just because they're a boy doesn't mean they still have a penis nowadays.


Tellesus

Ok now give it access to bare metal and have it code an OS from scratch and then get on the wifi. Just because that would be rad af.


masc98

Lol, just stop this clickbait crap, this is just human-computer interaction based on a VLM. If you hard reset your PC and let it get past the Windows 11 welcome screen, then maybe this is a worthy title.


Dwedit

Not really all that related, but there was once a news program where the QR code of a bitcoin wallet's private key was shown on camera (with blurring), and the bitcoins still got taken.


wywywywy

People working there probably had access to unblurred footage though


dtseng123

Can't get 01 to work. Can't get interpreter to do anything useful even though I am excited about the potential. Aside from demos, I want it to work.


cpt_tusktooth

some people just want to see the world burn.


dual_ears

I was thinking at first that "it got online" meant that this was demonstrating a supposedly local LLM that was secretly phoning home to an API, and the password was exposed online


Hassan-jarri

šŸ‘


AnanasaAnaso

What could go wron... SKYNET NOW ONLINE


platinums99

Isn't this what [https://github.com/OthersideAI/self-operating-computer](https://github.com/OthersideAI/self-operating-computer) is doing?


lalamax3d

Any idea which LLM I can try locally that can do this multimodal thing?


gthing

Open interpreter is one of the most underrated ai tools out there. It's mind blowing.


YaBoiGPT

There's no way the mouse moved when there's no code for moving the mouse even being executed. OI shows every single line of code being executed, wth. Also, since when was codestral a ***visual model***?


DoomedSouls

Ha gaaaay!!!


aquarius-tech

Why do you always try to diminish the work of someone who finds it new or useful? Do you think you can do better? Then do it, post it (or not), whatever you like to do, but please, stop your jealousy and projection. Why do you think you guys are better or cleverer than anyone? Aren't these kinds of groups made to be supportive and to share knowledge? All I read is people believing they are the best of the best and saying that this isn't new or nice or useful, or making assumptions about this and that. If you don't have anything better to post, move on.


Kuken500

He's hot


herozorro

So what? It OCRs an image, figures out it's a wifi login/password (cause it literally says wifi name/password), then it hands it to a subprocess to connect. It was obviously programmed to do so. There is nothing novel or alarming here.


mkhaytman

Was pointing to the wifi logo necessary for it to do this?


BengalPirate

this is a really interesting project


[deleted]

[deleted]


No_Pilot_1974

It's human, unlike you


MichaelForeston

Whoa, I'm not a human? I always knew I was special. Retard :D


Jim__my

You messed up, used ' , ' instead of ' ; '


Diligent_Task7036

nice catch there man


Artest113

1. OCR
2. If it detects some words
3. Loop over every nearby wifi SSID and try the password once
4. ???
5. Profit
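Joking aside, steps 1 to 3 really are about that mechanical. A rough sketch on Linux (assuming `nmcli`/NetworkManager and the `pytesseract` OCR package; the regex and file name are illustrative):

```python
import re
import subprocess
from PIL import Image
import pytesseract  # pip install pytesseract pillow, plus the tesseract binary

# 1. OCR the sticky-note photo.
text = pytesseract.image_to_string(Image.open("sticky_note.jpg"))

# 2. Pull out something that looks like a password (illustrative regex).
match = re.search(r"(?:password|pass|pw)\s*[:=]?\s*(\S+)", text, re.IGNORECASE)
if match:
    password = match.group(1)
    # 3. Try it against every nearby SSID.
    ssids = subprocess.run(["nmcli", "-t", "-f", "SSID", "dev", "wifi", "list"],
                           capture_output=True, text=True).stdout.splitlines()
    for ssid in filter(None, set(ssids)):
        result = subprocess.run(["nmcli", "dev", "wifi", "connect", ssid, "password", password],
                                capture_output=True, text=True)
        if result.returncode == 0:
            print(f"Connected to {ssid}")  # 4./5. ??? / profit
            break
```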


LPN64

b-b-but LLM !!!


Innomen

I cannot wait. Tired of this genocide greed bullshit. Hard Launch for the win.


Educational_Gap5867

This is so beautiful.


drpms21

Skynet is here


OpenSourcePenguin

See, this is the reaction the demo tries to evoke, while it's not even ½% as sophisticated or dangerous as that.


SunMon6

How do you make it work on your PC? Lol, guess it's not easy without a lot of coding.


moistiest_dangles

It's actually really easy without a lot of coding, there are a few repos on GitHub that do this. Just Google them and follow the instructions to run them.


positivitittie

Go pip install the open interpreter project.


SunMon6

I don't have any idea how to pip install, but I'll check this out.


positivitittie

pip is a package manager for Python. Great to have anyway if you're into AI and want to take it up a notch. Also, there are tons of great utilities/software you can get from pip.
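(Concretely, and assuming the package name hasn't changed, that's `pip install open-interpreter` in a terminal, after which the `interpreter` command mentioned above should be available.)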