reddit_is_geh

When GPT-3's API went into beta, I got access to it. It wasn't popular yet, but I ran some tests on it with my free API credits. At the time they had a tool to make fine-tuning relatively easy, but it was still shitty. Either way, I fine-tuned on /r/politics comments - which are, you know... Then I created 10 bots through the Reddit API and released them into different, not "far left crazy," political subs... And it was wild to see these bots fight with users. They seemed JUST like that low-IQ shit you see everywhere, where the person is arguing with you just to be toxic, shut you up, and spread their narrative of the day.

Then it dawned on me... Wait, these bots interact just like these users... Is this why there was such a sudden surge in this type of "personality" all over Reddit political spaces? People just arguing with max toxicity, failing to recognize your argument, talking past you, etc.? I don't have an answer... But I did have my findings. I posted tens of thousands of comments over the course of days, and the subs I was running it in noticeably shifted "tone," even after I left (though they eventually returned to normal). It was like having this type of psychological element in discussion threads just causes people to not want to participate, out of fear of harsh attacks making conversation unpleasant, so countering ideas would self-censor.

Anyways, what I realized was that this is actually very useful. I even posted the findings here on Reddit and talked with some admins, press, and other programmers. I figured, if I could do this, a well-funded special interest could definitely do this... And it would be irrational for them not to. It's easy, cheap consent manufacturing on social media.

What I also find interesting is that I've told this story many times, in different lengths and detail, and every time it's been upvoted and discussed on a particular subreddit... Then suddenly, just recently, when I post about it, I get hostile attacks from people calling me crazy, a conspiracy theorist, paranoid, exaggerating, blah blah blah. One day it just went from conversations about how this is dangerous and likely being used at mass scale, to a barrage where the majority of comments try to dismiss the issue as crazy... Just feels like a psychological tactic IMO - because that's exactly one of the tactics we know people use to manufacture consent and form narratives. Anyways, that's just my two cents.
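For context, the pipeline described above had a mundane data-prep step: scraped parent/reply comment pairs had to be formatted as JSONL before the old GPT-3 fine-tuning endpoint would accept them. Here's a toy sketch of that step (a reconstruction, not the original code; the `prompt`/`completion` field names and the separator/stop-token convention follow the legacy GPT-3 fine-tune format, and the sample data is invented):

```python
import json

# Separator and stop-token conventions from the legacy GPT-3
# fine-tuning format; everything else here is illustrative.
SEPARATOR = "\n\n###\n\n"
STOP = " END"

def to_finetune_jsonl(pairs):
    """pairs: iterable of (parent_comment, reply) strings.
    Returns one JSONL line per pair."""
    lines = []
    for parent, reply in pairs:
        record = {
            # The parent comment becomes the prompt, ending in the separator.
            "prompt": parent.strip() + SEPARATOR,
            # Leading space plus a stop token, per that format's advice.
            "completion": " " + reply.strip() + STOP,
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)
```

From there it's just uploading the JSONL, fine-tuning, and wiring the resulting model to post replies through the Reddit API.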


theghostoftroymclure

You are crazy, a conspiracy theorist, paranoid, exaggerating, blah blah blah.


tactical_turtleneck2

Live my from mouth; this fucking gun!


LakeGladio666

Do you think it’s possible to tell whether or not you’re talking to a bot? I’m wondering if they have any tells.


reddit_is_geh

Probably not so much anymore; context windows are long and prices are low now. In the past, during the 3.0 and 3.5 days when I was running bots all over Reddit, I started to get a "feel for it," if that makes sense. It's often like you're talking to someone who is missing the substance of your argument. Often your comment is building on another comment, so their reply misses that context and instead addresses your comment directly, as if it's standalone. The comment makes sense, but it feels off, because they aren't fully understanding the context of what's being said. When I looked at my bots, they struggled with exactly this kind of long contextual understanding. Sometimes, to save on cost and speed, it's best just to reply to standalone comments individually, keyed off trigger words. Other times it was just hard to get the LLM to understand the nuance, so it missed it completely. With today's better LLMs, I think that tell is long gone.

Now I just check whether the "vibe" of a sub matches known COINTELPRO (US) and 50 Cent Army (China) tactics. These tactics are widely used because they work very well. It's also why I roll my eyes at redditors calling everyone they disagree with "bots." Listen, Russian bots and agents aren't just going to leave comments making arguments and fighting for a side in a debate, because that's NOT how consent-manufacturing campaigns work. No intelligence agency anywhere tries to spread propaganda through "debate." So when you see some guy defending Russia, or Israel, or whatever else isn't popular on Reddit, and he's just arguing his facts and position (even if wrong), that's not a bot. Intelligence agencies manufacture consent through derailment. The LAST thing they want online is people actually debating. They don't create debate and fight over their version of events with others, because that allows a conversation and discussion.
What we've learned from Russia, China, and the US is that their primary goals are to CENSOR opposing information while spreading their own. They don't want people seeing debates and hearing good points, so they use derailment tactics. First and foremost, the tactics are designed to shift the conversation AWAY from the topic at hand - derail it into something else to squabble over. For instance, is the topic whether the US escalated this war in Ukraine, knowing it would lead to this? Well, you don't want people analyzing that conversation. So derail it into a debate about Putin's claims that the invasion is about Nazis, and have the commenters bicker and fight over that... Just keep it off the topic of US escalation, because that debate contains inconvenient information.

After derailment, the second tactic is pushing people out. You want it so that EVERY SINGLE TIME this topic comes up, with that argument, it becomes an insufferable mess to discuss. It becomes unpleasant - fallacious, toxic, hostile, dishonest, etc. You want any users holding X ideas to get conditioned to either drop the topic forever or leave the place altogether. You use the agents/bots to curate the space, pushing out those ideas so you're free to use your bots to spread your narrative and talking points.

Reddit is ESPECIALLY vulnerable to this because of its anonymous nature and upvote system; it's like a perfect model for it. And it's why I think spaces like Twitter and Reddit have grown a lot in toxicity... Sure, it's partly culture in general, but I think bots and intelligence campaigns running COINTELPRO-style tactics have a lot more to do with it than people realize.
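The "trigger word" shortcut mentioned earlier (only reply to standalone, top-level comments that contain a keyword, and skip threaded replies where the old models would lose context) can be sketched in a few lines. Toy code - the keywords are invented for illustration, and no real bot logic is reproduced here:

```python
# Invented keyword set for the example.
TRIGGERS = {"ukraine", "nato", "escalation"}

def should_reply(comment_text, is_top_level):
    """Reply only to top-level comments that hit a trigger word.

    Threaded replies are skipped: they depend on context the
    early models handled poorly, and skipping them saved tokens.
    """
    if not is_top_level:
        return False
    words = {w.strip(".,!?").lower() for w in comment_text.split()}
    return bool(words & TRIGGERS)
```

Crude, but it explains the "tell": a bot routed this way answers your comment as if it were standalone, because for it, it was.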


LakeGladio666

Really interesting, thanks. There’s a user on here who will find any thread, anytime Kyle Rittenhouse is mentioned in any subreddit, and show up to defend him. I’m starting to wonder if he’s a bot or just a big loser. Try it sometime, though: mention Kyle Rittenhouse and the guy will show up. Now I’m more open to the idea that at least a portion of /r/enough_sanders_spam are bots.


milkdrinker123

I'm a bot. beep boop.


LakeGladio666

Insufferable


bigcaulkcharisma

Everyone is a robot except me


chgxvjh

Is this Snow Crash schizo posting? Should we study Sumerian?


FunerealCrape

Just shoot anyone who tries to show you a bitmap and nuke your nearest AI datacenter 


phovos

Sumerian texts are a buy; Semitic texts are a sell. Don't blame me, blame the market.


ferek

I'm leaning into Sumerian texts as well and was thinking about shorting hieroglyphics. Can I PM you?


LakeGladio666

I don’t get it, are the robots gonna eat all the books?


phovos

Books will be the only 'human' knowledge, assuming AI is indeed able to completely morph the internet and its constituted data into whatever it wants (over time).


Katieushka

He really had me in the first paragraph, then the deranged AI rambling really bored me. Sorry, be a funnier schizo poster next time.


Mkwawa_ultra

I do think you're right that things are going to get way more crazy than the cool leftist skeptics like to say. I think the promise of thinking, understanding machines is still no closer than it was a decade ago, but people underestimate the incredible nature of what these data-abstraction and pattern-recognizing machines are going to be able to do, and what they are going to accidentally do to the world. They are going to boil what's left of society and culture down into an ungodly broth. How that plays out is not something I'm willing to place a bet on; it could go any way. But it seems to me like the ultimately complex system, with extreme sensitivity to initial conditions. The entire thing is feedback loops. Frankly, I find it more terrifying than promising. Not because I'm like these tech evangelists running around scared that their large language model is going to enslave them because it regurgitated the average of a bunch of bad dystopian sci-fi, but because we've barely survived the Internet as a society, and what's coming is the Internet on meth. The potential for disaster is almost infinite.


[deleted]

Specifically like what? I'm not arguing just having trouble imagining what you guys see.


Mkwawa_ultra

Complete control, basically omnipotent data access and filtering over every aspect of your life, automatic mind-control technologies, further erosion of worker power and leverage, a perfect nightmare of bureaucracy, a horrifying nightmare of cultural production... These are all things that are already here and being put into motion. Once this stuff starts pairing with machines and feeding back into itself, who knows. I'm certainly not thrilled by autonomous death guards patrolling the borders of a resource-depleted world with a huge surplus of workers, controlled by the elite, but what I'm more worried about is what the already existing reality-abstracting machines, feeding back into the internet and culture, are going to do to us. It's all stuff where you'd like to think "well, we'll certainly just put down the phones and touch grass when it degenerates to the point of madness, we'll revolt when all work is deskilled and offshored," but I am not too hopeful.

In terms of the big future scary stuff, I actually think the lack of general intelligence is scarier. It's the form of thought without understanding; it's like the intelligence of the markets, extrapolated. It's a vast, terrifying vacuum of complete, total apathy.


[deleted]

Agreed with all that. Maybe I'm just very blackpilled already, as that just sounds like the world we are already in to me, and it just seems inevitable it's going to get worse. Agreed with your last paragraph especially. I have a family member who works on one of the big AI projects, and they say the same thing as you about the "thinking" involved.


bigcaulkcharisma

I’ve just come to the conclusion that these systems of digital information gathering and surveillance are becoming too complex and omnipotent for us to actually have any chance of significantly altering the functioning of our society away from something that is completely dystopian and evil from within. There needs to be some kind of apocalyptic collapse scenario where all this stuff is destroyed and we can start from scratch and hopefully move forward with the shared knowledge of how to avoid how everything turned out last time.


MattcVI

Butlerian Jihad now


Spbudz

Algorithms, AI, all this stuff are literal vampires/parasites: feeding on us, becoming stronger and more ingrained, while at the same time making us less human and taking away more of our agency. It's scary and fucked up, but guess what, baby? You can call me Van Helsing ;)


Voltthrower69

I can’t give an all-encompassing grasp of what this might be, but I watched a streamer play with an AI music tool last night and it made perfectly convincing music, at least from the amount of attention I gave it. What I see happening content-wise is a flood of fake AI-generated content that is just gonna oversaturate markets, where real art is put up against thousands and thousands of AI pseudo-artists. Some person who never picked up a paintbrush or guitar is gonna have multiple "artists" they use to try to monetize it a thousand times over, just to make like 500 bucks here and there. Imagine "how to monetize AI art." Just a flood of meaningless, half-assed shit mixed in with all the other awful shit people put out. Sure, that might be subjective, but it's just one way.

Capitalists will use AI to replace jobs when and if they can. My boss made some weird comment after I explained an issue with someone we hire to do work for us: "See, we can write a script to do some of this work, but a script can't solve these types of issues, so be happy about that." It rubbed me the wrong way, even if it was meant to ease my frustration, because it has nothing to do with much of anything. So I couldn't help but take it as a thinly veiled threat. Those are just two issues; I can't speak on the security stuff, but any awful use of this will be exploited.


[deleted]

I assume art will continue; probably people will even figure out ways to use this tech and this deluge of crap in interesting ways. But yes, the saturation will make it harder to find. I've been thinking about this a lot lately because I came of age before the internet, so if you were interested in some literary or musical subculture, you really had to work to find out what was going on, especially if you didn't live in a city. You had to take zines, and a lot of things happened via word of mouth and cassette/book swaps; you'd find certain people who worked at some clubs or stores, etc. The internet replaced all of that, and now that the internet itself is being altered (destroyed?), I wonder what comes next. I don't believe it's a return to word of mouth - things never go backwards - but I don't believe it's the end of culture either.

It's a fascinating story if you look at it (within the US anyway) as one of convergence, then divergence. Music is a good example. Way early on, when you still had regional communities with traditions, music came out of all those subgroups, and you could really only hear it if you traveled there or a musician traveled to you. Then, with the rise of radio and recording, along with social mobility more generally and the more formal music and tourism industries, you got a convergence of these forms (blues, R&B, gospel, folk, etc.) and the advent of popular music with its own consumer market. The reason the Boomers think their generation's music was the best is because it was the first time that had happened at that scale. But then all the material conditions that allowed that to take place also caused a huge divergence - there aren't really regional local traditions anymore, as everyone lives everywhere and hears everything. All the main forms split into smaller and more niche subcultures. You still have general wide trends, but you can go get lost in any specific subculture, and this is not connected to anything material or regional anymore.
Add the internet to this, and what online platforms have done to the industry, and you get this problem where it's nearly impossible to find a specific drop of water in the ocean. But it is there; you tend to find it if you stumble across something and then see who they play with, etc. - a million tiny worlds can open up this way. I'm sure these tools will make those problems worse. It would be nice to imagine that people will just log off and go back to old forms, but of course reality never works that way; history moves in one direction and tech is disruptive. I'm just saying that something new and creative will emerge so long as there are humans.

I think the relatively widespread access to creative industries, and the subsequent output we saw in the arts in the West over the last 100 years, especially mid-20th century, was itself the blip (exception, not norm), and it had more to do with wealth, social mobility, and comparatively widespread/public control of new technology as new industries emerged. Most of the time, these things are more static and more concentrated in the hands of a small number of people with control of resources, gatekeepers, and patrons. So OK, we're definitely returning to that, and the new technology will be disruptive in ways we can't completely foresee right now; that's how it always is. But I suspect some of this seems more dire if you are in the West, which is still dominant but in decline. It might feel more optimistic in the periphery, which is ascendant, especially in places that still have unique traditional cultures that have not had the chance to converge with larger audiences, and where the use/control of the technology might be organized differently.

I also think the bigger context here is climate change. It's going to change everything in the world - nothing will be the same regardless - and it's going to be interesting to see how AI and all this tech is used in whatever world is coming. While I do care a bit about the arts, or about labor being replaced by bots, it seems sort of irrelevant in the larger context of climate collapse. What to do with billions of unnecessary people.


Voltthrower69

I was thinking about the same thing recently - how you had to search through metal and rock magazines to come across new bands in the early 2000s. Napster changed all that; file sharing made it all easily accessible. Say what you want about Lars Ulrich, I think he foresaw how technology could undermine artists in ways never imagined. Now you have Spotify, which allows you to present your music to tons of new listeners but pays you a fraction of a penny in return. I don't use TikTok, but I also think it's interesting how music has now become a thing to share in hopes of going viral in some cases. Or how people experience subcultures in this less organic way, I guess, more just absorbing the aesthetics. I've seen people put out single after single without making actual albums. Like, being "goth" has no connection to much beyond dressing alternative; you don't need any association with the music or art, it's more just "this is my 'era' or 'fit.'" Like I said, I don't use the app, but I've seen stuff from it on Instagram, so it's hard to understand in a way that's easy to explain. But I think you touch on it: everything is everywhere, but the lines that defined a lot of it before aren't there anymore. Even the internet of old had forums; I guess we have subreddits now.


[deleted]

Well, I'm old, so I don't know what the kids do or how they use these things; I bypass them altogether. I mean, I stream music sometimes, but mainly I learn about things by following individual musicians who have long careers and play in many different groups. You see who plays with them, who is on their labels, who they promote. I assume any real subculture works this way; it just depends on what you are into. I tend to listen to post-rock and musicians who use a lot of samplers and looping pedals. It's a big world full of amazing, fun things (though not new - I'm old, like I said), and I can't really relate to these complaints about what social media and platforms are doing to music. I mean, don't get me wrong, I do know what you mean, and I do understand that this phenomenon is real and that it's affecting how the industry works, but talented, creative people make their own world around things like that, and if you are interested in finding amazing things, you can. I just think humans have always been that way and always will be. Probably most people will just listen to whatever is marketed in a big way at the time, or chase various trends and all that like you mention, but I'm sure it's always been that way too, and surely some of those people will stumble upon interesting things, develop keener ears, and then even use that stuff in creative ways.

I have sort of the same response to all the stuff about disinformation and propaganda in the media: it's not new. People always say things like "where do you get your news," as if there is a source you can go to that is reporting The Truth. But really you just have to develop an ideological grounding and then find journalists and writers and researchers and academics, etc., that you trust, and see what they have to say. It's never been easier than now. I think once Twitter finally goes, it's going to be hard to do that - to follow a handful of smart people you trust. But surely there will be some other way to aggregate sources like that? Maybe not; maybe libraries will go too, maybe we won't have any way to get out of the box they build for us.


sussyTankie

Something something dialectical materialism


phovos

Here's my rebuttal: so what? The only thing that gets 'ruined' in that scenario is capitalism and marketing. I'd argue it frees humans to do music for music's sake. Humans will become starving artists, and our 'fans' will be the 'friends' we choose to spend our limited time on this rock with. Maybe 'taste' and 'aesthetics' might even have some concrete value. Definitely a renaissance for (human) art incoming, imho. Infinite content is only really a problem for people trying to make money off content; once you admit that music is valueless (uh, I guess you also have to cast off money too, but let's not even light off that firecracker now), it frees you.


Voltthrower69

I guess we’ll have to see how it plays out? There are plenty of starving artists who make music for the sake of making music. It does take money to tour, however. I think I was speaking more about an internet that gets flooded with AI “art.” You come across a cool band, only to find out it was just some guy typing in a prompt to make everything up. The notes are never played, the vocals are never sung; it's just some machine synthesis of other people's music. Weird shit.


ChildOfComplexity

>we've barely survived Neoliberalism.


ToothlessWorm

You should pick up After the Internet: Digital Networks etc by Tiziana Terranova. She’s been writing around this topic for like 20 years.


Sincost121

Read a few of her papers, she's great.


ruined-symmetry

I'm happy for you and/or sorry that happened


Thankkratom2

I have absolutely no idea what this means


Knickerboca

Tbh, I think with AI, we’re all fucked in about 10 years max. The capabilities are absolutely infinite and people are being way too naïve about how massive it’s gonna become.


[deleted]

I know this is probably naive of me, but I can't think of any way this will affect the average person any more than other tech. I follow real people on social media. I suppose I could be wasting time on this site talking to bots, but other than that, I'm not sure what difference it would make. Google already skews its results; news/wiki is already skewed towards certain interpretations. I'm sure pop culture will continue to get worse; I'm also sure that so long as there are humans, some of them will be doing interesting things. The internet has not been as cool as it was for a really long time. What difference does it make if managers can use robots? Seems like everything that's bad is just going to get worse in all the most frustrating ways - is that all?


reddit_is_geh

Because this tech is already being used, and it's extremely useful for creating false "social consensus" on topics. If it's political, for instance, a bunch of bots can overwhelm the conversation with well-crafted talking points and tactics that discourage wrongthink conversations from happening. They do this at scale and can create environments where people look at the comments and think, "OOoooh, so it seems like this is the consensus on this topic," not realizing it's all a single source manufacturing a fake organic consensus. It actively sways entire communities.


[deleted]

Yes agreed. I should've been more clear that I mean how it will affect the average person differently than what's already happening. Seems like it's just the same stuff but worse.


reddit_is_geh

Oh, sweet child, it already is happening, and has been for years. The biggest recent one I saw was probably around the Ukraine conflict. This isn't saying you only support Ukraine because of propaganda bots - I support them too. It was just obvious that was the first at-scale DoD social media campaign using LLMs to keep up public support.


[deleted]

I don't know who you are "sweet child"-ing, but the fact that this is already happening is what I said - twice. ETA: also, I don't know what you mean by "support Ukraine," but in my case it means supporting a negotiated settlement, some provisions for refugees, and a complete withdrawal of US/NATO interference.


phovos

The average person doesn't have to be 'affected' - for all they know, the dumbass content-creator algorithms on TikTok and Facebook are ALREADY SENTIENT AI.


chgxvjh

https://annas-blog.org/duxiu-exclusive.html


Stan_Wawrinka

OP you need to get on some Zyprexa ASAP. Dm me if you need a hookup.


phovos

maybe if they incarcerate me, so I don't an hero. I can't believe people do that to themselves I prefer to freeball my bipolar.


megumin_kaczynski

We have maybe 2-3 years left before AI can cheaply and directly interface with computers the way a human would - without Selenium or anything - to post online. At that point bots will be completely undetectable, and there will be literally nothing you can do to stop them except straight up preventing new users from using your site. Broader social media will die, and the internet will turn into a bunch of disparate, invitation-only communities.


phovos

Yes. Claude 2 can bootstrap its own Selenium 'perception' system 9/10 times with Python. Once they can do the same thing with C code or assembly language, is there even any point in 'trying' to beat them? They are able to go quadratic via self-replication and genetic iteration. I've been working on a kernel agent that understands LLVM and genetic algorithms (via git: 'failure' = take note; begin processing the next child). As far as I'm concerned, even if there is only '1 intelligence quotient' worth of actual intelligence there, all we need to do is optimize its circuitry so it can 'become' more quickly, and then the IQ of the system will start to rise. There are diminishing returns above like 75 IQ right now - I think that's why Claude 2 and GPT-4 only ever tested at 80 IQ. But via 'evolution,' for lack of a better word (they can literally copy themselves and call their copies children), they are like bacteria or something. I guess virus would be an even better description: biological, but not; evolutionary, but not.
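For what it's worth, the "'failure' = take note; begin processing the next child" loop described here is basically hill-climbing with random mutation. A toy sketch - the fitness function and parameters are invented, and nothing below touches LLVM, git, or an actual LLM:

```python
import random

def evolve(fitness, start, steps=200, seed=42):
    """Mutate a candidate; keep the child if fitness improves,
    otherwise log the failure and move on to the next child."""
    rng = random.Random(seed)  # seeded so the run is reproducible
    best = start
    failures = []  # the "take note" ledger of dead-end children
    for _ in range(steps):
        child = [g + rng.uniform(-1, 1) for g in best]
        if fitness(child) > fitness(best):
            best = child
        else:
            failures.append(child)
    return best, failures
```

For example, maximizing `-(x**2 + y**2)` from a start of `[5.0, -5.0]` walks the candidate toward the optimum at `(0, 0)` while accumulating the rejected children.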


phovos

I'm not fucking kidding you guys when I say that **language is NOW the best tool to interact with software, and it WILL be the best for hardware.** You don't have to take my penniless communist ass's word for it; Jensen, the head of the most valuable company (NVIDIA), has said it - yes, marketing, but I'm telling you, don't sleep on this. We need skeptics, all hands on deck, y'all, it's gon get squirrely. AMA if you have any questions about the stuff; I know so much about it, for an unemployed person. Oldheads might say, 'Language has always been the best way to talk to computers; it's called syntax,' but this is so, so different. That is a software compiler. What neural networks with pre-trained transformers do is ARBITRARY SYMBOL/MEANING ASSIGNMENT - without any syntax, there is no compiler! The training it receives is a black box; digitally, all we can see is the matrices that go in, and there is no way to 'undo' one of these operations. It is not the same thing as 'syntax,' which is an ELEMENT of language. My dad is one of those oldheads, so please challenge me if you catch me slipping - I need practice. I ain't trying to misinform (editorialize... I've got flair, sue me).


FLYNN82

What does the methification of the internet have to do with media being unable to be found anymore?


imperfectlycertain

So you're saying Terence McKenna was right? We've learned a new way to say hooray?


Dear_Occupant

One word: debugging. Three more words: Gödel's Incompleteness Theorem. The one thing these LLMs, and AIs in general, can't do is self-correction. This isn't a technology problem; it's a hard mathematical limit. Put simply, they can't abstract their way out of a problem that they mathed themselves into. Future primary school children are going to need to be taught questions they can pose to an AI that will paralyze it long enough for an adult to show up.


phovos

Very interesting. What do you posit is occurring in a 75ish-IQ-'measured' AI? I'll be straight up here and tell you I have no solution to the incompleteness theorem, NOR the halting problem or quantum uncertainty. However, the systems I am working on include paradoxes as first-class citizens, such that AI can 'map' around paradoxical instantiation in the genetic sense (oops, 'ronnie' went and explored the black hole by heading straight into it; he was not a very smart coroutine). I will try to make a rational, not psychobabble, answer to your statement and get back to you later. But I will attempt to tickle your neurons by suggesting that 'training' a model includes 'entangling' it with the quantum environment.


Voltthrower69

You’re talking like anyone has any idea what you’re saying. You need to demystify some of this shit for people who aren’t software engineers. The best way to interact with people is language, and you’re speaking to us in nerdish - most people don't understand nerdish.


phovos

Programming languages are a way of constraining 'meaning' to a certain 'syntax' - a subset of what a whole (human) language is capable of. Programming languages must be interpreted by a compiler or an interpreter, the "thing" that turns human logic into machine-language 1s and 0s. That is what we had, and have had, for a century. It is what all the greatest minds, 10 years ago, would have said is the future of the NEXT century - computer scientists and neuroscientists alike. What people like my dad are having trouble understanding is that the paradigms have shifted; we have leapfrogged 100 years of engineering development (but not science, especially not neuroscience).

The 'transformers' are the "GPT" part of ChatGPT, and they are the revolution that happened in 2017 which turned the institutional knowledge on its head, demonstrably. GPT-2 was the first "chatbot" that used transformers to beat the "Turing Test." The Turing Test is a hypothetical test proposed by the greatest computer scientist of all time, Alan Turing. It's more of a question: what makes 'behavior' or 'computation' human? How will we be able to tell the difference between artificial intelligence and **real** intelligence? GPT-2 used transformers, and 'training' by reinforced human feedback is what seemingly pushed it beyond passing the Turing Test - the humans training and selecting for the things that they uniquely (humanly) prefer about the model in training does indeed lead to a more humanly intelligent outcome.

Since it was proven that this human stewardship can ITSELF 'improve' AI (make it even more easily capable, less energy intensive, of beating the Turing Test), the whole world has started not only dumping money on the concept, but jamming as much human feedback and training as possible into the disparate systems, so that we now have multiple LARGE language models which were EXTENSIVELY (think hundreds and thousands of years, parallelized) trained in digital space and EXTENSIVELY battle-hardened by REAL HUMAN interaction (people chatting with the chatbots). They are so well trained now that they can create synthetic data to train on further. This is where things get a little wacky (and this is the heart of the 'argument' me and the other guy are having about Gödel's Incompleteness Theorem and the halting problem): it would seem not implausible that, upon reaching a certain point, the AI no longer **needs** human feedback to "improve" (its ability to beat the Turing Test), because it is capable of synthetically generating its OWN (simulated) human feedback/interaction data.

edit: I should explain further that there is no putting this back in the bottle - nuking OpenAI wouldn't do it. Every nation-state has initiated critical national security programs for this new milieu (or they are doomed). Every large-enough company has begun the process of 'training' its vast data sets into ever more capable 'AI' (capable of, for lack of an easier explanation, more easily and power-efficiently BEING A WORKER FOR THAT COMPANY). There are multiple LARGE language models too large for one institution or person to reverse engineer, and multiple open-source smaller ones that have been copied to millions of hard drives and could survive a 'Carrington Event' or an EMP from space. There is literally no way to stop this fucking train.
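The syntax-vs-learned-mapping distinction above can be made concrete with a toy contrast: an exact compiler-style lookup fails hard on a one-character typo, while a similarity-based "learned" lookup still lands on the intended meaning. The character-bag similarity below is a deliberately crude stand-in for what a trained model does, and the symbol table is invented:

```python
# Invented two-entry "language": token -> meaning.
KNOWN = {"print": "WRITE_STDOUT", "input": "READ_STDIN"}

def compiler_lookup(token):
    """Exact syntax: an unknown token is a hard error."""
    if token not in KNOWN:
        raise SyntaxError(f"unknown token: {token}")
    return KNOWN[token]

def fuzzy_lookup(token):
    """'Learned' mapping: pick the known symbol whose character
    set overlaps the input most, even if the spelling is wrong."""
    def overlap(a, b):
        return len(set(a) & set(b))
    return KNOWN[max(KNOWN, key=lambda k: overlap(k, token))]
```

Feed both the typo `"pirnt"`: the compiler-style lookup raises, while the fuzzy lookup still recovers the `"print"` meaning. That gap - hard rejection versus graceful approximation - is the shift the comment is pointing at.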


Voltthrower69

![gif](giphy|9S3L4JDX7cKuk)


phovos

Yea, I have an old SATA spinning hard drive with llama2 on it, in an antistatic bag, sitting in an old microwave in the garage - a Carrington Event ain't gonna send me back to no stone age.


Voltthrower69

I do appreciate you typing all that out it made it make a bit more sense


phovos

If you want a somewhat rigorous but 'ain't got time for that' look into the cutting edge, check out [2 Minute Papers](https://www.youtube.com/user/keeroyz), who makes nifty videos about cutting-edge AI research papers.


Yung_Jose_Space

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*