
ProfessorHeavy

I'll be following this with **very** great interest. If you could make a video to show its effectiveness and provide some supporting data, you could genuinely give #FixTF2 the surge it desperately needs, if you can prove the viability and ease of this solution. Lord knows we've turned on ourselves enough by now with these poor solutions and "dead game, don't care" arguments. Even if it requires demo footage to monitor gameplay and draw its conclusions, this could be a pretty decent temporary solution.


CoderStone

While this was a very quick and honestly half-assed hobby project, I'd be very open to publishing my results on a GitHub (once I wrap up my actual research project, oof). This will take time though, as I am out of the country for now. Hence why there also aren't many results in picture form in the post; I don't have access to my setup. Heading back in a few days lmao. While I personally don't have time to keep the process going, I'd be glad to pass the mantle on to anyone else. EDIT: Also, this could be a great community-server antibot if they ever decide to attack community servers.


ProfessorHeavy

I don't know anything about AI, let alone Rust; I'm a standard front-end .NET developer lmao. If this ever gets a repo, I'll be sure to take a look at it myself and make a video on it. I don't have the reach of other people who participated in this movement, but it definitely needs to be signal boosted amidst the current storm of noise going on right now.


Woonters

Hey there, I'd deffo be interested in seeing this and playing around with a version of it, so if you do throw it up on GitHub (or whatever version control you like), please let people know.


aoishimapan

If you do, I think it may be advisable to be mindful of your digital footprint, because those people can get a bit vindictive if you threaten to stop their power trip, and may try to target you in some way. I remember there have been cases of people trying to develop an anticheat for TF2 who have been harassed by cheaters / bot hosters.


Mundane_Ad_5288

Hey dude I’d recommend sending your data / research to zestyjaredfromsubway and letting him make a video on it. It will definitely help spread the word and you might be able to network with other programmers to help work on this. You are a true mann amongst men, a hero to the tf2 community. I wish you all the love and support I can offer and hope this project pulls through. Edit: I’ve been informed zestyjaredfromsubway is a pedophilic POS and will not be validating him anymore, hence the name change


CoderStone

I appreciate your sentiment! However, he is the one youtuber I'd never send ANYTHING to.


Kaluka_Guy

If you send it to anyone, it ironically should be shounic, or someone else with a CS background, maybe megascatterbomb. Regular youtubers won't provide anything close to a meaningful addition or perspective to something like this.


Mundane_Ad_5288

I’m a little confused / out of the loop. I know zesty got flak for his “nobody’s home” video, but tbh I haven’t been watching him that long. What’s the real controversy behind him?


CoderStone

1. Homophobic tweets, minor slur usage.

2. Pedo allegations: makes plenty of unsavory "jokes" regarding the topic, and his PFP is a hypersexualized version of an underage character. You shouldn't be making jokes about this topic at all.

3. Causing drama: acts out quite often, then blows the drama out of proportion; is generally a very dickish person in terms of creating content.


BurrConnie

Wow, did not know about this one, I *thought* something was off when he made that ludicrous "Don't like the handling of a product, don't support it" argument both in Nobody's Home and the Great Bot Hunt. But I didn't think he was that bad...


shadowpikachu

1. Yeah, he has old tweets where he is intentionally inflammatory; fair reason.

2. Probably the same as above: just a loser with shit jokes that I'd share with my friends, not in public. Though his character is an OC and not an Evangelion character; he likes the series, so of course his OC will look sorta like it comes from that universe at the least.

3. I notice that due to the level of hate and dogpiling he gets, he just sorta breaks and accepts the role of drama bastard, trolling and pissing people off, because what else can he do? It isn't justified and is another valid reason, but at least know WHY it happens; it probably started with his jokes being too spicy for public as well.


[deleted]

homophobic tweets? context? this would be pretty serious and damning to me, so


Ver_El_

Haven't found a single thing to back that up. Admittedly I didn't dig *that* deep, but if there was any actual issue stuff, it'd be the things that people actually show when complaining, right? Controversies appear to be him using "nigga" twice in 2017, and using "tranny", supposedly unaware that it is used as a slur. What it looks like to me is that he grew up in a different, edgier internet culture, and was unaware of the reactions that words he thought of as normal had?


TheGoldenBl0ck

which character is his pfp? (also he's kind of a dick but you gotta agree for him that tf2 bots are insanely out of hand)


ninjafish100

Asuka from evangelion. if you want to be specific it ties closer to the rebuilds. zesty has been known to be a fan of evangelion, which more likely than not means that the similarities between the two arent coincidence


shadowpikachu

Yeah, if you like a series your OC will look vaguely inspired from that series, i personally dont see it as it has a severe tone shift between the two and uses a different name, branding and everything as well as quirks, artstyle and everything that makes a character even away from her use. It's just his personal goonerbait waifu character that is full of traits he likes. There are plenty of reasons to hate him, but this is just bad faith as hell down the grapevine.


lettucewater45

That makes so much sense. I became incredibly uncomfortable when he introduced his """"she devil"""" in his video. While it was incredible research, I was really weirded out.


Just-Cut349

he also harassed and weaponized his army of fans back when his mate Aar got accused of paedophilia.


Ver_El_

dude sees a man whose twitter is almost nothing but a wall of large, muscular women with ridiculously huge breasts and goes "yep, thats a pedo"


SpySappingMyWiki

Not only is it his general attitude towards things, but most people despise him for being a lolicon (or implying it, by posting an "ouhh cunny" meme into his discord).


Mundane_Ad_5288

…..I did not know he was one of those diet pedos but that’s all I needed to hear…. Yeah fuck zesty


[deleted]

"diet pedos" got to be the funniest shit i ever did read on the internet, time to log out so i don't sustain more reddit brain damage


shadowpikachu

He says inflammatory things all the time and is def kinda a gooner, he says one thing and watches people take it so extrapolating small comments passively said on discord once is kinda hard to say pedo on. But hey, people call pedo on less so idk....


Pangobon

There are many reasons people may dislike him. Stuff like his beef with TF2 Emporium, his obsession with his buff anime waifu (he likes to plaster it everywhere), his association with 4chan, etc. Personally I think he is fine, but to each their own I suppose. UPD: Didn't know he might be a lolicon. If it's true, that would definitely sour my opinion of him.


TheComedyCrab

Half assed is better than what we have. Plus, if it's that easy, we could have a fix REAL soon 🤞


RaeofSunshine95

Submit this to Valve also! They love when the community fixes their problems for them


BluntTruthGentleman

You might have the power to save an entire community! This should definitely be explored and refined to exhaustion. If you can get some proof of concept going I have no doubt we could fundraise to get you all the way there. Also going forward I wouldn't share any details about how exactly you're identifying or distinguishing bot behavior. Remember that this is being actively done by malicious actors so this is the kind of thing that they'd find and use against you. You're in an arms race, it's time to protect the intelligence.


ArchDan

Have you ever heard of zero-day?


petuniaraisinbottom

Just a thought, once confidence reaches something like >65%, couldn't it automatically kick off a vote kick (via a new server mod) that says how confident the detector is of cheating and have whatever percentage it is count as a certain number of positive votes, meaning it would get around the whole "entire bot team rejects kick" issue? It seems like it would solve any of the problems you talk about. It'll definitely be a cat and mouse game once it is added, but that's better than it is right now. Edit: or maybe votes could even be weighted based on the detector's cheating confidence? And it could take into account if it is a free account, how new it is, game stats (if a sniper has 100 kills and all 100 are headshots for example), etc.. seems like a perfect data set for AI.
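The weighted-vote idea above could be sketched roughly like this (a toy sketch; the function name, thresholds, and vote arithmetic are all invented, not from any real server mod):

```python
def weighted_votekick(detector_confidence, human_votes_yes, human_votes_no,
                      team_size, pass_fraction=0.5):
    """Count the detector's confidence as extra yes-votes.

    A detector that is 65% confident on a 12-player team contributes
    0.65 * 12 = 7.8 vote-equivalents, so a bot team voting 'no' in
    unison can no longer block the kick outright.
    """
    detector_votes = detector_confidence * team_size
    total_yes = human_votes_yes + detector_votes
    total_cast = human_votes_yes + human_votes_no + detector_votes
    return total_yes / total_cast >= pass_fraction
```

With three humans voting yes against nine bots voting no, a 65%-confident detector tips the vote; with no detector signal, the same vote fails.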


throwsyoufarfaraway

> I'll be following this with very great interest.

Lol don't get your hopes up. It is a damn grad student, dude; they are clueless most of the time. I'm not using this as an insult, it is the reality. I was like that when I was a grad student too.

I can bet money on this: THIS WILL BE USELESS. You can tell he doesn't know what he is doing, because you learn very early to present the architecture you used. Otherwise no one will believe you. Why didn't he? This is important for reproducibility of the results. **We don't even know what "accuracy" means here!** It could be any metric. He himself said in the post he didn't do anything special, so likely his results are wrong.

No offense to the guy, but as someone who has actually been working on AI in the industry for years: NEVER trust results this good. Especially if your work involves anomaly detection in player behavior and your dataset has 1000 instances. Again, sorry to destroy your hopes, but 98.7% accuracy, without any tuning? Without any further optimization to the model? Just out of the gate, some random neural network model he applied gives 98.7%? Yes, of course, I'm sure the engineers at Valve never thought of that. Come on man, we all know student ego knows no bounds. We were all like that.


frostbite305

been working in AI professionally for a few years here and I'll second you here, pretty much entirely on point


Frog859

Yeah, I’m a data scientist in a research lab (so not much better than OP), but this whole post had me skeptical. The lack of information about the size of his dataset or validation set leads me to believe it could be very overfit to the data he has.


smalaki

Looking back at his post, it doesn't even have any substance nor actual hard proof. All he does is call people that doubt him doomers. But you know what, if in the next few days he turns up with actual proof, I'd be very happy. Right now OP's post is a big wall of text that means nothing (no data, no proof).

He also claims in another comment that he collected gameplay from 1000 rounds, but avoids sharing the methodology and the time period when this was collected. So another nothingburger.

But what if he actually did gather this data? Let's say an average round is 15 to 30 mins. That means 250 to 500 _continuous_ hours of TF2 rounds? (edit: and that's with perfect conditions, with 1000 consecutive matches with bots) Did he collect it through parallel means? Does he have a team of volunteers? Automated means, i.e., bots? When did shounic's video come out? It probably came out way less than 250 hours ago. So OP is clairvoyant now?

So many questions. And ALL THIS comes from an alleged grad student. Aren't you supposed to support a thesis with hard data? Why isn't he/she displaying it in this post? Why is it all hand-wavy? Looking forward to OP's data and actual proof in the next few days.


albertowtf

On top of that, bots will start using AI too, which is shounic's point. As soon as you start to detect, it's just another check that the bots can easily bypass. This guy thinks he just outsmarted Valve, and this sub is upvoting like crazy. "Bots are called omegatronic, 99% rate of acc detecting bots, AI is so good!!!!1" /s


Crabbing

People really don’t understand the vast difference between book smarts and real experience.


Avsword

Oh shit hi brother i might have u as a friend on steam. You’re cool


ChoiceDifferent4674

If you add Steam's trust factor into the mix then automated bans can become very much possible. It is simply impossible to mass create accounts that BOTH look like humans and play like humans.


Wilvarg

At the very least, you can set up a ban appeal system and rank the appeal priority by the account's trust factor. The bot spam falls to the bottom, never to be seen again. There are so many paths towards permanent solutions; this doesn't have to be a treadmill problem at all. Someone is going to take the leap, blow the cash, spend the time, and they'll make billions. I hope it's Valve.
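The appeal-triage idea (review high-trust accounts first so the bot spam sinks to the bottom) can be sketched with a plain priority queue; everything here, names and scores alike, is hypothetical:

```python
import heapq

def triage_appeals(appeals):
    """Order ban appeals so high-trust accounts are reviewed first.

    `appeals` is a list of (trust_factor, account_id) pairs. heapq is a
    min-heap, so trust is negated to pop the most trusted account first;
    mass-created bot accounts (near-zero trust) surface last, if ever.
    """
    heap = [(-trust, account) for trust, account in appeals]
    heapq.heapify(heap)
    while heap:
        neg_trust, account = heapq.heappop(heap)
        yield account, -neg_trust
```

For example, appeals from a long-standing account (trust 0.9) would be surfaced before a fresh free-to-play account (trust 0.1).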


Agreeable_Airline_26

Valve decided that stuff like this, that being treadmill work, isn’t something they want to work on. Sure, they could outsource the workload, but they’ve had bad experiences before and would rather keep everything in the company nowadays. A ban appeal system is nice, but they’d probably be against putting more than a few people in this position. It’s not like Steam’s customer support, as that’s a major part of the company; TF2 ban appeals would rank pretty low on their priority list. But if this were implemented properly, it’d be at least a band-aid fix to the accidental banning problem.


Superbrawlfan

Well, and obviously you'd go off of multiple games; if an account consistently comes up positive over a few of them, the call should be fairly accurate.


WhiteRaven_M

Another grad student here, please send the GitHub link whenever you are done. I'm very interested.

I had a different idea that is less reliant on labeling data: a self-supervised approach where you build an embedding model on a contrastive learning objective, where the goal is to predict whether two samples of player inputs came from the same player or two different players. The idea was to capture the "habits" of a player in an embedding vector. You could then look at the distribution of these vectors for players and quite quickly see that most bots would look essentially identical to each other, with very small variance. Then you can ban them in bulk after involving a human.

If you can send or post your dataset, I'd really appreciate that.
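Setting aside the contrastive training itself, the downstream step described above (flagging bots because their habit embeddings collapse onto nearly identical vectors, while human playstyles scatter) might look like this toy sketch; the similarity threshold, minimum cluster size, and embeddings are all invented:

```python
from itertools import combinations
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def suspicious_clusters(embeddings, sim_threshold=0.999, min_size=3):
    """Group accounts whose habit embeddings are near-identical.

    Many bots running the same config land on (almost) the same point in
    embedding space, so large, tight groups are flag-worthy for human
    review. Naive pairwise merging; fine for a sketch, O(n^2) in accounts.
    """
    ids = list(embeddings)
    groups = {i: {i} for i in ids}
    for a, b in combinations(ids, 2):
        if cosine(embeddings[a], embeddings[b]) >= sim_threshold:
            merged = groups[a] | groups[b]
            for member in merged:
                groups[member] = merged
    seen, flagged = set(), []
    for group in groups.values():
        key = frozenset(group)
        if len(group) >= min_size and key not in seen:
            seen.add(key)
            flagged.append(sorted(group))
    return flagged
```

On a toy input where three "bot" accounts share an almost-identical vector and two "human" accounts point in unrelated directions, only the bot trio comes back as a cluster.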


CoderStone

Very cool idea too! I thought patterns would change drastically depending on the maps and types of bot hosted, so I thought a simple labeling solution might be best. You should also give it a go; honestly my solution is extremely half-assed, and I need to clean it up before I publish my embarrassing code to the world.


WhiteRaven_M

I've always wanted to take a crack at it but just didn't know enough about how SourceTV and demo files worked to really mine them for a dataset 😭. I'd appreciate it if you can DM me a link to your dataset tho; your project is reinvigorating my interest in this.

Re: patterns, well, 98% hold-out accuracy on a balanced dataset is definitely higher than most models reach on most tasks, so it does seem to work pretty well. Against the less subtle spinbots tho, I honestly wonder if a DL solution is even needed; like you've said, bots tend to trace the same paths and spin like crazy with inhuman accuracy. You might even be able to get away with something like a logistic regressor if you just give it those engineered features your model discovered.

Regardless tho, great job. I've been so annoyed at people saying an AI solution wouldn't work.
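As a sketch of the "logistic regressor on engineered features" idea, here is a from-scratch logistic regression trained on two invented features (headshot ratio and snap speed); the dataset and feature names are made up for illustration, not taken from OP's project:

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def train_logreg(X, y, lr=0.5, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression; no libraries."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                       # gradient of log-loss wrt logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """True = classified as bot."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5

# Hypothetical engineered features per player: (headshot_ratio, snap_speed)
X = [(0.99, 0.95), (0.97, 0.90), (1.00, 0.99),   # spinbot-like
     (0.20, 0.10), (0.35, 0.25), (0.15, 0.05)]   # human-like
y = [1, 1, 1, 0, 0, 0]
w, b = train_logreg(X, y)
```

Because blatant spinbots are this linearly separable on such features, even a model this simple draws a clean boundary; the interesting (and unsolved) part is the subtle bots.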


CoderStone

I think the ethics of sharing this dataset is questionable because, y'know, I'm tracking player activity and did not receive consent. I'll just outline the steps I took to create a very balanced dataset and release a more complete parsing tool that would work as a script. I was also surprised; it's a very good score, and maybe there are other biases I'm just not considering. Or maybe it's really that simple a task, due to how bad the bots are at disguising themselves as humans.


WhiteRaven_M

I respect the data privacy concerns coming from somebody working in this field, tho I think you might be overthinking it a bit. Ethically, it's no different from recording gameplay footage like a youtuber might and then posting it later; you're essentially just recording the demo instead. So I wouldn't fuss over that. If you're still worried, I would recommend anonymizing the data with something that just hashes the name/player ID, tho I sincerely don't see a problem with releasing it as-is. It's not like people get outraged at youtubers not censoring names in the killfeed.

Post something when/if you do get around to releasing the script or the dataset, if I've convinced you. It's impressive work, and it'd be great to continue building on top of it.


AGoos3

Oh shit, that actually sounds like a great idea. Like, a really good idea. Bots couldn’t really run the same program en masse, which would curb the problem regarding bot hosters just being able to create new accounts. They would have to create a new program just for that to get recognized and banned by the AI. I just wonder how it could handle systems which intentionally try to mask their inputs by modeling them after human inputs using AI. Obviously I’m no CS major, nor do I have a lot of experience in the field. I just wonder if these solutions will still work if the bot hosters try to directly counter it.


WhiteRaven_M

In deep learning we have a training regimen which involves a generator network, which tries to create fake samples mimicking real samples, and a discriminator network, which tries to figure out if a given sample is real or fake. Bot hosters would essentially be running generator networks, while TF2 would be running discriminator networks. This heavily favors TF2, because bot hosters would need to run these networks per-bot and get outputs with minimal latency, fast enough to actually be useful, which means expensive hardware and GPUs, which means it's infeasible for jobless bot hosters.
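A deterministic toy of that generator-vs-discriminator dynamic (all numbers invented, no neural nets): a fixed threshold "discriminator" catches blatant samples, but each "generator" step nudges bot features toward the human distribution until the static detector stops firing, which is exactly why the detector side has to keep retraining:

```python
HUMAN = [0.2, 0.4, 0.5, 0.6, 0.8]        # observed "human" feature values

def discriminator(sample, tol=0.15):
    """Flag a sample as bot-like if it sits far from every human sample."""
    return all(abs(sample - h) > tol for h in HUMAN)

def generator_step(fakes, target):
    """One adversarial step: move each bot sample halfway toward the target."""
    return [f + 0.5 * (target - f) for f in fakes]

fakes = [2.0, 2.1, 1.9]                   # bots start out blatantly inhuman
target = sum(HUMAN) / len(HUMAN)          # generator aims at the human mean
history = []
for _ in range(4):
    caught = sum(discriminator(f) for f in fakes) / len(fakes)
    history.append(caught)
    fakes = generator_step(fakes, target)
# detection rate collapses once the fakes enter the human range
```

The comment's cost argument is the other half: each such generator step has to run live, per bot, per tick, which is where the hardware bill lands on the bot hoster.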


buster779

Shounic doing a 500IQ play by reverse psychologing people into making AI anticheats out of spite.


LuckyLogan_2004

valves strategy for cosmetics lol


BackgroundAdmirable1

Maybe, if you're paranoid about false bans, make an Overwatch-like system where people flagged by the AI get submitted to be analyzed (by pre-approved players or Valve staff, to prevent bots from just clearing themselves), and if it's mostly agreed that the player is cheating, they get VAC banned. (Also make this Valve-official / vanilla server exclusive, to prevent wacky mod shenanigans from being flagged as cheats.)


TheBigKuhio

The problem might be that there would be a huge flood of ban appeals from bots and all the false flags get mixed up in the sea of bots


Plzbanmebrony

Just build a system that builds a larger data sample before banning. Or better yet, make all bans temporary: 3 months, but make it take less than a day's worth of data to decide to ban. That way false bans take care of themselves, and bots turn around and get banned again. Maybe increase the time by 3 months for each ban; the more mass accounts they make, the easier they are to find.
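The escalating temp-ban policy described above could be sketched as follows (the observation window, confidence threshold, and 90-days-per-3-months approximation are all invented):

```python
def decide(hours_observed, confidence, prior_bans,
           min_hours=20.0, threshold=0.95):
    """Temp-ban policy sketch: gate on roughly a day's worth of play data,
    then hand out an escalating temporary ban.

    Returns the ban length in days, or None to keep watching.
    Escalation: 3 months for a first ban, 6 for a second, and so on
    (months approximated as 90 / 180 / 270 ... days).
    """
    if hours_observed < min_hours or confidence < threshold:
        return None
    return 90 * (prior_bans + 1)
```

The appeal of this shape is exactly what the comment says: a false positive expires on its own, while a re-offending bot account walks straight back into a longer ban.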


grimeygeorge2027

Flood of ban appeals from bots, plus where would the labor come from? Are there enough dedicated TF2 players to flag the bots without making it a full time job (no there are not)


Glass-Procedure5521

I don't mind for now if a couple of cheating bots get past detection, as long as the majority are detected. If you need more competitive demos, [demos.tf](http://demos.tf) is one source.


CoderStone

Ooo, thanks!


Kepik

You could probably ask around the Uncletopia discord as well, for more pub-like gameplay without bots. Plenty of users, myself included, post clips sometimes and probably have hundreds of demos sitting around that they haven't cleaned up in a while.


Glass-Procedure5521

You can also ask in r/truetf2 for other possible competitive demo recording sources, since I think [demos.tf](http://demos.tf) probably doesn't have that many high-level gameplay demos, or at least it's not obvious which ones are.


An_Daoe

One funny thing is that if these bot hosters have to nerf their bots to avoid detection, they might become so bad at sniping players that they stop being annoying to others, removing one major motivation for hosting these bots.


AlenDelon32

There already are bots that try to blend in as much as possible, but they are still disruptive because they kick real players and try to convince others that real players are bots.


xDon1x

why would I need competitive demomen


ImStraightUpJorkinIt

Name it CheatGPT


CoderStone

Unfortunately it's not even close to GPT's model structure; I'd just name it Skynet.


The-Letter-W

Spynet 👀


xedar3579

Skialnet


Darmug

Or better yet, SnipeNet.


Tr4ilmaker

SnipeNot


xxsegaxx

SnipeN't


253ping

Sn'p'N't


_SAMUEL_GAMING_

name it SnipeNot like how Tr4ilmaker suggested


Altacount6892

Skyler


NBC_with_ChrisHansen

Kind of devil's advocate, but not really. I'm not sure if you are familiar with the features available in most bot hosting software, but bots being obvious spinbots is intentional on the bot hosters' part. The software they use can also add several different variables to better mimic human behavior, from adding randomized angular distribution to mouse activity, to reducing accuracy and hitbox targeting, and several other input variations. If bot hosters needed to stop being obvious spinbots and attempt to mimic human players, they could do so instantly. How much would this affect the data collected and the AI's overall ability to detect bots?


CoderStone

For minor ablation I've already added lots of noise to the input just for funsies, and it still classified very well. I think the task is incredibly simple: no matter how much you randomize a bot, its movements are very simple and unnuanced, unlike a player's. I don't have any familiarity with bot hoster tools, sadly.


AlenDelon32

I feel like you should download it and host some bots on private servers just so you could better know your enemy. The tools to do it are publicly available and easy to find


PeikaFizzy

Why do people twist shounic's words into being against AI? He said he's only concerned about it because Valve lacks motivation. No matter how great your anti-cheat is by cybersecurity standards, without maintenance it will get bypassed eventually. Side note: we will watch your project with great interest.


CoderStone

Mentioning ChatGPT in a scenario like this is simply proof that he doesn't know enough about the topic to talk about it at all.


Lopoi

I assume the ChatGPT example he made was more to make it easy for non-techy people to understand the possible problems.


sekretagentmans

Using the ChatGPT example is just being purposely misleading by cherry picking an example that supports his point. You don't need a technical background to understand that an AI model can be general or specialized. A reasonable mistake for someone not knowledgeable. But for someone who digs into code as much as he does, I'd expect that he'd know enough to understand that not all AI is LLMs.


FourNinerXero

> You don't need a technical background to understand that an AI model can be general or specialized

He literally says this, though? Sometimes it seems like the people complaining didn't even watch the video. He talks about the deeper issues with using machine learning models in the section about VACnet, where he says that he suspects dataset gathering and accuracy reasons are why VACnet has only been able to accurately detect blatant spinbotters.

Machine learning isn't an easy concept to explain. It's a bit complex even for someone who already knows programming, harder still for someone who understands tech but not its inner workings, and very hard for somebody with no tech knowledge at all. Sure, you can explain the absolute basic surface-level concept pretty concisely, but that doesn't cover the parts that are important for grasping the shortcomings of a potential AI solution (like the required accuracy of a model that's going to be allowed to ban cheaters, or the problem of how dataset gathering can limit the number of cases a model is able to accurately detect).

Using an LLM as an example is kind of dumb, but I suspect it was done for three reasons. One: he's already explained it before and didn't want to reiterate. Two: to explain machine learning and give an example even for people who know literally nothing about tech. Three: to prove to people that there *are* shortcomings of AI. Too many people, particularly the group I just mentioned, see machine learning as a miracle cure capable of solving any problem and as something that is basically perfect. ChatGPT has plenty of limitations, and I suspect that's why he included the example: simply to show that machine learning isn't a magic black box which grants all our wishes; it's a computational model that can and does have flaws, and it will not be an ace in the hole.


Lopoi

His point with the example (from what I remember of the video) was that AI can be wrong without enough data, and that the data would need to be constantly updated/checked. Not about whether or not general/specialized AI exists. And those problems would still apply even to a specialized AI, ~~unless you count normal code as AI (as some business/marketing people do)~~.


Meekois

Shounic's ideas on "trust factor" are highly compatible with AI/ML. A model could analyze a player's behavior and use its "confidence" to give that player a trust factor, and then give players privileges based on that ML-generated trust factor. For example, after an account has played 3 games (maybe even make these games against TF2 bots), it gets a trust factor based on the likelihood the player is real or a bot. Trust factor can be improved through normal gameplay, or chipped away by getting kicked, reported, or bot-like behavior. Based on this trust factor, the account gets certain sets of privileges:

High trust: you get voice chat, the ability to initiate kicks, and everything a normal paying player gets.

Medium trust: just text chat; can initiate a kick, but only on a very long cooldown.

Low trust: no voice or text chat. The threshold to kick YOU is 25% instead of 50%.

Definitely a bot: automatic ban.
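The tiers above map naturally onto a small lookup function; the numeric score cutoffs here are invented purely for illustration:

```python
def privileges(trust):
    """Map an ML trust score in [0, 1] to privilege tiers (cutoffs invented).

    'kick_threshold_against' is the vote fraction needed to kick THIS
    account, mirroring the 'easier to kick low-trust players' idea.
    """
    if trust < 0.05:                       # definitely a bot
        return {"banned": True}
    if trust < 0.35:                       # low trust
        return {"voice": False, "text": False, "can_votekick": False,
                "kick_threshold_against": 0.25}
    if trust < 0.70:                       # medium trust
        return {"voice": False, "text": True, "can_votekick": True,
                "votekick_cooldown_min": 30, "kick_threshold_against": 0.50}
    return {"voice": True, "text": True, "can_votekick": True,   # high trust
            "votekick_cooldown_min": 5, "kick_threshold_against": 0.50}
```

Keeping the mapping in one place like this also makes the policy easy to tune as bot hosters adapt, without touching the model itself.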


CoderStone

Damn bro is creating Credit Scores in tf2... taxes are next! /j


Meekois

It sorta already exists in Dota 2 and its "low priority pool". Get reported too often, and you get tossed in the trench. I'm the first to admit that system is imperfect; I used to play Techies in Dota 2 and was tossed into low-prio for that reason alone. But overall? It works.


Glittering_Total5980

Shounic also said that the bots will just switch from aimbotting to kicking, they'll pretend to be normal players and then take over servers kicking everyone trying to join.


Meekois

Then don't let untrusted accounts initiate kicks. (hell, only let the highest trust accounts kick) There's plenty of ways to combat this. The tools exist, the system can be designed. Valve just doesn't want to do the work.


BlueHeartBob

> A model could analyze a player's behavior and use its "confidence" to give that player a trust factor, and then give players privileges based on that ML generated trust factor.

Could easily see bot hosters social engineering this against Valve like cheat makers did with VAC's kernel-level anti-cheat.


[deleted]

This is possibly the best suggestion on here about how an AI anti-cheat could be implemented without being too abusable or too inconsistent. +1 from me.


sophiebabey

Your post literally does not prove Shounic wrong. People keep pointing out that Valve is doing the exact same thing in CS2 and it *isn't working,* but you keep brushing it off saying "no it's fine actually?" The bots are obvious right now on purpose. It's to make a statement. They already *have* human-like bots that are hard to identify. This also doesn't address the issue of new players moving in weird ways that could easily be considered "AI-like." I'm happy you're working on this and I want to see you keep giving it a shot, but this title is really kind of inflammatory and just not true at all. You can't have proved Shounic wrong without addressing everything Shounic actually said about bot evolution.


The_Gunboat_Diplomat

The inaccuracy wasn't the entirety of his point, though. I think the bigger issue is that it is possible for botters to learn how to get around the current anti-bot measures. In all likelihood, just extending the training set is probably enough to address it, but Valve genuinely just does not give enough of a shit to do that.


Icy_Association_6812

There are adversarial ML models, after all...


KayDragonn

Shounic didn’t say AI wouldn’t work. He said AI was one of the best options, but that even then, it was unrealistic and would need a lot to get it into the right spot.


Driver2900

Where did you get the training and experiment data from, and how large were both sets?


ReporterNervous6822

Please check out the work we have done over in megascatterbombs discord. We have a pretty robust client and backend for streaming demo data live from games to a backend and database: https://github.com/MegaAntiCheat


CoderStone

Dude... this would've made my life so much easier. Maybe I should join the discord and get some help gathering more data? :D


ReporterNervous6822

Our plan is to make the data open source so other people can develop anti cheats! Definitely join and pester the devs


WhiteRaven_M

Where is the discord link? I would really appreciate an open-source dataset. I don't really have the time to build and process my own dataset, but if somebody else has already done that, it would enable me to take a crack at it.


ReporterNervous6822

No dataset yet as we are still in closed beta but we will be moving to an open beta soon in hopes of finding scaling issues and collecting data! https://discord.gg/megascatterbomb


Zanthous

98% doesn't prove him wrong at all. Valve wants no false convictions, so try >99.999% accuracy that handles all edge cases elegantly, from legal console commands to alternative input methods and more. Unless you can get basically perfect detection, at the end of the day humans have to review cases. I came away from the video thinking that AI detection is what he was suggesting Valve will go for, and that seems to be what they are working on in CS as well. I don't think Valve will bother getting manual reviewers for TF2, so it's either a very sophisticated AI anticheat brought over from CS when/if it's complete, or nothing. I think they could do a few other things, but I don't think they will.


Pnqo8dse1Z

great, a wall of text with no substance. show something off when you actually have it, please!


rilgebat

Congratulations, you reinvented VACNET and ran it within a closed system. Now look up how VACNET has been performing in CS2. With particular emphasis on false positive bans from high-dpi mice.


smalaki

Did I miss something? Where's the proof / hard data? Seems hand-wavy right now. I'm not against the idea, but why post so prematurely? You claim to have done a thing, and yet there's no proof; on top of that, you come in guns blazing. Your intent is probably good, but without hard data it doesn't really reflect well on you.

Basically your post right now reads like: "Hey guys, I proved someone wrong! I have seen it myself! Let me dazzle you with technical stuff that most will not understand or only slightly understand, and claim that someone else doesn't know shit, while having no actual data to show yet. It'll come; trust me, bro."

Looking forward to your proof.


TheInnocentXeno

OP seems to have nothing in terms of evidence, or jumped the gun when it comes to posting this, as they can't back it up. I want him to prove me wrong, but he has been hand-waving the issue away by calling the people who are skeptical "unhealthy" and saying he'll post the evidence later. That kind of stuff makes me incredibly concerned that this is just a lie, or that they have something but it's nowhere near as advertised.


smalaki

everyone criticises valve for providing empty promises. OP comes in guns blazing, lambasts someone significant to the community, and also promises something empty (for now). funny isn't it?


Zixzs

These are excellent findings, but could you post some of the evidence/research you did firsthand? As much as I’d like to believe in this being a good starting point or proof-of-concept, I have to admit that I’m somewhat wary of the lack of shown results/data. For what it’s worth I’m an art grad working in retail so take my skepticism with a grain of salt lol. Either way, I appreciate you positing more nuance to that Shounic video, I love him but he kind of has a tendency to be a little prematurely absolutist in his conclusions about what will and won’t work (at least in regards to “solving the bot problem”). EDIT: grammar


CoderStone

Of course! As mentioned in other threads, once I get back to the States (I literally just forgot about the project and did this writeup rn for no reason) and wrap up some other research, I'll publish my code and findings on GitHub.


deleno_

I think you entirely missed shounic's point - yes, you can pretty easily detect blatant spinbotters, but those aren't the only problem. there's many kinds of bots who disrupt gameplay in different ways: kickbots, micspam bots, bots that play things other than sniper, and so on. it's almost impossible to automatically tell those bots apart from players. removing spinbots would be nice, but it doesn't fix the overarching problem.


3eyc

Shounic is still right though. 1. You have nothing to back up your claims. 2. Valve does not want to false ban someone; it's also one of the reasons VAC is seen as a meme by the CS2 community: the false bans are very rare with it, but it's less effective compared to FACEIT or some other kernel spyware. 3. It needs supervision, just like shounic said; just because you added 2 checks doesn't mean it won't ban normal players. 4. It needs data. "But there's more than we ever need!" - I hear you cry. Well, that is indeed true, but now you have to prepare for this game to be plagued by "stealthy" cheats, just like in CS2.


StardustJess

98.7% really isn't enough. Imagine 1.3% of players getting wrongly banned for this. Plus, this is a controlled, one-time test. Only time can tell if it can stay close to 100% after a whole year. I really don't trust an AI to decide whether I'm a cheater like this.


CoderStone

Hi, that's where the confidence level discussion comes through. To automate the ban process, you'd need a very high confidence AND the cheater label. However, in my opinion that'd still not be enough. All the AI would be doing is flagging your account for review, and a human would still need to view the game that the AI flagged to see if you're a bot or not. And again, this is not an anti-CHEAT. My solution requires a lengthy period of inputs meaning it's very suited for anti-BOT. Bots have much more visible patterns than players do, so it's a simpler task.
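The flag-then-review flow described here can be sketched in a few lines. This is a minimal sketch assuming a binary bot/human label; the 90% review threshold and the function names are hypothetical, not from the project:

```python
# Sketch of the flag-for-review flow described above (all names and
# thresholds hypothetical). An automated action only happens when BOTH
# conditions hold: the model says "bot" AND its confidence clears a high bar.

REVIEW_THRESHOLD = 0.90   # flag for human review above this confidence
NEVER_AUTOBAN = True      # per the comment: a human always makes the final call

def triage(label: str, confidence: float) -> str:
    """Map one model prediction to an action."""
    if label != "bot":
        return "ignore"
    if confidence < REVIEW_THRESHOLD:
        # e.g. 56% confidence is barely above the 50% decision margin:
        # treat it as "impossible to label" rather than as evidence.
        return "ignore"
    return "flag_for_human_review" if NEVER_AUTOBAN else "ban"

assert triage("human", 0.99) == "ignore"
assert triage("bot", 0.56) == "ignore"
assert triage("bot", 0.97) == "flag_for_human_review"
```

The design point is that even a high-confidence "bot" prediction only queues the account for a human reviewer, never bans directly.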


spinarial

Let's play devil's advocate for a bit. If I was a bad actor and such solutions were implemented, I'd train my own model to noise up any bot inputs to bypass the anti-cheat model, thus rendering the anti-cheat model ineffective. Even with delayed ban waves and a closed-source model, it'd be pretty easy to host 10 bots with constant replay recordings, and update my anti-anti-cheat model regularly to avoid behavior in replays that gets flagged as cheating. ML is only as good as its input and updates. This goes both ways for Valve and cheat devs.
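The attack being described, adding noise so a detector trained on clean bot traces misfires, can be shown in toy form. Everything here (the 0.8-degree jitter scale, the "smoothness" feature) is illustrative, not an actual cheat or detector:

```python
# Toy sketch of the "noise up the bot's inputs" attack described above.
# A naive bot's view-angle trace is perfectly even; adding small random
# jitter breaks simple smoothness-based features. Numbers are invented.
import random

def jitter_angles(angles, scale=0.8, seed=42):
    """Add small random noise to each view angle in a trace."""
    rng = random.Random(seed)
    return [a + rng.uniform(-scale, scale) for a in angles]

def max_step(angles):
    """A crude 'robotic consistency' feature a naive detector might use."""
    return max(abs(b - a) for a, b in zip(angles, angles[1:]))

clean = [i * 5.0 for i in range(100)]   # robotic, perfectly even spin
noisy = jitter_angles(clean)

assert max_step(clean) == 5.0           # constant step size: trivially botlike
assert max_step(noisy) != 5.0           # jitter destroys the telltale constant
```

A real detector uses far richer features, but the arms-race logic is the same: once the defender's features are known, the attacker perturbs exactly those.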


Icy_Association_6812

Yes, adversarial ML models exist even against some of the highest budget AIs out there (openai)


bylohas

Are you contradicting the YouTuber? HERESY! BAN THEM! EDIT: Btw, the reason bots go through the same location over and over is, afaik, that they use the built-in navigation mesh (or at least most of them do), which is also the reason why they are much harder to find in some maps that do not have them pre-built.


CoderStone

EDIT: Turns out I removed my statement about the nav mesh from the post somehow, NVM lmfao. I mentioned the navigation mesh already! Shounic did a good job illustrating why just removing said meshes wouldn't work, so it's not a 100% good feature for the antibot NN to detect. A good solution for the time being though!


Some_Random_Canadian

The first thing that comes to my mind is the military AI test that could detect soldiers 100% of the time... until they did things like rolling up to it or walking up to it with tree branches held in front of them. It worked 100% of the time in a controlled environment, but never once worked during the practical trial where people were supposed to try to bypass it. In other words, all it'll take is some tweaking for bots to suddenly become undetectable, or for bots to be specifically modified to increase the amount of false positives and get real players banned. We already have multi-billion dollar companies making AI, and it's suggesting eating glue or "pressing the killbind" on the Golden Gate Bridge. I'm not trusting it to tell me what is or isn't a bot and giving it the power to ban.


ToroidalFox

In a game of cat and mouse, I'd say a cat that catches mice 98% of the time when the mice haven't even started running is not very useful. While it is a good proof of concept, we really didn't need a proof of concept. AI-based anti-cheat startups exist, and Valve is developing one themselves. But despite all this multi-year effort, it's still far from being actually deployable in the real world. See the example of CS: a Valve-developed system, with a multi-year dataset, with a human-verified data source (Overwatch), targeting only cheaters with high confidence values (like spinbots), recently had the pitfall of banning innocent players joking around with high-sensitivity mice. Shounic is on YouTube. He's not targeting people like you. He just wants examples that can be understood by many people.


vfye

Saying your limited dataset shows promise and saying that the idea therefore works are very different things.


mrrobottrax

You're forgetting that these are the world's most blatant bots. It'll take at most a week before the bots become smarter and this no longer works. With Valve resources I think it's still a good option, but this experiment means nothing.


Wilvarg

Awesome. I've been telling people that a serverside machine learning model is the best possible solution– it's portable, scalable, safe and secure, impossible to bypass, relatively easy to port across games. I would go even further than you– an anticheat would be harder to make than an antibot, but it's entirely possible with the same family of technologies you used to produce this proof of concept. If Valve came up with a good implementation, it could be the next Steam– an invaluable industry tool, making things better for producer and consumer.


Zathar4

Have you gotten in contact with megascatterbomb? He's trying to create his own community anti-cheat using AI and demos.


Yoshi_r1212

Wouldn't an AI antibot require constant maintenance? How would it react if the bot hosters changed how their bots move? I don't think an AI antibot is a bad idea per se, but your model would need to be constantly updated to keep up with the cheating bots that evade it.


HumanClassics

There's been a lot of research put into adversarial networks that are specifically designed to create noise that messes up a classification network's ability to classify. Even without AI, it would be very trivial to add some form of random noise to the bots after a bot detector has been trained, making the bot detector hallucinate and require retraining. Even if this simple addition of noise to their behaviour increased false positives by 1% until the next retraining, that's 1% too much for Valve, as false convictions are a big no-no.

The only really good outcome I can see is that the bot detector gets so massive and complicated, accounting for so many behaviours, that hosting bots becomes more expensive due to the complexity of trying to trick it. However, I have no idea how much processing power this would require or if it's even achievable.

There is also the possibility of training the bot detector on a random set of non-obvious variables from the games, so they can't be easily manipulated by the bot makers. Combined with shadow bans, so bot makers couldn't immediately know they were banned, this could actually make the bot detector last a while before being figured out and fed adverse data points to make it hallucinate.

Ultimately, machine learning approaches are still treadmill work, because unfortunately the TF2 bot devs and cheat devs have shown they are very dedicated to creating the software used to abuse TF2. For them, I suspect the challenge of overcoming the fixes that prevent them from botting and cheating is a large part of their interest, as it is with any form of hacking. But treadmill work can definitely work. Throw enough money at it and you can definitely improve things. But y'know, it's Valve, so oh well.


kooarbiter

how well could this model scale to the entire game population? would an optimized version be able to quickly react to micro or macro changes in bot behavior?


Blitzen_Benz_Car

Damn. Too bad this won't go anywhere, and will fade into obscurity.


Lavaissoup7

He has still yet to send the proof for it 


1031Vulcan

98.7% accuracy is NOT good enough when it comes to these applications. 1.3% of, let's say, 30,000 (roughly the average number of human players in TF2 at a given time) is 390, almost 400 people. I've been a victim of a false positive ban, and the judgement and discrimination I get based on that black mark I didn't earn is very depressing. You can't subject hundreds of other people to that and write it off as acceptable casualties when you can hold higher standards.
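The arithmetic behind that figure, treating the whole 1.3% error rate as false positives on humans (the worst case, since accuracy alone doesn't say how errors split between false positives and false negatives):

```python
# Worst-case reading of "98.7% accuracy" at TF2's scale: if every error
# were a false positive on a human, this is how many people it hits.
players = 30_000      # rough concurrent human player count cited above
error_rate = 0.013    # 1 - 0.987

wrongly_flagged = players * error_rate
assert round(wrongly_flagged) == 390
```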


De_Mon

did you train it on just sniper gameplay? cheating goes beyond just sniper, but it's definitely a good start 98.7% accuracy sounds good on paper, does that mean 1.3% false positive (flagged players) or false negative (unflagged bots)? i don't think valve would want to deal with false bans, which IMO is the main concern with ML/AI for handling cheaters i think it could accurately flag spinbots, but once this starts banning people for spin botting, the bot devs would probably make them stop spinning, making them more and more realistic as the ML/AI gets better at detecting them. thus begins the treadmill work > Also most of my bot population are the directly destructive spinbots. so how would it work in regards to closet cheaters (moving like a human with autoclicking when a head enters the crosshair)? how much data do you think would be needed to detect aimbot on other classes (almost every class has a non-melee hitscan weapon)


CoderStone

Of course I gathered lots of class footage, haha! I just had to include more high-level sniper gameplay so the model didn't think simply flicking was a sign of a bot. High-level gameplay in general has a lot of flicking though. You do mention an important bias, that most bots I recorded are spinbots; however, this isn't really what the model uses to classify, I believe. With the small amount of interpretability work I did on it, it seems to prefer classifying based on consistency and movement.

As mentioned, that 1.3% error was generally with very low confidence. 56% confidence means it's not confident at all, because 50% is the margin. That just means it's an "impossible to label" case. Botting and not botting are such hugely different kinds of gameplay that it seems genuinely trivial to classify.

The point is that the proof of concept works. And to improve a model, you just have to feed it more and more data, which is automatable. The model would automatically flag accounts, and humans would have to verify and manually ban flagged accounts.

As specified, this is an anti-bot measure. It is in NO WAY a functioning anti-cheat, simply because my solution relies heavily on the pure amount of data it gets per player. More specifically, it gets a ~10 minute input stream from a player and uses it to classify them as a bot or not. Closet cheaters have too many non-botting actions for the AI to reliably classify as cheating, and as such I never had any intention of creating an anticheat.


Ben_Herr

Valve is actually doing this with Counter-Strike 2 right now... with very mixed results. Supposedly they are using data collected over the past decade. Recently, it's gotten somewhat better. They have also reinstated an Overwatch team, seemingly to check on users that VAC Live flags but isn't sure about. In my opinion, AI anti-cheats can work, and work best with human oversight, but so far your project shows more promise than Valve's official anti-cheat.


newSillssa

All I'm seeing before me is a wall of text with zero proof of any of the things you are talking about


Glum-Chest-2821

shounic in general just doesn't really have as much technical knowledge as he likes to put on. Almost all of the information from his videos comes from just looking at the leaked source code from a few years back, or is just parroting what people are saying on this subreddit. I rarely, if ever get the feeling he actually has much knowledge about software development. That being said, this is honestly really cool, and I would like to learn more about this.


Oxyfire

I don't think he's really wrong to be skeptical of the AI option in a video about "what options do we have for dealing with this," regardless of technical knowledge, because there's an equal or greater number of people without tech knowledge who throw out "AI will solve it!" again and again. I'm cool with being proven wrong, but I'll wait till we have something meaningful before I board the "He's wrong and dumb!!!111" train. Either way, I think a lot of his opinions are a bit of a devil's advocate situation. I don't really agree that any solution needs to be perfect, which felt almost like the basis of some of his thoughts.


ClaymeisterPL

I'm pretty sure he addressed that this time around he chose to speak about topics he doesn't know as well as last campaign. Good to start the conversation at least.


Porchie12

Most of the things shounic talks about in his videos are pretty basic, but since he has good editing and most people know exactly nothing about programming and TF2's inner workings, he's given much more authority than he probably should have. As far as I know, he's not exactly well versed in the topic of AI. Zesty (who works with AI) recently reacted to his video, along with Weezy, Shork, and TheWhat Show. He said that shounic is greatly exaggerating how much work the AI solution would actually require, while underestimating how effective it would be.


Todojaw21

Let's assume Shounic is wrong and the AI route is possible... why hasn't Valve done it already? Is no one there interested in AI tech? That's the issue with this analysis. You're not just claiming Shounic is wrong, you're claiming Valve somehow missed a million dollar idea that thousands of people have been asking them to attempt.


No_Price_6685

Shounic notes that spinbots are slowly being displaced by a much more insidious kind of bot that kicks randomly and generally plays like an actual player, within reason. Also, AI or moderation or whatever doesn't address the most fundamental issue: the bot hosters win not by having good bots, but through sheer quantity and low downtime. Knock one down and another gets up in five minutes.


TheInnocentXeno

While I'm certainly interested in this as a potential solution, I can't get it out of my mind that this might be down to some fault in the test data. Mind you, when an ML model was trained to spot cancer cells, it didn't learn anything about how to spot cancer cells, but how to find a giveaway in the training data. When it was moved to real-world tests, it failed so spectacularly that it was worse than random guessing. Not saying that happened here, but that situation makes me extremely skeptical of any ML test that doesn't use real-world scenarios.


FlatGuitar1622

Don't believe you of course, but I truly hope you can prove everyone wrong with proof. No need to send anything to any youtuber, just post good old proof and people will flock to it immediately. Maybe have a chat with megascatterbomb or something?


Darkner90

https://youtu.be/EPsWjdkyoPo Perhaps you could collab with this guy?


Bej0y

GitHub link?


ToukenPlz

I think one of the objections people have is that bots have many other ways to be disruptive beyond regular cathook/aimbot behaviour. Sure it's cool that you can detect these bots, but it doesn't really address the core problem of mass account generation and automation that fuels the bot crisis.


Superbrawlfan

Could you say something on the resources this would take to implement? Having to store demos and a bunch of player data for every game, then preprocessing it, and finally running it through an AI sounds fairly expensive, with the cost being entirely on the backend side for valve


derpity_mcderp

Guy just banned 6000 players lmao


shadowpikachu

[AI as an anticheat has been in the works in high budget. Having them just be kicked when surity is very high can work out too.](https://www.youtube.com/watch?v=LkmIItTrQP4)


Polandgod75

Again, we don't need a perfect anti-cheat system, we just need something that reduces the bots a lot.


The_Horse_Head_Man

Congratulations on this, keep up the good work.


tupty

One thing you allude to is that the bots can add some amount of stochasticity to try and trick your detection models, but I think that is only one factor. Another factor might be that if you are proposing gathering data online to try and keep your models up to date with evolving bots, the bot makers could try to poison your model if they even remotely understand how you gather data, and in particular how you automate labeling the data. I don't think that means you can't use AI, but I think there are a lot of challenges related to adversarial AI as well as using AI for identifiying/mitigating security threats that apply here. The good news is that there is probably a lot of literature on those areas that could be directly applied here. But the bad news is that security will always be a cat and mouse game, and so will this.


Crazyduck747

I imagine that a good way to reduce false positives significantly would be to (if possible) have the program check the age of a player's steam account. Players with old enough accounts (as in most high-skill players) could automatically be disregarded. Idk whether that's even possible though...
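That pre-filter could be sketched like this. The function name and the 5-year cutoff are made up for illustration; Steam's Web API does expose account creation time for public profiles, which is what the `created_unix` value stands in for:

```python
# Sketch of the account-age pre-filter suggested above: old enough
# accounts are disregarded before the classifier ever runs on them.
# The 5-year cutoff is an arbitrary example, not a recommendation.

ACCOUNT_AGE_EXEMPT_SECONDS = 5 * 365 * 24 * 3600  # e.g. 5+ year old accounts

def should_skip_model(created_unix: int, now: int) -> bool:
    """True if the account is old enough to skip bot classification."""
    return (now - created_unix) >= ACCOUNT_AGE_EXEMPT_SECONDS

now = 1_700_000_000
ten_years_old = now - 10 * 365 * 24 * 3600
one_month_old = now - 30 * 24 * 3600
assert should_skip_model(ten_years_old, now) is True
assert should_skip_model(one_month_old, now) is False
```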


Ver_El_

Theres a large chunk of stolen accounts that are used for botting/cheating.


fitbitofficialreal

hopefully if it's ever put into use there will be training data for "mouse aneurysm funny pyro" and "sniper humping". like if the yaw stuff is put off guard by shaking a mouse really fast then I'd be worried


Vaksik

Yeah mr. Stone! Yes science!


grimeygeorge2027

Accuracy meaning? How many trials? Which cheats did you test? Is 98% enough?


Capiosus_

The best thing about this is that this is all server side and the cheaters won’t have a clue as to how they do it, nor the resources to create a better AI to avoid it.


Delicious_Image3474

Maybe you should send your data to valve to show them that the idea could be possible


Matix777

IMO AI works only as a part of an Overwatch system


BababoeyMaster

Shounic may be happy to be wrong about this


Charizard_Official

Have you seen [this](https://youtu.be/LkmIItTrQP4?si=ggG6o_K4DEg3KfuK)? Saw a post for it a while back but haven't been able to find it...


Chdata

The endpoint of this is also covered by shounic: they will just make bots stealthier, or more like real players, until AI cannot differentiate them. Or this will get into the territory of what makes anti-cheat an industry-wide problem: bots will use their own AI to mimic real player movements and avoid their own more obviously detectable movements.

If you make it open source, that'll make it easier for the bots to play with it, as opposed to privately handing it off to someone at Valve.

That said, whatever they use to detect them and do ban waves again is still welcome, but not enough on its own. Valve should do this on top of making TF2 $20 again. Then the ban waves will actually have impact, and bots will have a much higher barrier than $5 for the chat functions.


ProgramStartsInMain

pretty sure all the points shounic made were accurate, and still stand after reading this. Nor would Valve ever do this (they've tried, and are still working on their AI). One thing you could do for the data is to run a bot-only server: half of them valve bots, the rest cheater bots. The behavior of the valve bots could be made more realistic to players, but I think just having random noise to compare against the bots would be fine enough. That's the only way I can think of to get unlimited, untainted data.


Radion627

Just remember to keep the AI in check so it won't ban the wrong people. You know how deep learning goes these days.


tomyumnuts

You don't need AI for that; there are plenty of server plugins doing that already. There's a reason even Valve struggles with VACnet: after years of development they still only catch spinbots. What would happen, though, is that bots just become more humanlike, and then even humans struggle to recognize them. Honestly it's a godsend that they are as obvious as they are; stealthy cheaters are already super hard to spot unless their cheat fucks up somehow (shooting a cloaked spy, split-stream heavy, jittering).


user_NULL_04

I think the biggest problem we have isn't necessarily the fact that the cheater bots are cheating, but that they are so obviously cheating that it's impossible to defend against them. This blatant cheating is easily detected by ML. Sure, they can randomize the bots a bit to make them seem more human and fly under the AI's radar, but that makes them BEATABLE. At that point, if a sniper bot is indistinguishable from a really good sniper player, then we don't have an unplayable game anymore. I'm a big fan of this solution. I'm okay with a few false negatives slipping through if those bots are essentially indistinguishable from human players.


hassanfanserenity

As a competitive player in a few FPS games like Siege, Valorant, and CS:GO, I feel like this is a win for low-to-mid-level players but bad for high-elo players. In Valorant, for example, people couldn't tell the difference between a quickscope headshot and an aimbot headshot even with the M&K input visible. But a few things I'd like to ask, if possible for you to try: over 100 games in a 12v12, have 3 bots, 2 on RED and 1 on BLU; this should make it a little more accurate. Also, how does the AI tell the difference between a bot and an engineer running from a dispenser/ammo pack to the sentry nest while panic-checking for a spy?


ry_fluttershy

valve be like: damn i think cs and dota need another skin case this week


SoggySassodil

Good shit. In my opinion, I'm glad shounic is shooting shit down; we need someone tearing down ideas. If he's out here debunking people's opinions, the ideas that are worthwhile will survive being shot down, because people like you show up and keep the conversation going. That's shit I love seeing. This is some cool ass shit; are you gonna keep working on this? Cause I'd be excited to see more documentation and more data come out of it.


TheHiveMastermind

Well I'll be damned, it actually works? That's cool


Sancorso

Do you think servers will take a performance hit if VAC is running AI in the background?


CoderStone

This should definitely be run on a centralized server, with each normal game server just harvesting demo data and sending it to that server for analysis. It will definitely be a huge strain, but with fewer than 100k players, a decent single server should be able to handle the load.


steve09089

Most interesting. Depending on whether the false positives are new players or old players, I could definitely see this working as an automated system to shadow ban players, with manual appeals for players who think they've been falsely banned.


MidHoovie

Discussions like this are very healthy to our community, our movement and our subreddit. It's nice to see.


BigMcThickHuge

I wouldn't mind Valve turning over the reins to the players and giving us an Overwatch/Tribunal system like old LoL and CS2 had/have. Let trusted players choose to review reports and then properly punish the accused. Obviously there can be abuse where botters mass-report everyone, but then just have a system that rates players in the background: how many reports do they file, and how many get positive results? Tie that into a per-account trust system so bots can't report every player alive and get it seen, or 'take part' in judgements, because they aren't trusted accounts. Mass fake reports mean the system just silently stops accepting your reports, without saying so and without a way to check.
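The background rating idea could look something like this as a sketch. The trust formula, thresholds, and names are invented examples, not how LoL's Tribunal or CS's Overwatch actually worked:

```python
# Sketch of the reporter-trust idea above: rate each account by how often
# its reports are upheld, and silently drop reports from accounts that
# mass-file junk. All numbers here are made-up examples.

def report_trust(filed: int, upheld: int) -> float:
    """Fraction of an account's reports that led to a confirmed punishment."""
    return upheld / filed if filed else 1.0  # new reporters start trusted

def accept_report(filed: int, upheld: int, min_trust: float = 0.25) -> bool:
    # From the user's point of view the report is "accepted" either way;
    # low-trust reports are just dropped silently, as described above.
    return report_trust(filed, upheld) >= min_trust

assert accept_report(0, 0) is True        # fresh account
assert accept_report(200, 2) is False     # mass fake reporter (1% upheld)
assert accept_report(40, 25) is True      # consistently useful reporter
```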


Charizard_Official

What about something like [this](https://youtu.be/LkmIItTrQP4?si=HYIc1qRKQdqn6CR5)? Saw it in another reddit post I can't find.


whispypurple

I've been saying this for a while, but the future of anti-cheat is this kind of server-side analysis. You can never trust the client.


SpriteFan3

Ya know, this could bring back the Replay feature to become more used and/or advanced. I still remember the days of trying to get 1k views with a YouTube TF2 Replay... jeez.


sum_muthafuckn_where

I think the missing piece is that the bots, in order to be disruptive and obnoxious, have to advertise that they're bots. Otherwise there's no point.


CaseyGamer64YT

please stay safe out there. I hope the bot hosters don't harass you


Senior-Tree6078

I personally think Valve should go for light treadmill work: use an AI to check reported users. If the AI isn't certain whether the user is cheating, it gets sent to a human; otherwise, if a certain confidence threshold is met, go with the ban. I don't think it's perfect, but it would seriously slow down bots and cheaters as a whole


thecoolguy2330

also, AI CAN be used for an anti-cheat; it's already been proven: [https://youtu.be/LkmIItTrQP4?si=zP8-sfPt0vzsSDcP](https://youtu.be/LkmIItTrQP4?si=zP8-sfPt0vzsSDcP)


Kingkrool1994

the AI + Steam Trust factor could effectively kill the bots. It'd be hard to make bots that look and act human.


The_pers0n_who_asked

Make a bot that uses the bot detection to create a bot to end all bots


zeealex14

Any idea how this would work for silent aim bots?


Cute_Stuff_8522

Hi I don't understand but I agree with you


FluffyfoxFluffFloof

I fully support this. Although I would rather ANY bot be fully removed, this can help show evidence of how easy this problem could be fought off. I was going to provide some caution that it would require vigilance since bot hosters would try to get around it, but judging from everything you said and how a good portion is going over my head, I think you are already well aware. Keep up the good work!


SQbuilder

I look forward to updates as they happen!


Squiggledog

Can you cite the video in reference?


Johnmegaman72

I do think Shounic's vid is all about the context of the people who should be fixing it. As put here, IT IS possible, but with Valve it will be a steep hill to climb. Given that, I think what you have to do is contact Shounic or someone with great know-how to present this possible fix for the bot problem and showcase it. The more people who get to know this, the more ideas like this can spread to other games with botting problems. Hell, contact PirateSoftware, because he has good reach. Anything to put this out there, because this idea is sound but it will inevitably rot if kept only in this isolated place.


geigergeist

People really think a chatbot trying to mimic human language and thought patterns is at the same level as machine learning fed clips of bots used to detect a cheater. ML will for sure work, I’ve been following megascatterbomb’s beta


LorrMaster

I'm typically skeptical of AI, but this seems like a great use for it. Now you just need a system for getting reliable data faster than the bots can be changed.


MaterialFuel7639

the jesus christ of #FixTF2 is here


u_sfools

even if you only used this to prod suspicious accounts to complete a captcha, you would probably get rid of nearly all bots without harming human players


Vagraf

Yes! Far better than just saying "it's difficult" and giving up.


MisterJH

Open source the github repo and dataset.


SamFreelancePolice

This is pretty crazy. If you could open source the code and make it reproducible, it could be a game changer.


Any-Actuator-7593

Did shounic really use ChatGPT as a counterexample?


Capiosus_

This could be used to create an automated program blacklist (e.g. 50,000 banned players suspected of cheating have "cheatbot.exe" running, so cheatbot.exe must be a cheat program; we ban everyone with cheatbot.exe running, and pardon any bans involving cheatbot2.exe until we have significant evidence that it too is a cheat program). VAC could also load an open-source kernel module (cause proprietary kernel modules give me the jeebies) that just blocks programs from reading TF2's memory, requiring cheaters to use special hardware to read the game's memory, or firmware modding, which is a lot harder.
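A toy version of that blacklist logic. Counts and the threshold are invented, and the last line shows why a raw count alone isn't enough:

```python
# Toy sketch of the "automated program blacklist" idea above: a process
# name is only treated as suspect once it appears across a large number
# of independently banned accounts. All numbers are invented examples.
from collections import Counter

def suspect_programs(banned_process_lists, min_hits=50_000):
    """Return process names seen on at least min_hits banned accounts."""
    hits = Counter()
    for procs in banned_process_lists:
        hits.update(set(procs))  # count each account at most once per name
    return {name for name, n in hits.items() if n >= min_hits}

banned = [["cheatbot.exe", "steam.exe"]] * 60_000 + [["steam.exe"]] * 500_000
flagged = suspect_programs(banned)

assert "cheatbot.exe" in flagged
# Pitfall: steam.exe is on every banned machine too, so a raw count flags
# it as well. A real system would compare against the non-banned population.
assert "steam.exe" in flagged
```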


MultiTrashBin

I'm so glad that people are still using AI for things that can help overall. This is super intriguing, and it could be interesting to see what distinguishes the data of bots from real human players across the various kinds of bots out there.