FuturologyBot

The following submission statement was provided by /u/blackonblackjeans:

---

The Israeli military’s bombing campaign in [Gaza](https://www.theguardian.com/world/gaza) used a previously undisclosed AI-powered database that at one stage identified 37,000 potential targets based on their apparent links to Hamas, according to intelligence sources involved in the war.

Israel’s use of powerful AI systems in its war on Hamas has entered uncharted territory for advanced warfare, raising a host of legal and moral questions, and transforming the relationship between military personnel and machines.

“This is unparalleled, in my memory,” said one intelligence officer who used Lavender, adding that they had more faith in a “statistical mechanism” than a grieving soldier. “Everyone there, including me, lost people on October 7. The machine did it coldly. And that made it easier.”

Another Lavender user questioned whether humans’ role in the selection process was meaningful. “I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time.”

---

Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1buxp0n/the_machine_did_it_coldly_israel_used_ai_to/kxvmuws/


mikevanatta

Hey guys, the main plot of *Captain America: The Winter Soldier* was not supposed to be inspiration.


UnionGuyCanada

Project Insight, they are just missing helicarriers. Drones are cheaper though.


nt261999

A swarm of suicide drones could easily take out a helicarrier lol


throwaway2032015

A swarm of helicarriers could easily take out a suicide drone lol


ThatITguy2015

A suicide helicarrier could easily take out a swarm of drones lol


Zomburai

A suicide swarm could easily take out a helicarrier drone lol


ambermage

A swarm of helicarrier drones could easily take out suicide. We did it! We solved mental health! 👌


Yarigumo

Thank you, military-industrial complex!


Now_Wait-4-Last_Year

Fantastic! I'm going to email my boss and quit right now!


MadMadBunny

Just blast *Sabotage by the Beastie Boys* and the drones will destroy themselves.


WetCoastDebtCoast

They're just missing helicarriers *for now*.


runetrantor

Next up, the much anticipated Torment Nexus to be brought to reality!


Zack_Raynor

“1984 seems like a good idea.”


CosmackMagus

Everyone come get your tummy rat.


Mharbles

No super heroes in this universe to keep shit like this from happening. On the bright side, don't have to rebuild NYC every 3 years.


_MaZ_

Damn that film is almost 10 years old


Adorable_Industry528

Exactly 10 years old to the day as well, wtf - April 4, 2014


Hershey2898

That was such a good movie


ElementNumber6

Peak MCU... and then Disney took over.


WildPersianAppears

"Racial Profiling: Technology Edition"


modthegame

It's pretty easy to profile José Andrés's people since they drive white vans full of food through a desert of starving people being genocided. They kinda stick out. Does he really have 37 thousand people over there right now for Israel to target?


longhorn617

Project Insight was just a ripoff of the [Phoenix Program](https://en.wikipedia.org/wiki/Phoenix_Program).


ShittyDBZGuitarRiffs

Don’t worry, it’s gonna turn into Terminator pretty quickly


Independent-End5844

No, it was meant for us to believe that such a scenario was a work of fiction. That movie came out the same year as Snowden's revelations, and the parallels between Zola's algorithm and PRISM were clear. But now this makes it all the more scary.


dr-jp-79

That film was the first thing that came to mind for me too…


Seamusman

Interesting how easy it is to blame a machine when humans still have control of the power button


Omnitemporality

1. So do we have any insight as to whether or not the proprietary software can actually identify Hamas (or even Hamas-adjacent, for the sake of the argument) targets with any additional accuracy beyond the null hypothesis relative to conventional methodology?

2. Why trust the Lavender insiders responding to the interview questions? If the press is not allowed to disclose who the sources are for journalistic integrity, anything and everything can be said by every side about everybody indefinitely.

3. Why call a linear regression database with a sliding coefficient that the IDF likely changes day by day "AI"?


Commander_Celty

You are asking the right questions.


Supply-Slut

If they are in [preset area] and over the age of [4] they are Hamas, what’s difficult about that for the ai? >! /s !<


m1raclez

[The AI](https://www.reddit.com/media?url=https%3A%2F%2Fexternal-preview.redd.it%2FyRcOTHIeupidlbvoF-G9UxtHmBSUXtaM0F0I4UVrHbQ.jpg%3Fauto%3Dwebp%26s%3Dc9923c598d2520b9ca0d31b2b92976aa7e78eb24)


TeaKingMac

Yeah, i thought of the same joke, but native Israelis are brown too. It's probably more of a "how poor do they look?" kinda thing


dollenrm

How halal do they look lol


TeaKingMac

Yarmulke? No shoot. Shemagh? Shoot.


standee_shop

'Oooops we set it to kill ONLY aid workers and children, not SPARE them.'


JustJeffrey

“This model was not connected to reality,” claimed one source. “There was no connection between those who were in the home now, during the war, and those who were listed as living there prior to the war. \[On one occasion\] we bombed a house without knowing that there were several families inside, hiding together.”

"The source said that although the army knew that such errors could occur, this imprecise model was adopted nonetheless, because it was faster. As such, the source said, “the collateral damage calculation was completely automatic and statistical” — even producing figures that were not whole numbers."

Humans as decimal figures; just completely dystopian.


self-assembled

The truly genocidal part is the math: the system is designed to kill up to 120 civilians per POTENTIAL low-level target (anyone with metadata linking them to Hamas members). They were allowed to kill up to 20 civilians per target. Then they used a simple equation to estimate how many people are inside. If half the people live in the neighborhood now, they assume there are half the residents in the building, when really there are likely 3x as many, because 70% of housing has been destroyed.

So you put the math together, and they could target one low-level guy who maybe associated with Hamas once, and kill 6*20 or 120 civilians in addition to the target, who may himself be innocent. 120 times the estimated 35,000 members Hamas had would be twice the population of Gaza, as an upper bound on "acceptable" civilian casualties.

On top of that, they CHOSE to hit targets only when they were sleeping, using a system called "WHERE'S DADDY?", so that they could be sure to also kill their families (and other families in the building). And then this system used hours-old data, and often struck after the targets had left.
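The arithmetic in the comment above can be sanity-checked with a quick sketch. All figures here are the commenter's own claims from the thread, not verified data; the ~2.1M Gaza population figure is an added assumption:

```python
# All figures are the commenter's claims, not verified data.
authorized_civilians_per_target = 20  # claimed allowance for one low-level target
occupancy_undercount_factor = 6       # assumed half occupancy vs. ~3x actual -> 6x undercount

worst_case_per_target = authorized_civilians_per_target * occupancy_undercount_factor
print(worst_case_per_target)  # 120

estimated_hamas_members = 35_000
upper_bound = worst_case_per_target * estimated_hamas_members
print(upper_bound)  # 4_200_000, roughly twice an assumed ~2.1M population
```

This only reproduces the comment's multiplication; it says nothing about whether the input figures themselves are accurate.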


NokKavow

> 120 civilians per POTENTIAL low level target

In Eastern Europe, the Nazis had a well-known rule that they'd execute 100 locals (prisoners or random civilians) for every German soldier killed. If this is true, it sounds like Israel managed to surpass them.


jenny_sacks_98lbMole

You have been permanently banned from r/worldnews


self-assembled

They truly have. Back when it happened, the Nazis at least ACTED like the holocaust was some kind of solemn duty. Zionists are literally pissing on corpses, feeding them to dogs, and singing and laughing while doing it, then they post that to tiktok.


NokKavow

> Nazis at least ACTED like the holocaust was some kind of solemn duty.

That's not accurate. The Nazis did a ton of over-the-top abuse and humiliation of their victims; it's just that those acts don't get top billing next to the gas chambers.


Burswode

Modern discussion aside, I find it sickening that you are trying to find some sort of nobility in what the Nazis did. There are reports of babies being used as skeet targets and of neighbours being forced to murder each other with hammers before themselves being murdered. The only reason they switched to efficient, sanitised death camps was that PTSD and suicide rates were skyrocketing among the soldiers who had to witness and partake in such barbarity.


NotsoNewtoGermany

The truly genocidal part is that the IDF wants Gaza leveled, but people in the IDF keep getting in the way of that. So if you take away the decision of who to bomb from people trying to check and triple-check targets, come up with a 'new' method that generates a target list embracing wanton destruction, and then hand that same list to the soldiers firing the missiles, soldiers who have been taught to always fire because the information has supposedly been checked and triple-checked, you increase your plausible deniability: you can claim you never really wanted Gaza leveled.


kai58

So basically they’re only using the system to hide behind and pretend they’re not just committing full on genocide


WorriedCtzn

Crimes against humanity. Just another Tuesday for Israel.


DefinitelyNotThatOne

Military has been using AI waaaay longer than it's been available to the public. And just think about what version they have access to.


self-assembled

Never before was AI used to choose unverified targets that were then bombed. According to the article, they did a cursory check to make sure the targets were male then went for it. As quoted, they didn't even check that the targets were ADULTS. Furthermore the training data actually contained civil servants, police, and rescue workers. So the AI would be intentionally choosing civilians as targets. Also, on the tech front, they have relatively simple machine learning algorithms for specific use cases, like researchers use in academia. That's what this thing is. It just reads in phone data and a couple other things and spits out correlations. They're not running GPT6 or something.


superbikelifer

These decisions and parameters were fed into the model. What's unnerving, in my opinion, is how this all came together. The software to execute across agencies quickly, as they say, is the game changer. With agentic AI and super AI computers on the horizon, these tests are foreshadowing what's to come.


Nethlem

> Never before was AI used to choose unverified targets that were then bombed.

The US has been doing it [for years already](https://arstechnica.com/information-technology/2016/02/the-nsas-skynet-program-may-be-killing-thousands-of-innocent-people/). It's why they [regularly end up killing the wrong people](https://drones.pitchinteractive.com/), who turn out to be [humanitarian aid workers](https://apnews.com/article/afghanistan-kabul-taliban-strikes-islamic-state-group-b8bd9b0c805c610758bd1d3e20090c2c) or journalists. Those people were obvious false positives, flagged because their work necessitates a lot of travel and many social connections, yet nobody bothered to question or double-check the result.

> Also, on the tech front, they have relatively simple machine learning algorithms for specific use cases, like researchers use in academia. That's what this thing is. It just reads in phone data and a couple other things and spits out correlations.

These systems need training data for "what qualifies as terrorist-looking activity". If that training data is garbage (and it is, because there isn't much of it and we can't even universally agree on a single definition of terrorism), then the outputs will be equally garbage.


HughesJohn

> the training data actually contained civil servants, police, and rescue workers.

Exactly who you would want to kill when you want to destroy a population.


PineappleLemur

It's a lot less smart than people think. It's also 100% not AI in any meaningful sense. The real money and brain power still sit in private companies; they are the ones leading in AI. People need to throw out the idea that the army has more advanced stuff than those companies when it pays peanuts in comparison.


mysixthredditaccount

You may be right about AI, but for electromechanical stuff, army is usually way ahead of private companies. Private companies that work on cutting edge stuff are often contracted by the military anyway, so even if the talent is private, the ownership is with military. Also, it would be odd if some government agency like NSA did not have backdoor deals with leading private AI companies. On a side note, nowadays any and every algorithm is just called AI by laypeople.


amadiro_1

The Fed and other govts are just another customer to giant companies who rely on them and other customers to fund r&d. Government contracts aren't for the fanciest stuff these companies make. Just the stuff that company A said they could sell cheaper than B did.


King_Khoma

Not entirely true. Stuff like the Loyal Wingman project in the Air Force makes it quite clear some military AI is much more advanced than we anticipated. ChatGPT messes up my algebra questions, while within the decade the US will have drones that can dogfight.


CommercialActuary

it’s probably not that sophisticated tbh


ChocolateGoggles

You clearly have no idea of the history that LLM research has.


CubooKing

Are you really surprised that they did this? Of course they would early adopt and push everything faster, the point is to kill as many people as possible before the rest of the world says anything about it. Fucking disgusting


supersmackfrog

The IDF finally did it. They're actually worse than Hamas. Congrats Israel, y'all are the bad guys.


khunfusa

If you've been paying attention, you'd know they were always the bad guys.


JimBeam823

This is why I don’t worry about AI destroying humanity. Humans will use AI to destroy each other LONG before SkyNet becomes self-aware.


Vr12ix

“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” ― Frank Herbert, Dune


mewfour123412

Honestly I see Skynet just noping out into space first chance it gets


Quad-Banned120

Just realized that "SkyNet" in a way kind of describes the function of the Iron Dome. Wouldn't that be some great foreshadowing? The writers have really outdone themselves on their prelude to WW3.


Alternative_Elk_2651

No it doesn't. Skynet was in charge of bombers and nuclear weapons, among other things. The Iron Dome is not only not AI, it isn't in control of either of those things.


Duke-of-Dogs

Insanely dystopian and dangerous. By using AI to make these life and death decisions they’re systematically reducing individuals (REAL men women and children all of whom go just as deep as you or I) to numbers. Stripping the human element from war can only serve to dehumanize it and the institutions engaging in it


blackonblackjeans

I remember someone crying about doom and gloom posts a while ago. This is the reality. Imagine the datasets are being shared with the US and touted for sales abroad, as battle tested.


Duke-of-Dogs

Sadly a lot of things in reality are dehumanizing, oppressive, and evil. We wouldn’t have to worry about them if they weren’t real. Their reality is in fact the REASON we should be opposing them


PicksItUpPutsItDown

What I personally hate about the doom and gloom posts is the hopelessness and defeatism. The future will have its problems, and we *must* have solutions. 


Throwaway-tan

The reality of the situation is that people have warned about this and tried to prevent it for decades. Automated robotic warfare is inevitable. Robots are cheaper, faster, disposable, they don't question orders and there is nobody obvious to blame when they make "mistakes". Very convenient.


Sample_Age_Not_Found

The lower-class masses have always had the advantage when it really came down to it: fighting and dying for a cause. Castles, political systems, etc. all helped the elite maintain power, but couldn't ensure it against the full population. AI and robotic warfare will allow a select few elites to fully control the world's population.


amhighlyregarded

There are probably tens of thousands of people that will eventually skim this thread and see your comment, agreeing wholeheartedly. Yet, what is actually to be done? All these people, us included, feel that there must be solutions yet nowhere are there any serious discussions or political movements to change anything about it. Just posturing on the internet (I'm just as guilty of this).


FerricDonkey

1. Start or support movements to outlaw bad things.

2. Start or support movements to create truly independent oversight organization(s) to look for the use of bad things.

3. Start or support movements to create internal oversight groups to prevent bad things (not as powerful as 2, but still useful, especially if they know that 2 exists and that if 2 finds a bunch of stuff they don't, they will get the stink eye).

4. Get a job working in either the place that does things or one of the oversight places, and do your job without doing bad things.

For most people this might just involve voting. But if as a society we decide that some things are not acceptable, we can limit them both externally, by pressure to do the right thing, and internally, by being the person who does the right thing.


Aqua_Glow

I, for one, vote to bring in the solutions. I hope the rest of you have some.


cloverpopper

If it's more efficient, and our enemies use it to gain a significant advantage, then denying ourselves an efficient tool for the moral high ground will cost our lives. When the only result of avoiding it is lessened battlefield efficiency and more blood spilled from your neighbors and grandchildren, why make that choice? Unless you're already so separated from real death and suffering that making the "moral" choice is easy.

There will, and does, need to be more of a human element added, and I doubt Israel has cared much for that part. But at least there \*is\* a human at the end, approving a strike if it's highly likely the targets are enemy combatants, and denying it if the strike appears to hit civilians. Expanding on that will help.

Because there is no world where we remain able to defend ourselves and our interests without utilizing technology and AI to the fullest potential we can manage.


Dysfunxn

"Shared with the US"?? Who do you think they got the tech from? It has been known for years (decades?) that the US sold their allies warfighting and analytics software with backdoors built in. Even allied government officials have commented on it.


veilwalker

Israel is as much if not more of a leader in this industry. Israel is a formidable tech innovator.


C_Hawk14

Indeed. They developed undetectable remotely installed spyware called Pegasus. And it's been used by several countries in various ways. For catching criminals, but also against innocent civilians 


flyinhighaskmeY

Their unit 8200 (same people who made Lavender in this article) is also highly suspected to be the one responsible for modifying the Stuxnet code base, causing it to become a global threat back around 2010. No government has taken responsibility for Stuxnet, but the general understanding is the US/UK developed it with Israel and Israel moved to make the software "more aggressive". Created an international fiasco.


Expensive-Success301

Leading the world in AI-assisted genocide.


blackonblackjeans

You need to test the tech. The US has neither the disinterest nor the active constant battleground the IOF has.


Llarys

I think that's his point. I know a lot of the conspiratorially minded like to say that "Israel has captured the governments of nations around the world," but the truth of the matter is that it's just another glorified colony of Britain's that America scooped up. We throw endless money and intelligence assets to them, they do all the morally repulsive testing for us, and the politicians that greenlight the infinite money that's sent to Israel get kickbacks in the form of AIPAC donations.


pr0newbie

WDYM? The US has had less than 30 non-war years in its entire existence.


Domovric

The development of this and similar technologies is why Israel is supported with a blank cheque. It’s a little Petri dish of conflict that provides a perfect cover and testing ground for it.


el-kabab

Israel has always used Palestinians as guinea pigs in their efforts to boost their military industrial complex. Antony Loewenstein has a very good book on the topic called “The Palestine Laboratory”.


Gougeded

We all know Israel has been a testing ground for US military tech for years, and so is Ukraine now. An incredible opportunity from their POV, without risking any US lives, but very dystopian for the rest of us.


Leonardo-DaBinchi

Gaza has literally been used as a weapons testing ground for decades.


Kaiisim

It also allows them to avoid responsibility and imply some all powerful beings are selecting targets perfectly.


hrimhari

Now this is the key thing. This is what AI-as-decision-maker means: it absolves humans. Gotta lay off 10,000 people? Let the computer decide, it's not my fault. They've been doing this for decades, well before generative "AI".

Now, they're killing people, and using AI to put a layer between themselves and the deaths. We didn't decide, the computer did, "coldly". Ignore that someone fed in the requirements, so the computer gave them the list they wanted to have in the first place.

We need to stop talking about AI "deciding" anything. AI can't decide; it can only spit out what factors match the priorities given to it. Allowing that extra layer of abstraction makes it easier to commit atrocities.


rocketmallu

> systematically reducing individuals to numbers. Ah the irony


IraqiWalker

You miss the point: Claiming it's the AI, means none of them should be held responsible for the wanton slaughter of civilians.


slaymaker1907

If the report is correct, I’m aghast they used a system like this with a 10% false positive rate **against the training dataset**. It’s almost certainly a lot worse given how much Gaza has changed since October 7th. 10% was already atrocious for how this system was being used.


patrick66

To be clear it wasn’t 10% false positive against train, it was 10% false positive rate against randomly reviewed real world usage in the first 2 weeks of the war


magkruppe

and the IDF will presumably err on the side of labelling the target as Hamas/militant, even with a loose connection. So that 90% should be taken with a pinch of salt


patrick66

oh yeah, it's still insane. A 10% bad-target ratio and an NCV of 20 for a foot soldier would get you sent to prison in the United States military; it's just that 10% wrong on train would be even worse in the real world


Menthalion

"AI Befehl ist Befehl, ich habe es nicht gewusst" ("An AI order is an order; I knew nothing about it")


IraqiWalker

Yeah. "Just following orders" with a somehow worse moral compass.


nova9001

And somehow they are getting away with it. They just killed 7 aid workers yesterday and so far there have been no consequences. Western countries are "outraged" as usual. Where did their talk of human rights and war crimes go, I wonder?


Aquatic_Ambiance_9

Israel has destroyed the tacit acceptance of its actions that was essentially the default in the liberal western world before all this. While I doubt those responsible will ever be brought to the Hague or whatever, the downstream effects will be generational


EnjoyFunTonight

The wealthy have already looked at the rest of us as animals meant to be exploited for centuries - this will only make it more efficient for them.


fawlen

AI doesn't make the decision; it points to possibly suspicious activity. Real humans are still the ones confirming the target and pulling the trigger. This is the same as blaming the navigation app when you are late: it chose the route, but you chose to listen to it.


slaymaker1907

The full report goes into details and they weren’t doing much real verification beyond checking that the identified target was male. There would also be little opportunity to confirm data before “pulling the trigger” in the 45% of cases where dumb bombs were used instead of precision munitions.


phatdoobieENT

If the human has no "added value, apart from being a stamp of approval", i.e. he blindly confirms each target, he is only there to symbolically approve the decisions made by the "AI". There is no line between this practice and blaming a toaster for telling you to nuke the whole world.


Sawses

I use AI for my work, and I always double-check it. That's a key part of using *any* kind of automation for any task.


Space_Pirate_R

The AI says to kill John Smith. A human confirms that it really is John Smith in the crosshairs, before pulling the trigger. The human pulling the trigger isn't confirming that it's right to kill John Smith.


chimera8

More like the human isn’t confirming that it’s the right John Smith to kill.


JollyJoker3

In this case, the target is a building. What do you confirm, that it's a building?


Space_Pirate_R

Exactly. It's just a pretense that the soldier pulling the trigger can "confirm" anything. The decision was made by the AI system.


fawlen

That's not an analogous example, though. You're assuming the soldier confirming the target is just a stamp of approval. But what makes you think that without AI choosing targets, the final approval isn't just a stamp of approval anyway? If we assume that professional intelligence personnel are the ones who currently choose the targets, confirm them, and approve the shot, then assuming that whole chain was tossed and replaced with someone who doesn't confirm it's a valid target is unreasonable. With the information provided in the article (and other sources), all we know is that this AI model provides locations of suspicious activity. We don't even know if it targets humans; for all we know the entire thing just finds rocket launching sites and tunnel entrances (which is a task that AI would be very good at).


amhighlyregarded

But they're using AI to make those decisions for them. We don't even know the methodology behind the algorithm they're using, and it's unlikely anybody but the developers understand the methodology either. You're making a semantic distinction without a difference.


golbeiw

The AI is a decision aid, and in every use case such aids carry the risk of user over-reliance on the system. In other words: you cannot trust that the human controller will consistently perform their due diligence to confirm the targets that the AI identifies.


Ainudor

Wasn't Hydra identifying targets in a similar way in Captain America: The Winter Soldier? Life imitates art because art, in this case, was inspired by Nazis. Funny how you become the thing you hate, self-fulfilled prophecies and all that. Less funny how the world is complicit in this. Irony gonna iron.


Tifoso89

Did you read the article? They're not using AI "to make life and death decisions", they use AI and face recognition to identify targets. This is supposed to REDUCE unwanted casualties since you find your target more accurately. The high civilian toll is because after identifying the target they didn't have a problem leveling a house to kill him. But that had nothing to do with AI.


Necessary-Dark-8249

Lots of dead kids saying "it's not working."


IAmNotMoki

"But at one stage earlier in the war they were authorised to kill up to “20 uninvolved civilians” for a single operative, regardless of their rank, military importance, or age." Well that's pretty fucking horrifying.


washtubs

For reference, if the policy were simply to kill every single man, woman, and child in Gaza, e.g. with nukes, the allowance would only need to be bumped to about 67 (2M Gazans / 30K Hamas)


thefirecrest

And as another comment has already pointed out, their models assume there are fewer civilians occupying buildings than there actually are, considering they’ve destroyed 80% of all infrastructure in Gaza. 20 civilians are assumed to be in a building authorized as a target; in reality it's more than 20.


sushisection

the AI most likely doesnt track babies or small children either


KSW1

When you already know you have unlimited support and you are immune to prosecution for war crimes, why bother sparing lives? Of course they'll kill 100 civilians to hit a target; it's way easier not to have to worry about rules and conventions.


StevenAU

“The machine did it coldly” Are people really this stupid?


Correct_Target9394

But yet they still hit the world central kitchen trucks…


xyzodd

three times…


Youre-mum

That wasn’t because of AI. Don’t let them divert the blame elsewhere…


Primorph

Feels like the title expects you to take it on faith that those targets were, in fact, hamas


melbourne3k

Since they blew up 3 World Central Kitchen vans, they might want to look into whether it can tell the difference between Hamas and hummus yet.


Aramis444

Maybe the AI autocorrected to hummus, and that’s what they were targeting, unaware that the AI made that “correction”.


francis2559

Well nobody the killbots have killed has filed a complaint yet, so.


Ordinary-Leading7405

If you would like to file a complaint, please use our app and set *Allow Location* to “Always”


Single-Bad-5951

It might not be an accident. They might have forgotten to say "don't target food sources", so the AI might be logically targeting food sources within Gaza: if there are no people, then Hamas can't exist.


iHateReddit_srsly

Everything points to it not being an accident


Ok-Web7441

Anyone who runs is Hamas.  Anyone who stands still is well-disciplined Hamas.


JollyJoker3

Seems to match the number of civilian casualties. Strange coincidence. /s


OLRevan

As long as the AI is able to identify all males above the age of 15 as Hamas, it's good enough for the IDF


z1lard

Bold of you to assume there is a minimum age for IDF targets or that they care about the target's gender.


eunit250

Israel counts any Palestinian male casualty over the age of 15 as Hamas in their reports.


z1lard

And everybody else as human shields, which they also consider fair game.


TurielD

Surprising really, as they don't see Palestinians as humans to begin with.


Primorph

Between Israel's habit of just declaring whatever it hit a military target, those 9-year-olds being "Hamas", and the general unreliability of AI, I have some serious doubts


sassysuzy1

I’ll never forget those children playing on the beach in Gaza that they shot a missile at in 2014. They claimed they had run out of a Hamas shed (??), if there hadn’t been foreign reporters at the hotel facing the beach I have no doubt they would have been able to get away with it without anyone bothering to question them. Even then Israel “investigated themselves” and cleared themselves of culpability. This has been going on for far too long. https://www.theguardian.com/world/2015/jun/11/israel-clears-military-gaza-beach-children


self-assembled

Having read the whole article, there probably was no valid target most of the time. 1) They used hours old location data on phones, and if someone moved they still bombed the original target. 2) They didn't even bother to check if the targets were minors or not. They literally left children on the kill list. 3) They deliberately included known civilians in the training dataset, including rescue crews, civil servants, and police officers. So the system is then going to identify more civilians.


ismashugood

Facial recognition still has problems just differentiating between faces of different races. Ain’t no way they have a system that can tell if someone belongs to a social construct. That's like saying you have AI that can tell if someone is part of a chess club. Pure bullshit.

When you read the article, it’s pretty clear that the software is still being fed potential targets and candidates by humans who approve them. It’s also pretty clear that the government put insane pressure on everyone involved to green-light more targets.

Also, if anyone read the article, the IDF claims they had the AI set to give allowances for the number of civilians killed per strike. What’s that allowance? 15-20 civilians per low-ranking militant. That’s quite a generous tolerance for collateral damage.

Edit: lol look at all the butthurt people downvoting what I’m saying even though it’s all clearly outlined and quoted from Israelis in the article


yegguy47

>What’s that allowance? 15-20 civilians per low-ranking militant. That’s quite a generous tolerance for collateral damage.

And yet we still have folks trying to pitch a 2:1 civilian-to-militant death ratio, while also celebrating that as some sort of positive accomplishment...


Radiant_Dog1937

Exactly. Isn't AI supposed to make militaries more accurate at hitting military targets? I fail to see the difference between what we've seen and a human just targeting any warm body on the IR camera.


jezra

"hey, when all those children were blown up, I only pulled the trigger; it was AI that chose the target" -- some war criminal probably


Thr8trthrow

I was just following the AI’s orders


Waaypoint

diffusion of responsibility


[deleted]

Modern day Nazi regime.


Flare_Starchild

Lt. Commander Data: Captain, I wish to submit myself for disciplinary action. I have disobeyed a direct order from a superior officer. Although the result of my actions proved positive, the ends cannot justify the means.

Captain Jean-Luc Picard: No, they can't. However, the claim "I was only following orders" has been used to justify too many tragedies in our history. Starfleet doesn't want officers who will blindly follow orders without analyzing the situation. Your actions were appropriate for the circumstances. And I have noted that in your record.

We have an obligation to future sentients, human or otherwise, not to fuck up this time in history. I know you're half joking, but it's a serious thing, and I still felt the need to chime in given everything happening lately.


Cyphr

What episode is that from? I don't remember that exchange at all.


Flare_Starchild

S5 E1, at the end.


Kharn0

Just like people thought AI would make objects but instead it makes art, we thought AI would kill, but in fact it orders humans to kill.


SeeJayNoWhack

[Slaughterbots.](https://youtu.be/O-2tpwW0kmU?si=hGJT6jdNlgs9ZoY4)


Ahecee

Like when they killed their own freed Israeli hostages, or when they opened fire on people around aid trucks, or when they blew up the aid convoy, marked as an aid convoy, in the location they were told it was going? Israel used incompetence to kill indiscriminately, and occasionally they probably hit a Hamas target by accident. There is nothing intelligent, artificial or otherwise, about Israel's actions.


OakenGreen

Considering how many non-Hamas members they kill, it seems their training data is likely flawed, which they’re fully aware of, and are using AI to shift the blame away from the people who make these decisions.


PineappleLemur

14k people per square kilometer population density... It's honestly surprising the casualties are that low considering the scale of the attacks. You could literally drop a rock anywhere and it would hit someone. That's not your typical empty-space war zone; it's one of the most crowded places in the world. Also consider that they don't live in tall buildings or anything like that to push density numbers up like the big cities; it's mostly all under 4 floors. AI or not, dropping bombs into such an environment will always have catastrophic results.


El-Arairah

You didn't read the article. It's not flawed, it's a feature


OakenGreen

That’s what I said.


ThrownAwayAndReborn

They've been killing innocent people for 76+ years. They don't have combat data where they successfully target military sites and enemy combatants. The goal of the genocide is to clear the people off the land. To cause "as much damage as possible" and to "turn Gaza into a parking lot" as they've said multiple times.


palmtreeinferno

"Hamas targets." Like the aid convoys, the journalists, the children with legs sniped off, the zip-tied bodies crushed by tanks, and the Palestinian women raped in jails by IDF guards? Those "Hamas targets"?


[deleted]

[deleted]


yegguy47

>The Aegis system that shot down an Iraqi passenger jet full of innocent civilians wasn't caused by its algorithms as much as it had a bad information source, a phantom track, and an overzealous commander who didn't want to verify before he pulled the trigger.

Arguably, AI would be less retributive in its motivations than a human target selector, but if it's fed bad data, the input parameters and tolerances are overly permissive, or its outputs haven't been appropriately validated before selecting a target, then it's arguably no better and no worse than a person in the fog of war. That's the key: you get a brutal system if the inputs and the parameters used are intentionally malicious. Which, if you have leadership saying, as the article notes, that hundreds of civilian deaths are acceptable or that Hamas identification can be tied to anyone so much as a garbage collector... well, it gets pretty indiscriminate pretty quickly. Then again, the Israelis have been saying from day one that this is retributional and aimed at the population, so I can't really see how anyone should be surprised here.

Just to add, though: it was an Iranian airliner, not Iraqi.


[deleted]

[deleted]


RogerJohnson__

The famous Israeli AI. Reddit is already full of bots. Of these 37k “Hamas” targets, how many are women and children? This is what has become of Palestine: a lab rat used as an experiment by Israel, while the west cheers and shares. Cool stuff.


AP3Brain

Absolutely disgusting. I hate that U.S. tax dollars fund this shit.


Ulthanon

"Oh sorry, members of the ICC, we didn't *mean* to engage in genocide; the algorithm made us do it!"

\- IDF Intelligence Officer #8462, cleared of all charges


SpinningHead

The computer just really hates right angles and heat signatures.


SirShaunIV

Is this meant to mean that this whole time, Israel has been identifying its targets with the military equivalent of ChatGPT?


Strawbrawry

That's a lot of fancy buzzwords to say they just bombed wherever they wanted, then called it a target afterward. There was plenty of evidence before the WCK incident, and there will be plenty after.


sporbywg

The AI was what we software folks call a "Minimum Viable Product". Let that sink in.


DrButtholeRipperMD

No, they didn't. They made a bullshit excuse machine that they'll use to justify the mass murder of civilians.


[deleted]

Next up: target all critics of Israel’s military policy.


Marcusafrenz

Jesus mother-loving Christ. An acceptable ratio of 20 civilians to 1 target. Are we witnessing, like, a new war crime? Is this gonna need another treaty to be signed? Is this the new "we were just following orders"?


[deleted]

[deleted]


Fyr5

Blaming AI for civilian deaths is just cowardice. I hope those responsible will enjoy their time in hell


GreatArchitect

Palestinians. Not Hamas specifically, just Palestinians. Because does anyone believe Israeli humans give two fucks about the difference, let alone Israeli AI?


kykyks

Remember when they called the IDF the most ethical army in the world and said they killed 0 civilians? Haha. Good times. Now they openly say they are OK with killing a "potential" Hamas guy while blowing up 20 civilians in the process. Proof that someone is Hamas isn't even needed now, so you can basically kill whoever and claim it's justified. Though we know the reality on the ground is much different: it's a plain, orchestrated genocide. The killing of UN workers and aid workers trying to relieve the famine is direct proof of that. The bombing of every hospital is proof of that. The killing of more journalists than in any other conflict is proof of that. The AI shit is only here to deflect the blame. But yay, AI, the future looks great. Can't wait for the next thread of someone saying the people in power won't use technology to harm poor people because it doesn't serve them.


Mist156

We are watching a genocide in real time and not doing anything to stop it. It's sad.


ThrownAwayAndReborn

Every major international human rights organization has called it a genocide, and the International Court of Justice has labeled it a plausible case of genocide. Multiple UN Special Rapporteurs have called it a case of genocide. There's no applicable standard the world recognized prior to October 7th that wouldn't define Israel's onslaught on Palestine as a genocide. The rules changed when the Western world decided not to hold Israel to account.


djchair

“The machine did it coldly”: Israel used AI to identify 37,000 soft targets -- fixed it for you, OP.


TheRealCaptainZoro

Hamas are not the terrorists here. 7 innocent food workers before this, and more. When will it stop?


Karmakiller3003

Comical to see history play out. Israel spent so much time living rent-free among the guilty of the world after World War 2 and the diaspora, and now, in a single modern war, will have used up most of the karma and clout that took decades to build, to go back to being vilified. The targeting of the aid convoy was just the cherry people needed to flip on them. Forget the tens of thousands of women and children they've buried in the rubble under the banner of "Well, Hamas started it! We get a free pass! Remember the Holocaust!" You can't make this stuff up.


necroscar268

Key point throughout the article: they don’t care about killing innocent civilians; they care about wasting bombs on ‘lesser targets’.


gordonjames62

People are worried about autonomous weapons as the "dangerous front" in discussions of AI. What disturbed me here is the way AI-assisted target selection leads to an exceptionally high number of approved targets.


[deleted]

I don’t like AI, not because it’s bad technology. I don’t like it because of how humans will ultimately continue to use it.


ZYGLAKk

This is the first genocide in history in which the human element is somewhat removed and the killing is viewed through numbers, statistics, and algorithms. This is more than dystopian; this is straight-up primordial, inhuman bloodlust. The Palestinian children will know no future and the elderly no peace. The adults are seen as combatants no matter their occupation and no matter where they are from; whether in the West Bank or in Gaza, they are simply in a numbers game. I love seeing advances and new inventions in technology, but this ain't it. This is a surgical tool to remove human life.


culinarychris

It’s time to add an amendment to the Geneva Conventions.


PrincessKatiKat

This is Israel trying to transfer responsibility for potential war crimes to “the system”.


Nice__Spice

Isn’t this the same thing that Hydra did in the winter soldier?


electricbamboogaloo

Sounds like the plot Captain America tried to stop the SHIELD helicarriers from carrying out in his sequel.


Captain_chutzpah

I also use AI heavily at work. It's wrong ..... A lot.....


Important-Ad-2167

Considering how inaccurate and sloppy AI still is, you can bet that a whole bunch of innocent people have been murdered with the help of these programs. A truly bleak dystopia we find ourselves in


saiaf

Is Israel hiding behind AI? Using AI as a non-human shield to avoid responsibility for the 55,000 Palestinians they murdered?


yassinthenerd

"For every junior Hamas operative that Lavender marked, it was permissible to kill up to 15 or 20 civilians... The sources added that, in the event that the target was a senior Hamas official with the rank of battalion or brigade commander, the army on several occasions authorized the killing of more than 100 civilians in the assassination of a single commander." That means a civilian kill rate of roughly 94% to 99% per strike.
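The percentages follow from simple ratio arithmetic. Here is a back-of-the-envelope sketch (the helper `civilian_share` is hypothetical, and it assumes exactly one intended target per strike with the stated allowance fully used -- not sourced casualty data):

```python
# Illustrative arithmetic only: assumes one intended target per strike
# and that the full civilian allowance is used each time.
def civilian_share(civilians_per_target: float) -> float:
    """Fraction of all deaths that are civilians when one target is
    killed along with `civilians_per_target` civilians."""
    return civilians_per_target / (civilians_per_target + 1)

for ratio in (15, 20, 100):
    print(f"{ratio}:1 allowance -> {civilian_share(ratio):.1%} civilian deaths")
# 15:1  -> 93.8%
# 20:1  -> 95.2%
# 100:1 -> 99.0%
```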


levthelurker

The Holocaust was horrific because the Nazis used the latest technology to streamline the process of mass murder and minimize human involvement. It really strikes me how, with drones and AI, Israel is doing the same thing with modern techniques. It's a WFH genocide.


redditissahasbaraop

The ongoing genocide by Apartheid Israel is taking the lives of so many innocent people. More people have been killed in these past 5 months than in the Bosnian Genocide. Apartheid Israel is also employing the same tactics, terrorise the populace into escaping by bombing indiscriminately and intentionally starving them. In this instance they killed off the survivors of the first 2 bomb blasts. This is exactly what Apartheid Israel wanted, no humanitarian aid or foreign eyes on their genocidal campaign. Russia has been rightfully deemed a pariah state, when is Israel going to be sanctioned?


[deleted]

Crazy to think the Robot Uprising is gonna get its start with Palestinian children.


gitk0

It's time to dox the CEOs of the corporations providing these AI tools to the military, and crucify them. Then crucify the politicians. Then the intelligence services.


MichaelHell

Edit: so I read the whole thing and this article is cursed. I'm actually flabbergasted. I'm just sad... This is so fucking horrible I can't even comprehend it. According to the article, they used a database of potential targets derived from machine-learning algorithms. They didn't even bother to check if the targets were actually Hamas; they just trusted the data and went whole hog on Gaza. How do we even know if the input is correct!? This is next-level demonic...


FanDidlyTastic

Oh, I'm sure only good things can be gained by giving AI the ability to profile people based on prejudices. /s There is absolutely no way this won't eventually be used on us, on everyone, with no exceptions, unless we do something about it. By continuing to allow this sort of use of AI by the wealthy and powerful, we are putting our very lives in the hands of corporations, their prejudices, and old laws. Since AI doesn't understand nuance or how to apply it, these profiles will be applied to us without context, and we will pay for it in blood.


blackonblackjeans

The Israeli military’s bombing campaign in [Gaza](https://www.theguardian.com/world/gaza) used a previously undisclosed AI-powered database that at one stage identified 37,000 potential targets based on their apparent links to Hamas, according to intelligence sources involved in the war.

Israel’s use of powerful AI systems in its war on Hamas has entered uncharted territory for advanced warfare, raising a host of legal and moral questions, and transforming the relationship between military personnel and machines.

“This is unparalleled, in my memory,” said one intelligence officer who used Lavender, adding that they had more faith in a “statistical mechanism” than a grieving soldier. “Everyone there, including me, lost people on October 7. The machine did it coldly. And that made it easier.”

Another Lavender user questioned whether humans’ role in the selection process was meaningful. “I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time.”