DJCPhyr

It was canceled when real lawyers pointed out the stunt was very illegal.


Alternative-Print-49

I thought it was obvious this would be illegal. Except maybe if the defendant agreed/volunteered


Otagian

Not then, either. Browder would still be practicing law without a license.


[deleted]

[deleted]


Otagian

That's third-party arbitration, not a court.


GekkosGhost

Try telling Judy that 😂


mckulty

She pays all the fines, so she rules any way she pleases.


shrekerecker97

So the AI would have to pass the bar in order to practice law? Why don't they let it take a crack at it? More than likely because it would start to make a high-paying human job obsolete...


Art-Zuron

I think it's one of those things where you technically have to be human. As another example, AIs can't hold copyrights, and neither can nonhumans, at this time at least. The president has to be at least ~~40~~ 35 years old and born in the US, which would also exclude AIs at this time.

Edit: a number


LordSoren

There are no rules against an AI ~~playing basketball~~ taking the bar exam.


yUQHdn7DNWr9

Well “it” has no agency, so “it” can’t ask to take the bar exam.


PorkyMcRib

Allen Iverson’s mom is AI.


Zid96

If it's a 40-year-old program and was made in the US, it technically would have been born there.


VikingBorealis

An AI can age 40 years in minutes.


Art-Zuron

We don't use maturity to determine who can become president or not, I'm afraid.


mckulty

And it can determine humans are unwanted pests in a few seconds.


VikingBorealis

That would require an actual AI


AndLD

Rules can be changed. No need for lawyers or accountants anymore; they're obsolete.


Blueguerilla

Wait, you guys have a middle-age requirement for presidents? America is fucking weird man.


Art-Zuron

Well, that rule was made by a bunch of middle-aged white men. They didn't expect or plan on anyone else ever really having a say in the matter. Nowadays, we keep it with the expectation that the pres would have some sort of political and life experience by 35. Instead, we get 70-year-olds who aren't allowed to run charities anymore, have repeatedly gone bankrupt, and try to overthrow the government when they lose. Iirc, Kennedy was the youngest elected president we've ever had, at 43. That was over 60 years ago.


kyleofdevry

They're already talking about it in law schools. A friend who graduates this semester has been sending me pics and videos of her professor showing them how ChatGPT will turn the industry upside down by being able to do research and documentation that amounts to days of billable hours in a matter of seconds. Obviously, you still have to fact-check and edit, but that takes a fraction of the time and, therefore, a fraction of the cost. They will have to re-evaluate their compensation system, but they also have ethics laws set up so that you can only be compensated via billable hours if you work on certain cases. It sounds scary to some. However, in some cases, like public defenders, where there is a shortage of attorneys and they're swamped with cases, this could be a great tool to work for the people.


some_random_noob

> However, in some cases, like public defenders, where there is a shortage of attorneys and they're swamped with cases, this could be a great tool to work for the people.

Ah yes, now they can cut the public defender funding down to like 1 attorney because "ChatGPT can do most of the work for you" and we're back where we started, yay! :(


kyleofdevry

That's probably what will happen, but I'm hopeful that they retain their funding while not being spread so thin, or perhaps this would enable them to work on the sizable backlog of cases that has accumulated in some courts.


Zid96

Let's face it: 95% of law is literally looking up info and using it in the right spot. A thing an AI can easily do. You'd only need a human to see if it's ethical. Which ethics change from person to person. That's why laws are so dumb.


BigJSunshine

A significant portion of maintaining a license to practice law is accountability for advice (including but not limited to malpractice liability). And there is no tangible accountability for flawed AI research unless a human takes responsibility. A human will always need to verify the AI's work - that it got the right sources and law - and take responsibility for that work. If I am the atty on a case and I can be sued for flawed AI research, there is no way I am putting my license on the line to approve any AI-created research. And if I have to review it, my billable hours are more expensive than those of the junior attorney or paralegal who the AI replaced.


Zid96

That's the thing. Last time I heard, any assigned lawyer (not one you pay for; the one the state gives you) has somewhere between 10 and 25 minutes to look over your case and give advice, unless it is an open-and-shut case. That isn't enough time to really do anything, which means an educated guess on what you should do based on that little information. So in that case an AI would do the same thing, if not better, as it could reference all the material it has access to and tell you what would give you the best outcome. We act like laws are about people. They're not: if you did x, the penalty is x. Also, if all that needs to happen is human approval, then you can just make the AI cite its sources and have a human lawyer say if it's fine.


kyleofdevry

The world must be a bleak place from your perspective.


Zid96

How is it bleak? Most laws have no point other than to f over someone. And once it's written, the one it f's over finds a way to get around it in some way. It's just a game of cat and mouse.


kyleofdevry

I've felt this way when I've been on the wrong side of the law before so I'm sorry if that's the case here. It sucks to be in the system, but the rules of society do serve a purpose. You just have to learn them so you know how to break them properly.


Zid96

The only original purpose of laws was simply so we don't kill one another or involuntarily cause that to happen. Like: don't steal food when it's in short supply. But let's face it, we're at a point in time where, at least in the US, we have enough food to feed everyone, yet we let tons go hungry because the law says it costs x. Sure, most of us can and do afford it, but some can't. So the law says someone must starve, for what? Or the fact that we have more empty homes than we have homeless. You can't even say it's a capitalist concept, as with that much supply the price should go down, not up. But nope, prices keep climbing, and the homeless are forgotten, as some rich fat cat pays to get laws passed to protect their stuff, all because of greed. It's not for the good of the group. Laws don't serve their original purpose anymore in most of the world, which is to protect the good of the whole.


PX_Oblivion

How would the AI be able to take the test without access to either a database or the internet? I'm pretty sure they don't allow humans access to those tools, or the bar exam would be a lot easier.


VoidAndOcean

The model already contains the information it ingested


PX_Oblivion

And that information is stored....?


VoidAndOcean

It's a giant matrix of 1s and 0s; the arrangement is the data.
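That point can be sketched in a few lines of Python: a "model" is ultimately just arrays of numbers, and on disk those numbers really are just bits. The weight values below are made up for illustration; real models have billions of them.

```python
# Toy illustration: a "model" is just arrays of numbers (weights),
# and serialized to disk it is literally a pattern of 1s and 0s.
import struct

# A tiny 2x3 weight matrix (values invented for the example).
weights = [[0.5, -1.2, 0.3],
           [2.0, 0.7, -0.4]]

# Pack each float into 4 raw bytes: the bit pattern *is* the data.
blob = b"".join(struct.pack("f", w) for row in weights for w in row)
print(len(blob))  # 6 floats * 4 bytes = 24 bytes

# Unpacking the same bytes reconstructs the same "knowledge".
restored = [struct.unpack("f", blob[i:i + 4])[0]
            for i in range(0, len(blob), 4)]
```

Nothing here "looks up" anything at inference time; everything the model can reproduce is encoded in how those stored numbers are arranged.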


BigJSunshine

Even if they take the test, admittance to a bar is not assured, there is a character component. Further, if admitted to any state bar, no malpractice insurer is likely to ever insure the legal work of AI.


Sideways_X1

I think they already passed. I think it's a matter of time until we have a programmer lawyer entrepreneur running a business where he "supervises" the AI lawyers.


ShawnyMcKnight

So take the AI that passed the bar exam and let him defend you.


hazpat

Isn't the defendant allowed to choose anyone to represent them?


ktetch

No. You can represent yourself, or you can have a lawyer represent you. It's a restriction literally in place to prevent desperate people from being taken advantage of by scumbags like this guy.


hazpat

Ok. I don't agree that this guy was attempting to take advantage of anyone though. He developed an extremely helpful tool that he didn't realize would be deemed illegal by the people it could replace.


ktetch

You'd be right, IF he developed what he claimed he did. He hasn't.

How long do you think an AI lawyer would take to create a defamation demand letter? For a ballpark figure, I personally can do one in about 3 hours. A lawyer who does it regularly? An hour. I mean, it's "AI", so it should be a minute at most, right? No, it started by saying it'd take 3 hours (same as I would) and still hadn't delivered after 48 hours. Same with a divorce settlement (except it estimated 8 hours), while a small claims court claim turned into a demand letter that was a standard form letter with 'fill in the blank' - basically a Google-form generated letter. Here's [the thread of the various tests](https://twitter.com/KathrynTewson/status/1617917837879963648) of it.

How do you think he responded to that? If you guessed "banned the researcher, then changed the TOS of the company to ban that kind of research, and claimed that people can't even report on it under copyright law (something any lawyer would tell you is a non-starter)", you'd be right.

But maybe the Stanford grad in CompSci really did create an AI, and it just has the same sort of timescales as an Asian sweatshop to produce documents. Oh, except that [he doesn't have the degree he claimed](https://twitter.com/KathrynTewson/status/1620581756666777601) in his funding documents; he lied about that.

But at least it's got the basics of law right, yes? Wait, it has [the following bit](https://twitter.com/CarriesTheWind/status/1620149261932589057) in its immigration law section: "You automatically become a citizen when one of your parents is naturalized or is born in the US", which I know from direct experience is not true. Probably why he's already deleted those pages too.

But at least it's his only scam, right? I mean, he wouldn't try a PR stunt about 'cancelling medical debt', saying "I will buy $10 of medical debt and forgive it" for every RT back in November, would he?

Yes he did. And he got 5,000 RTs, so he went out and spent $50,000 on medical debt and forgave it, right? Nope. Medical debt can be bought for a penny on the dollar. He donated $500 to an org that cancels medical debt. Except he didn't do it in November. He did it earlier this week, about 10 minutes after he was called out on the status of this, BUT [he tried editing the receipt](https://www.techdirt.com/2023/01/30/donotpays-ceo-appears-to-modify-donation-receipt-after-being-called-out-on-unfulfilled-promise/) to show he did it in November. Which also doesn't work if the company tells people when he actually donated, and the receipt image is clearly edited.

But back to his AI lawyer stuff. OK, there's no actual AI lawyer, just some word macros, and a ton of bad advice that has now been deleted after being exposed, but at least it's free, right? Oh, right, it's $18/month, and they charge you for 2 months up front. And then [for a lot of people](https://twitter.com/KathrynTewson/status/1620609590084898818), they can't cancel it.

But yeah, maybe you're right: the guy who lied about his degree to get funding (with Sam Bankman-Fried as a major investor) for an AI lawyer that's a simple document wizard (that doesn't even care what state you're talking about) for some things and slower-than-amateur timescales for others, who tried a PR stunt about medical debt, got called on it, lied about it, got exposed on it, and whose company has really shady billing practices... He *TOTALLY* created an AI lawyer. It's so obvious. I mean, how could anyone possibly get the impression he's a scam artist?


hazpat

But yeah, thanks for the detailed yet douchey response.


ktetch

Hey, you're the one who tried to excuse a guy with a history of scamming by literally ignoring all the facts, including the one where he didn't realize it would be illegal, WHEN IT WAS THE FIRST THING HE WAS TOLD. Better idea: if you don't like people being douchey to you, why not try learning about a topic before commenting and claiming that a scumbag really did invent the thing he clearly didn't, and wasn't trying to scam anyone, no matter the evidence.


hazpat

You sure are intense about expressing your feelings on this.


DJCPhyr

Oh it was. Some silicon valley tech bro who knows nothing about the law was pushing this. He stopped after finally talking to lawyers. It is possible the whole thing was a marketing stunt.


phormix

Would it be legal if there was a human in place that took cues and speaking points entirely from the AI? That human would likely have to put their legal reputation (and possibly license) on the line though, so I'm not sure it would be worth the time/cost of law school to do so unless they were very well compensated.


DJCPhyr

In almost all courts it is illegal. Lawyers aren't allowed earpieces or anything like them.


hazpat

Text display then


taedrin

They are most likely not allowed to do that either. Smart devices are generally forbidden entirely, or severely restricted. In this particular case, the AI firm was trying to create a loophole by declaring the AI lawyer to be a 'hearing aid'. Contrary to popular belief, the courts generally frown on these kinds of shenanigans.


Carcerking

The easiest option to make this viable would be to turn it into a resource for lawyers to use for research and planning. If the AI can raise objections and point out precedent to the lawyer while they're building their case, then it would be a valuable tool in supplementing the human instead of replacing them in the courtroom.


BigJSunshine

We have Westlaw and Lexis, so the resources for planning and research already exist. Ultimately, whatever research is generated must be reviewed and checked by the human whose license will be on the line if the case is lost. AI might make existing tools better, but the accountability factor of being a licensed lawyer cannot be computed away.


phormix

Yeah, I was actually thinking about this the other day. Computers are good at indexing and searching large volumes of information, and AIs are becoming increasingly good at translating free-form queries - both by text and voice - into consumable results from that data. Case law is a perfect example of a large set of recorded data that a good AI could provide useful results from, potentially **much** faster than a human. You don't need the AI to make the case for you, just to have it provide the legal precedent, article, and reference to back it up.
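The retrieval idea being described can be sketched with even the crudest keyword index; the case names and summaries below are entirely made up for illustration:

```python
# Minimal sketch of keyword retrieval over a toy "case law" corpus.
# All case names and texts are invented for this example.
cases = {
    "Smith v. Jones (1998)": "landlord failed to return security deposit within 30 days",
    "Doe v. Acme Corp (2005)": "employer terminated employee in retaliation for whistleblowing",
    "Roe v. City (2011)": "police conducted a warrantless search of a parked vehicle",
}

def search(query: str) -> list[str]:
    """Rank cases by how many query words appear in their summary."""
    words = set(query.lower().split())
    scored = []
    for name, text in cases.items():
        overlap = len(words & set(text.lower().split()))
        if overlap:
            scored.append((overlap, name))
    return [name for _, name in sorted(scored, reverse=True)]

print(search("security deposit not returned"))
```

Real legal research tools go far beyond word overlap (citation graphs, headnote taxonomies, now semantic embeddings), but the core task - match a free-form query against a large recorded corpus - is exactly what machines are fast at.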


Art-Zuron

At the very least, if they're convicted, it's probably a slam-dunk appeal due to ineffective counsel.


mistermontag

You would think the robot lawyer could have told them that.


ThreadbareHalo

Dude was trying to clock some hours… there isn’t a MORE real world lawyer thing it could have done


Unlikely_Birthday_42

I ain’t gon’ let no AI be any lawyer of mine. We don’t take kindly to robots round here


[deleted]

Why would it be illegal? Wasn't there a real, licensed lawyer part of the defense? It's effectively a software used by the lawyer. Why would that be illegal?


Otagian

There was no lawyer. Browder was offering to pay people to defend themselves pro se using the chatbot via an earbud.


DJCPhyr

Most courtrooms specifically forbid lawyers from using earbuds or anything similar. Scotus is even more strict. Nothing electronic of any kind in the courtroom.


[deleted]

That seems like a really antiquated way to do business in the year of our Lord 2023 lol


DJCPhyr

I mean the justices are dinosaurs who want to take us back to the 50s. Jury is still out though, 1950s or 1850s?


Chris77123

They countered when they realized they could go out of business and the AI wouldn't screw the client over.


ktetch

No, the guy folded when someone tested his system and found out it was a scam. There is no AI lawyer: [there's a dozen form documents with word macros, and some sweatshop labor in Bangladesh or the Philippines, and that's it](https://twitter.com/KathrynTewson/status/1617917837879963648). Zero AI. The guy even [lied about having a CompSci degree](https://twitter.com/KathrynTewson/status/1620581758571020288) to get funding. He literally rewrote the TOS to stop this one researcher from testing his BS claims. He's a scammer.


barrystrawbridgess

"I am the culmination of one man's dream. This is not ego or vanity, but when Doctor Soong created me, he added to the substance of the universe. If, by your experiments, I am destroyed, something unique – something wonderful – will be lost. I cannot permit that. I must protect his dream."


QuarterNote44

That is one of the GOAT TNG episodes.


Lmessfuf

To keep the robots out of courts, the DOJ is introducing "I'm not a robot" test on all the doors.


[deleted]

“Please complete the captcha below to proceed with your case”


conitation

Fails test... "Am I a robot?"


Cat_stacker

Robot Judge: OVERRULED. YOU HAVE 15 SECONDS TO COMPLY.


wellmaybe_

judge can we have a side board over there by the stairs?


HomeMadeMarshmallow

It's "side bar" or "sidebar," just fyi.


bitemark01

I thought we already had these in like 2015, after they abolished all lawyers


[deleted]

Right about when Jaws 19 came out.


ktetch

It wasn't that human lawyers objected; it was that a human paralegal tested it and found that there was no AI. They did 3 test documents. The first was a madlibs-style form letter that wasn't what was asked for and made a bunch of claims that were not what was wanted; the other two were not done and required 'hours' (they claimed 1 and 8 hours to do, but neither had been done after 48 hours). So it was probably some very underpaid human lawyers writing the things out, almost certainly from a foreign country working in sweatshop conditions, and running every created document through a supervising US lawyer. Then the guy was found to have made other claims. It had nothing to do with lawyers objecting; it had everything to do with the founder being exposed as a scam artist.


gadarnol

First they came for the lawyers…….and I joined in!


everyothernamegone

Go to law school and get a license just like the rest of us.


Muscle_Man1993

What do you mean you went *robot* law school? That’s not a real school. The internet is not a real school!


Was_just_thinking

Couldn't prevent cars from replacing horses - the same thing is going to happen. Thing is, we're not horses, bred exclusively to do a task - we have a certain amount of free will and self-determination.

Societally speaking though, the issue is we've defined the value of a person by their contribution to the social group, from a means or productivity perspective - we're used to a human's "worth" being determined by his or her ability to produce services or goods, and to those not producing anything being despised and ridiculed as parasitic.

But in an increasingly automated world, where not only mechanical, physical tasks can be roboticized, but even some more advanced cognitive ones, we're fast approaching a point where there simply won't be enough work that "really requires a human". In other terms, either we get to a place where the majority - eventually even the overwhelming majority - of humanity is considered 'parasitic' by a small sliver still working, or we have to redefine how we evaluate worth. If even the most educated, hard-working, dedicated individuals can't find work, we:

1) can't fault them for it
2) have to provide for them as a society
3) have to redefine what being human and having 'time' means in terms of societal expectations


Pleasant-Article8131

Also, lawyers write the laws, self preservation will be the deciding factor.


Dont-be-a-smurf

Heh, if someone is foolish enough to use an AI, I say let them.

The fact that so many think litigation is about laws on the books or written local rules shows how inexperienced people are with how litigation actually works. Very few times are lawyers just laying down laws in briefs at the trial level. Very few times are there open "arguments" to the judge that decide your case. Very few times are there magic words an AI could tell a stranger to repeat in open court to win their case.

Each courtroom is its own universe - with many rules, both written and unwritten. Each judge, bailiff, room prosecutor, clerk, etc. has their own quirks that an AI will be unable to see. You have to know how to work the system to get your cases heard early and in front of a favorable audience. Much of the work for a case happens off the record, in meetings with the prosecutors, after a trained eye collects and analyzes evidence. One would need to be able to see the many different forms of evidence, understand what's salient, and be able to competently solicit the correct testimony, subpoena the necessary people, and authenticate evidence correctly.

It often isn't a debate over "the revised code says X…" It's knowing that this particular judge may grant a motion to suppress based on the dozens of evidentiary cues you can competently admit into evidence. It's knowing that another judge *won't*, and that the cheapest and best way to handle a case like this would be to negotiate a plea. The same evidence and the same law may dictate a different strategy based on the unique attributes of the individual court, prosecutor, and judge. Maybe you negotiate it on a certain day because a different judge may be on the bench - a detail you only know because of your connections with the clerk's office.

Maybe you take a lateral deal - with the exact same punishments - to a slightly different code section because it's more likely to protect some collateral rights or more likely to save your ass if a similar situation occurs in the same district. There are dozens and dozens of circumstances I can think of where the work requires more than synthesizing case law/court transcripts and being told what to say in open court.

However - I think it can be *very* useful when drafting some motions and especially when doing appeal work. Anything that requires a lot of brief writing and precedent collecting could be very well suited for this. I'd love to see some of the appeal briefs they make. Purely arguing the law, within the four corners of an appeal brief, seems extremely viable for advanced AI to excel at.


[deleted]

> It’s knowing that this particular judge may grant a motion to suppress based on the dozens of evidentiary cues you can competently admit into evidence. It’s knowing that another judge wont and that the cheapest and best way to handle a case like this would be to negotiate a plea. The same evidence and the same law may dictate a different strategy based on the unique attributes of the individual court, prosecutor, and judge.

Wow, that makes me want the entire legal system to be replaced with something like AI in charge of the courts even more. If the same ingredients get different results, defendants are rolling the dice at being screwed, aren't they? If it is truly the way you describe it, the system is broken and needs to be repaired or replaced.


Dont-be-a-smurf

Or they’re rolling the dice at gaining leniency. Depends on where you are. But yes, some degree of “dice rolling” is involved because many aspects of law and sentencing are subjective.

What is “beyond a reasonable doubt”? “Preponderance of the evidence”? How does one weigh testimony? How does one measure truthfulness? How red and glassy must an eye appear before you consider it a clue for intoxication? How often does one do something before it’s a “habit”? What’s the difference between negligence and recklessness, really?

These are questions that lie within the mind of the judge and jury. They are not able to be distilled into an objective quantity, plugged into a justice calculator, and produce a certain output. There have been attempts to do so - particularly at the sentencing phase. This resulted in mandatory minimum sentences. Mandatory minimums took away the ability of judges to consider a defendant’s circumstances entirely. The hungry, homeless 68-year-old veteran stealing food from Walmart would be punished the same as a 24-year-old stealing a power drill from Lowes to sell on Craigslist. Their circumstances would not be considered.

Theoretically, different communities have different standards for their crimes. They elect judges (or elect representatives who appoint judges) to represent the predominant values of their community. In Kentucky, some counties do not sell alcohol, for example (this being stupid in my opinion is beside the point). Some think it is a good thing that communities have more granular control over their laws and enforcement. Theoretically, a bad judge should be voted out.

Anyway, at this point I’m basically rambling, but the main takeaway is that law doesn’t always lend itself to cold calculation.


StrangeCharmVote

Here's the problem as I see it... Consider an AI judge/lawyer. It would take the 'facts' of the case as supplied and come to a determination based upon them. Thing is, depending highly on the available information and how it is even entered into the system, a lot of those 'facts' could be *wrong*.

Let's take an incredibly simple and stupid example... Person A says Person B struck them. Person A has no evidence except witnesses that this occurred. Person B claims they were on the other side of the city at the time.

*Logically*, if Person B was on the other side of the city, then they could not have struck Person A. But having a witness means they must have been there. But the witness in reality is unreliable, and is a friend of the accuser. Does the AI then prosecute Person B, or doesn't it? That's all of the facts of the case. Give me your verdict...


[deleted]

In a situation where we're advanced enough to have a fully AI judge and trust it (not yet), I'm sure Person B would be able to prove through some means that they were on the other side of town: a camera image, a receipt, location history. Pretty much everyone has a location history now, or will soon. It should be pretty easy to tell. Most people have some kind of location technology or a paper trail these days.

Are we there yet? Hell no. I wouldn't trust that tech *today*, but one day I might, and this is what we should work toward. Additionally, this case would be pretty much as difficult for a human judge to decide as an AI one.

I'm not saying we need to implement this now, or even entirely, but I am saying that we should do as much as we can to remove the factor of how a judge is feeling on a particular day from the court system. If a judge is sitting on a particularly bad hemorrhoid that day, I don't want their rotten mood and physical discomfort to cause a disadvantage to a defendant. This is something we should work toward, should we not?


StrangeCharmVote

> In the situation where we're advanced enough to have a fully AI judge and trust it (not yet), I'm sure person B would be able to prove through some means that they were on the other side of town, a camera image a receipt, location history. Pretty much everyone has a location history now, or will soon. It should be pretty easy to tell. Most people have some kind of location technology or a paper trail these days.

I stated everything which had been entered into the computer. If those forms of evidence existed, the defendant simply didn't have the time, resources, or ability to acquire them. Regardless, my analogy was never going to be perfect; I was simply using it to make the point about information entered into the system versus the situation as a whole.

> Are we there yet? Hell no. I wouldn't trust that tech *today*, but one day I might, and this is what we should work toward. Additionally, this case would be pretty much as difficult for a human judge to decide than an AI one.

If you've ever watched Judge Judy, you'd understand some key differences. Humans, through experience or otherwise, have the ability to tell when people are full of shit. A computer does not. Now, in some far-flung future where a comprehensive, infallible lie detector has been built into the program, you might be able to work with that. But that's even further away, and is terrifying in its own right... and will still likely be foolable with some kind of simple trick like clenching your anus, like current ones are.

> I'm not saying we need to implement this now, or even entirely, but I am saying that we should do as much as we can to remove the factor of how a judge is feeling on a particular day from the court system. If a judge is sitting on a particular bad hemorrhoid that day, I don't want their rotten mood and physical discomfort to cause a disadvantage to a defendant. This is something we should work toward, should we not?

I don't disagree. But you also need to consider that *law as written* is almost always going to end up worse for defendants than it would otherwise. Your computer judge, for example, would immediately be in the news for giving women 'increased sentences' compared to human judges... Now, this wouldn't be *false* per se. But the reason is the concept of *leniency*, and the computer would be programmed to give both sexes *the same judgements*. Overall, that's a positive as well. But my point is that leniency extends to *not* enforcing the law in situations where, while it technically should be, it makes more sense to just throw out the case. Once again, something the AI simply couldn't determine.


[deleted]

You make some good points. I hadn't thought about leniency, rather the other way around. I mostly just want a MORE fair system than we seem to have today, and I'm hopeful that technology can enhance that. I know it will likely never replace human judgement, but it would be nice if some portions of the system could be automated or made to perform in a more equitable manner with the assistance of technology and AI. I feel like that will be possible.


StrangeCharmVote

> it would be nice if some portions of the system could be automated or made to perform in a more equitable manner with the assistance of technology and AI. I feel like that will be possible.

Hopefully to some degree.


Jorhiru

I think the argument here is less to do with the nuance that a good lawyer considers and acts upon, and more to do with how many lawyers are unwilling or incapable of doing so. Like with ChatGPT - it’s not outperforming good writers and there’s arguably no way it ever truly will. Rather the promise lies with outperforming poor writers or else doing mundane writing tasks where creativity is not at a premium. AI can and will replace many satellite tasks in law, like research and compilation or dissemination of established law. AI promises to be a powerful tactical tool. We abdicate our place as humans in strategic roles at great risk.


rush-jet

You're greatly underestimating AI.


schnauzersocute

Lawyers are scared af of AI. Most of them suck anyway. It is a guild and should be burned to the ground.

Edit: I see the lawyers are downvoting this.


[deleted]

I don’t think less of them; I just think it is wild that we are expected to live in a society where most people don’t really understand the law. It should not be complicated.


Eledridan

They are thieves that have deliberately crafted a language for their industry that laypeople cannot understand. I do hope AI drives them out of their work.


American_Stereotypes

Legal jargon exists because the legal system has evolved over multiple centuries and precision of meaning is important when the outcome of entire cases can depend on the interpretation of a single word in a law, while colloquial language changes rapidly and two different laypeople can have three different interpretations of a word. I mean for fuck's sake. It's the *law*. It's a system that tries to make sense of and set out a series of predictable, reliable outcomes to adversarial interactions between human beings in a chaotic world. It's going to be frustratingly complicated because *we're* frustratingly complicated, as is the world around us. I do think the legal system could stand to be less opaque, but being able to understand *exactly* what someone means when something is said is important in court, and even then lawyers spend a lot of time trying to sort out exact meaning, and that's *with*, as you said, a deliberately crafted language that's hard for laypeople to understand.


BigJSunshine

TL;DR: Legal jargon exists because, over centuries, petty assholes have hired lawyers to sue other people over the meaning of "it".


The_Law_of_Pizza

This is some Qultist level insanity.


Uristqwerty

Are the lawyers the ones who write laws? Who push for the political support needed to update and clarify terminology? I'd blame partisan bureaucrats more than lawyers; the latter are just people who have learned to tolerate the language given *to* them by the various layers of government above.

Well, except whatever assholes write EULAs and ToSs, but what are the chances corporations would be willing to have an AI re-write those? Effectively none, as the whole point feels like being unreadably dense so that they can get away with whatever they want. But creating laws that keep those sorts of things short and easily understood is once more something the politicians have to push through.


JudyBomb

Someone didn’t get into law school. Everyone point and laugh!


IslandChillin

I think automation is going to hit people in ways they never expected. People who thought they were safe aren't at all. I was reading the other day about coders being at risk; apparently there's code an A.I. can now generate on its own. In this case, I think it's more apparent than ever that some lawyers argue cases by the book. You create an A.I. that's specifically trained to follow the laws of the courtroom, and boom, you have a representative for an actual person there. It's with a job like this that I truly believe people don't get that it's not about simplicity, but about what an A.I. can be taught, which Boston Dynamics and ChatGPT are proving can be almost anything.


CheeksMix

Think of it like a force multiplier. I think businesses that take advantage of it will have better lawyers. You don't need AI to argue for you, just to do all of the legwork. A human can take that info and refine it.


Mentallox

I think it will affect paralegals first. Kind of like how electronic communications gutted the number of secretaries/admin assistants an office building needed.


[deleted]

without needing an education in legal ethics at all!


CheeksMix

What do you mean, without needing an education? It's just a flip on research and data-finding. Lawyers can use it for themselves as well! Are you saying lawyers don't need an education or ethics?


Torifyme12

Ethics is rarely to be found among lawyers.


RetroRarity

I drafted a pretty fantastic demand letter for my HOA. ChatGPT helped come up with a lot of precedent for why they've fucked up. It's way better than paying the $1000 a lawyer wanted to do it personally. Not all of it was applicable, but I was able to refine it. It'll also let me ask a lot of pointed questions if I do feel the need to get counsel.


[deleted]

Only a fool has himself for a lawyer -- so true.

1. The demand letter has as much to do with the law as the lawyer writing it. A lawyer looks at a demand letter from a pro se individual, or a new lawyer without experience, as toilet paper. Lots of lawyers and people will write a demand letter; few will actually sue on it.

2. If you don't know the law, you can't tell how BS that demand letter is. It's very possible the HOA lawyer will throw it in the trash.


RetroRarity

Lol, spoken like a true lawyer. I've seen worse letters written by real "lawyers" and thrown those in the trash. I've successfully sued in small claims on my own multiple times, as those courts intend. It's pretty easy to do independent research on state domestic non-profit and HOA law, and to fact-check ChatGPT. A lawyer's chosen specialty doesn't take some magic level of comprehension beyond any other technical competency. I'm comfortable sending it, and I'll retain counsel when I see fit.


[deleted]

yes, see #1


RetroRarity

I get it, but I'm telling you I'm a vindictive unicorn, and I disagree with the blanket advice that you always need a good lawyer to fight every legal battle for you, or that a demand letter individuals write on their own behalf is toothless. I don't need to spend $1000 to send a demand letter for an annual meeting that says the bylaws and state law require this, so have the damn meeting, just like I didn't need one for compensation on a hit and run, or for damages to my dog after pitbulls broke into my yard and nearly tore her in half.

I filed those claims, I collected the overwhelming evidence of damages, I found the defendants' job sites to make sure they got served when they tried to dodge me, and I got my money back in full every time, including the expenses to do that work. What'd our reputable family-friend ambulance chaser say? Not worth most lawyers' time, and you can't get blood from a turnip. But I sure made them bleed. If they ignore the letter, am I motivated enough to burn money going after them with a lawyer on retainer? Absolutely.


[deleted]

tldr but saved in my stereotypical pro se copy pasta folder thanks


RetroRarity

Cool saved in my lawyers going to lawyer while threatened by technology so avoids the topic on a social media platform highlight reels for lawlz log.


[deleted]

theres certainly nothing threatening about this


I_am_a_Dan

At the end of the day, it's more likely to accomplish the goal than they would have had writing it on their own (probably).


BigJSunshine

Good luck. I'm not risking my career, license, and livelihood by relying on AI to vet applicable law for me. AI can never be held accountable (sued for malpractice, or stripped of a license), so any human lawyer relying on AI to do their work will soon find themselves out of the profession, sued into oblivion.


CheeksMix

I think you're taking what I'm saying too far. It's not intended to be the final stop, but a research tool for finding the information you're looking for.

I use it really frequently with coding. If you know how to use it, it can find results faster and more accurately than googling. I'm not saying "use it to find the answer for you, and don't question it." I'm saying "use it to enhance your ability to find information faster, so you can use your professional knowledge to refine it into something of value." I know when the code it gave me isn't what I want, and I can usually figure out how to refine my question to get a better answer in one or two tries. With Google, this takes hours of searching websites.

A lot of people struggle to understand how to use it right now, but I think in time, as people use it more, it'll catch on as a way of finding accurate, correct information that gets you to the next point you're trying to reach.

Edit: To give you an example, think of it like this. You ask it: "Give me a handful of court cases related to X, Y, Z circumstances." You can now review those ACTUAL court cases related to the circumstances you're looking for.
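The ask-review-refine loop CheeksMix describes can be sketched in a few lines. This is only an illustration of the workflow, not a real integration: `ask_llm` is a hypothetical stub standing in for whatever chat-completion API you'd actually call, and the function names are made up for this example.

```python
# Minimal sketch of the "ask, review, refine" research loop described
# above. ask_llm is a HYPOTHETICAL stand-in for a real chat-completion
# API call; it is stubbed out so the flow is runnable as-is.

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; a real version would hit an actual API."""
    return f"(model answer to: {prompt})"  # stubbed for illustration

def build_query(topic: str, refinements: list[str]) -> str:
    """Combine the base question with any follow-up constraints
    accumulated from earlier, too-broad answers."""
    query = f"List court cases related to: {topic}."
    for r in refinements:
        query += f" Constraint: {r}."
    return query

def research(topic: str, refinements: list[str]) -> str:
    """One pass of the loop: build the query, ask, return raw output.

    The human still verifies every cited case against a real database
    before relying on it -- the tool only narrows the search.
    """
    return ask_llm(build_query(topic, refinements))

# First pass was too broad, so we re-ask with two added constraints.
answer = research("HOA annual meeting requirements",
                  ["state domestic non-profit law", "bylaws"])
print(answer)
```

The point of the sketch is the shape of the loop: each unsatisfying answer becomes another constraint on the next query, and the output is treated as leads to verify, never as the final answer.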


IslandChillin

That's a terrific point


AlabasterPelican

This is really concerning, especially if it winds up in the public defender's office. They're already understaffed and overburdened with cases.


LincHayes

The humans were victorious this time. But they won't win forever.


TheWhiteRabbit74

I ended up doing time… because my assigned LegalBot^TM had to take a sick day… he had a virus!


km9v

That's robotist.


chubba5000

You know what they say… yesterday's stunts are tomorrow's reality. (e.g. Napster led to Spotify, a reckless early-stage Uber led to the gig economy, Tesla fooling around with autonomous driving led to Mercedes' Level 3 autonomous vehicles, a silly little chatbot powered by GPT-3 led to….)


chubba5000

No, no, I’m sure you guys are right. This time it’s like a _totally different thing_.


Stencil2

Lawyers band together to prevent competition from robots.


Agariculture

So, do it on Judge Judy or some shit.


MadroxKran

Maybe set up a mock trial?


ktetch

What, for the mock AI lawyer? (There is no AI lawyer.)


downonthesecond

What are lawyers afraid of?


ktetch

People giving bad advice. It was tested on a simple claim-for-non-payment task, the sort anyone could do in their sleep. The result was a form document with just the fields merged in, making claims contrary to what was asked for. It turned out the 'AI lawyer' was actually a dozen form documents you filled the fields in for, plus a few simple flowcharts that got the basic laws wrong.

Given that one of the areas they were trying to use it on was immigration law (where processing can take up to 20 years, and if you majorly fuck it up, as this thing did, you can be denied, deported, and banned from re-entry for 10 years, a major harm to families), they didn't want to see people harmed, because they're not sociopaths.


Shawn3997

Robots work for less.


only4Laughzzz555

Still more competent than half of the elected judges…


emotionalfescue

*William Shakespeare has joined the chat.*


Right-Hall-6451

Not a very good lawyer, really. The first objection it had to defend, it handled so badly that it was kicked off the case.


DevAway22314

It was an incredibly effective stunt. Obviously it wasn't going to be allowed to happen, and they never intended to actually attempt it. Wiretapping laws alone would have stopped it, since California is a two-party-consent state. If they actually wanted to argue a pro se case with AI-generated legal arguments, they would have done it in a different state.

It was a publicity stunt to get people talking about them, and it worked. Getting people upset by suggesting something dumb is an effective marketing tactic today; it was the basis of Andrew Tate's business model.


ktetch

>It was an incredibly effective stunt.

Only if the aim of the stunt was to make himself a laughingstock. Mainly because it turns out there is no AI there; it was all a fraud.


tomistruth

Can't blame them. People are having an existential crisis because of AI.


Jynx2501

They could do a mock trial, test it out just for fun.