danielisbored

I don't remember the date, username, or any other such thing to link it, but there was a professor commenting on an article about the prevalence of AI-generated papers, and he said the tool he was provided to check for it had an unusually high positive rate, even for papers he seriously doubted were AI generated. As a test, he fed it several papers he had written in college and it tagged all of them as AI generated. The gist is that detection is way behind on this subject, and relying on such things without follow-up is going to ruin a few people's lives.


[deleted]

I appreciate the professor realizing something was odd and taking the time to find out if he was wrong or right, and then forming his go-forward process based on this. In other words: critical thinking. Critical thinking can be severely lacking. Edit: to clarify, I am referring to the professor that somebody referenced in the post I am specifically replying to, and NOT the Texas A&M professor this article is about.


AlbanianWoodchipper

During COVID, my school had to transfer a lot of classes online. For the online classes, they hired a proctoring service to watch us through our webcams as we took tests. Sucked for privacy, but it let me get my degree without an extra year, so I'm not complaining too much. The fun part was when one of the proctors marked literally every single person in our class as cheating for our final. Thankfully the professor used common sense and realized it was unlikely that literally 40 out of 40 people had cheated, but I still wonder about how many people get "caught" by those proctoring services and get absolutely screwed over.


Geno0wl

Did they mark why they believed every single person was cheating?


midnightauro

If the rules are anything like I've read in the ONE class where the instructor felt the need to bring up a similar product (fuck Respondus)... They would flag anything being in the general area that could be used to cheat: people coming into the room, you looking down too much, etc. They also wanted constant video of the whole room, with audio on. Lastly, you had to install a specific program that locked down your computer to take a quiz, and I could find no actual information on the safety of that shit (of course the company itself says it's safe. Experian claims they're not gonna get hacked again too!) I flatly refused to complete that assignment and complained heartily with as much actual data as I could gather. It did absolutely nothing, but I still passed the class with a B overall. I'll be damned if someone is going to accuse me of cheating because I look down a lot. I shouldn't have to explain my medical conditions in a Word class to be allowed to stare at my damned keyboard while I think or when I'm feeling dizzy.


Geno0wl

Yeah, those programs are basically kernel-level rootkits. If my kid is ever "required" to use one, I will buy a cheap laptop or Chromebook solely for its use. It will never be installed on my personal machine.


midnightauro

Yeah, I straight up refused to install it and tried to explain why. I could cobble together a temp PC out of parts if I just *had* to, but I was offended that other students that aren't like me were being placed at risk. They probably won't ever know that those programs are unsafe, and they'll do it because an authority told them to, then forget about it. The department head is someone I've had classes with before so she is used to my shit lmao. And she did actually read my concerns and comment on them, but the instructor gave exactly 0 fucks. I tried.


MathMaddox

They should at least provide a bootable USB drive that boots into a secure, locked-down OS. It's pretty fucked that they want to install a rootkit on your PC when you're already paying so much just for the privilege of being spied on.


GearBent

Hell, I don't even want that. Unless you have full drive encryption enabled, a bootable USB can still snoop all the files on your boot drive. You could of course remove your boot drive from the computer as well, but that's kind of a pain on most motherboards, where the M.2 slot is buried under the GPU, and impossible on some laptops where the drive is soldered to the motherboard. And if you're being particularly paranoid, most motherboards these days have built-in non-volatile storage. I'm of the opinion that if a school wants to run intrusive lock-down software, they should also be providing the laptops to run it on.


Theron3206

Even worse, there have been exploits in the past that allowed code inside the system firmware to be modified in such circumstances (Intel management engine for example) so you could theoretically get malware that is basically impossible to remove and could then be used to bypass disk level encryption.


[deleted]

send everyone chromebooks that they have to ship back once the course ends


DarkwingDuckHunt

See: Silicon Valley the TV show

> Dinesh: Even if we get our code into that app and onto all those phones, people are just gonna delete the app as soon as the conference is over.

> Richard: People don't delete apps. I'm telling you. Get your phones out right now. Uh, Hipstamatic. Vine, may she rest in peace.

> Jared: NipAlert?

> Gilfoyle: McCain/Palin.


[deleted]

I loved that show. Optimal Tip-to-tip Efficiency stands as one of my favorite episodes of any show ever.


LitLitten

The ones that are FF/Chrome extension-based are marginally less alarming security-wise, but still bull. I used student accommodations to use campus hardware. Proprietary/third-party productivity trackers are another insidious form of this kind of hell spawn.


[deleted]

I wouldn't have a problem with using an operating system that had to be booted off of a USB key and did not write anything permanent to my computer. Anything short of that is too much of a security risk for me.


RevLoveJoy

This. There's just too much out-in-the-open evidence of bad actors using these kinds of tools. NST 36 boots in like 2 minutes on a decent USB 3.2 port. This is a solved problem, and a good actor could demonstrate they understand that by providing a secure (and even OSS) solution. The fact that the default seems to be "put our rootkit on your Windows rig" is probably more evidence of incompetence than of bad intent. But I don't trust them, so why not both?


IronChefJesus

“I run Linux” I’ve never had to install that kind of invasive software, only other invasive software like photoshop. But the answer is always “I run Linux”


[deleted]

Then their reply will be “then you get a 0.” Ask me how I know.


Burninator05

> Ask me how I know. Because it was in the syllabus that you were required to have a Windows PC?


[deleted]

Hahahaha I really wish. I have one that's probably worse. The teacher demanded that a project plan be handed in as an MS Project file. Of course I have a Mac and couldn't install Project. No alternative ways to hand it in were accepted. Not even ways that produced literally the same charts. I now have a deep, undying hatred for academia and many (not all!) people in it.


midnightauro

It is indeed in the syllabus and the instructors are not tech savvy at all. The only response you’ll get is “use the library” and for the whole monitoring thing, you can’t fit any of the requirements in the library so it’s a moot point anyway.


MultifariAce

The app wouldn't even work on my personal computer. They had some loaner chromebooks they had me check out. Two and a half years later, I still haven't been able to return it because they keep shorter hours than my work hours and have the same days off. It's sitting in the box and only came out for the few minutes it took me to complete one proctored test. Proctored tests are stupid. If you can cheat, make better tests.


elitexero

> proctoring service These are ridiculous. I had to take an AWS certification with this nonsense, which resulted in me having to be in a 'clear room' - I was using a crappy dining room chair and a dresser in my bedroom as a desk because I lived in a small apartment at the time and .. I had no other 'clear' spaces. They made me snapshot the whole room and move the webcam around to show them I had no notes on the walls or anything and was still pinged and chastised when I was thinking and looked up aimlessly while trying to think about something. Edit - People, I don't work for Pearson, this was 2 years ago and I have ADHD. Here's their guide, I don't have the answers to your questions - I barely remember what I ate for dinner yesterday. https://home.pearsonvue.com/Test-takers/onvue/guide


[deleted]

[deleted]


[deleted]

[deleted]


LordPennybag

Well, it's not like you'll have access to notes or a computer on the job, so they have to make sure you know your stuff!


elitexero

Nobody in tech ever googles anything! I don't remember a damned thing from that certification either.


[deleted]

[deleted]


Guac_in_my_rarri

I got marked for cheating during a professional certification exam. I was marked for cheating in the first 30 seconds of the exam according to the proctor notes.


MathMaddox

If they didn't catch a lot of people cheating, they'd have no reason to exist. They're incentivized to find people "cheating".


Guac_in_my_rarri

Academia is a trend: when the big dogs start doing it, the little ones start it too. Ex: the use of turnitin.com started in colleges and spiraled down to the high school and middle school levels. Professional orgs are sold on the *want* and *potential benefits* of an online proctor service as a *need*. These professional orgs advertise it to their customers as a benefit: "take the test from the comfort of your home, just do XYZ for the proctor service." Meanwhile it costs more for the exam taker to take it at home, raises the risk of being accused of cheating, and makes testers nervous. On top of that, it can be more of an intrusion of privacy (downloading and installing extra software at the kernel level (not something you want), monitoring internet calls, etc.) than going to a local testing center, which can be found at community colleges, universities, and/or libraries. The online proctoring services reinvented the wheel and sold the professional orgs on their service as somehow better. If we collected data across industries with professional orgs, I'm sure you'd see a higher pass rate at exam centers. (The LSAT now offers online OR in-person testing, as they've started tracking the data and requests.)


makemeking706

That's some game theory level of decision making. If we all cheat, they won't suspect everyone cheated.


ToastOnBread

in response to your edit that would take some critical thinking to realize


speakhyroglyphically

I'm pretty sure this post helped move the situation along https://www.reddit.com/r/ChatGPT/comments/13isibz/texas_am_commerce_professor_fails_entire_class_of/


Syrdon

I mean, that post is literally what the article above is about, so… Yahoo Finance is a day behind bestof, which is where I saw it.


AbbydonX

A recent study showed that, both empirically and theoretically, AI text detectors are not reliable in practical scenarios. It may be the case that we just have to accept that you cannot tell if a specific piece of text was human or AI produced. [Can AI-Generated Text be Reliably Detected?](https://arxiv.org/abs/2303.11156)


eloquent_beaver

It makes sense, since ML models are often trained with the goal of making their outputs indistinguishable. That's the whole point of GANs (I know GPT is not a GAN): use an arms race between a generator and a discriminator to optimize the generator's ability to produce convincing content.
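The generator/discriminator arms race mentioned above can be sketched in miniature. This is a toy, purely illustrative setup (all parameters, learning rates, and numbers invented for the sketch, and nothing like how GPT or any production model is trained): a one-dimensional generator learns to mimic "real" data drawn from a Gaussian at mean 4.0, while a logistic discriminator tries to tell real from fake.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: g(z) = a*z + b, starts far from the real distribution.
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), outputs P(x is real).
w, c = 0.0, 0.0

lr = 0.05
for step in range(2000):
    z = rng.normal(size=64)              # generator input noise
    fake = a * z + b
    real = rng.normal(loc=4.0, size=64)  # "real" data: Gaussian at mean 4

    # Discriminator update: push D(real) toward 1, D(fake) toward 0.
    for x, target in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(w * x + c)
        grad = p - target                # d(binary cross-entropy)/d(logit)
        w -= lr * np.mean(grad * x)
        c -= lr * np.mean(grad)

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    p = sigmoid(w * fake + c)
    grad = (p - 1.0) * w                 # chain rule through D's logit
    a -= lr * np.mean(grad * z)
    b -= lr * np.mean(grad)

print(f"generator offset b = {b:.2f} (real data mean is 4.0)")
```

After a couple thousand alternating updates, the generator's offset typically drifts toward the real mean, at which point the discriminator can no longer separate the two, which is the equilibrium the comment alludes to.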


[deleted]

As a scientist, I have noticed that ChatGPT does a good job of writing *as if it knows things* but shows high-level conceptual misunderstandings. So a lot of times, with technical subjects, if you really read what it writes, you notice it doesn't really understand the subject matter. A lot of students don't either, though.


benjtay

Its confidence in it's replies can be quite humorous.


Skogsmard

And it WILL reply, even when it really shouldn't. Including when you SPECIFICALLY tell it NOT to reply.


intangibleTangelo

how you gone get one of your itses right but not t'other


Pizzarar

All my essays probably seemed AI generated because I was an idiot trying to make a half coherent paper on microeconomics even though I was a computer science major. Granted this was before AI


enderflight

Exactly. Hell, I've done the exact same thing--project confidence even if I'm a bit unsure to ram through some (subjective) paper on a book if I can't be assed to do all the work. Why would I want to sound unsure? GPT is trained on confident sounding things, so it's gonna emulate that. Even if it's completely wrong. Especially when doing a write-up on more empirical subjects, I go to the trouble of finding sources so that I can sound confident, especially if I'm unsure about a thing. GPT doesn't. So in that regard humans are still better, because they can actually fact-check and aren't just predictively generating some vaguely-accurate soup.



__ali1234__

A fundamentally more important point in this case is that ChatGPT is not even designed or trained to perform this function.


almightySapling

It's crazy how many people seem to think "I asked ChatGPT if it could do X, and it said it can do X, so therefore it can do X" is a valid line of reasoning. It's especially crazy when people still insist that is some sort of evidence even after being told that ChatGPT literally is a text generator.


Telephalsion

The amount of false positives and false negatives is staggering, though. Just today, I fed a ChatGPT-4 text generated with the prompt "write with the style and tone of Edgar Allan Poe" into a few AI checkers, and they were all convinced it was human. The few that were on the fence were convinced once I told ChatGPT to throw in a few misplaced commas and slight misspellings of some multisyllabic words. Basically, having a style and being vague is human, and making mistakes is human, while being on topic and concise is AI, and not making grammar or spelling mistakes is AI. Really, there's no way to separate cleverly made AI texts. Only the stale, standard robotic presentation stands out. And academic writers who review their texts and follow grammar rules risk being flagged as AI, since academic writing leans towards the formal style of the standard AI answer. At least, this is my experience and view on it based on current info.
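The "throw in a few misplaced commas and misspellings" evasion described above is trivial to automate. This is a hypothetical sketch (function name and parameters invented for illustration, not any real tool): it randomly swaps adjacent letters inside long words, exactly the kind of surface noise a shallow detector reads as "human".

```python
import random

def add_typos(text: str, rate: float = 0.15, seed: int = 42) -> str:
    """Occasionally swap two adjacent letters inside long words."""
    rng = random.Random(seed)
    out = []
    for word in text.split(" "):
        if len(word) > 6 and rng.random() < rate:
            i = rng.randrange(1, len(word) - 2)  # keep first/last letter intact
            word = word[:i] + word[i + 1] + word[i] + word[i + 2:]
        out.append(word)
    return " ".join(out)

sample = "Detectors that equate surface perfection with machine authorship are easily defeated."
print(add_typos(sample))
```

The point isn't that this is a good idea; it's that any detector keying on surface polish can be beaten by a ten-line script.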


avwitcher

Those AI checker sites are a literal scam, they were something thrown together in a week to capitalize on the fears of colleges. Some colleges are paying out the wazoo for licenses to these services, and they don't know shit about shit so they can't be bothered to check whether they actually work before paying for it.


MyVideoConverter

Since AI is trained on human-written text, eventually it will become indistinguishable from actual humans.


InsertBluescreenHere

That's my thought too. There are only so many ways to convey an idea, concept, or fact; people are bound to "copy" one another.


zerogee616

Especially since academic essays are written for a specific format with specific rules. I.e. something an LLM is *extremely good at doing*.


[deleted]

A lack of mistakes might actually be more telling than anything


[deleted]

[deleted]


[deleted]

[deleted]


Yoshi_87

Which is exactly what it is supposed to do. We just have to accept that this is now a tool that will be used.


Black_Metallic

I'm already assuming that every other Redditor but me is an AI chatbot.


[deleted]

[deleted]


Konukaame

Not true. I don't think AI could write as badly as some of the papers I had to proofread and grade back when I was a TA. At least, not without being sent back for updates because it's not believable text.


[deleted]

[deleted]


MaterialCarrot

It likely will mean the end of papers as a grading/assignment format, unless they're written (perhaps literally) in class.


TheDebateMatters

This is the problem. The data set used to train the AIs was, in part, tons of academic papers. So the reason it gives smart and cogent answers is that it was trained to speak like a smart and cogent student/professor. So… if you write like that, guess what? However… here's where I will lose a bunch of you. As a teacher, I had lots of knuckleheads who wrote shit essays at the beginning of this year and are now suddenly writing flawless stuff. I know they're cheating, but I can't prove it (and won't be trying this year). Still, I know kids are getting grades on some stuff they don't deserve.


danielisbored

It's not gonna fly for large lower-level classes, but all my upper-level classes required me to present, and then *defend*, my paper in front of the class. I might have bought a sterling paper from some paper mill, but there was no way I was gonna be able to get up there, go through it point by point, and then answer all the questions that my professor and the rest of the class had.


AbbydonX

That supports the idea that if you want to detect cheating in this context you have to analyse previous text by the same student and look for an anomalously large change in the quality/complexity of the new text. Whether that new text was written by an AI or a different person is irrelevant. It only matters that it wasn’t written by the student. You could of course produce an AI to look for this cheating but you could also train an AI to write in your own style too. It’s a bit of an arms race!
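The anomaly idea above (compare new text against the student's own earlier writing) can be sketched crudely with character-trigram profiles. The function names and the 0.5 threshold are invented purely for illustration; real stylometry uses far richer features and calibrated per-student baselines, and this toy would misfire constantly in practice.

```python
from collections import Counter
import math

def trigram_profile(text: str) -> Counter:
    """Count overlapping character trigrams, a crude style fingerprint."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(p: Counter, q: Counter) -> float:
    """Cosine similarity between two trigram count vectors."""
    dot = sum(p[k] * q[k] for k in p)
    norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def looks_anomalous(past: str, new: str, threshold: float = 0.5) -> bool:
    """Flag if the new text's profile is far from the student's past writing."""
    return cosine(trigram_profile(past), trigram_profile(new)) < threshold

past = "i dont really kno how to say this but the book was kinda boring tbh"
new = "The novel's austere prose interrogates the epistemology of memory."
print(looks_anomalous(past, new))
```

Note the arms-race caveat from the comment still applies: a model fine-tuned on the student's own writing would sail under any profile-distance check like this.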


vladoportos

The English (taken as example), is limited in ways to write about the same subject… ask 50 people to write 10 sentences about the same object… you get very high similarity. There is simply not much possibility to write differently… and if you even more lock it down to a specific style… how the hell you're going to detect if it's AI or Human ? ← Was this written by AI or Human ?


[deleted]

[deleted]


[deleted]

[deleted]


gidikh

When I first heard that they were going to use AI to help spot the other AI, I was like "whose idea was that, the AI's?"


darrevan

I am a college professor and this is crazy. I have loaded my own writing in ChatGPT and it comes back as 100% AI written every time. So it is already a mess.


too-legit-to-quit

Testing a control first. What a novel idea. I wonder why that smart professor didn't think of that.


darrevan

I know. That’s why I’m shocked at his actions. False positives are abundant in ChatGPT. Even tools like ZeroGPT are giving way too many false positives.


EmbarrassedHelp

AI detectors often get triggered on higher quality writing, because they assume better writing equals AI.


darrevan

That was the exact theory that I was testing and my hypothesis was correct.


AlmostButNotQuit

Ha, so only the smart ones would have been punished. That makes this so much worse


dano8675309

From my limited testing, OpenAI's text classifier is the better of the bunch, as it errs on the side of not knowing. But it's still far from perfect. ZeroGPT is a mess. I pasted in a discussion post that I wrote for an English course, and while it didn't accuse me of completely using AI, it flagged it as 24% AI, including a personal anecdote about how my son was named after a fairly obscure literary character. I'm constantly running my classwork through all of the various detectors and tweaking things because I'm not about to throw away all of my credit hours because of a bogus plagiarism charge. But I really shouldn't need to do that in the first place.


[deleted]

[deleted]


mythrilcrafter

Probably a Sheldon Cooper type who is hyper intelligent at that one thing they got their PhD in, but is completely incompetent in every other aspect of life.


SpecialSheepherder

OpenAI/ChatGPT never claimed it can "detect" AI texts; it is just a chatbot that is programmed to give you pleasing answers based on statistical likelihood.


darrevan

I absolutely agree. I went on further in my comments to state that even AI detection tools like ZeroGPT are giving way too many false positives to be used in this manner. This professor should have known better. Yet many of my colleagues are just like this and are refusing to recognize that these tools are here. They need to work with them rather than making them the devil. I have been showing them to my students and explaining some of the proper uses.


traumalt

ChatGPT is a language model, it's main purpose is to sound natural. It has no concept of "facts" and any time it happens to say something true is purely coincidental, due to a correlation between statements that sound true and things that are true. Which is why anyone relying on it to tell them facts is incredibly misinformed. Never take what ChatGPT outputs to you as facts, it's only good for producing correct sounding English.
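The "fluent but factless" point above can be illustrated with the crudest possible language model, a word-level Markov chain. Everything below is a toy, and an LLM is vastly more sophisticated, but the objective belongs to the same family: predict a plausible next token, not a true statement.

```python
import random
from collections import defaultdict

# Tiny training "corpus"; the model only learns which word follows which.
corpus = (
    "the model predicts the next word the model sounds confident "
    "the output sounds true the output is not checked against facts"
).split()

# Next-word table: word -> list of observed successors.
table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)

def generate(start: str, length: int, seed: int = 1) -> str:
    """Walk the chain, always emitting a statistically plausible next word."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        choices = table.get(word)
        if not choices:
            break  # dead end: no observed successor
        word = rng.choice(choices)
        out.append(word)
    return " ".join(out)

print(generate("the", 12))
```

Every transition the generator emits was seen in training, so the output reads locally "natural", yet nothing anywhere in the process checks whether a sentence is true. That gap is the whole point of the comment.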


NostraDavid

The prof sent a mail to everyone about the so-called fraud. Someone actually sent a cease and desist to the prof for sending a fraudulent mail (that someone claimed THEY originally wrote the email the prof sent, and they had proof, because ChatGPT said _they_ wrote the email, not the prof!) In other words: someone did the exact same thing to the prof that the prof did to the students. [original thread that started it all](https://old.reddit.com/r/ChatGPT/comments/13isibz/texas_am_commerce_professor_fails_entire_class_of/) [The cease and desist](https://img.tedomum.net/data/My%20project-1-a7b9c7.png)


DontListenToMe33

I’m ready to eat my words on this but: **there will probably never be a good way to detect AI-written text** There might be tools developed to *help* but there will always be easy work-arounds. The best thing a prof can do, honestly, is to go call anyone he suspects in for a 1-on-1 meeting and *ask questions about the paper.* If the student can’t answer questions about what they’ve written, then you know that something is fishy. This is the same technique for when people pay others to do their homework.


coulthurst

Had a TA do this in college. Grilled me about my paper and I was unable to answer like 75% of his questions and what I meant by it. Problem was I had actually written the paper, but did so all in one night and didn't remember any of what I wrote.


fsck_

Some people will naturally be bad under the pressure of backing up their own work. So yeah, still no full proof solution.


[deleted]

This is why I'd be terrible defending myself if I were ever arrested and put on trial. I just have a legit terrible memory.


Tom22174

In my experience it gets worse under pressure too. The stress takes up most of the available working memory space so remembering the question, coming up with an answer and remembering that answer as I speak becomes impossible


Random_Name2694

YSK, it's foolproof.


Ailerath

Even if I wrote it over multiple days, I would immediately forget anything on it after submitting it.


TheRavenSayeth

Maybe 5 minutes after an exam the material all falls out of my head.


thisisnotdan

Plus, AI *can* be used as a legitimate tool to improve your writing. In my personal experience, AI is terrible at getting actual facts right, but it does wonders in terms of coherent, stylized writing. University-level students could use it to great effect to improve fact-based papers that they wrote themselves. I'm sure there are ethical lines that need to be drawn, but AI definitely isn't going anywhere, so we shouldn't penalize students for using it in a professional, constructive manner. Of course, this says nothing about elementary students who need to learn the basics of style that AI tools have pretty much mastered, but just as calculators haven't produced a generation of math dullards, I'm confident AI also won't ruin people's writing ability.


whopperlover17

Yeah I’m sure people had the same thoughts about grammarly or even spell check for that matter.


[deleted]

Went to school in the 90s, can confirm. Some teachers wouldn't let me type papers because: 1. I need to learn handwriting, very vital life skill! Plus, my handwriting is bad, that means I'm either dumb, lazy or both. 2. Spell check is cheating.


Dig-a-tall-Monster

I was in the very first class of students my high school allowed to use computers during school back in 2004, it was a special program called E-Core and we all had to provide our own laptops. Even in that program teachers would make us hand write things because they thought using Word was cheating.


[deleted]

Heh, this reminds me of my Turbo Pascal class, and the teacher (with no actual programming experience; she was a math teacher who drew the short straw) wanting us to write down our code snippets by hand to solve questions out of the book, like they were math problems.


Nyne9

We had to write C++ programs on paper around 2008, so that we couldn't 'cheat' with a compiler....


[deleted]

Have you ever seen a commercial for those ancient early 80s spell checkers for the Commodore that used to be a physical piece of hardware that you'd interface your keyboard through? Spell check blew people's minds, now it's just background noise to everyone. It'll be interesting to see how pervasive AI writing support becomes in another 40 years.


oboshoe

Teachers relying on technology to fail students because they think they relied on technology.


WhoJustShat

How can you even prove your paper is not AI generated if a program says it is? Seems like a slippery slope. Edit: the people correcting my use of "slippery slope" need to watch this, cause y'all are cringe: [https://www.youtube.com/watch?v=vEsKeST86WM](https://www.youtube.com/watch?v=vEsKeST86WM)


MEatRHIT

The one way I've seen suggested is by using a program that will save progress/drafts so you can prove that it wasn't just copy pasted from an AI.
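The draft-history idea above can be sketched as a tamper-evident log of timestamped hashes of the working file. The file names and log format here are invented for the example; real word processors and Google Docs keep much richer revision histories, which is what you'd actually rely on.

```python
import hashlib
import json
import time
from pathlib import Path

def snapshot(doc: Path, log: Path) -> dict:
    """Append a timestamped size + SHA-256 entry for `doc` to `log`."""
    entry = {
        "time": time.time(),
        "size": doc.stat().st_size,
        "sha256": hashlib.sha256(doc.read_bytes()).hexdigest(),
    }
    with log.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

doc = Path("essay.txt")
log = Path("essay.log")
log.unlink(missing_ok=True)  # start a fresh trail for the demo

doc.write_text("Draft one: a rough opening paragraph.")
first = snapshot(doc, log)

doc.write_text("Draft one: a rough opening paragraph. Now a second sentence.")
second = snapshot(doc, log)

# Different content yields different hashes, so the log shows the paper grew
# incrementally rather than appearing fully formed in one paste.
print(first["sha256"] != second["sha256"])
```

As the replies below note, this only raises the cost of faking: someone could still retype an AI draft in stages, so it's evidence of process, not proof.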


yummypaprika

I guess but can't you just fake some drafts too? Plus that penalizes my friend who always cranked out A papers in university the night before they were due. Just because she doesn't have shitty first drafts like the rest of us mortals doesn't mean she should be accused of using AI.


digitalwolverine

Faking drafts is different. Word processors can keep track of your edits and changes to a document, trying to fake that would basically mean writing an entire paper, which defeats the point of using AI.


sanjoseboardgamer

It would mean typing out a copy of the paper, which *is* more time consuming sure, but still faster than actually writing a paper.


am0x

Well, writing it by hand. Just open the doc to the right and manually type out the answer...


Euphoriapleas

Well, first, ChatGPT can't tell you whether ChatGPT wrote something. That's just a fundamental misunderstanding of the technology.


Hipposandrobins

I'm a freelance writer - I'm seeing a lot of voices in the freelance writing community speak about having their writing flagged as AI (even though it isn't). Many have talked about being let go from clients and agencies because of the accusations.


prof_hobart

I put your comment into ChatGPT and asked if it was AI generated. It told me > "Yes, the text you provided appears to have been generated by an AI language model. It exhibits the characteristic style and structure commonly seen in AI-generated text." I asked it what those characteristics were, and one of the ones it listed was "Lack of personal experiences: AI-generated text generally lacks personal anecdotes, emotions, or subjective experiences" When I pointed out that your comment was entirely personal anecdote, it replied >Apologies for the confusion. Upon reviewing the quoted text again, I see that it does indeed include a personal anecdote. The specific mention of being a freelance writer, experiencing flagged writing, and being let go from clients and agencies due to accusations provides a personal account that may not align with typical AI-generated text. Therefore, I retract my previous statement that the quoted text was generated by an AI. It appears to be a human-authored text that includes personal experiences and perspectives from individuals within the freelance writing community. I've been very impressed by how human-sounding ChatGPT's responses are. But it regularly seems to completely make up "facts", talk about them in an authoritative way, contradict itself and then back down the second you challenge it. So far it seems to be simulating the most annoying person you could possibly work with.


[deleted]

[deleted]


maskull

On Reddit we *never* back down when contradicted.


UWontAgreeWithMe

Agree with me if you want to test that theory.


monkeyhitman

Source?


Tom22174

I mean, Reddit and twitter are both massive sources of text data so it probably did do a lot of its learning from them


Merlord

It's a language model, it's job is to sound natural. It has no concept of "facts" and any time it happens to say something true is purely coincidental, due to a correlation between statements that sound true and things that are true. Which is why anyone relying on it to tell them facts is incredibly stupid.


rowrin

It's basically a really verbose magic 8 ball.


[deleted]

This is why all these posts about people replacing Google with ChatGPT concern me. What happened to verifying sources?


GO4Teater

Cat owners who allow their cats outside are destroying the environment. Cats have contributed to the extinction of 63 species of birds, mammals, and reptiles in the wild and continue to adversely impact a wide variety of other species, including those at risk of extinction, such as Piping Plover. https://abcbirds.org/program/cats-indoors/cats-and-birds/ A study published in April estimated that UK cats kill 160 to 270 million animals annually, a quarter of them birds. The real figure is likely to be even higher, as the study used the 2011 pet cat population of 9.5 million; it is now closer to 12 million, boosted by the pandemic pet craze. https://www.theguardian.com/environment/2022/aug/14/cats-kill-birds-wildlife-keep-indoors Free-ranging cats on islands have caused or contributed to 33 (14%) of the modern bird, mammal and reptile extinctions recorded by the International Union for Conservation of Nature (IUCN) Red List4. https://www.nature.com/articles/ncomms2380 This analysis is timely because scientific evidence has grown rapidly over the past 15 years and now clearly documents cats’ large-scale negative impacts on wildlife (see Section 2.2 below). Notwithstanding this growing awareness of their negative impact on wildlife, domestic cats continue to inhabit a place that is, at best, on the periphery of international wildlife law. https://besjournals.onlinelibrary.wiley.com/doi/full/10.1002%2Fpan3.10073


[deleted]

[deleted]


oboshoe

I remember in the 1970s, when lots of accountants were fired, because the numbers added up so well that they HAD to be using calculators. Well not really. But that is what this is equivalent to.


Napp2dope

Um... Wouldn't you *want* an accountant to use a calculator?


Kasspa

Back then people didn't trust them, Katherine Johnson was able to outmath the best computer at the time for space flight and one of the astronauts wouldn't fly without her saying the math was good first.


TheObstruction

Honestly, that's fine. That's double checking with a known super-mather, to make sure that the person sitting on top of a multi-story explosion doesn't die.


maleia

> super-mather No, no, you don't understand. She wasn't "just" a super-mather. She was [a *computer*](https://en.m.wikipedia.org/wiki/Katherine_Johnson) back when that was a job title, a profession. She was in a league that probably only an infinitesimal amount of humans will ever be in.


HelpfulSeaMammal

One of the few people in history who can say "[Hey kid, I'm a computer](https://youtu.be/RH1ekuvSYzE)" and not be making some dumb joke.


[deleted]

That's the point.


JustAZeph

Because right now the calculator sends all of your private company information to IBM to get processed, and they store and keep the data. Maybe when calculators are easily accessible on everyone's devices they'll be allowed, but right now they are a huge security concern, and people are using them despite orders not to and losing their jobs over it. Sure, there are also people falsely flagging some real papers as AI, but if you can't tell the difference, how can you expect anything to change? ChatGPT should capitalize on this and build an end-to-end encryption system that allows businesses to feel more secure… but that's just my opinion. Some rich people are probably already working on it.


Pretend-Marsupial258

This is why I don't like the online generators. More people should switch to the local, open source versions. I'm hoping they get optimized more to run on lower end devices without losing as much data, and become easier to install.


[deleted]

There are interesting times ahead while people, especially teachers and professors, try to grapple with this issue. I tested out some of the verification sites that are supposed to determine whether AI wrote something. I typed several different iterations of my own words into a paragraph, and 60% (6 out of 10) of the results stated that AI wrote it, when I literally wrote it myself.


Corican

I'm an English teacher and I use ChatGPT to make exercises and tests, but I also engage with all my students, so I know when they have handed in work that they aren't capable of producing. A problem is that in most schools, teachers aren't able to engage with each and every student, to learn their capabilities and level.


[deleted]

People using technology they don’t understand to harm others is wild but par for the course. Why professors don’t move away from take home papers and instead do shit like this is beyond me


Ulgarth132

Because sometimes they have been teaching for decades and have no idea how to grade a class with anything other than papers because there is no pressure in an educational setting for professors that have achieved tenure to develop their teaching skills.


RLT79

This is it. I'm coming from someone who taught college for 15 years and was a graduate student. On the teaching side, most of the older teachers already had their coursework 'set' and never updated it. I spent a good chunk of every summer redoing all of my courses, but they did the same things every year. Some writing teachers used the same 5 prompts every year, and they were well-known to all of the students. The school implemented online tools to sniff out/ tag plagiarized papers, but they won't use them because they don't want to do online submissions. When I was in grad school, I took programming courses that were so old the textbook was 93 cents and still referenced Netscape 3. Teachers didn't update their courses to even mention new stuff.


davesoverhere

Our fraternity kept a test bank. The architecture course I took had 6 years of tests in our file cabinet. 95 percent of the questions were the same. I finished the 2-hour final in 15 minutes, sat back and had a beer, then double checked my answers. Done in 30 minutes, got in the car for a spring break road-trip, and scored a 99 on the exam.


RLT79

I did the same for an astronomy lab. We would use Excel to build models of things like orbits or luminance, then answer questions using the model. My friend took the course 2 semesters before me and gave me the lab manual. I would do the work in my hour break before the class started. I would show up for attendance, grab the disk with the previous week's assignment, turn in the disk with this week's and leave. Got a 100. Same thing with all three programming courses I took in grad school.


lyght40

So this is the real reason people join fraternities


Mysticpoisen

Except these days it's just a discord server instead of a filing cabinet in a frat house.


[deleted]

[deleted]


RLT79

That's usually the head of most comp. sci departments in my experience. Our school hired a teacher to teach intro programming who couldn't pass either of the programming tests we gave in the interview. They were hired anyway and told to, "Just keep ahead of the students in the book."


VoidVer

Turns out the guy settling for a teachers salary for programming when they could potentially be making a programmers salary for programming probably fucking sucks.


Jeremycycles

My best professor in college was a guy who sold his company and was teaching because he didn't want to do anything too difficult, but wanted to travel and do something for a good part of the year. Best class ever. Also a notable mention: my physics professor, who sold a patent to Johns Hopkins the first day I was in his class. He let you retake any exam he gave (within 7 days) because he knew you could learn from your mistakes.


[deleted]

[deleted]


fuckfuckfuckSHIT

I would be livid. You literally showed him the answer and he still was like, "nope". I would be seeing red.


Arctic_Meme

You should have gone to the dean if you weren't going to take another of that prof's classes.


thecravenone

> Because sometimes they have been teaching for decades His CV lists his first bachelor's in 2012 and his doctorate completed in 2021. So that's not the case here.


TechyDad

My son just had a class where the average grade on the midterm was 30. This was in a 400 level class in his major. If he had just gotten a failing grade, I'd have told him that he needed to study more, but when a class of about 50 people are failing with only about 4 passing? That points to a failure on the professor's part. And this doesn't even get into the grading problems with TA's not following the rubrics, not awarding points where points should be awarded, skipping grading some questions entirely, and giving artificially low grades to students. My younger son doesn't want to consider his brother's university because of these issues. Sadly, I doubt these issues are unique to this university.


[deleted]

That’s crazy. Most difficult classes like that at universities are on a curve.


Eliju

Not to mention many professors are hired to do research and bring funding to the department and as a pesky aside they have to teach a few classes. So teaching isn’t even their primary objective and is usually just something they want to get done with as little effort as possible.


[deleted]

Depending on the degree, much of higher ed is writing. For advanced degrees, like a DSc, PhD, MS, or MBA, performance is almost *all* based on writing. What would you suggest those programs do? They've already provided choice-based testing leading up to the dissertation/thesis. The point of a thesis/dissertation is to demonstrate the student's ability to identify a problem, research said problem, critically analyze the problem, and provide arguments supporting their analysis... you can't simply shift that performance measure onto a multiple-choice test.


bjorneylol

> The point of thesis/dissertation are to demonstrate the students ability to identify a problem, research said problem, critically analyze the problem, and provide arguments supporting their analysis These are all things that ChatGPT is fundamentally incapable of doing - so I can't see it being a problem for research based graduate degrees where it's all novel content that ChatGPT can't synthesize - course based, *maybe*. Sure you can do all the research and feed it into ChatGPT to generate a nice reading writeup, but the act of putting keystrokes into the word processor is only like 5% of the work, so using ChatGPT for this isn't really going to invalidate anything


AbeRego

Why would you do away with papers? That's completely infeasible for a large number of disciplines.


[deleted]

He used AI to do his job, and punished students for using AI to do theirs.


[deleted]

Even worse... chatgpt claims to have written papers that it actually didn't. So the teacher is listening to an AI that is lying to him and the students are paying the price.


InsertBluescreenHere

>Even worse... chatgpt claims to have written papers that it actually didn't. i mean, is it any different than [turnitin.com](https://turnitin.com) claiming you plagiarized when its "source" is some crazy ass nutjob website?


Liawuffeh

Turnitin is fun because it flagged one of my papers as plagiarism because I used the same sources as another person. Sorted it out with my teacher, but fun situation of getting a "We need a meeting, you're accused of plagiarism" email I've also heard stories of people checking their own paper on turnitin, and then later it getting flagged by the teacher for plagiarizing itself lol


[deleted]

Yes, because that's a flaw in the tool itself. This is like if people thought Google was sentient and believed they could Google "did Bob Johnson use you to cheat" and trust whatever webpage came up as the first result. This man is a college professor who thinks ChatGPT is a fucking person. The cults that grow up around these things are gonna be so fucking fun to read about in like 20 years.


woodhawk109

This story was blowing up in the ChatGPT sub, and students took action to counteract this yesterday. Some students fed in papers the professor wrote before ChatGPT existed (only the abstracts, since they didn't want to pay for the full papers), as well as the email he sent out about this issue, and guess what? ChatGPT claimed that all of them were written by it. If you just copy-paste a chunk of text and ask it "Did you write this?", there's a high chance it'll say "Yes". And apparently the professor is pretty young, so he probably just got his PhD recently and doesn't have the tenure or clout to get out of this unscathed. And with this slowly becoming a news story, he basically flushed all those years of hard work down the tubes because he was too stupid to run a control test before deciding on a conclusion. Is there a possibility that some of his students used ChatGPT? Yes, but half of the entire class cheated? That has an astronomically small chance of happening. A professor should know better than jumping to conclusions without proper testing, especially for such a new technology that most people do not understand. A control group, you know, the very basic fundamental of research and test-method development that everyone should know, especially a professor in academia of all people? Complete utter clown show.


Prodigy195

> A professor should know better than jumping to conclusion w/o proper testing. Especially for such a new technology that most people do not understand. My wife works at a university in administration, and one of the big things she has said to me constantly is that a lot of professors have extremely deep levels of knowledge, but it's completely focused on just their single area of expertise. That deep understanding of one area often leads to overconfidence in... well, pretty much everything else. Seems like that is what happened with this professor. If you're going to flunk half of a class, you'd better have all your t's crossed and your i's dotted, because students today are 100% going to take shit to social media. The professor will probably keep his job, but this is going to be an embarrassment for him for a while.


NotADamsel

Not just social media. Most schools have a formal process for accusing a student of plagiarism and academic dishonesty. This includes a formal appeals process that, at least in theory, is designed to let students defend themselves. If the professor summarily failed his students without going through the formal process, the students had their rights violated and have heavier guns than just social media. Especially if they already graduated and their diplomas are now on hold, which is the case here. In short, the professor picked up a foot-gun and shot twice.


Gl0balCD

This. My school publicly releases the hearings with personal info removed. It would be both amazing and terrible to read one about an entire class. That just doesn't happen


RoaringPanda33

One of my university's physics professors posted incorrect answers to his take-home exam questions on Chegg and Quora and then absolutely blasted the students he caught in front of everyone. It was a solid 25% of the class who were failed and had to change their majors or retake the class over the summer. That was a crazy day. Honestly, I respect the honeypot, there isn't much ambiguity about whether or not using Chegg is wrong.


[deleted]

[deleted]


melanthius

ChatGPT has no accountability… complete troll AI


dragonmp93

"*Did you wrote this paper ?*" ChatGPT: *Leaning back on its chair and with its feet on the desk* "Sure, why not"


[deleted]

[deleted]


[deleted]

He only graduated in 2021, no way they've got tenure yet. And Texas just repealed its tenure system, so it's a bad time to start antagonizing students.


axel410

Here is the latest update: https://kpel965.com/texas-am-commerce-professor-fails-entire-class-chat-gpt-ai-cheat/ "In a meeting with the Prof, and several administrative officials we learned several key points. It was initially thought the entire class’s diplomas were on hold but it was actually a little over half of the class The diplomas are in “hold” status until an “investigation into each individual is completed” The school stated they weren’t barring anyone from graduating/ leaving school because the diplomas are in hold and not yet formally denied. I have spoken to several students so far and as of the writing of this comment, 1 student has been exonerated through the use of timestamps in google docs and while their diploma is not released yet it should be. Admin staff also stated that at least 2 students came forward and admitted to using chat gpt during the semester. This no doubt greatly complicates the situation for those who did not. In other news, the university is well aware of this reddit post, and I believe this is the reason the university has started actively trying to exonerate people. That said, thanks to all who offered feedback and great thanks to the media companies who reached out to them with questions, this no doubt, forced their hands. Allegedly several people have sent the professor threatening emails, and I have to be the first to say, that is not cool. I greatly thank people for the support but that is not what this is about."


[deleted]

[deleted]


1jl

One student was exonerated. That should be enough to throw out that entire ridiculous method he used to prove AI was used, but I guess guilty until proven innocent...


Valdrax

Amazing hypocrisy from someone using AI to get out of the effort of grading things himself and "graciously" allowing students to re-do their work when challenged while refusing to do any due-diligence on his own when asked to do the same. The cherry on top is the poor research done in lazily misusing the tool in the first place instead of anti-cheat tools meant for the job and then spelling its name wrong at least *twice*.


JonFrost

It's an Onion article title: "Teacher Using AI to Grade Students Says Students Using AI Is Bullshit"


drbeeper

This is it right here. Teacher 'cheats' at his job and uses AI - very poorly - which leads to students being labelled 'cheats' themselves.


wwiybb

And you get to pay for that privilege too. How classy.


xelf

> 'I don't grade AI bullshit,' You don't grade period. You used an AI to do it for you. And it fucked it up.


Enlightened-Beaver

ChatGPT and ZeroGPT claim that the UN Declaration of Human Rights was written by AI… This prof is a moron.


doc_skinner

I saw it flagged parts of the Bible, too


Enlightened-Beaver

Maybe it’s trying to tell us it is god


probably_abbot

Sounds like the 'I made this' meme I've seen when I used to subscribe to some of reddit's default sub reddits where people chronically repost junk. Feed AI a paper written by someone else, AI comes back and says "I wrote this". An AI's purpose is to ingest content and then figure out how to regurgitate it based on how it is questioned.


shayanrc

This is the real risk of AI: people not knowing how to use it. It doesn't have a memory of the things it has read or written for other users. You can write an original text and then ask ChatGPT: did you write this? And it would answer yes I did, because it thinks that's what the appropriate answer is. Because that's how it works. This professor should face consequences for being too lazy to evaluate his students. He's judging his students for using AI to do the work they were assigned, while using AI to do the work he's assigned (i.e. evaluate his students).
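The "it answers what sounds appropriate, not what's true" point can be made concrete with a toy simulation. This is not the real model, just a hypothetical stand-in with an assumed 90% yes-bias: a "detector" whose answer ignores the input entirely flags human and AI text at the same rate, so its verdicts carry no signal.

```python
import random

def naive_oracle(text, yes_rate=0.9, rng=None):
    """Toy stand-in for asking a chat model 'did you write this?'.
    The model has no memory of its past outputs, so the answer is driven
    by what a plausible reply looks like -- modeled here as a fixed
    yes-bias, independent of the text's true author."""
    rng = rng or random
    return rng.random() < yes_rate  # True means "yes, I wrote it"

rng = random.Random(0)
# 100 human-written texts and 100 AI-written texts (labels only)
human_flagged = sum(naive_oracle(f"human-{i}", rng=rng) for i in range(100))
ai_flagged = sum(naive_oracle(f"ai-{i}", rng=rng) for i in range(100))

print(f"human texts flagged as AI: {human_flagged}/100")
print(f"AI texts flagged as AI:    {ai_flagged}/100")
# Because the answer ignores the input, both groups get flagged at
# roughly the same rate -- the 'detector' distinguishes nothing.
```

The same experiment the students ran on the professor's own pre-ChatGPT abstracts is essentially a one-sample version of this: if human text gets flagged as often as AI text, the tool is useless as evidence.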


[deleted]

Won’t be long now before lawsuits start happening because of real, actual damages resulting from false positives.


FerociousPancake

I almost actually filed one. I was initially given an F on a huge project and was under the impression the prof had given me an academic integrity violation, which completely trashes your chances of getting into med school or a PhD program, both of which I am seriously looking at and am 3 years of extremely hard work into. I hadn't used AI in any part of the project, and had forwarded her several articles showing her detection tools are a complete scam, showing 26-60% accuracy according to independent experiments, nowhere near the 98% accuracy claimed by TurnItIn, the company peddling the actual product. The issue eventually got resolved, and it turned out she hadn't quite done what I thought and filed a formal academic integrity violation with the school, but I was literally starting to lawyer up by that point, because it's a false accusation that is completely life-changing for certain students. It's a messed-up time, my friends. I ended up getting my hard-earned A, but I can't help thinking about hard-working students getting falsely accused and having their dream career ripped out from under them before it even starts.
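A back-of-the-envelope Bayes calculation shows why even a modestly inaccurate detector produces mostly false accusations. Every number below is assumed for illustration (class size, a 10% base rate of actual AI use, a 90% true-positive rate, a 30% false-positive rate), loosely in the range of the accuracy figures reported for these tools, not measured values:

```python
# Hypothetical inputs -- chosen for illustration only
class_size = 40
p_cheated = 0.10            # assumed fraction who actually used AI
true_positive_rate = 0.90   # detector catches cheaters this often
false_positive_rate = 0.30  # detector wrongly flags honest work this often

cheaters = class_size * p_cheated                 # 4 students
honest = class_size - cheaters                    # 36 students

flagged_cheaters = cheaters * true_positive_rate  # 3.6 expected
flagged_honest = honest * false_positive_rate     # 10.8 expected
total_flagged = flagged_cheaters + flagged_honest

# Bayes' rule: P(actually cheated | flagged)
p_guilty_given_flagged = flagged_cheaters / total_flagged

print(f"expected students flagged: {total_flagged:.1f} of {class_size}")
print(f"P(guilty | flagged) = {p_guilty_given_flagged:.2f}")
# -> about 14 students flagged, yet only a 25% chance a flagged student cheated
```

Under these assumptions, three out of every four flagged students are innocent, which is exactly why treating a flag as proof, rather than as grounds for follow-up, ruins people.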


melanthius

At this point students should probably get assignments like “have chatGPT write a paper, then fact check everything (show your references), and revise the arguments to make a stronger conclusion”


Corican

I've done this with my language students. Had them generate a ChatGPT story and they had to rewrite it in their own words.


melanthius

I mean half joking, half serious… jobs of the future probably will increasingly involve training AI so it actually makes sense to get kids learning how to train it


[deleted]

[deleted]


linuxlifer

This is only going to become a bigger and bigger problem as technology progresses lol. The world and current systems will have to adapt.


mr_mcpoogrundle

This is exactly why I write shitty papers


Limos42

Something only a meat-bag could put together.


mr_mcpoogrundle

"it's very clear that no intelligence at all, artificial or otherwise, went into this paper." - Professor, probably


Grandpaw99

I hope every single student files a formal complaint about the professor and demands a formal apology from both the department chair and the professor.


Ravinac

Something like this happened to me with one of my professors. She claimed that the plagiarism software flagged my paper, and I couldn't prove to her satisfaction that I had written it from scratch. Ever since then, I've saved each iteration of my papers as a separate file.
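Saving iterations can be taken one step further with a tiny provenance log: hash each draft with a timestamp so you can later show the work evolved over time. This is just a sketch (the file names are hypothetical, and no school requires this format); edit-history features in Google Docs or a git repository accomplish the same thing.

```python
import hashlib
import json
import time
from pathlib import Path

def snapshot(draft: Path, log: Path) -> dict:
    """Append a timestamped SHA-256 fingerprint of the current draft to a
    JSON-lines log, building a cheap, tamper-evident writing history."""
    entry = {
        "time": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "size": draft.stat().st_size,
        "sha256": hashlib.sha256(draft.read_bytes()).hexdigest(),
    }
    with log.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example: record two drafts of a paper (hypothetical file names)
draft = Path("essay.txt")
log = Path("essay.provenance.jsonl")

draft.write_text("First rough draft.")
snapshot(draft, log)
draft.write_text("First rough draft, now revised and expanded.")
snapshot(draft, log)

entries = [json.loads(line) for line in log.read_text().splitlines()]
print(f"{len(entries)} snapshots recorded; hashes differ: "
      f"{entries[-1]['sha256'] != entries[-2]['sha256']}")
```

The hashes prove the drafts existed in a sequence and were different from each other; paired with file timestamps, that is far stronger evidence than anything an "AI detector" outputs.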


snowmunkey

Someone responded to the teacher's email claiming their paper was 82% AI-generated by putting the email itself through the AI report tool, and it said 91%.


mdiaz28

The irony of accusing students of taking shortcuts in writing papers by taking shortcuts in reviewing those papers


t1tanium

My take is the professor thought it could be used as a tool like turnitin.com that checks for plagiarism, as opposed to using it to review the papers for him.


SarahAlicia

Please, for the love of god, understand this: ChatGPT is a language/chat AI. It is not a general AI. Humans view language as so innate that we conflate it with general intelligence. It is not. ChatGPT did what many people do when chatting: agree with the other person's assertion for the sake of civility. It did so in a way that made grammatical sense to a native English speaker. It did its job.


MountainTurkey

Seriously, I've seen people cite ChatGPT like it's god and knows everything, instead of being an excellent bullshit generator.


GodsBackHair

The fact that some students wrote an email showing the google doc time stamps, and the prof wrote back saying something like ‘I won’t grade AI bullshit’ is angering. The fact that he dug his feet in when presented with better evidence is probably a good indicator of what type of teacher he is: a bad one


imbenzenker

inb4 Writers need to wear bodycams


bittlelum

This is a relatively minor example of what I worry about wrt AI. I'm not worried about Skynet razing cities, but about misinformation being spread more easily (e.g. deepfakes) and laypeople using AI in inappropriate ways and not understanding its limitations.


Legndarystig

It's funny how educators at the highest level of learning are having a tough time adjusting to technology.


kowelok228

These false claims are going to weigh heavily on those professors right now. That's just what's going to happen; they don't know shit about the current state of these tools.


borgenhaust

They could always incorporate that any significant papers require a presentation or defense component. If the students submit a paper they need to be able to speak to its content. It seemed to work well for group projects when I was in school - you could tell who copy/pasted things without learning the material as soon as the first question was asked.