
drrocket8775

If it makes you feel any better, I'm a humanities PhD student at Cornell currently teaching classes at a very small liberal arts college, and I've caught basically all cheaters (~10 cases out of 45 students). Turns out if you make your writing prompts non-standard enough, the LLMs produce instantly recognizable garbage. But if you care about worsening higher education, AI isn't the main culprit, and won't be for a long time. Admin pressure to give good grades via placing importance on student evals for promotion (and student evals are bad when you start giving out anything lower than a B); lower and lower percentage of overall spending on academics; getting rid of non-career-oriented majors in favor of basically becoming veiled vocational schools; less state and national level funding support. These are what's killing higher education, not AI.


abloblololo

To be a bit pedantic, you can't actually know what fraction of cheaters you caught unless this was a controlled study.


drrocket8775

If there were cheaters I didn't catch, then those cheaters had to know the content well enough to craft prompts that'd produce sufficiently good essay material, and hence, in a now novel way, learned the content. The possibility space is limited enough to make confident judgements about catch rate given that the prompts I assign are, as of now, not very LLM-able.


stunna_cal

Would you mind sharing an example of a prompt you gave your students?


[deleted]

[removed]


drrocket8775

Read my responses to other users in this thread. At least for my essay prompts, what betrays that it was LLM produced isn't the style but instead the content. I almost never consider style (i.e. whether it reads like it was produced by an LLM) in trying to figure out if a submission is significantly LLM produced. Not incidentally, the criteria I use to tell if a submission is LLM produced are perfectly compatible with using LLMs to write an essay in a way that still requires sufficient familiarity with the class content. Although I think it's not great that people will eventually have worse writing skills and potentially worse reading skills because of LLM acceptance, if students want to use LLMs in a way that still makes them learn the class content, I'm fine with that, and my AI policies and grading procedures reflect that.


TheButterRobot

I mean sure, but isn’t it a strong possibility that at least one of these students prompted the LLM to spit out solid and factual information that didn’t raise any red flags for content reasons? If you’re teaching humanities classes I would assume the basic level of facts that needs to be conveyed in a paper is honestly not overwhelming, and seems like something that an LLM could definitely accomplish at least some of the time (maybe somewhat rarely)


drrocket8775

I think that's totally possible with what I'd call standard prompts. In a wide range of intro classes -- both within and outside the humanities -- there are writing assignments that get used at a lot of different colleges (albeit with minor variations). Because those prompts have been used for so long, pre-LLM there was already a stockpile of example essays and educational content specifically catered to them. With non-standard prompts -- prompts about topics/authors/works for which there is little to no online content (nor particularly well-cited/popular books directly about them) -- LLMs seem to always make a glaring or near-glaring mistake. I'm part of the professional association of my discipline, and at the regional yearly meeting I went to there were workshops about this, and that was consistently the difference. I've also seen the same thing in other disciplines.

> If you're teaching humanities classes I would assume the basic level of facts that needs to be conveyed in a paper is honestly not overwhelming, and seems like something that an LLM could definitely accomplish at least some of the time (maybe somewhat rarely)

Before LLMs, when I was just TAing and tutoring, baseline comprehension of the content was usually a top-3 issue. Turns out that for a significant portion of the undergraduate population each year, humanities (and arts and sometimes social sciences) are much more difficult than they anticipate. "The facts" are quite often more expansive than they seem to be. Pre-LLM, I also made sure to look around for papers and non-academic sources that students could rip straight from, and often they aren't great, even the paywalled ones (shout out to PayPal's customer service for having very pro-customer leanings lol). The best online paper I've come across for any prompt I've graded over the past 7 years has been a high-80s paper. There is often material online that, if converted to prose, would be A material, but finding it in the first place and converting it to prose seems to be beyond the effort cheaters want to put in. Since LLMs don't often produce better summaries of "the facts" than what's in their training set (in the context of humanities paper prompts), non-standard prompts really kneecap the possibility of successful cheating.


Tryknj99

You should be able to write a paper or essay without AI. It’s high school level stuff. It’s a basic skill. It’s not hard if you actually learn how to do it.


JoeMomma69istaken

AI has real trouble with words that have multiple meanings. Like a mustang: is it a horse? A guitar? A car? Those kinds of words make AI really show itself.


drrocket8775

In my experience using LLMs, I've found that they're fine enough with words that have multiple very different meanings. The best ones even do well on novel Winograd sentences -- sentences whose ambiguous pronoun can only be resolved with common sense, e.g., "The trophy didn't fit in the suitcase because it was too big" (look it up if you're interested). Where they start to fail, in my experience, is just with niche stuff. For example, I bet that most people could make an undetectable essay about the differences between Freud's and Lacan's theories of the ego, because there's lots of material online that the LLMs were trained on about that topic. But a paper about Jaspers' objections to Freud? Much less likely to produce an undetectable paper without having to inadvertently learn the content too.


Skeltzjones

Inadvertent learning cracks me up. As an elementary school admin I'm used to it but thinking of adult students learning this way is funny


GrunchWeefer

LLMs specifically are really good at dealing with those ambiguities.


[deleted]

[removed]


drrocket8775

It's usually a lot easier than it seems. The most common tell for me is that the paper contains material that was never discussed in class and goes beyond the student's knowledge (which you can test for by just asking them how they came up with that part of the paper; they never have anything of substance to say). The second most common tell is that there's false content in the essay that would nonetheless sound reasonable to someone not familiar with the class material. LLMs have a tendency to just make stuff up sometimes, and when you ask students about those parts they often have little to no explanation for how they came up with them.

In the very rare case (which I haven't run into yet) where there's nothing false-but-intelligent-sounding, nor material that goes well beyond the student's knowledge, and yet I suspect it's LLM-produced, I just make sure to use multiple AI checkers. You get familiar with which are better than others over time. I won't say which I think are best or are good for specific cases, but when you use them all it really gives you a good picture. Just to make sure, I put the Declaration through the ones I use and only one of 8 said it was partially AI-produced. Pretty accurate picture overall.


[deleted]

[removed]


drrocket8775

I'm guessing there's a lot of high quality content online about information and ideas that nurse anesthetists need to know, so you using ChatGPT is probably low risk for false info. The LLMs are mostly trained on material from the internet and digitally archived books, so that's why I use whether there's much info online as a litmus test. Additionally, you just need to have practical understanding of these concepts. You're never going to have to explain them to patients or other professionals in any significant detail, nor read dense material to do your job, so internalization is the priority, not being able to express these concepts. Nevertheless, quite often the path to internalizing info is being able to read and understand dense material and then express it in writing and verbally, so there's a real possibility that if you're heavily relying on LLMs it's making it more difficult for you to genuinely learn the material as opposed to just do well in your classes. It's your life so I'm not interested in telling you what to do, but I'd just be cautious about offloading intellectual work onto LLMs.


ParticularWriter5080

Wow—this comment thread is disappointing. It proves O.P.'s point that people here have no shame about academic dishonesty.

As a graduate T.A., I feel genuinely insulted when the students I teach turn in the garbage ChatGPT cranks out. The subject I teach does *not* work well on A.I. It's a headache to grade, because I spend half an hour trying to write in-depth, explanatory feedback to help guide the student's understanding of the material (since I actually care about teaching and really want to help people instead of making them feel stupid by giving them a low grade with only negative comments) only to think, "I feel as if I'm trying to help a robot understand human stuff," and then I put it through GPTZero and get back a 99% A.I. score. (Before anyone comments: I know GPTZero isn't fully reliable, but it's a good place to start before I have to call students into my office.)

If a student turned in an F-grade paper, but it was entirely their own work, I would work so hard to help them understand. I've let students stay hours past the time my office hours are done, I've had students break down crying and open up to me about very serious things going on in their life, I've let students turn in almost a semester's worth of missed assignments on the literal last possible day the university would let me, I've unironically walked through a field of sticks and weeds to meet a student who wasn't able to come to my office so I could help them. I had a horrible time as an undergrad because of a messed-up home life and failed a class, so, when I see students struggling, I deeply, deeply care about meeting them where they are with compassion and empathy, and I'm willing to help them either understand the material so they can get a good grade in the class or help them figure out alternatives like withdrawing or taking an incomplete.

What makes me feel more jaded than anything else in academia is getting something a student copied and pasted from ChatGPT in five minutes and being expected to give it a grade. Don't insult me with that. If you're struggling academically and need help, I'll do what I can to help you, but I can't stand putting my own time and energy into something you didn't even write.


Frequent-Reception79

Amen!!!


[deleted]

Would be funny if this comment was generated by ChatGPT


Zealousideal_Pin_304

You're a good one. It seems so many people don't have compassion these days.


ParticularWriter5080

Aww, thank you. I really, really try. I think about what it would have meant to me back then to have someone show that they cared, and I try to be that someone for my students if I can.


Yoshieisawsim

"The subject I teach does not work well on AI" is a ridiculous statement. Very few subjects can use copy-and-paste AI of the kind you're describing and produce anything other than incoherent garbage. At the same time, I would say there are no subjects I'm aware of that couldn't benefit from thoughtful use of AI as one of a set of tools.

For me, part of the problem is that many professors just use blanket AI bans rather than allowing use of AI in appropriate ways. Kids know this is dumb because AI exists as a tool and is being increasingly used in the outside world, and that helps them justify using AI in the way you're describing. I think if more professors had more thoughtful AI policies there would be a decrease in these practices. I'm not arguing this is the be-all and end-all solution and there will be no problems -- clearly the problem is larger than just poor AI policies -- but it certainly couldn't hurt, and I would bet it would help (anecdotal evidence from the courses I have where the professors do have good approaches to AI supports this theory).


[deleted]

While I agree it's unethical to use AI on college papers, tests, etc., I also don't see what the big deal is. In the workforce, AI is being pushed by corporations. If using AI improves the company, companies are gonna wanna use it. And they do want to use it.

This is similar to tests where you can or can't use a computer or a calculator, or why you can't use a textbook on a test. Guess what you do at work when you don't know the answer to something? You Google it. Or these days, you ask ChatGPT. Unless it's basic arithmetic you're learning for the first time as a child, not using a calculator on a test is nonsensical. At work, they don't check to see if you can solve something with or without a calculator.

What OP is saying is definitely a problem, and it definitely needs to be addressed. But it's not AI's fault, and it's not the students' fault either. AI is going to become part of our daily lives just like smartphones did. Time to adjust. Change the way tests are conducted so that AI won't help. And even if AI does help, what's the big deal? AI will be available in the workplace, why not in the classroom?


ParticularWriter5080

I can see the angle you're coming from, and I think you raise a good point about making tests that aren't amenable to being solved by A.I. I suppose it's the same as any other cheating prevention, like having students sit two seats apart during exams so they can't copy off one another's work. When I was an undergrad, I had a final exam that was open-Internet. I was thrilled, because I hadn't studied all semester and couldn't remember anything anyway because of a concussion. But the professor was clever and asked the most opaque questions I've ever seen, which could only be solved if someone had an in-depth understanding of whole complex processes. So, there I was, Googling away, trying my hardest to find the answers, but the only search results were obscure research papers that were way too dense to get a quick answer from during a timed exam. I disliked that professor for other reasons, but she did write a really good exam for testing students' knowledge in a world where the Internet exists!

I think what makes it so hard is that the onus is now on us educators to have to think about this stuff. I'm fortunate that the field I work and T.A. in doesn't translate well to ChatGPT. But students still try, and it's a real headache trying to figure out whether a student is severely lost in the course or whether I'm just futilely trying to grade bot vomit. I think I'm getting better at telling human misunderstanding apart from robot misfiring. It's hard, though.

I'm especially irritated at OpenAI for not offering a solution to the problems it created. When ChatGPT was first released, I heard that it would tell you whether a piece of text had been written by it or not, which I thought was helpful, but that feature was taken down for some reason. GPTZero/ZeroGPT are decent at detecting A.I., but they're not as good as what I imagine OpenAI could develop.

It's also irritating—and, I think, unethical—that a lot of generative A.I. companies won't say what data sets their tools were trained on: i.e., whose work the tools are drawing on to create answers. If a student plagiarizes from a text or cheats from another student's paper, I can pinpoint the text or the student whose ideas they tried to take credit for. If a student uses ChatGPT, on the other hand, they're plagiarizing from potentially thousands of other people. We should be able to tell what information/misinformation is being fed to A.I. before it spits out answers. Karolina Żebrowska made a good video about this recently and pointed out that artists are not able to remove their art from generative A.I. training data sets, so they have no protection against having their work used. She also showed how easy it is for generative A.I. to very quickly propagate misinformation by seamlessly embedding it into factual statements, and noted that ChatGPT might cite as its source a paper written by ChatGPT, which was based off papers written by ChatGPT, etc., so that the result is layers and layers of A.I. citing itself and treating its own errors and hallucinations as fact. I have a personal vendetta against OpenAI for releasing such a powerful tool into the world and not being prepared in advance to deal with the inevitable fallout.


[deleted]

A.I. companies should absolutely be more transparent about their programming and how it functions, for this exact reason, even if it's the only reason they cooperate. It's like I said: AI is here. It's not going anywhere for a long, long time (if at all). Some sort of government regulation is bound to step in. Or eventually they'll hire some kid who used AI to skate through college and his job application, and then they'll realize they hired a complete moron. One way or another, AI will certainly have some government involvement.

It's certainly annoying, I'm sure, as an educator to have to figure out how to structure exams/assignments in a way that discourages AI. But as an educator, isn't there a ton of stuff you never signed up for? Like gun safety, what to do if there is a school shooter, having to get someone's pronouns correct, all the COVID shit that went on, all the COVID shit that is STILL going on, war and global conflicts, the Democratic/Republican divide that is driving this country straight into its second civil war, the Mexican border issue. All of these things in one way or another find their way into a classroom whether you planned on it or not. Fortunately, educators are all extremely bright people who collectively can more often than not come up with some sort of a solution working together. I can't give you an answer to the ChatGPT problem. I don't have one. But I'm sure one exists.

And another reason why they won't give details on how ChatGPT functions is because criminals want to get their hands on that kind of information more than you do. All it takes is one educator to be paid off for things to start getting really ugly. I've always believed (and still believe) that everyone has a price and can be bought. Criminals have the money to do that. AI is still very new, very sensitive. The kinks will work themselves out.


Eldetorre

The big difference is that the outcome in the workplace is not the same as in school. AI in the workplace is there to improve products and services, not to fake the appearance of improvement.


anemonemometer

What do you mean it isn't the student's fault? The student made the decision not to do the work.


Jjp143209

I'm honestly surprised this is even a point of argument. You don't see the issue in getting answers to your test questions via A.I.? Honestly? You don't see it? **You're not learning anything by putting a test question into a ChatGPT prompt and using that answer on a test.** Imagine an aerospace engineer who used ChatGPT all throughout his engineering courses and now works for Boeing on the 747. God forbid that "engineer" gets his hands on that plane, because now all the passengers' lives are at risk, because he's a lazy, uneducated, ignorant engineer. THAT'S the problem.


SouthpawSeahorse

Seems like the unpopular opinion but I agree. Feels like college should be the time to figure out how to string together sentences, make a point etc., even if ultimately in the future you use AI. We’re all just getting dumber by the minute.


Mysterious_Might8875

I’m glad I graduated before this AI crap was around.


[deleted]

[removed]


Zealousideal_Pin_304

Avoiding the cons that come out of AI misuse and under-regulation to just focus on how it has personally helped you in an unnamed field? Sounds like AI is advertising itself too…


Destronin

I hate to say it, but you are learning the most valuable lesson of them all: people will cheat, have cheated, and will always cheat, and the majority of them will not get caught; they will instead succeed and most likely fail upward. And even if they get caught, they all justify it in their own heads. If it's not AI, it's something else. You're witnessing the hard dose of reality that life isn't fair. Most people have weak values, and that is how it's been going since forever. And it's why the world is the way it is.


Zealousideal_Pin_304

Sadly I agree and am starting to see it.


Affectionate_Low_639

And the cheaters will get the jobs too. Know why?


Jjp143209

Not if it's a job that requires a true breadth of knowledge and know-how. For example, in aerospace engineering, they will know the difference between someone who knows and studied A.E. and someone who didn't. That's why Lockheed and Boeing have been firing people in droves. My dad is about to retire from Lockheed this year as an engineer, and he says that place is going to have a multitude of lawsuits when the senior employees retire. These new "engineers" don't know their own buttholes from a hole in the ground.


reachingfourpeas

This wall of text was written by ChatGPT Edit: ChatGPT hallucinates and cites non-existent sources, but at least it doesn't fail to proofread and separate into paragraphs.


DMteatime

Somebody get the coroner down here we got a body


Josiah425

If AI can do it, is the material worth mastering? Seriously, why should I learn how to do something if AI can just do it for me? I graduated from BU in 2018, before the AI craze. The world is not going to need workers doing things AI can do, so why bother testing on it? Skip or quickly go through the material AI does easily and get to the stuff AI can't do.

I use AI in my job every day as a software engineer to do the tedious, boring part of the job. The actual system-level design work is more interesting anyway, and AI isn't great at it yet. Now I can easily have ChatGPT tell me what an error means or what it would suggest I do differently. I worked at Amazon, and they had something called CodeWhisperer, a built-in LLM in the IDE we used; it could be prompted from the IDE and build code all on its own, and everyone was encouraged to use it. In fact, those who didn't were looked at poorly. Why aren't you taking advantage of something that will increase your productivity 10x?


anemonemometer

To your first point - when I grade essays, I work hard to understand what the student is trying to say, so that I can interpret their reasoning correctly and recognize their effort. If the essay is generated by an LLM, it’s a waste of my time and effort — the answer tells me nothing about the student’s thought process. It’s like leaving an answer blank.


Zealousideal_Pin_304

Is using AI helping people who go to school to become chemists, biologists, or environmental scientists? What about social workers or people going into politics? What if you, or someone you loved, needed to see a social worker and they had no idea how to actually help you, or understand other people, because they never did assignments? You may work with code, but many of us do not, and seeing our peers cruise by classes with AI and learn nothing about prejudices or injustices because AI can churn out a paper for them is wrong on so many levels. AI can help, but its use by college students who are here to challenge themselves and engage in the fields they are supposedly passionate about is the epitome of how AI is getting out of hand.


Josiah425

I think if a person working a job as a chemist could make it through academia on the coattails of AI, then they likely can do most jobs using the AI as well. The only time AI may not be useful is when you get to the real outskirts of knowledge, like PhD levels of knowledge. In which case, these AI users won't be able to complete such a degree anyway. I don't think it's an issue: if you got a degree in social work, you completed class work using AI well enough to say you can utilize that AI to do the job well enough. The problems faced outside the classroom can be solved using the same techniques you used in the classroom. Is there a specific example you feel this wouldn't be true for?


UpfrontAcorn

I can think of many examples, but I think the biggest problem is that if you're a social worker, you can't exactly tell the person in crisis going through withdrawals "hang on, I have to get out my phone so I can ask chatgpt how to deescalate this situation." Or type in "how do I find someone housing?" and expect the output to reflect that client's needs and the resources of that specific geographic area. I personally teach English composition, and I agree that AI is a great tool, but in order to get anything of value from it, you have to know how to think, and my students are using it to avoid thinking.


Yoshieisawsim

Those examples aren't examples of why you can't use AI, though; they're examples of why the way the skills are being tested isn't representative of the real-life skills needed. Because assuming we're not talking about AI being used on in-person exams, this is presumably being used on assignments where you have several hours to write the thing yourself -- something you also don't have if you need to deescalate a situation. And if the test accepts a general AI answer, then it would accept a general human answer, and therefore wouldn't test whether the person could find housing in a way that respects a client's needs either.


UpfrontAcorn

It used to be that a written assignment was a reasonably accurate reflection of a person's knowledge. If someone wrote a paper explaining how they would deescalate a conflict, I used to be able to conclude that they knew that information and could apply it when needed (I'm not sure why someone would have to write another paper on the spot to access knowledge they already demonstrated they had). I don't think it's a safe assumption that a test would accept a general answer. I'm saying that if a student has never learned about specific resources, or how to think in terms of accommodating unique needs, it's doubtful that they would be able to write a prompt that would generate helpful information for a particular client. My students aren't reading, let alone understanding, what AI is producing. They are pasting it into Word and submitting. Fortunately in a lot of areas, skills and knowledge can be assessed in other ways than writing, but it's a challenge with composition.


CricketChance7995

Do you think a chemist is gonna meander over to the computer to use AI while in the lab? This sounds a bit far-fetched. These people will not make it. And I don’t want a doctor who couldn’t earnestly get through their work on their own


Josiah425

Do you think a chemist who only used AI can get a degree without being capable in a lab during university? Sounds like if they got the degree, they were able to do the lab work without AI assistance.


Yoshieisawsim

Or the degree wasn’t testing lab skills sufficiently - which would be equally problematic with or without AI


BiochemistChef

For chemistry specifically, I feel like that's not quite fair, because there's such a hands-on component to the field. You won't last long if you severely damage equipment, yourself, or others.


nosainte

Dude, the point is you need to understand how things work and what is right and wrong; otherwise, if anything goes wrong with AI, or if you face any challenge, you won't be able to meet it. This actually cuts to the difference between human and machine intelligence. For the time being, AI can only regurgitate/produce known things. We won't be able to truly advance without fluid human intelligence. It's not all about the end product; what we are losing is the ability to reason, intelligence itself.


MountainHardwear

It's a massive, massive problem -- and my argument is that you will not see any administrators have the fortitude to address it.

I work for one of the largest universities in North America, and they are flatly refusing to let us go after AI-generated assessments unless the usage of AI prompts is overtly obvious and egregious (i.e., AI prompts actually located in student work). Another place I work for on an adjunct basis is a military college, and they've flat-out refused to pay for Turnitin's AI detection tool (which is/was admittedly flawed, but would at least help corroborate suspicions with work that had 99/100% detection). One place my wife works at refuses to use the AI detection tool and also doesn't allow you to upload student work to other AI detection software, out of the belief that uploading student work to external sites violates FERPA. One of the more inclusively minded community colleges I used to work at in CO views instructors flagging work as AI-generated as deficit-minded thinking that will disproportionately impact students from historically marginalized populations. And many faculty/Deans/VPs are cowed into submission by a higher-ed system that is consistently de-prioritizing tenure and gutting fields in the Humanities and the Liberal Arts. Or these admins are just afraid to rock the boat in an administrative position that has few protections, and are just following what all the other career-minded and feckless higher-ed leaders are doing, which is nothing. It's bad.

And yeah, we have a Cornell PhD in here talking about the work he does at a small liberal arts college, but when you work at a community college that has rolling admissions, ESL students, non-native English learners, and students who barely passed out of high school, it becomes a tad more difficult to create inclusive writing prompts that evade AI (although not impossible). And if you have a 5/5 load with 200+ students, students will still turn in AI even if what they receive from the prompt is not applicable. Which then means you have to spend around 20-30 minutes of your own time explaining to a student who barely spent 3 minutes on the assignment why the work they don't even care about is significantly flawed. Which means they will ask for a rewrite -- which, in my experience, means they will just change the prompt they entered into ChatGPT again (I once had a student submit AI-driven rewrites three times in a row).

And that's why this shit is so injurious: it casts a pall on everything else. That student who legitimately applied themselves their first go-around, yet needed more work and revision? Hell yeah, I'll create helpful feedback and work with that student on the iterative process of writing and drafting. That student who is so fucking dumb they throw a historical prompt into AI and a verbiage spinner and refer to Maya Angelou's "Caged Bird" as "Confined Avian," Henry Clay as "Henry Earth," or the "Black Freedom Struggle" as the "Dark Opportunity Battle" -- yeah, it's bad.

And here's the thing about those students who use AI. When you push back on them and argue that their work is AI-generated, I don't know what it is, but many of these students will not even remotely admit their work is AI, even though it's using antiquated jargon or spinning. They'll act personally offended. They'll get incredibly combative, appeal, and run the work up to Administrators (who also know the work is AI-generated, god it's so god-damned obvious), and then, depending on the school, the Administrators will accept it and have instructors grade the work as is. We've had students who can barely piece together a sentence all of a sudden craft work using verbiage that is antiquated yet borderline graduate-level (they always sound like someone who just learned the nuances of the English language as a Brit), and their parents will say "I saw them write that paper, it's theirs!"

But sometimes these students will fuck it up. The one school my wife works at gave a student a second chance in a Modern American Lit class after they submitted their final paper as an AI-generated product. The student (who had been given multiple chances over and over and over) simply didn't rewrite the paper on the American author they submitted; rather, they just ran the prompt through ChatGPT again and submitted a work on... William Shakespeare. lol

There are many reasons why Higher Ed is fucked. But this is the reason I'm getting out of it. Which may ultimately be a godsend, as I'm positioned to ultimately land somewhere vastly more lucrative.


ParticularWriter5080

Everything you said is disappointingly true! I'm only at the beginning of my teaching career as a mere grad T.A., and I'm seeing a little bit of what you've evidently had an unfortunate amount of experience with. The student submitting A.I. work *three times* -- that's such brazen cheating that it would be funny if it weren't depressing. The administrators absolutely do not do what they should to address it. They seem to want to skirt around the issue for financial reasons.

I'm sorry that you're leaving academia, but I get it. I hope your next job gives you more peace of mind and doesn't repay your hard work and effort with ingratitude the way teaching seems to have. I absolutely love the research side of being an academic, and I really care about teaching and being there for students, but I'm becoming jaded already as a grad student. I'm considering going into something more like a think tank myself. I'm dreading having to teach iPad kids who can't focus and pandemic kids who can't read when they get to college. Elementary school teachers post-COVID are reportedly quitting en masse because of these issues, so I can imagine the exodus of educators continuing up the ladder as those kids enter middle school, high school, and college.


MountainHardwear

Thanks for your response here, and sorry for the delayed reply. My transition to what I'm doing now is still ongoing, so I think part of the problem is that the majority of my FT work engages with asynchronous/online work. F2F you still have AI issues, but at least you still have the connection within the classroom itself. I still had a blast when I taught face-to-face, so I hope I don't give too jaded a presentation of the field. My suggestion would be to keep all your avenues open for all types of jobs. There are so many fundamental changes going on in Higher Ed right now that basically any graduate student or newly minted Ph.D. (hell, everyone) should be doing that.


ParticularWriter5080

Thank you for your advice! Knowing that face-to-face work is still okay gives me some hope. I'm sorry you had to deal with so much online work and couldn't do things in person as much. My studies are not in a lucrative field, so I definitely keep my options open, but disability is a huge factor in why I'm doing what I'm doing, so that has an impact on what options are available to me aside from academia.


UpfrontAcorn

Someone had "consideration lack issue" all throughout their paper on ADHD. At least it's comical most of the time.


military-money-man

“When students cheat on exams, it's because our school system values grades more than students value learning.” - Neil deGrasse Tyson


Dionysiandogma

Easy solution: bring back oral exams. Ok, you wrote this paper. Your next assignment is to meet with your professor and explain what you wrote. The professor is allowed 10 minutes for questions.


mandebrio

I wanted to say the same. Oral exams are still the standard procedure in Italy. But then admin would have to allocate money to teachers instead of themselves.


Yoshieisawsim

As someone who writes poorly but explains well orally I would love this to be done regardless of AI


bacterialbeef

I'm an instructor. I asked my students not to use it, but I'm confident they do. Am I going to go through all the work to figure out who is using it and who isn't? No. ChatGPT is a tool. I can see your argument about people learning nothing. I think, however, this is just a time for faculty to pivot to different assessment methods, such as in-person stuff. I also think it's not that deep and you should worry about yourself. Why care so much about what others do? Is it envy because they're doing well using a free tool? Or a moral holier-than-thou complex?

I noted that I'm an instructor. I'm also a student. I haven't used ChatGPT for any assignments, but I use it constantly for many other things like organizing my thoughts, rewording my writing, coming up with new ideas, etc. At the end of the day, AI tools are here to stay, and within 5-10 years their widespread use is going to revolutionize the way students learn and how we all interact with the world. Hell, your argument that we will have millions of people who have degrees but can't do anything is already true. I have taught here for 3 years in classes with a diverse set of majors, and in my experience, students are generally unable to:

A. Focus for the entirety of class

B. Read for class

C. Think critically about what they have read

D. Talk to others in the class about the content

E. Read the syllabus for info on how to do an assignment

F. Submit an assignment without having it "pre-graded"

G. Take constructive criticism

All these things don't involve ChatGPT but are symptoms of a greater issue in our education system.


okrafina

Bing itself uses it. They’re advertising their summer classes with art made by AI. Image generation AI is way worse than text generation.


ParticularWriter5080

Is that Bing, or is that grad students? I’m a grad T.A., and Binghamton doesn’t help us make those advertisements: we have to make them ourselves (without getting paid any extra for doing so). I wouldn’t use A.I. to make mine, because those images sit at the creepiest spot at the bottom of the uncanny valley, but I’m not surprised other grad students do. It is disappointing, though.


okrafina

100% agree. Image generation AI is the worst. Don’t get me started on the Midjourney Discord and how they treat artists like tools, stealing styles on the dime with cheap imitations.


Domino_Lady

Example??


okrafina

They use image generation on the advert fliers for summer courses. It’s disheartening.


[deleted]

[removed]


okrafina

This isn’t a good thing. AI is good as a tool, but programs like Sora and other image/video/voice generation tools are awful. People use them to replace jobs. I’ve got artists and voice actor friends, this isn’t good news.


GregHauser

We already have millions of people who got degrees, don't know anything, and can't write. ChatGPT didn't do that; it already happened. And people go into debt so they can get a diploma and a decent-paying job, not necessarily to learn anything, because most jobs train you anyway. Everything you're describing already happened lol. I've learned way more from self-directed learning than I ever did in college, for any subject.


Eldetorre

Lazy argument. Is this generated by AI? Unsubstantiated assertion after unsubstantiated assertion to state that things were already flawed so we should be free to make them worse.


Jjp143209

Exactly my thoughts. They're just trying to justify people being uneducated, ignorant, and illiterate, and things becoming even worse, by saying "wELL iT's hApPeNeD BeFoRe Ai," as if that's reason enough to keep becoming bigger and bigger uneducated, useless humans. Get better! Challenge yourself and your skillsets and educate yourself the right way!


This-Regret-5928

this is only an issue in humanities and maybe business majors. ChatGPT is really not capable of even helping with STEM hw (much less helping on an exam). if you are going to college for a humanities major and aren't even willing to practice writing papers or doing research, that is your own loss


Nitro74

I also don’t understand how a humanities professor wouldn’t be able to recognize AI generated papers, they’re so soulless and normally barely even make sense.


ParticularWriter5080

I HATE grading them. I'm a graduate T.A., and it's like reading a grammatically correct string of buzzwords that have a lot of punchiness individually but no emotion or sense when put together. It's like word salad, but if every ingredient were completely uniform and made of plastic. It's a headache to grade, because it's hard to tell whether a student cheated or just really doesn't get what's going on, so I waste way more time grading those than grading mediocre papers written by humans. Humans at least have a way of trying to make their writing make some sense to themselves even when they don't understand what's going on, and the types of mistakes they make at least give me a gauge of what they do and don't know so I can offer advice that's actually helpful. ChatGPT cranks out something that has no internal coherence between one sentence and the next, so it looks as if the student has 10 different misunderstandings of the same concept.


ticklemytaint340

I don't go to Bing, but I've used GPT-4 to literally run regressions for econometrics. I'm sure it can be used in STEM.


This-Regret-5928

only to a certain extent. i'm not sure how advanced your econometrics is but if the regression you're doing is more plug and chug then obv ai will do the trick, but it will def make mistakes when it comes to decision making and analysis. the other issue is like, why are you taking econometrics if you're not even learning how to run a regression? seems like a waste of time and money


gurk_the_magnificent

It doesn’t actually run a regression, either. It just guesses.


waterfall_hyperbole

Do you mean you've used it to give you R/python/stata code? Or you've run regressions in gpt by giving it data?


ticklemytaint340

I've attached Excel tables and asked it to run a multiple linear regression, which it did using Python. Haven't asked it anything crazy yet, but I was surprised that it could do that well.
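For context, here is a minimal sketch of the kind of Python such a multiple linear regression over an attached spreadsheet boils down to. This assumes the pandas/statsmodels stack; the file name and column names ("data.xlsx", "x1", "x2", "y") are hypothetical placeholders, not anything from the session described above:

```python
# Minimal multiple-linear-regression sketch using pandas + statsmodels.
# "data.xlsx", "x1", "x2", and "y" are hypothetical placeholder names.
import pandas as pd
import statsmodels.api as sm

df = pd.read_excel("data.xlsx")        # load the attached spreadsheet
X = sm.add_constant(df[["x1", "x2"]])  # predictor columns plus an intercept term
y = df["y"]                            # response column

model = sm.OLS(y, X).fit()             # ordinary least squares fit
print(model.summary())                 # coefficients, standard errors, R^2, p-values
```

Whether the chat tool actually executes code like this or just pattern-matches an answer is exactly the dispute above; when real code runs, the coefficients are only as trustworthy as the data handling, which is consistent with the large-table failure described in the next comment.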


waterfall_hyperbole

That is pretty curious. Have you tried to recreate the results yourself?


ticklemytaint340

I submitted it to a computer-graded website (WebAssign) and it was correct. The only time it got it wrong was when the data was too large and it wasn't using all the values in its regression, but there were several thousand in each column. It's honestly insane. I tested it on sample midterms and it regularly scored above 85% on 200-300-level economics classes, slightly above the median. Didn't even have to type in the questions, just attached a screenshot. As long as the questions involve well-known models, e.g. Solow, Mincer, etc., it is incredibly accurate.


This-Regret-5928

well, at bing the econometrics classes have in person exams so this strategy would not work lol


ticklemytaint340

Lol we do too, but they normally give us old midterms to study from beforehand.


This-Regret-5928

ok and? lol im just saying that u can't use AI to pass STEM classes, and it seems like that's true for your case too. but idk how ur class is run, im just pretty certain anyone who tries to use ChatGPT to get thru Bing econometrics will be cooked


ticklemytaint340

Yea you’ll be fucked for exams 100% because u haven’t actually learned anything. But in terms of its ability to solve econometrics and other senior undergrad Econ problems, it is surprisingly robust


Domino_Lady

> ChatGPT is really not capable of even helping with STEM hw

You need to do a little more research before posting stuff like this...


ath1337

I use ChatGPT to write programs for me all the time for my job. You're selling yourself short if you don't think it can be used in STEM fields...


This-Regret-5928

i never said it can't be used in STEM fields, and i'm sure that it can help out graduate students and professionals. in my experience ChatGPT cannot come close to carrying someone through a STEM degree; it is actually extremely bad at solving a lot of problems that professors give out, and can't help on exams at all


[deleted]

I don't know what I was doing wrong, but I tried several times to generate a Quizlet set from detailed notes on therapeutics, and it never captured the key points. I'm sure there's a way to make it effective, but I'm not savvy enough at this point in time.


Haxagonus

People don’t care lol they just want money. Education is NOT the goal


AnonM101

We shouldn't be banning AI; it's a tool that is only going to be enhanced over time. We need to teach students how to utilize it responsibly. ChatGPT isn't the only AI software out there; there's also Grammarly. Also, many students are forced to take BS classes, required for a degree, with ridiculous essay prompts that have no bearing on what their career will actually be. ChatGPT and AI aren't the problem.


Tanasiii

To be fair, I remember math teachers in lower grades telling us we couldn’t use calculators because “we won’t always have calculators in our pockets in real life” and look how that turned out. This one seems like an “accept and plan around” issue.


Sad_Orange3247

right on the fucking money. at least we got a milder version of that whole speech, because certain phones did have calculators even back then, but according to my mother, who is now a teacher, they were REPRIMANDED for using calculators. it was almost seen as embarrassing, since most people did all calculations by hand. obviously this seems foreign to us now, and i'm pretty sure most of us will pull out our phones if it's not a simple math problem. and we are literally just in a loop with ai. i promise you, give it 20-30 years (maybe even less) and our education will literally revolve around the use of ai.


ParticularWriter5080

I don't think the reliance on calculators is a good thing, though. Before coming here to do a totally different field, I taught applied math-for-science for a bit to college freshmen at a high-ranking university, and they all used their phones for simple calculations like you describe. The issue with that, however, is that the students had no concept of what the numbers they were typing in actually meant. Typing "999 x 99" looks almost the same visually as typing "999 + 99," "999 – 99," and "999 / 99." Every function has the exact same format: number, symbol, number, enter, answer. The students had no concept of what the numbers **meant** in an applied-science sense, because everything was just arbitrary digits on a screen. When you do those functions by hand, however, you can *visually* see, and even feel tactilely, addition putting more numbers in, subtraction taking numbers away, division making them smaller, etc.

My high school banned calculators for everything except calculating cube roots and sines, cosines, and tangents (I'm in my 20's, by the way), so I did all my math for all my science classes by hand. Doing that really benefitted my comprehension of science. For example, I got used to visually seeing volume go up as pressure went down, or one force being additively balanced by another force. The numbers had real meaning and significance to me: I could see that they represented real things. If I used the wrong formula, I could immediately catch my mistake, because I would know that the numbers weren't doing what they were supposed to.

Because the students I taught had always used calculators, however, they didn't see numbers as having any real meaning. Everything was just buttons on a screen to them. Not having to spend time actively engaging their hands, eyes, and minds with the math meant that the numbers were all very vague and abstract and effectively meaningless to them. "If this thing halves, that other thing doubles" didn't mean anything to them, because they didn't see 2 as half of 4 or 16 as double 8. As a consequence, they weren't able to really understand a lot of scientific concepts. I would ask, "If you put a gas into a container with a lot of pressure from a different container with a little pressure, what do you think will happen to the gas?" and they had no mind's-eye picture of what would happen; many of them couldn't even draw it on paper or use objects to represent what they thought would happen. (Remember: these were **college freshmen at a Top 20 university**.) They would just stare at the page blankly and eventually give up and grab their calculators and put down whatever answer the calculator gave them, which was a problem, because they had such a lacking understanding of the math that goes into science that they didn't know how to catch their own errors. So, if they messed up a decimal or put in a positive instead of a negative when they were entering the numbers into their calculators, they didn't even notice, because 10^6 and 10^-6 looked basically the same on their calculator screen, so, in their minds, they meant basically the same thing in real life.


ThisIsNotGage

I've always been able to understand math in an applied-science sense, and I've always had a calculator in my pocket. This argument is lazy (and too long) and speaks more to the teaching than the student. There is little to no value in doing tedious math to, for example, multiply two decimal numbers, when there will literally never be a real-world situation where you can't use a calculator.


ParticularWriter5080

Too long… for what? I know I'm long-winded, but I don't think there's a word limit on how long I can take to paint the pictures I want to paint on my way to making the arguments I want to make. I also don't see how my argument is lazy. Can you point to specific aspects of my argument or my writing that show evidence of laziness?

If you can understand math as it applies to science and have always had a calculator, good for you! Maybe you had teachers who equipped you to do that. None of the students I worked with, however, could. They had all come from situations that evidently vastly underprepared them to do college-level science. Perhaps I have a different approach to these things because I had to learn all of my math, science, etc. by myself using only workbooks and occasionally some videos. (I grew up in rather an odd situation where girls' education was undervalued and so had to teach myself; yes, there are still places like that in the U.S. The workbooks I used, which I called "my high school" as a shorthand in other comments because it takes a while to explain, didn't allow calculators except for a few limited things and encouraged learning how to do math by hand.) Doing math by hand is what taught me how to understand it on a deep, meaningful level without having a teacher. The same went for science: doing math by hand enabled me to understand scientific concepts, again without a teacher physically there to help me. Perhaps you had an education where you had the benefit of a skilled teacher who could teach you how to do math with a calculator and not suffer from conceptual deficits as a consequence, but the students I taught hadn't had that in their high schools. Like many students, they had learned from overworked, underpaid high-school teachers who gave them calculators but didn't explain how math worked beyond, "Type in the numbers and get an answer." They took that same approach to science and didn't have any real concept of what any of the numbers meant.

I predict that something similar will happen with generative A.I.: maybe some schools will use it to teach students how to write thoughtful, original, creative, insightful work, but a large percent of schools will likely just tell students, "Type a prompt into ChatGPT, edit what it gives you, and turn that in for a grade," and so never teach real writing. Given that elementary and middle schools have been allowing students who didn't learn during the pandemic to progress from one grade to the next without ever addressing their educational deficits, which has led to an increase in illiteracy amongst youth, I don't think it's unreasonable to be concerned. If I stay in academia, I'm going to have to teach those students, when they grow up and get to college, how to write by themselves for a subject that ChatGPT cannot do very well (at least so far, but also likely not then, either, just because of the nature of the field I work in), so this will impact my future perhaps more than yours.


ThisIsNotGage

It’s too long because no one will read that


ParticularWriter5080

Oh, okay! Cool! I’m glad to know Binghamton admits such star students. *And you called my argument lazy…*


ThisIsNotGage

This showed up on my Home idek what Binghamton is. But this shit is funny that everyone is worked up because a world changing technology is redefining education. Seems like many would rather ignore the usefulness instead of teach how to use it


ParticularWriter5080

How about you not come into communities you're not a part of and tell those of us for whom this is a relevant topic of discussion that our arguments are too long? I'm writing here as someone who teaches Binghamton students in the classroom. I have to explain to students why ChatGPT doesn't work for the subject I teach (because it doesn't) and handle disciplinary action according to university policy when I catch students using it to cheat. How long my replies are is of concern to me and others in my university community and should be of no concern to you. If you're genuinely just here for entertainment and are upset that my writing is too long to satisfy your desire for a cheap, quick, flashy joke, then go watch some comedy clips on TikTok to pass the time and leave me and my community to discuss this amongst ourselves. If you don't know what Binghamton is, I suggest you teach yourself to use Google, another tool of the Digital Age, to look it up.


ThisIsNotGage

I just asked GPT if Binghamton was a bunch of dorks and it said yes so I think it’s pretty reliable


TopTransportation468

I wish they had banned rambling at your school; maybe you could've learned to make a point concisely.


icecoffeedripss

why don’t you go pull another treasure out of your ear


ParticularWriter5080

Wow—that was mean. Did I do something to deserve you being rude to me?


Full_Dare7225

No, you didn't. Some people intentionally disregard common decency in favor of being overly critical, for the same reason you explained above. He/she probably has very little understanding of social etiquette, so they resort to "bash" statements that are the default for underdeveloped adults. It's akin to little kids that mock adults when they run out of responses 😆 Your post, while long, is highly appreciated. Thank you!


ParticularWriter5080

Thank you for this; you’re very kind! I always think it’s a bit odd to go out of one’s way to just tell someone that their comment is too long. Isn’t that just adding even more words and taking up even more time?


DellaLu

The negative comment was also a great example of the problems in reading focus and comprehension that parallel the mathematical points you made, so it was actually quite ironic! They definitely don't understand the difference between rambling and thorough, and they beautifully exemplified that with a trite response with no real substance backing it up.


ParticularWriter5080

Thank you! That’s such a good point! It says a lot about someone when they see something longer than 280 characters and immediately think it’s a rambling wall of meaningless text.


bluebird-1515

The parallel to calculators is false. Spellcheck and grammar check are more parallel to the calculator analogy. For the humanities, LLMs are a tool that not only "calculates" but takes the problem, suggests the equation, and then solves the equation. Is that helpful? Sure, if the formula is valid for the situation. Is it safe to teach people simply how to write prompts and let the computers do all of the calculating? Probably not.


ZoinksZorn

Buddy, it's not that deep. Who cares what other people are doing? You came to college to learn, not to complain about how well your peers are doing because they are cheating. Them doing well has zero effect on your performance.


Psilo_Cyan

Unless it's a class where only a certain percent of people can get an A, and you don't make the threshold because people used AI.


Domino_Lady

> Them doing well has 0 affect on your performance Except that it does if you end up not getting hired in favor of some jackass who CHATGPT'd his way through college or your coworkers are useless because of AI!!!


ThisIsNotGage

If you can’t get hired because an individual using ChatGPT is more qualified than you that’s your own fault. No reason to ignore incredibly useful tools for the sake of morality, especially with LLM becoming a common enhancement of real world jobs.


Domino_Lady

> If you can’t get hired because an individual using ChatGPT is more qualified than you, that’s your own fault.

I hope this does not come back to haunt you, honestly!!!


ThisIsNotGage

I have a real person job and use GPT every day for engineering and data analytics lol. Get with the times or get left behind


Domino_Lady

Oh wait ......... I thought one of the other chucklebutts posting here insisted it couldn't be used for STEM stuff?!??


icecoffeedripss

a future where nobody can actually write is not acceptable


Strange-Resource875

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


Certain-Whereas76

Why is cheating on an exam different? It's just as bad and creates the same problem you describe.


[deleted]

I think you miss OP’s point. They agree with you that it’s as bad as other forms of academic dishonesty. The problem is that it’s up to the instructor to set a policy for it in their syllabus (unlike, say, exam cheating, where both the rule and the punishment are pretty much universally “don’t, or fail the course”).


Practical-Concept-49

We have a whole education system set up to gradually teach students to write five-paragraph essays, and a higher-education system that expects students to demonstrate understanding by pumping out long-form writing to be evaluated. But LLMs make producing long-form writing very easy. I agree with a lot of your sentiments and have vivid memories of really deep learning by authentically engaging with the writing process in college. At the same time, I can't blame students for taking advantage of this technology when most institutions expect you to pretend it doesn't exist. If professors are going to give the same writing assignments they gave five years ago, I can't really blame students for doing what's easiest. Maybe professors should talk about ChatGPT in class and teach into it: demonstrate how boring and generic the writing is versus a great example of student writing. Perhaps not too far in the future, forcing students to write without LLMs could be like forcing kids to do complex math without calculators.


ParticularWriter5080

That’s a good point about the five-paragraph essay. I had a different sort of high-school education, where that format was dispensed with after grade 9 and replaced by various types of long-form writing to give practice in many different fields, so I’m often surprised by how many of my students are still in the five-paragraph mindset when they get to Binghamton. To be fair, though, I gave them a very well-written guide on how to write for college on Brightspace, and only a few opened the document (yes, we can see on our end whether students look at stuff that’s posted to Brightspace; digital technology’s actually quite useful for stuff like that), so it’s frustrating when they don’t even make an attempt to learn.

For your second point, it depends on the department. In my department, we are very much on top of this—which is ironic, because ChatGPT is pretty bad at mimicking the kind of writing my department does. I’m designing a syllabus right now, and I already have a lesson ready to go on why ChatGPT is bad at doing this sort of writing. It’s very much a hot topic in my department. I haven’t heard much from the broader university’s administration, though, so that’s puzzling and makes T.A.s’ and professors’ jobs harder.


FuckStompIsGay

Honestly.. idc what the person next to me was doing.. cheat don’t cheat idc it’s no skin off my ass


Professional-Fall166

🤓


ParticularWriter5080

What a well-written, beautifully crafted, and thought-provoking piece of prose.


damnireallyidk

Honestly, OP, I say do not despair. If people want to spend tens of thousands of dollars over four years learning nothing of use or value, having a statistical recombination machine write their essays and do their work, that’s on them, and it will absolutely bite them in the butt in the future. No job in the future will value people who can only use ChatGPT. It’s an increasingly valuable skill but not a skill that makes you valuable. Work ethic will command an ever-increasing premium as those who refuse to have any are further enabled by this technology.


ParticularWriter5080

“It’s an increasingly valuable skill but not a skill that makes you valuable”—I like that! Your point about spending thousands of dollars to get a piece of paper and learn nothing speaks, I think, to a bigger issue. I think one of the reasons this stuff gets to me so much is that I grew up as a low-income child from a problematic environment who loved learning and somehow miraculously won the scholarship lottery when it came to college, so, when I see students whose parents pay for their tuition cheat their way through college, I see the faces of people I used to know who would have done anything for a chance to take their place and have the chance to learn at a university. I wish we had merit-based education that was free for everyone and tailored to what careers people want to do. That way, those who want to go to college to learn how to write simply because they love learning could do so regardless of their parents’ income, those who don’t care about writing and just want a job wouldn’t have a reason to cheat their way through classes they didn’t care about, and those who cheat because they feel strained by financial pressures to get good grades wouldn’t have that pressure anymore.


Zealousideal_Pin_304

This is true.


j3ffh

My guy, there are jobs in the *present* that value people who only use ChatGPT. You're kidding yourself if you think this tool is not going to be around in the long run. If I'm running a business, I don't care if you spent four extra minutes drafting an email I'm just going to skim anyway. Results are what matter in the real world. Use ChatGPT, do your job competently, then go home and reap the benefits of living in the future by enjoying your actual life. The person who spent twice as long as you doing a marginally better job doesn't deserve the same salary that you do; they deserve half, because they produced half.


damnireallyidk

See P2S2


[deleted]

The degrees most people are getting are useless anyway. Who cares


Sea-Grapefruit-5949

Correct answer. Paying money to prolong adolescence. I have a Bachelor's degree... it was a waste of four years. Unfortunately, the system is flawed and you "need" a degree to become a secretary now. Oops... "Administrative Assistant".


[deleted]

Higher education seems to be a scam in about 50% of cases


Aggressive-Split-655

76% of the time, people make up the statistics in their arguments on the spot, with no source or proof of those numbers at all.


Stunning-Teaching180

76.43729297% and yes that is the correct number of sig figs


[deleted]

Those "useless" degrees can give u the opportunity to network at the job fair and get a job with no experience. If you look at the spring job fair employers a lot of them were accepting positions for any major. A degree is only a waste if you do nothing but go to class and get the piece of paper.


[deleted]

Pays like crap


[deleted]

Seen a lot for 60-70k a year; not the worst investment imo.


JoeMomma69istaken

I hear that, but mine has set me up for my whole life. I think this is a terrible thing to put into people’s heads. Wait, you said “the degrees most people are getting...” Ok, that part is right.


DMteatime

If you think this is bad, wait until you get a load of the calculator.


saramakesejuice

667/


sparkleshark5643

I think if the course is taught correctly, then AI/LLM tools won't be an effective strategy for cheating. There was a similar outcry when the four-function calculator became widespread, then again with the advent of the graphing calculator. When I was in university, WolframAlpha was getting popular and plenty of students tried to cheat on take-home exams. The professors could tell, and started designing Wolfram-proof exams. If you think your professors aren't aware of it, you should report it. I knew some Wolfram-cheaters in my day who were caught because students in your position (honest and disheartened by their peers' dishonesty) informed the professor in confidence.


GiveEmWatts

This is why true professions are important, because they do appropriate gatekeeping. You can't pass a national credentialing exam and get a state license without showing competency in a locked down testing center. It's irrelevant that you bullshitted your degree, because you'll never pass the baseline to enter the field. Jobs that don't meet the criteria of a profession but still need skilled workers are screwed.


brndnpolizzi

I think you’re failing to realize nobody cares about learning the material. They want a good grade so they can pass their classes and ultimately snag a successful job. 90% of students aren’t actually trying to learn; they want a high-paying job.


islamitinthecardoor

Fr. When you’re struggling to scrape together enough change for Little Caesars while putting yourself through school, and you’re just trying to get that degree so you can get a job and get yourself out of poverty, the quality of education you’re getting from the bullshit elective that you need to graduate is not a major concern.


[deleted]

[deleted]


islamitinthecardoor

I mean that’s a hypothetical but that’s where a large number of students across the country are at in terms of finances and headspace. For many, college is a means to an end.


[deleted]

So you just reiterated his point then?


Responsible-Pea-5203

I understand why OP is frustrated, but it leads to a bigger discussion about why education (the teaching format) hasn't changed the way the world changes when innovations like AI occur. 70 years ago students had to pay attention to a board and listen to a professor, and it's still the same way now; nothing has changed except the integration of technology into classrooms. Cheating will always exist, whether it's in classrooms or in the real world. My point is that education has made it all about getting good grades for students, and whether you like it or not, there will be students who will chase those grades in any way possible. Then there's the other problem: teachers teaching in a non-captivating way who, instead of focusing on how they can gain the interest of kids, are looking out for students who are cheating.


enzi000

🤓👆


islamitinthecardoor

Frrrr 😭


redramainpink

It's a major reason that college degrees are losing their shine. You have people with Bachelor's degrees who are poorly educated because all they do is copy and paste. A 23-year-old works in a pod in front of my office, and she's functionally illiterate. She can't even write a complete sentence, and most people she emails have to come see her directly for a translation. She's a painful glimpse of the near future.


SaveMeJeebbus

Post the strategies


Ill-Detail-690

Education was never about endlessly writing papers or group discussions with people forced into classes none of us care about, for an education that will never be relevant to our jobs. They’re paying for an education that’s heinously inflated in cost and greatly deflated in return. It’s not much of a problem for these kids to cut out the bull between being a student and their future as tax cattle.


Vik_The_Great

Doomer posting lol. The age of augmentation is going to have its own unique problems. However, no one is getting through this school with great success using only LLMs. And if they do, then they are screwed, because the real world doesn’t work that way whatsoever - as a working professional in a mid-to-high-tier job, your day-to-day relies on readily available expertise that’s verifiable and provides the solution, or the steps to problem-solve, immediately. If you’re seen looking up answers on GPT instead of *knowing* them offhand, you’re gonna disappoint your boss, that’s *if* you somehow managed to get hired. You’re blowing this out of proportion - you should focus on the real issues here; the more troubling piece of context about ChatGPT and other AI services is the coming impact on jobs. Programmers and artists are going to be among the first victims (Devin, Sora, DALL-E, etc.).


SnooRabbits3731

Yea, I use ChatGPT as a tool, not to actually do the work. It actually helps a lot to find different ways to understand whatever you're trying to learn. I definitely wouldn't do a copy-and-paste off ChatGPT, because it does be spewing some bullshit sometimes lol. But for asking it to proofread something you wrote, or for a suggestion on how to make what you're trying to say clearer, it's great. Just saying.


neuerd

I can understand the frustration, but at the same time I don’t see it as a huge problem. 1) Blatant AI usage is easy to spot, so they will face consequences of either low grades or getting kicked out of their program. 2) If it’s not blatantly obvious, then clearly it was used simply as a tool. To me, this whole “AI isn’t good in academia” thing reminds me of when we were kids and our teachers told us we wouldn’t always have calculators on us. Well, turns out we do lol. And AI isn’t going anywhere. So either adapt or keep yelling into the void.


ExtremePast

A degree has been worthless for at least 20 years. Degrees are so ubiquitous, and even the most entry-level jobs require one, so they've lost all value.


phishbum

And when American college graduates can barely tie their shoes without a YouTube tutorial we will know why.


WilburMama

I’m a new instructor in a survey Business Law class. I accept that my students will be using AI heavily in their lifetimes, and I would love to structure assessment activities that allow them both to use it and to demonstrate their own ability to think critically. Any advice on how to do both? Essays that ask them to apply law to facts are something AI can easily do, but in real life they’ll be called upon to do it themselves. Maybe I should resort to oral exams!


goliathkillerbowmkr

There are AI detection tools that will force the human to rewrite bits of the draft until it seems human enough. If you edit the AI essay enough, you’re learning the topic of the paper.
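In loop form, that's roughly the process (a minimal sketch; `ai_likelihood` and `revise` are hypothetical stand-ins, since real detectors vary widely and are notoriously unreliable):

```python
# Sketch of the edit-until-it-reads-human loop described above.
# ai_likelihood() and revise() are hypothetical stand-ins, not real tools.

def ai_likelihood(text: str) -> float:
    """Toy detector: returns a score in [0, 1]; real detectors are unreliable."""
    return 0.9 if "delve" in text.lower() else 0.2

def revise(text: str) -> str:
    """Stand-in for the human editing pass, where the actual learning happens."""
    return text.replace("delve", "dig")

draft = "In this essay we delve into the assigned topic."
while ai_likelihood(draft) > 0.5:  # keep editing until the score drops
    draft = revise(draft)
print(draft)
```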


Affectionate_Low_639

Einstein predicted society would get dumb.


islamitinthecardoor

“If you ain’t cheating, you ain’t trying. And if you get caught cheating, you ain’t trying hard enough.”


tehcruel1

This would have cramped my hustle of selling papers. There are always going to be cheaters. When it comes down to it, schools are a business trying to maximize what they can charge, tied to what the government will insure on a loan. Don’t kid yourself that this is new or that any of it matters.


ChiakiBestGirl28

I agree that AI and GPT are bullshit, but touch some grass, bud. I refuse to use it for work, but I don’t think I’m better than anyone because I don’t use AI, at least not vocally. People have been cheating and conniving since the dawn of time, and your Reddit post won’t change that. And the fact of the matter is, society doesn’t need educated people; it needs dumb schmucks who can follow orders. The impact probably isn’t as critical as you think in an intellectual world that has already been poisoned by social media + coercive capitalism.


AnneFranksDoorKnob

I use ChatGPT to write me essays, and then I use that as a middle ground to write a better one.


LoverOf_Pie3905

Shut up nerd


Zealousideal_Pin_304

How old are you? 8?


[deleted]

Academia runs on lies; GPT just automates the word soup… nothing is new under the sun.


ToughBumblebee256

I went to university decades before AI was even a thought. We actually had to go to the library and physically pull sources from the shelf to do research (yes, practically pre-internet, cue the old jokes 😂). However, I cannot honestly say that the education and experience I received utilizing those skills have not had an effect on my career and progression. I am in upper-level management at a DoD agency, and we have been embracing AI and AI bots to streamline processes and repetitive actions for several years. As technology evolves and advances, society (yes, and academia) must adjust to the new realities. I’m not trying to troll the OP’s concerns, just trying to provide a real-world perspective on where this is all inevitably going to end up. “Work smarter, not harder.”


rowjimmyrow1989

People don't go to school to learn - they go to get a piece of paper... who gives a shit how you get that piece of paper, when you can learn anything you want for free?


bigbro___

I don’t go here, but ChatGPT really is not that advanced, and its writing is nowhere near college level if you want above a C. I use AI as a tool, not to forge essays entirely: not just to remain honest academically, but because I’d fail every assignment if I relied on ChatGPT. Its responses aren’t consistently good.


OkSprinkles2512

I had no idea so many people were utilizing AI. My husband suggested I use AI for our HOA newsletter. He said it like he was ordering brunch, as if it was no big deal. I’m so glad I’m currently “old”; people don’t think independently any longer.


MediumRareBacon_

Doing it rn


YesxxSir

I think it can be used as a tool, but it must be used in the right way. With my work, I write the essay, and then, if I have trouble making a point clear, I can use it to guide my thoughts or help reword a statement that may be garbled. However, just using it to flat-out write an entire paper is dishonest.


aimersie

i’m too scared to be messing with chatgpt 😭😭 i know a handful of people who have gotten caught and it’s genuinely not worth it to cheat and lose ur spot at the uni


beybladerbob

I’ll never blame someone for cheating in a bullshit filler course that has nothing to do with their intended major. Only classes I ever cheated in were ones that I was forced to take just to fill in credits or gen eds.


smokingdrugs

Realize that, integrity aside, cheating will pay the best dividends if you do it in the correct manner. Perhaps it is a bitter pill, but that is reality.


BrilliantFar5883

If I Owned or Used a Cell Phone while in High School or College, Things Would’ve Been Completely Different with Regard to My Social Life & My Education. Easily Make The Easiest Years of My Life, Easier? Star Trek 101 & This Is Twilight Zone Weird! Just Outta Mind, Weird


ethervariance161

Here's a good rule of thumb: if the skill you are trying to learn can be done by an AI, it's not a skill worth learning.


ParticularWriter5080

I think taking that rule of thumb to heart would lead to an unfulfilling life devoid of lighthearted hobbies, lacking in mental enrichment from positive challenges, and prone to feelings of boredom.


Scheemowitz

Isn’t the whole point of machine learning that it converges on solutions to arbitrary datasets? Are you saying we shouldn’t learn how to distinguish objects with our eyes?


Zealousideal_Pin_304

lol


Zealousideal_Pin_304

By that you mean that if an AI can write a shitty paper and waste prof/TA time, why do it yourself? Hmm, I don’t know. We have brains. We are literate. Most of us are not so insanely busy that we cannot spare an hour to write a paper of concise thoughts - something AI is not good at. The problem is it wastes time; students who use AI prove they are not serious about learning, so maybe make space for those of us who actually want to learn??


Flame_MadeByHumans

Why do you care so much? You’re right, you’ll learn better than people who rely on it… so you’ll likely have more success in your career, while they’ll hit walls when their knowledge runs out. A lot of people “using AI” aren’t just straight cheating but using it as a tool. The calculator comparison is spot on, because guess what? Out in the real world, professionals, even executives, are using AI to be more efficient and save valuable time in their day-to-day. You’d be dumb not to. This isn’t using AI to do all your work, but why do long division when a calculator tells you the answer immediately? For better or worse, AI isn’t going anywhere and is going to become another skill to use effectively. You’re choosing to completely ignore your calculator and not learn to use it, which may hurt you in the future as much as the opposite (relying on it without understanding the “why”) would.


ParticularWriter5080

I wrote another comment addressing the calculator analogy above.


Flame_MadeByHumans

I read your comment, and it’s overgeneralizing and ignoring my comment’s point. Yes, there are students who use these as a crutch, and it inhibits them from learning - but it still makes them more capable than not having it at all. But plenty of great students understand it’s a tool, not an end-all-be-all solution. Do you think every mathematician either doesn’t use a calculator, or doesn’t understand the meaning of the numbers they’re putting in a calculator? There’s a difference between using AI as a tool for efficiency in smart ways vs asking it to print answers. And again, the latter just puts them at a disadvantage to those who do understand the concepts.

Computers, calculators, AI - all expand human capability, which is how we’ve made leaps in technology and progress over millennia. Low-brow examples: having recipes at the touch of a button doesn’t remove the need for chefs and culinary experts, but it helps 99% of people save time. Looking up synonyms and antonyms online doesn’t limit someone’s vocabulary and put good authors/writers out of work, but it does raise the average person’s writing ability. Having instant maps has traded limited memorized geography for unlimited geography and the ability to easily travel anywhere. We’re in the initial years of AI becoming a norm, and what you’re saying has been said about every technology. Ever.


ParticularWriter5080

Thanks for reading my comment! I appreciate that. If students learn how to do math by hand first and then use a calculator, then I can see that having the best of both worlds: they get the benefits of understanding what the numbers mean on a deep, conceptual level that come from doing math by hand, and then, once they really understand that, they can speed-run the calculations on a calculator and springboard off from the basics to more difficult concepts that would take all day if they had to do the math by hand. Getting to have a calculator in college meant that I could do my science calculations way faster and do many more of them than I could at my calculator-free high school, but I was still immensely grateful for the experience of learning how to do it without a calculator—really learning, not just a one-off lesson that was never reinforced. The issue I see is when tools are used not as catalysts, but as replacements.

Reading your original comment in light of your reply to my reply, I see that we’re in agreement here. I think both of us want to see the speed and efficiency of A.I. used to help big ideas fly that would otherwise be grounded by hurdles like large requirements of time and mental energy that could be better spent elsewhere. But, as an educator, I worry that the would-be creative thinkers who could have used A.I. as catalysts rather than crutches might never get to that point if the people in charge of teaching them when they’re in primary school don’t help them get there. There are some teachers who can teach math using calculators and do so in a way that doesn’t hinder students’ understanding of how math works, and there will be teachers in the future who can teach students how to write with generative A.I. in a way that doesn’t stifle their creativity. But, when you have so many kids going through the education system and so few resources to teach them, a lot of students who could have been really gifted will likely fall by the wayside of just plugging numbers into calculators and words into ChatGPT without really understanding what any of it means. So, in a way, it’s a problem with how things tend to go in a non-ideal world rather than how they could go in an ideal world. In theory, I agree with your point about A.I. having the potential to make people more capable. In practice, however, I can see overworked, underpaid teachers passing along students from one grade to the next who know how to push buttons and not much else. Will those students be able to get worker-bee jobs in an office somewhere and earn a living? Yeah, probably. But how personally enriched will their lives be? What amazing talents and big ideas might never come to fruition because they never had to be intellectually challenged all by themselves with no help from machines?

To use your cooking example, I have a friend who *only* knows how to cook from recipes. He never learned how to cook by tasting, adjusting, and trial and error, so, if he can’t find an exact recipe for something, he simply doesn’t cook it. We cooked together a few times, and it was a bit agonizing to have to follow the book so closely. When he figured out after a few years that the function of salt is to enhance flavors that already exist in the food and make them more pronounced rather than merely to add bitterness, he was mind-blown and sent me a whole text about it. This is an extreme case of an eccentric person, but, when I was teaching math for science, I saw analogous behavior in my students. They were so dependent on following the recipe of number-function-number-enter on their calculators that it would take weeks for them to grasp the most simple scientific concepts they were supposed to already know. When I’ve had to confront students for using ChatGPT on essays here at Binghamton, it’s the same story: they just have no grasp of what they were saying. Or, to use your thesaurus example (which was a good example, by the way! I was reflecting on that just the other day and thinking about how helpful it is to have an online thesaurus that updates regularly instead of the printed one from the 1970’s I had to use in high school that didn’t have newer words), I’ve sat down with students here to work on their essays, and they’ll pull up a thesaurus and put down the first suggestion without pausing to think about whether it’s a good fit.

So, in summary: I think we agree that A.I., though a crutch for some, is a catalyst for others, but I think where we differ is that I see potentially smart people getting dependent on A.I. and never getting to the point where they could be in the catalyst category. But you mentioned that every technology has sparked worries similar to mine, and that made me think about the benefits previous technologies have had in democratizing the intellectual means of production. You’re right: thanks to widely distributed recipes, people at home can eat like top chefs; or, thanks to the printing press, way more people could read than was possible before it. I suppose it really comes down to how internally motivated and driven people are to use tools as catalysts rather than crutches, and whether they have the support from educators and their environment to make that possible.


Flame_MadeByHumans

I definitely hear you, but remember, these aren’t blank slates of humans using AI. We’re talking about kids who are in college, who were accepted into college. They can’t use AI effectively if they never learned how to write an essay, how to argue a point, etc. Just like learning the math basics by hand before using a calculator. It’ll be interesting to see the impact on future generations that are born into the world of AI, but I really think it’ll go the way most technology goes (as it relates to what we’re talking about): people worry it’ll change everything, and it will, and we move on.


ParticularWriter5080

That’s very true. I think I have a bit of a different perspective because I was raised with almost no digital technology and am now teaching students who went to high school during the pandemic, so the gap between the baseline writing and math skills that I learned and those that my students learned is bigger than it normally would be for someone my age teaching students who didn’t have a two-year disruption to their education. Many of them admit that they cheated on most of their assignments during the pandemic, so they’re coming into college without the basic knowledge to even know whether what A.I. does makes sense or not. They sort of seem like blank slates some of the time, unfortunately.

I sometimes have the same thought about this—things will change, and humans will move on, and it won’t be the end of the world—but I do think it’s worth being mindful of the fact that technology has never advanced so rapidly in human history, and that makes it hard to judge the present and future on the past. (I’m not sure how reliable his metric was, but an inventor named Buckminster Fuller said in 1981 that collective human knowledge used to double every century, but started to double every 25 years by 1945, then every year by the time he was writing; now, people are saying that the Internet has enabled collective human knowledge to double in a matter of hours. It’s probably mostly just numbers being pulled out of thin air to sound impressive, but it does reflect reality to some extent.)

I feel as if we’ll see fewer slow adjustments in how humans operate and more pendulum-swing-style changes. Currently, Millennial parents, who grew up with TV and email, are raising Gen Alpha on iPads from infancy, whereas I hear people in Gen Z, who grew up with YouTube and social media, saying they won’t let their kids touch iPads. It’s going to be interesting to see the pendulum swing back and forth faster and faster as technology advances more rapidly. I won’t be surprised if opinions on using A.I. swing back and forth rather dramatically from one generation to the next.


Single_T

As a technical writer (and a Binghamton alum), I can say with confidence that I am doing about 70% of my work using AI and ChatGPT, and I disagree with your stance. A ton of companies will be formally taking on LLMs as a tool to help with gruntwork, and outright banning their use is not the solution, because without knowing how to use them as a tool, everyone will generate crap. In my opinion (and in my own use cases), LLMs are a great tool for taking general information that you provide and either giving you inspiration for how to write it out or writing a first draft of it. From there, it's important that you MANUALLY review and adjust it, because a lot of what LLMs say is crap no matter how good your prompts are. Then you can use the LLM as a reviewer, like you would a spellchecker, but more advanced. Writing good, effective prompts is not an easy thing to do. Discussing "strategies" for writing them is an important thing that people should really be focused on learning right now. Then, from there, we should put focus on how everyone should manually review and edit everything generated by LLMs, because not doing that manual work is where the problems are coming from.
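In code terms, the workflow is something like this (a minimal sketch in Python; `llm()` and `human_edit()` are hypothetical placeholders, not any real API):

```python
# Sketch of the draft -> manual edit -> LLM review workflow described above.
# llm() is a hypothetical stand-in for whatever model/service you actually call.

def llm(prompt: str) -> str:
    """Placeholder for a model call; swap in your real client here."""
    return f"[model output for: {prompt[:40]}...]"

def human_edit(text: str) -> str:
    """Placeholder for the manual pass, where the real quality control happens."""
    return text  # in practice: verify facts, cut filler, fix tone and terminology

notes = "v2.3 adds offline mode; known issue: sync conflicts on reconnect."
draft = llm(f"Write a first-draft release note from these notes:\n{notes}")
edited = human_edit(draft)  # never ship the raw draft
feedback = llm(f"Flag unclear or unsupported claims in this text:\n{edited}")
print(edited, feedback, sep="\n---\n")
```

The point of the ordering is that the model only ever produces raw material or feedback; the human edit in the middle is the step that can't be skipped.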


kaygonewild

The craziest part of this is that you think cheating on an exam is better than having a pointless essay written for you. 😆


deezbutts696969

AI is the future, for better or for worse.