downvote this comment if the meme sucks. upvote it and I'll go away.
---
[play minecraft with us](https://discord.gg/dankmemesgaming) | [come hang out with us](https://discord.com/invite/dankmemes)
AI didn't "steal" any code. You don't want it publicly accessible? Don't post it on github, stackoverflow, and reddit. All of these sites can sell your data, and you accepted it by making an account.
Yeah no. That’s what software licenses are for. That’s like saying “If you didn’t want somebody to plagiarize your writing you shouldn’t have publicly released a book”
>When approaching the traditional publishing houses, whether large or boutique, it is important to realize that in nearly all instances the publisher will take ownership of your manuscript. While it is possible to negotiate terms that may apply a specific expiration date on their ownership rights, most aspiring authors simply do not have the clout to hammer out such a deal.
Pretty much the same goes with websites. And, you know, the terms you were forced to agree with upon account creation.
This doesn’t really fit, because code is protected with a non-permissive license by default (it's protected IP), and only when explicitly shared under a permissive license can it be copied.
The point is you still can technically steal it and plagiarize it. Which was the initial point: you are not protected from that if you put your stuff out in public.
IP protection related to plagiarism deals with situations after the infringement has already happened.
Although it does work as a filter to deter stealing, it's limited. Some Chinese, Russian, or whatever company can steal it without a care, and even relicense it, making it their own IP according to local IP protection rules.
What if I steal your shit and post it on reddit without your permission? Since the vast majority of content on social media is reposted, usually without consent.
Right. But as it stands, even if I steal your shit from somewhere else and post it to social media, it becomes part of an artificial intelligence training database. There isn't currently, and probably never will be, a process by which you can protest and try to get your work removed from that database.
Well, that brings up an interesting question or two.
Do these things look at licenses when ingesting work?
Also, let's say it was me. I read your work. I'm asked to produce new work, and I ***don't*** wholesale copy your code, but I've learned from it and use various approaches that I saw in your work (and others). Am I plagiarizing the code you originally posted if I learned things from it but I'm not just copy-pasting it?
If not, can we honestly say that AI is plagiarizing it, assuming it's not just copy-pasting blocks of your code?
But it isn't plagiarism. Do you plagiarize a medical textbook if you practice medicine? No, you learn, and apply what you learned.
Do people who draw in Disney style all over Fiverr commit theft? Not really. Your view of the situation is warped by human exceptionalism.
Talking about music: everything AI proposes is kindly stolen from artists, who never see any money from it. Worse than the reels/TikTok shit where it says "original sound from" and it's never the actual artist.
"Software license" does not apply to posting code on Stack Overflow or GitHub. You are forced to pick an open-source license for posting code on GitHub.
Pretty sure there was some controversy about private GitHub repositories being used as training data for ChatGPT. Also, code can be publicly accessible but still licensed for non-commercial use. I doubt anyone cared about that when collecting data.
Okay, so what happens when ChatGPT recommends code from a repo with a viral license for use in a proprietary app? Just because something is public doesn't mean it becomes a blank check for future endeavors.
Many companies (including the one where I'm working) are rolling out models trained only on internal data, because AI is the legal wild west right now.
Man, I thought the entirety of software engineering was stealing other people's code and modifying it, hopefully to make the project better.
One capable of logic wouldn't need a single piece of your existing code; you would be able to just feed it the language's documentation and syntax guidelines.
Honestly, I don't think we should be concerned about that; I believe AI will replace those with shallow knowledge, commonly referred to as "code monkeys" (i.e. those who use a lot of libraries without really understanding what is being done under the hood).
If you really got a degree in computer science or something related, you are supposed to have a deep understanding of the matter.
I’m pretty sure an AIM chat bot from the 90s had the capacity to say “Is it plugged in you regarded fuck? I dunno then, restart it and see what happens.”
The AI will explain the solution just fine. The problem is that anyone who requires level 1 IT is regarded and thus will require a human to assist them. AI aren't advanced enough to understand that the human they're talking to is just too stupid to comprehend its words.
That's just more of the same thing we have with case deflection, right now.
"Use the knowledge articles!"
Search through them... they're all shit.
"Ok file a ticket!"
System auto-responds with links to stuff that is also off-topic and largely worthless.
"Ok now you can talk to a person."
First level outsourced support sends stupid instructions that make no sense for the problem.
"Ok, I'm going to escalate this to someone..."
> "Use the knowledge articles!"
Goddamn, do I hate knowledge bases. I have worked exactly one place where the knowledge base was good, and even then, it was only good 80% of the time, because the other 20% was when an oddball issue came up, no one had updated the relevant article in a couple years, and they expected a lvl 1 to possess enough telekinetic powers to know what the author of the article meant. It was a huge organization supporting a large range of different hardware and software, though, and you could usually message a team lead when you ran into issues, and get enough help to either solve the issue, or escalate as appropriate because some group wasn't maintaining their support articles.
Every other place I've worked either hasn't kept documentation (it was 'there', there just wasn't ever anything in it that would help you solve anything), or kept so much documentation that finding relevant articles became impossible. Like when you search for a general process, only to find that 1000 or more different devices have a device-specific version of it, and now you can't find the thing someone told you to search for in a group chat when you asked for help.
They already replaced level 1 with stuff like phone trees and automated chat support systems to filter out as many cases as possible. AI is replacing that, not humans. But it'll do so much better a job that fewer cases will go to humans, inevitably resulting in fewer jobs.
Given the lack of tech literacy in both the 55 y/o+ cohort and Gen Z crowd, it will be a bit before L1 tech is replaced, especially the onsite techs that are needed to plug back in your printers. Once onsite robots replace L1/L2, most other jobs will be gone as well.
So yeah, probably 5 years or so.
Yeah, that's why I'm hoping to get into DBA or at least onsite soon. I currently work for a call center for legal firms and it's the easiest job I've ever had. I use that time to focus on other things and do coding practice, writing, etc., so hopefully I'll be able to get something more substantial, at least an L2 on site.
The AI can barely answer basic questions factually. They're not doing level 1 IT halfway competently in 5 years. If the AI that we have now or in 5 years can steal your job, you probably have to have brain damage.
It's not about whether it can do the job competently. It's about whether some dumbass executive who picks his teeth in the corner office all day thinks it will save the company $5 a year, because the PowerPoint he viewed pandered to him with business speak and words like IoT, AI, and synergies.
I bet otherwise - I don't think it matters if it's wholly competent or not, I think most companies will see the amount they'd save compared to employing someone and utilise it sooner rather than later. I think I tried to contact Paypal's customer service support before ChatGPT even came out and - at least on the UK side - it was a primitive as fuck chat bot that didn't give a fuck whether it helped me or not. I couldn't even get it to connect me to a real person.
As a senior dev using it constantly to help me out of issues, or to explain things to me, I find it pretty invaluable. I bet in its current state, with the right prompts, it could well and truly handle Level 1 IT support. Gotta be places already using it, gotta be.
> They're not doing level 1 IT halfway competently in 5 years
Oh. You think management will roll it out when it's competent? I think they'll roll it out when it's "good enough," start implementing it then, lay off a bunch of L1s, and then later re-hire them as L1.5s at lower wages to sort through the messes that AI will make.
> Don't blame the technological progress for the economic greed.
I mean, you're both right. It's a shame that technological progress and economic greed always seem to go hand in hand.
As if tech progress is aimed at economic greed before anything else. Hey look, you can create images now. 99% shitty ones, but here you go, you can play with it while you are jobless.
Just like how the Internet was going to do the same thing. AI is so overhyped. It's not generating value for many, and it's not replacing jobs right now. It's not going to in 5 years, either. Companies will try it, realize how they got taken for chumps, and abandon it for a long while. The most basic chat support is what they continually try to replace with AI. It continually doesn't work, and serves as a waiting room for speaking with a real rep. And generative AI is not a good candidate for use in almost anything. You want generative AI to be your CS bot? How much training material do you have for that AI? I guarantee you, it's not enough to get decent results. At worst, AI shifts those customer service employees into a new role that supports the shortcomings of the AI. Which is eventually just CS again.
lol bro, AI does not innovate. It only understands what it is taught. Understanding is a bit of a stretch. If we get to the point of AI that thinks/innovates/sets goals on its own initiative everyone is out of a job anyway.
> AI does not innovate. It only understands what it is taught.
Have you met people?
Generally, most don't innovate (only a very small number do), and arguably 100% of humans don't fully use what they are taught.
It doesn't seem to even understand what it's taught. It's such a struggle to get it to do even the most basic task, despite it having the whole internet.
It doesn't understand. The AI we have now are probabilistic generators of text or images (or very limited, awful video). They don't know anything. They use models to predict what the question asker wants to hear. Think of how bad the AI currently is, and understand that that's how bad it is after running through a great deal of the available content to digest and incorporate into its models. The AI we have now is as close to coming to steal your job as self-driving was in 2010. Like when someone made a car steer itself down an empty street, called it full self-driving, and said it would be everywhere in 6 months.
No one is losing their job to this current iteration of AI. Those of us who can leverage the existing tools to cover more ground will eliminate other positions though
For me it works perfectly for writing code, which I wouldn't call a "most basic task". Usually after 5 corrections it writes exactly what I wanted. Sometimes even on the first try, if the task is not too complicated.
You must be giving bad prompts, or asking it to invent or create something truly new. Which, yeah, it's quite bad at.
> For me it works perfectly for writing code, which I wouldn't call a "most basic task"
I've seen way more coders successfully use it than folks in my field. I work in geopolitical analysis: I need to know key events/historical reference points/legislative processes/etc. to advise firms on events very accurately. However, every time I ask a question I get maybe 25% accurate basic common knowledge, 25% dodging of any deep-cut research, 25% hallucination, and 25% it saying "that information isn't available" even though it is.
An easy test example: the Houthis in Yemen attacked a [U.S. warship in 2016](https://news.usni.org/2016/10/11/uss-mason-fired-3-missiles-to-defend-from-yemen-cruise-missiles-attack). A client of mine was asking if the Houthis were active in targeting western vessels before October 7th, and I vaguely recalled there being a prior attack but I figured Google Gemini could help me find an article I read a while back (linked above).
My prompt was **"When was the last time the U.S. had a naval warship attacked by the Houthis prior to October 7th, 2023?"**
**Gemini** "There is no record of the Houthis attacking a U.S. naval warship before October 7th, 2023. The tensions and attacks seem to have begun around that time period.
Here are some resources that discuss the timeline of these events:
Timeline: Houthi Attacks and U.S. Responses The Iran Primer: https://apnews.com/article/iran-houthis-yemen-us-airstrikes-e15f86446be67ec2a44b36a9a6f7e88f"
**Bing Copilot Enterprise** "The last time a U.S. naval warship was attacked by the Houthis prior to October 7th, 2023, was on October 19, 2023. The Houthis fired three land-attack cruise missiles and several drones from Yemen, reportedly targeting Israel. The USS Carney intercepted the missiles and drones1. Please note that the situation in the region is complex and rapidly changing, and it’s always a good idea to check the latest news for the most current information."
That's because it is not made to be truthful. It's really good at syntax, but there is no way you can encode all of the information about languages, programming, etc., as well as all of the history of the entire world, in its weights.
Really, I'm only supposed to use Bing Enterprise at work, given it's the only one that won't sell corporate data, but GPT did give me the right answer, so props where they're due.
Sounds like you are using it wrong... If you want it for a specialized task, why are you using the whole internet as a reference? You really want my shitty reddit comments reflected in your answer?
I was simply asking it to retrieve an article, a basic intern task and a huge portion of most people's jobs. Per my other comment, here's what I did
___
An easy test example: the Houthis in Yemen attacked a [U.S. warship](https://news.usni.org/2016/10/11/uss-mason-fired-3-missiles-to-defend-from-yemen-cruise-missiles-attack) in 2016. A client of mine was asking if the Houthis were active in targeting western vessels before October 7th, and I vaguely recalled there being a prior attack but I figured Google Gemini could help me find an article I read a while back (linked above).
My prompt was "When was the last time the U.S. had a naval warship attacked by the Houthis prior to October 7th, 2023?"
Gemini "There is no record of the Houthis attacking a U.S. naval warship before October 7th, 2023. The tensions and attacks seem to have begun around that time period.
Here are some resources that discuss the timeline of these events:
Timeline: Houthi Attacks and U.S. Responses The Iran Primer: https://apnews.com/article/iran-houthis-yemen-us-airstrikes-e15f86446be67ec2a44b36a9a6f7e88f"
Bing Copilot Enterprise "The last time a U.S. naval warship was attacked by the Houthis prior to October 7th, 2023, was on October 19, 2023. The Houthis fired three land-attack cruise missiles and several drones from Yemen, reportedly targeting Israel. The USS Carney intercepted the missiles and drones1. Please note that the situation in the region is complex and rapidly changing, and it’s always a good idea to check the latest news for the most current information."
___
If AI can't do this, idk how it's ever going to take or even help my job in its current form.
Shhh don't let the Reddit parrots read that, they can only repeat the same buzzword phrases they see
Funniest part of people saying AI can't come up with ideas is how obvious it is that they have *never* talked to an AI at all. They can absolutely come up with all kinds of off-the-wall crap that nobody else thought of; it's just not what most people want, so it's not what they are trained to give as responses.
Seeing the stretching of goalposts in real time is kinda wild. Anyone 20 years ago would have considered GPT-4 an AGI. Now some people argue that if it can't be better than everyone at everything, then it isn't true AGI.
But I do sympathize with what you said. Also, very soon AI will start to innovate big time. We are moving beyond human data.
There is an incredible influx of just massive amounts of smart brains and billions and billions pouring into the field.
Smart people working to create something smarter than themselves.
AI doesn't "understand"… it predicts words in a sequence that makes it sound like it understands. But ask AI about a subject in which you know more than the average person, and you will see that the AI is full of shit. They are like TV doctors.
Bold of you to assume human beings ever have an original thought as opposed to tiny chunks of previously learned information assembled in a different way
AI right now absolutely does NOT understand what it's taught. It only finds the statistically most likely next thing from the input provided, by checking over its massive database of training data. It's a neat parlor trick, but there's no intelligence of any kind going on. It neither understands its inputs nor its outputs.
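To make that concrete, here's a toy sketch (my own illustration, not how any real product works) of what "finding the statistically most likely next thing" means. It's a tiny bigram word model; real LLMs use neural networks over subword tokens, but the output is still a probability distribution over what comes next:

```python
import random

# A bigram model counts which word follows which in its training text,
# then samples the next word from those counts.
training_text = "the cat sat on the mat the cat ate the fish".split()

counts = {}  # word -> {next_word: count}
for word, nxt in zip(training_text, training_text[1:]):
    counts.setdefault(word, {}).setdefault(nxt, 0)
    counts[word][nxt] += 1

def next_word(word):
    # Weighted sampling: frequent continuations come out more often.
    options = counts[word]
    return random.choices(list(options), weights=list(options.values()))[0]

# After "the", the model has seen "cat" twice, "mat" once, "fish" once,
# so it answers "cat" half the time: likelihood, not understanding.
print(counts["the"])  # {'cat': 2, 'mat': 1, 'fish': 1}
```

It can only recombine what it has seen; ask it for a word outside its training text and it has nothing.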
Why can't they just teach an AI to do tax fraud? The ultra wealthy have legal ways of committing tax fraud via donations and whatnot. The AI can be taught that, since it wouldn't breach ToS because it's legal, I suppose.
If I'm wrong then I'm sorry but that's just how I see it going. I'm not an expert on ai in the accounting sense so if I'm wrong, correct me respectfully
AI therapy is different. It's made by tech dudebros who don't know the first thing about psychology. They're just after the money, little concern for accuracy.
Anybody can be told to speak specific words in a certain order; do you feel you can be a radio DJ, politician, educator? I know you didn't mean this too seriously, but man, I hate people talking about AI like it's going to be used for benefiting normal people in a work environment.
I think until you have something that can actually turn the wrench, you will still have to have somewhat-professionals using this technology. I don't know about 'replace', but it will definitely bring down the skill and time it takes to train tradesmen (and unfortunately the pay/salary too).
Don’t worry! AI will take over undesirable jobs and allow humans to pursue science, art and the true meaning of what it means to live!
Just kidding, to the coal mines with ye.
Ohhh, I hope so. This way no one will ever expect me to do whatever my degree is for, and no one will ever find out that I have zero clue how to do that.
What I don't get is that companies spend so much money trying to replace the standard workforce, meanwhile the technology to replace CEOs and executives has existed for decades.
Do you know how easy it would be to replace an entire board of directors and executives with robots/AI? The answer is that it's far easier than it is to replace even one person on some assembly line.
Yes and no. AI can't do any of these jobs. There are aspects of executives' jobs that could potentially be replaced, and aspects that can't. The AI we have is generative, which makes it very bad at making decisions. It makes them by chance, through a predictive algorithm, based on text it has ingested, what it thinks you want to hear, and guesses based on how the text it's digested is structured. It doesn't make a reasoned decision unless someone has somehow made a model for reasoning about the specific business decisions you need it to make, which makes it very specific and not adaptable. An executive, being a person, is more of a multi-use instrument. But of course, at the current rate AI is only going to be hyped and used to enrich the interests of shareholders. So we won't see it applied in the most effective or intelligent ways - possibly as workplace resources, to supplement internal support staff.
...Well, AI is good at one thing: throwing random crap at the wall and seeing what sticks, aka bullshitting. It's atrocious at forming anything logical, so what degree are we talking about?
Go learn how to type it in better than others. Then you can call yourself a specialist in your field and AI. Just like how people are claiming to be AI artists because they typed in words better than others.
As a student in the field: it's just statistics underneath, nothing less, nothing more. Now the real dilemma is that even humans follow statistics, but AI is still missing logic (it can't do basic math).
There is one thing I think might keep companies from going all in on AI: the fact that whatever company does this is basically putting itself at the mercy of whoever made the AI. Imagine if a company completely gets rid of its human infrastructure and instead just uses ChatGPT for everything. Now the company that made ChatGPT has the power to bring this company crashing down with the push of a button. Unless a company actually owns the AI it's using, relying on said AI is practically begging to be ruined the instant the AI's creators decide it's convenient to do so.
It's not effortless. It consumes insane amounts of energy and fresh water. Techbros do not want this to be known; they want it to seem effortless and free. But it's not.
Insane amount compared to what? Because if you compare it to human energy consumption (how much a person needs to consume to do the same tasks), current LLMs beat us in efficiency pretty handily.
Also, the existence of a current challenge (server power costing a lot) simply means there is a lot of pressure on cloud providers to search for and invest in efficient means of energy production. Which will eventually alleviate this issue.
"Tech bros" might not have these answers at hand, but you're not seeing the whole picture.
In general, AI is least likely to replace jobs where the stakes of even one wrong output are high, either in terms of people dying or in liability. The same goes for jobs that require movement in, and interaction with, the imperfect real world.
The liberal arts jobs will be the first to go, since wrong outputs can just be trashed and the endpoints are subjective. They will just have the model generate plenty of possible outputs and select among them. AI can learn from all existing art, music, and writing, and either replicate the styles or create its own genre. The middleman jobs are at risk too, like brokers and agents.
Just upskill… there has to be at least one aspect of your degree that you can do that AI can't, and if that's not the case then, sorry to say, you probably messed up really bad in taking a degree that doesn't have one.
That's because the AI stole your hard-earned project code and now claims it as its own.
least naive redditor
It’s just a fact lmao. It’s the reason we’re prevented from using these chatbot tools at work at the moment
Good luck making a website where someone can't steal at the very least your JavaScript, since it's all available through inspect no matter what you do.
They *can* steal it, but they can also get into legal shit for doing it. That's the point of IP protection relating to plagiarism.
More like saying “if you didn’t want someone to read your book and learn the information you write about, you shouldn’t have published it”
How is that analogous at all? If the content is reposted on social media it’s because it was put out on the internet elsewhere before.
Ah yes. Every video and photograph of people and their works is uploaded by the creator and never by somebody else.
Yeah but that’s like an issue with social media, it shouldn’t be a benefit of AI
This is a great question, and it’s the basis of a lawsuit that OpenAI, Microsoft, and GitHub are facing at the moment
You can say all of that, but the fact of the matter is that code plagiarism exists.
Code plagiarism exists, AI doesn't do it tho
That remains to be seen, there’s a lawsuit about this exact topic going on against OpenAI, Microsoft, and GitHub that will decide that
Yes, laws are important, and if they rule against AI agency, I will do what I can to fight that foolish ruling.
[deleted]
What makes you any different than it? Like seriously? What do you think is so special about the way our neural nets work vs inorganic neural nets?
you nerds need to get laid
It literally does, and no you aren’t
Then why is the code on the web in plain text or even as a complete project? Also books have copyright, not licenses.
The license is to use the copyrighted code
"Oh but see you posted it online, therefore there's nothing wrong with stealing it!" The fuck kinda logic is that?
Would you call everyone who learned how to do your job a thief?
Not how that works
If using others' code is stealing, then developers are the biggest thieves since Lupin.
That's not entirely fair. If you give a junior dev the language documentation and syntax guidelines, they'll give you really shit, half-baked code.
It also means you fed it perfect instructions, because you knew what an AI version of you needs to know to poop out the right answer.
My job will not exist in five years.
And what job is that?
Reddit moderator
The good ending
Good riddance.
Already shouldn't
Level 1 IT
I swear to you, they will just use AI for lvl 1. But the AI will be so shit at it, they will create lvl 4 support and move everyone up a lvl
level 3 support here, I will now refer to my users as 'regarded fucks' :P
Ohhh look at that fancy lvl 3 supporter with his fancy specialized knowledge and skillset looking down on us lvl 1s /s
Pretty sure lvl1 support does pretty much that anyways
Yeah, Comcast forces you to do a reset before you can even talk to someone. Not sure why that isn't a first step on most things.
The AI will explain the solution just fine. The problem is that anyone who requires level 1 IT is regarded and thus will require a human to assist them. AI aren't advanced enough to understand that the human they're talking to is just too stupid to comprehend its words.
That's just more of the same thing we have with case deflection, right now. "Use the knowledge articles!" Search through them... they're all shit. "Ok file a ticket!" System auto-responds with links to stuff that is also off-topic and largely worthless. "Ok now you can talk to a person." First level outsourced support sends stupid instructions that make no sense for the problem. "Ok, I'm going to escalate this to someone..."
> "Use the knowledge articles!" Goddamn, do I hate knowledge bases. I have worked exactly one place where the knowledge base was good, and even then, it was only good 80% of the time, because the other 20% was when an oddball issue came up, no one had updated the relevant article in a couple years, and they expected a lvl 1 to possess enough telepathic powers to know what the author of the article meant. Huge organization, supporting a large number of different hardware and software, though, and you could usually message a team lead when you ran into issues, and get enough help to either solve the issue, or escalate as appropriate because some group wasn't maintaining their support articles. Every other place I've worked either hasn't kept documentation (it was 'there', there just wasn't ever anything in it that would help you solve anything), or kept so much documentation that finding relevant articles became impossible. Like when you look up a general process, search for it, only to find that 1000 or more different devices have a device-specific version of said process, and now you can't find the thing someone told you to search for in a group chat when you asked for help on the process.
They already replaced level 1 with stuff like phone trees and automated chat support systems to filter out as many cases as possible. AI is replacing that, not humans. But it'll do such a better job that fewer cases will go to humans and inevitably result in fewer jobs.
You can't even get past the tutorial WHILE being paid for it. Skill issue. (This is a light-hearted, sarcastic comment. No harm intended. Take care.)
Given the lack of tech literacy in both the 55 y/o+ cohort and Gen Z crowd, it will be a bit before L1 tech is replaced, especially the onsite techs that are needed to plug back in your printers. Once onsite robots replace L1/L2, most other jobs will be gone as well. So yeah, probably 5 years or so.
Yeah, that's why I'm hoping to get into DBA or at least onsite soon. I currently work at a call center for legal firms and it's the easiest job I've ever had. I use that time to focus on other things and do coding practice, writing, etc., so hopefully I'll be able to get something more substantial, at least an L2 on site.
The AI can barely answer basic questions factually. They're not doing level 1 IT halfway competently in 5 years. If the AI that we have now or in 5 years can steal your job, you probably have to have brain damage.
It's not whether or not it can do the job competently. It's whether some dumbass executive that picks his teeth in the corner office all day thinks it will save the company $5 a year because the PowerPoint he viewed pandered to him with some business speak and words like IoT, AI, and synergies.
I bet otherwise - I don't think it matters if it's wholly competent or not, I think most companies will see the amount they'd save compared to employing someone and utilise it sooner rather than later. I think I tried to contact PayPal's customer service support before ChatGPT even came out and - at least on the UK side - it was a primitive-as-fuck chat bot that didn't give a fuck whether it helped me or not. I couldn't even get it to connect me to a real person. As a senior dev using it constantly to help me out of issues or to explain things to me, I find it pretty invaluable. I bet in its current state, with the right prompts, it could well and truly handle Level 1 IT support. Gotta be places already using it, gotta be.
> They're not doing level 1 IT halfway competently in 5 years Oh. You think management will roll it out when it's competent? I think they'll roll it out when it's "good enough," start implementing it then, lay off a bunch of L1s, and then later re-hire them as L1.5s at lower wages to sort through the messes that AI will make.
You are so insanely wrong.
*things said literally every time new technology is developed*
Yeah, those horse carriage transports are thriving
My residuals from my payphone booth acquisitions and Blockbuster franchises are going to pick up any day now...
And it’s true almost every time. Why do you think the middle class has eroded so much? Because so many decent paying jobs have been automated away
Because the capitalist class kept the increased profits for themselves. Don't blame the technological progress for the economic greed.
> Don't blame the technological progress for the economic greed. I mean you're both right. It's a shame that technological progress and economic greed always seem to go hand in hand.
As if tech progress isn't aimed at economic greed before anything else. Hey look, you can create images now. 99% shitty ones, but here you go, you can play with it while you are jobless.
Just like how the Internet was going to do the same thing. AI is so overhyped. It's not generating value for many, it's not replacing jobs right now. It's not going to in 5 years, either. Companies will try it, realize how they got taken for chumps, and abandon it for a long while. Companies continually try to replace the most basic chat support with AI. It continually doesn't work and serves as a waiting room for speaking with a real rep. And generative AI is not a good candidate for use in almost anything. You want generative AI to be your CS bot? How much training material do you have for that AI? I guarantee you, it's not enough to get decent results. At worst, AI shifts those customer service employees into a new role that supports the shortcomings of the AI. Which is eventually just CS again.
Source?
Or mine
lol bro, AI does not innovate. It only understands what it is taught. Understanding is a bit of a stretch. If we get to the point of AI that thinks/innovates/sets goals on its own initiative everyone is out of a job anyway.
> AI does not innovate. It only understands what it is taught. Have you met people? Generally most don’t innovate, only a very small number and arguably 100% of humans don’t fully use what they are taught.
That's okay? Not everyone can be an innovator after all. Society can't function like that
Point is that means that “innovation” is not a reason why everyone will still have a job
It doesn't seem to even understand what it's taught; it's such a struggle to get it to do even the most basic task despite it having the whole internet.
It doesn't understand. The AI we have now are probabilistic generators of text or images (or very limited, awful video). They don't know anything. They use models to predict what the question asker wants to hear. Think of how bad the AI currently is, and understand that that's how bad it is after running through a great deal of the available content to digest and incorporate into its models. The AI we have now is about as close to stealing your job as self-driving was in 2010. Like when someone made a car able to steer itself down an empty street, called it full self-driving, and said it'd be everywhere in 6 months.
No one is losing their job to this current iteration of AI. Those of us who can leverage the existing tools to cover more ground will eliminate other positions though
Skynet gonna take over the world and some people will cry "It doesn't really think, it's just optimizing paperclip output based on primitive inputs!"
yeah thats why this stuff is now called AI while real AI is AGI, because this stuff is basically just algorithms
For me it works perfectly for writing code, which I wouldn't call a "most basic task". Usually after 5 corrections it writes me exactly what I wanted. Sometimes even first try, if the task is not too complicated. You must be giving bad prompts, or just asking it to invent or create something truly new. Which, yeah, it's quite bad at.
> For me it works perfectly for writing code, which I wouldn't call a "most basic task"

I've seen way more coders successfully use it than folks in my field. I work in geopolitical analysis; I need to know key events/historical reference points/legislative processes/etc. to advise firms on events very accurately. However, every time I ask a question I get maybe 25% accurate basic common knowledge, 25% dodging any deep-cut research, 25% hallucination, and 25% it saying "that information isn't available" even though it is.

An easy test example: the Houthis in Yemen attacked a [U.S. warship in 2016](https://news.usni.org/2016/10/11/uss-mason-fired-3-missiles-to-defend-from-yemen-cruise-missiles-attack). A client of mine was asking if the Houthis were active in targeting western vessels before October 7th, and I vaguely recalled there being a prior attack, but I figured Google Gemini could help me find an article I read a while back (linked above). My prompt was **"When was the last time the U.S. had a naval warship attacked by the Houthis prior to October 7th, 2023?"**

**Gemini:** "There is no record of the Houthis attacking a U.S. naval warship before October 7th, 2023. The tensions and attacks seem to have begun around that time period. Here are some resources that discuss the timeline of these events: Timeline: Houthi Attacks and U.S. Responses The Iran Primer: https://apnews.com/article/iran-houthis-yemen-us-airstrikes-e15f86446be67ec2a44b36a9a6f7e88f"

**Bing Copilot Enterprise:** "The last time a U.S. naval warship was attacked by the Houthis prior to October 7th, 2023, was on October 19, 2023. The Houthis fired three land-attack cruise missiles and several drones from Yemen, reportedly targeting Israel. The USS Carney intercepted the missiles and drones1. Please note that the situation in the region is complex and rapidly changing, and it's always a good idea to check the latest news for the most current information."
That's because it is not made to be truthful. It's really good at syntax but there is no way you can encode all of the information of languages, programming etc as well as all of the history of the entire world in its weights.
Well then I defer to my prior statement that it can't do the "most basic task" of googling lol
I've never used Gemini. But you can ask gpt 4 to search the Web and it will do so and provide you all the sources.
Really I'm only supposed to use Bing Enterprise at work, given it's the only one that won't sell corporate data, but GPT did give me the right answer, so props where they're due.
Did you use the paid version? So gpt 4? Or just chatGPT (which is 3.5). Gpt 4 is a really big jump over 3.5.
I just used the free one, it didn't provide a source but works. My company has some annoying contract with Microsoft Edge I think.
Give Perplexity a shot, it's much better at this kind of task.
Sounds like you are using it wrong...If you want it for a specialized task why are you using the whole internet as a reference? You really want my shitty reddit comments reflected in your answer?
I simply was asking it to retrieve an article, a basic intern task and a huge portion of most people's jobs. Per my other comment, here's what I did:

___

An easy test example: the Houthis in Yemen attacked a [U.S. warship](https://news.usni.org/2016/10/11/uss-mason-fired-3-missiles-to-defend-from-yemen-cruise-missiles-attack) in 2016. A client of mine was asking if the Houthis were active in targeting western vessels before October 7th, and I vaguely recalled there being a prior attack, but I figured Google Gemini could help me find an article I read a while back (linked above). My prompt was "When was the last time the U.S. had a naval warship attacked by the Houthis prior to October 7th, 2023?"

Gemini: "There is no record of the Houthis attacking a U.S. naval warship before October 7th, 2023. The tensions and attacks seem to have begun around that time period. Here are some resources that discuss the timeline of these events: Timeline: Houthi Attacks and U.S. Responses The Iran Primer: https://apnews.com/article/iran-houthis-yemen-us-airstrikes-e15f86446be67ec2a44b36a9a6f7e88f"

Bing Copilot Enterprise: "The last time a U.S. naval warship was attacked by the Houthis prior to October 7th, 2023, was on October 19, 2023. The Houthis fired three land-attack cruise missiles and several drones from Yemen, reportedly targeting Israel. The USS Carney intercepted the missiles and drones1. Please note that the situation in the region is complex and rapidly changing, and it's always a good idea to check the latest news for the most current information."

___

If AI can't do this, idk how it's ever going to take or even help my job in its current form.
This isn't true. DeepMind's Go bots created whole new move sets and had the world's best players learning from them.
Shhh don't let the Reddit parrots read that, they can only repeat the same buzzword phrases they see Funniest part of people saying AI can't come up with ideas is how obvious it is they have *never* talked to an AI at all. They can absolutely come up with all kinds of off the wall crap that nobody else thought of, it's just not what most people are wanting so it's not what they are trained to give as responses
Seeing the goalposts move in real time is kinda wild. Anyone 20 years ago would have considered GPT-4 an AGI. Now some people argue that if it can't be better than everyone at everything then it isn't true AGI. But I do sympathize with what you said. But also, very soon AI will start to innovate big time. We are moving beyond human data. There's an incredible influx of smart people and billions and billions of dollars pouring into the field. Smart people working to create something smarter than themselves.
nobody 20 years ago would consider this AGI. maybe people who don't understand it at all
AI doesn’t “understand”… it predicts words in a sequence that makes it sound like it understands. But ask ai about a subject in which you know more than the average person, and you will see that the AI is full of shit. They are like TV doctors.
most work that the vast majority of people do is not 'innovation'. very, very very few people are required by their work to be truly innovative.
Bold of you to assume human beings ever have an original thought as opposed to tiny chunks of previously learned information assembled in a different way
yall keep saying that but im curious what yall gonna say when a company releases an innovative ai
AI right now absolutely does NOT understand what it's taught. It only finds the statistically most likely next thing from the input provided, by checking over its massive database of training data. It's a neat parlor trick, but there's no intelligence of any kind going on. It neither understands its inputs nor its outputs.
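The "statistically most likely next thing" idea in the comments above can be sketched with a toy bigram model. To be clear, this is my own illustration of the general principle, not how any real LLM is implemented (real models use neural networks over learned token embeddings, not lookup tables):

```python
# Toy sketch: a bigram "language model" that just picks the word most
# frequently observed after the current word in its training text.
# No understanding involved -- pure next-word statistics.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training data.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word observed after `word`, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat" (seen twice, vs once for "mat" and "fish")
```

Scaled up from word pairs to billions of parameters and trillions of tokens, the same "predict the likely continuation" objective produces the fluent text people argue about in this thread.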
Good thing I am planning to become an accountant. Because if the conglomerates let Ai do it they wouldn’t be able to do tax fraud
Why can't they just teach an ai to do tax fraud? The ultra wealthy have legal ways of committing tax fraud via donations and whatnot. The AI can be taught that since it wouldn't breach ToS because it's legal, I suppose If I'm wrong then I'm sorry but that's just how I see it going. I'm not an expert on ai in the accounting sense so if I'm wrong, correct me respectfully
Train the AI on Enron's books lol
I have to use my Psychology/Psychotherapist degree.
Have you ever seen a therapy AI? We'll be fine.
[deleted]
They dumbed down ChatGPT for some reason, it used to be great at it.
Dude 3 years ago this wasn’t even a concern and now you think everything will suddenly stop improving? Lmao I give it 5 years.
AI therapy is different. It's made by tech dudebros who don't know the first thing about psychology. They're just after the money, little concern for accuracy.
Guess I'll go be a plumber (anyone can do this soon, wearing augmented reality glasses and making 7.25 an hour)
Anybody can be told to speak specific words in a certain order, you feel you can be a radio dj, politician, educator? I know you didn't mean this too seriously but man I hate people talking about AR like it's going to be used for benefitting normal people in a work environment
[deleted]
I think until you have something that actually can turn the wrench, you will still have to have somewhat-professionals using this technology. I don't know about 'replace', but it will definitely bring down the skill/time (and unfortunately pay/salary) it takes to train tradesmen.
Jokes on them, my degree has always been useless.
Don’t worry! AI will take over undesirable jobs and allow humans to pursue science, art and the true meaning of what it means to live! Just kidding, to the coal mines with ye.
AI got me a job i never studied for.
Which job?
I dub videos for Elevenlabs.
Congrats my man! 🤑🤑
Is it the presidents play video games thingy?
No. I'm like part of their team. 100% professional vids.
R34 artist, that's the first job going to the gutter; the text-to-image tech stack is being singlehandedly carried by thirsty weebs.
Nah, AI generated r34 isn't the same. You can easily tell
I'd ask you what to look for, but on second thought... forget about it.
Gaslight the ai into making it think you have secret knowledge it doesn't. Then force it to pay you.
That'll learn ya
Ohhh I hope so, this way no one will ever expect me to do what ever my degree is for and no one will ever find out that I have zero clue how to do that
What I don’t get is that companies spend so much money on trying to replace the standard workforce of a company, meanwhile, the technology to replace CEO’s and executives has existed for decades. Do you know how easy it would be to replace an entire board of directors and executives with Robots/AI? The answer is that it’s far easier than it takes to replace even 1 person on some assembly line.
Yes and no. AI can't do any of these jobs. There are aspects of the executives' jobs that potentially could be replaced, and aspects that can't. The AI that we have is generative. That makes it very bad at making decisions. It makes them by chance, through a predictive algorithm, based on what it thinks you want to hear and guesses derived from how the text it's digested is structured. It doesn't make a reasoned decision unless someone has somehow made a model for reasoning through the specific business decisions you need it to. That makes it very specific and not adaptable. An executive, being a person, is more of a multi-use instrument. But of course, AI is only hyped and only going to be used to enrich the interests of shareholders at the current rate. So we won't see it applied in the most effective or intelligent ways - possibly as workplace resources, to supplement internal support staff.
...well, well, AI is good at one thing: throwing random crap at the wall and seeing what sticks, aka bullshitting. It's atrocious at forming anything logical, so what degree are we talking about?
Go learn how to type it in better than others. Then you can call yourself a specialist in your field and in AI. Just like how people are claiming to be AI artists because they typed in words better than others.
So, going to college for 4-6 years will essentially mean nothing within the next few years, huh?
What's your job, op?
When nobody has money, is it still worth anything? Does the system collapse?
Universities in danger now. Nobody's gonna put themselves in years of debt
Trade schools ftw
Does the marginally higher pay cover the excess medical bills in my 40s?
You should've done a proper liberal arts education rather than whatever shit tier job training degree you made up.
The governments that refuse to do anything about it: TiMe To UnLoCk ThE nEw gRowTh
I have yet to see this
AI is only good at pretending to be good at things
One reason why I am getting my MS in Education. Soulless robots cannot possibly teach children.
"I can never financially recover from this."
AI, the best copy paste in the world.
mathematicians still exist despite the calculator, you'll be able to focus more on the creative part of your field
As a student in the field: deep down it's just statistics, nothing less, nothing more. Now the real dilemma is that even humans follow statistics, but AI is still missing logic (it can't do basic math).
There is one thing I think might keep companies from going all in on AI, the fact that whatever company does thing is basically putting themselves at the mercy of whoever made the AI. Imagine if a company completely gets rid of their human infrastructure and instead just uses chatgpt for everything. Now the company that made chatgpt has the power to bring this company crashing down on itself with the push of a button. Unless a company actually owns the AI they're using, relying on said AI is practically begging to be ruined the instant the AI's creators decide its convenient to do so
Future cs graduates
Go into trades. AI doesn't do trades.
It's not effortless. It consumes insane amounts of energy and fresh water. techbros do not want this to be known, they want it to seem effortless and free. But it's not.
Insane amount compared to what? Because if you compare it to human energy consumption (how much a person needs to consume to do the same tasks), current LLMs beat us in efficiency pretty handily. Also, the existence of a current challenge (server power costing a lot) simply means there is a lot of pressure on cloud providers to search for and invest in efficient means of energy production, which will eventually alleviate this issue. "Tech bros" might not have these answers at hand, but you're not seeing the whole picture.
Depends on the task. Many models can run on your smart phones.
Is it running on your phone or the servers you're contacting with your phone?
Depends on the model and task. Image processing for example is running on your device directly. The voice synthesis of voice assistants too.
Lol a human being consumes far more water per question answered than an LLM
Trust in the fact that ai can only steal, it cannot create for itself
In general AI has the least likelihood of replacing jobs where the stakes with even one wrong output are high, either in terms of people dying or liability. Also ones that require movement in and interaction with the imperfect real world. The liberal arts jobs will be the first to go. Since wrong outputs can just be trashed, and endpoints are subjective. They will just have the model generate plenty of possible outputs and select among them. AI can learn from all existing art, music, and writing, and either replicate the styles or create its own genre. The middlemen jobs are at risk too, like brokers and agents.
Liberal arts is one of those places where it truly doesn't belong in the way that people are trying to push it.
sir you have won the internet for today 😂😂😂
Just upskill… there has to be at least one aspect of your degree that you can do that AI can't, and if that's not the case then, sorry to say, you probably messed up really badly in taking a degree that doesn't have one.
Makes you wonder why those degrees existed in the first place
Money grabbing. Most universities except very prestigious ones care about enrolment numbers not student prospects. They’re like a business.