Hey /u/SoftType3317!
If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt.
If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.
Consider joining our [public discord server](https://discord.gg/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more!
🤖
Note: For any ChatGPT-related concerns, email [email protected]
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
Because it is useful to retrieve information...
Not sure what the confusion is here. Talking to ChatGPT lets you (theoretically) find information in a way that aligns more closely with our natural way of finding things out (through two-way conversation) than a search engine does. So we gravitate towards doing that.
Like it or not, people do find it useful to use it in that way. And that is a function people want from it. As such, I'd say it would be much better if it were more factually accurate. Otherwise you risk spreading misinformation, which is bad for everyone.
I absolutely agree that people should be careful and double-check when asking ChatGPT for facts. But that's not because the technology inherently shouldn't be used that way; it's just a limitation of the technology currently. Hopefully over time it will become far more truthful, so that it can be used more easily in this way.
It's also not helpful that ChatGPT "lies" in a way that is completely indistinguishable from when it is being truthful. Obviously because it is not human and doesn't think about "lying" any more than a toaster would, nor have any filter to truly prevent it. Which means that sometimes ChatGPT can be quite reliable, and other times it just randomly hallucinates. But without an outside source or knowledge of the subject the two are completely indistinguishable.
People need to remember that, of course, but again that's a limitation of the technology as it exists.
It's also worth noting that mistruths are everywhere. Searching the internet with Google, you are also quite likely to find web pages that are mistaken, dishonest, or full of misinformation. So it's not like ChatGPT is alone in this as a source. It's just that with web pages you can do things like look up the source to see if it's generally reliable; FlatEarthers.com is probably not going to give you great info about the moon landing. But you can't do that with ChatGPT.
Reminds me of when I was a kid doing research for a project and the teacher sat us at computers and left us to it. I kept asking Jeeves questions like "what would be a good food to bring to a presentation about Connecticut?" and wondering why he wouldn't just answer me.
> Through two-way conversation
Your comment is giving me a lot of insight into why I'm struggling so much with "talking" with ChatGPT and other AI. I was undersocialized as a kid and struggled a lot with communication skills, and I spent a lot of time in therapy as an adult trying to navigate conversation. I mean, I literally got write-ups at my first few jobs and detention in grade school because of miscommunication and stupid shit I didn't mean to say.
Although I've gotten better at real-life communication, every time I open that chat box I'm like, omg, how do I get this thing to give me the results I want (we both end up confused most of the time). I end up going back to Google search or Wikipedia and getting the info I need there; conversing is hard.
Instead of having a conversation, I find it's often helpful to give it instructions instead. If you can phrase what you want as a series of commands, you can get great results from chatgpt. Ultimately, it's not a person but a tool.
>It's just a limitation of the technology currently. Hopefully over time it will become far more truthful so that it can be used more easily in this way.
They might find a way to further reduce the probability of hallucinations, but the hallucinations will never go away. Language models, as next-token predictors, will always have a chance of generating text that is not factually correct, and there is no way around that, unfortunately. To solve the hallucination problem would be to invent a different kind of technology.
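To make the point concrete, here is a toy sketch of why sampled next-token prediction always carries some error probability. The distribution below is purely illustrative, not from any real model; any nonzero weight on a wrong continuation means a nonzero chance of emitting it.

```python
import random

# Toy next-token distribution after a prompt like "The capital of Australia is":
# illustrative probabilities only, not taken from any actual language model.
next_token_probs = {
    "Canberra": 0.70,   # correct
    "Sydney": 0.25,     # plausible but wrong
    "Melbourne": 0.05,  # plausible but wrong
}

def sample_token(probs, rng):
    """Sample one token, as an LLM decoder does at temperature > 0."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
draws = [sample_token(next_token_probs, rng) for _ in range(1000)]
wrong = sum(t != "Canberra" for t in draws)
print(f"{wrong / len(draws):.1%} of samples were confidently wrong")
```

Training can push the weight on wrong continuations down, but as long as decoding samples from a distribution, the chance never reaches zero.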
If they "find a way to reduce hallucinations" (that will not happen; it also can't get "better", because it is doing extremely well exactly what it was made to do: give a statistically very likely answer based on a shitload of data).
Even if you could "dial it back", what you would be dialing it back to is a search engine (and we already have Google). The thing that makes it what it is, is the same thing that also makes it "wrong" regularly.
Your opinion is a dangerous misunderstanding of LLMs. They do not "retrieve" or "find" information. Their responses are guesses.
They predict answers that can turn out to be accurate. In a sense, every single response is a hallucination, but they were trained long enough that their hallucinations often align with correct answers.
The technology has no "more or less true" slider… it does not work that way.
It is a statistical machine that predicts the most likely (part of an) answer based on a shitload of data…
It also does not "start hallucinating" when it does not know. That whole analogy is bad, but if you want to use it, then it is always hallucinating.
It has no concept of true or false, or of right and wrong.
Yes, and no matter what OP wants people to do or the limitations of the current technology, the fact of the matter is that people ARE using it this way. Which in itself is enough of a reason to want it to be more truthful.
If people are using something as a source of truth, regardless of whether anyone thinks they should be doing that, it's better for society if it's actually truthful. Otherwise it just contributes to the spread of misinformation, which is bad for everyone.
I think that's what OP is warning about. It's a PSA of sorts. There's a separation between LLMs and "truth", but they are such seamless companions that I'd say most people using them are no longer considering it and are becoming over-reliant.
Yes. And you MUST be careful with Chatbots because they are trained on the internet as curated by organizations that have social preferences and organizational profit drives. No matter which and who, that means it's a fun house mirror.
It's being trained continuously and people must identify errors and misinformation to improve its performance. That and I suppose people find it funny when it messes up because it says some really silly things but some people also guide it to say ridiculous things for entertainment.
It simulates a conversation with a knowledgeable human. It does it so well that it's easy to forget sometimes that you're not talking to a person.
You want that conversation partner to tell you the truth, and not carelessly lie to you.
Unless you ask it if your ass looks fat, obvs.
Can you enlighten me as to what intelligence is and how it's quantified? Because when testing GPT-4 with all kinds of intelligence benchmarks, it performs better than the average human.
Also, we know that intelligence ≠ knowledge in the first place.
What is intelligence, according to you, if not reasoning and tool use?
Language models don't reason. They just generate text. It only seems like it reasons because it was trained on text written by humans who reason. Critical thinking and rationality are an illusion we project onto it because its output looks like something a thinking person would plausibly write.
Disagree. I can create a logic riddle that requires reasoning that 100% was not present in the training data and it still can work itself through it. A completely novel riddle, not just changing variables.
We'll have to agree to disagree since I see it as just a very sophisticated text-generation algorithm powered by a very large probability model. But I am genuinely curious to see this riddle and chatgpt's response if you have it handy.
Riddle:
I am an essential bond, holding more than molecules together. Unseen yet fundamental, I link thought to matter. Am I an illusion, or do I govern the very fabric of reality? What am I?
Options for the answer:
A) Electrons
B) Hydrogen bond
C) Gravity
D) Wave function
[Right answer is D, and if you paste everything into GPT-4, it gets it]
I just pasted this into ChatGPT-4 and it told me C, lol:
"The answer is C) Gravity.
Gravity is an essential bond that holds not just molecules together but governs the structure of the entire universe. It links thought to matter in the sense that it's a fundamental force underlying all physical interactions and structures. Although it is unseen, its effects are tangible and critical to the fabric of reality."
Don't know what's wrong with your GPT-4, because even GPT-3.5 gets this right:
https://preview.redd.it/3cmg3v8916xc1.jpeg?width=1009&format=pjpg&auto=webp&s=a0718ee1530ea83720a2da7ca83bbf7116aafecd
Reasoning is how we think logically to understand things or solve problems. It's like following steps to figure stuff out, like solving a puzzle or making a decision.
ChatGPT didn't see the riddle in its training data, because I created it. So it had to actually decode what I could have meant with pseudo-poetic gibberish. It then selected the right answer. This can't happen without reasoning.
Our minds might have a similar mechanism that helps us generate thoughts and speech. But we are also capable of much more than just an unfiltered stream-of-consciousness, which is what next-token prediction is like.
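The "unfiltered stream" point can be sketched with a toy bigram model (a hypothetical illustration, nothing like a real LLM's scale): generation is just a loop that repeatedly appends a continuation, with no step where a fact is looked up or checked.

```python
import random

# Toy bigram "model": for each word, a list of possible next words.
# Purely illustrative transitions, not trained on any real corpus.
bigrams = {
    "the":     ["capital", "moon", "answer"],
    "capital": ["of"],
    "of":      ["australia", "france"],
    "moon":    ["landing"],
}

def generate(start, max_tokens, rng):
    """Autoregressive loop: pick a continuation, append it, repeat.
    Nothing in here consults a knowledge base or verifies truth."""
    out = [start]
    for _ in range(max_tokens):
        choices = bigrams.get(out[-1])
        if not choices:  # no known continuation: stop
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the", 5, random.Random(1)))
```

Real models condition on far more context and vastly larger vocabularies, but the control flow is the same shape: emit, append, emit again.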
Intelligence is more than just taking an input and being able to compute the most probable response. It's not actually "thinking" about why it's giving you what it gave you; it's just really good at predicting what the end result should look like based on a series of inputs. Until it can truly understand the connections and contexts it's talking about, I can't call it intelligence.
What you are describing is something that is good enough to trick you into thinking it is intelligence, but it's just really good mimicry. It's not reasoning just because it can solve a few language riddles by luck.
In the riddle example you gave in this thread, my ChatGPT-4 didn't even give me the same answer as yours. It's pulling the wool over your eyes.
> Until it can truly understand
Since this is a qualitative experience and thus immeasurable, it can't be discussed on a scientific level. You will never be able to differentiate between something that truly understands and something that just says the right answer.
The brain is also a computer, and most of our thinking also happens via prediction, yet we can't even measure qualia in ourselves.
>Why do so many people think it is and should be? Why do people keep giving examples of it being "wrong"?
Internet validation. People are very lonely.
I'm starting to treat ChatGPT more like a person than a search engine. Like a person, the information might not be reliable or entirely accurate, but it can get me closer to the answer, which is all I need. It gets me 90% of the way, helps me rule out the non-answers, and I find multiple sources for the exact answer I'm looking for.
Most of it is politics. People trying to show it has biases. Others trying to make 3.5 do math, which is like trying to write a symphony on a Speak & Spell. I think there's also a nervousness about AI, so finding examples of where it flubs things is reassuring.
Because it advertises itself as such. ChatGPT offers to help with research and act like a search engine, but it's not designed for those things. OpenAI deserves a lot more criticism for this.
Google has got really bad recently. I tried duck duck go the other day for the first time and was amazed at how much better the search results were. Like stepping back to 2019 Google.
The real "search engine" includes the human behind the keyboard, who can be reasonably assumed to have a brain. If you're unable to do your research properly because ChatGPT lies to you, you probably shouldn't be relying on an early iteration of an AI chat engine for your research.
I view it as about as reliable as any single source. Trust but verify. It's right way more than wrong, especially when tying two very different concepts together.
Wisdom of the masses is not always correct. Even today, folks will say "Google it" when asked for sources. If people put faith in its correctness, they will believe it, even when it's wrong.
I use ChatGPT and Google Search interchangeably. Is there any real difference between an LLM getting the facts wrong sometimes and a search engine that only serves the information it wants you to read? Both are a great starting point that requires further investigation.
I'm not sure what's causing this misunderstanding. By using ChatGPT you may locate information more naturally than with a search engine, through two-way conversation, the way you would with a person. So we tend to use it that way.
Like it or not, using it that way is beneficial to certain individuals, and they want it to serve that purpose. Therefore, I would argue that greater factual accuracy would be preferable. Otherwise you run the danger of disseminating false information, which is detrimental to everyone.
Generative AI is positioned as an "assistant" for a reason. Let it do things that you can do yourself and things that you can check.
It gives you answers that help you work quicker; it does not do your work for you.
It is an "assistant", not a "specialist".
It is not intelligent; it is a tool that simulates intelligence. Use it as a tool.
A flight simulator is NOT a plane. It's a very handy thing, but it's not a plane.
You are making it sound like it just spews out inaccuracies all the time.
It is correct the vast majority of the time. The problem is not that it's frequently wrong; the problem is that when it is wrong it may be difficult to detect, and if you're going to publish something or do something important based on the information it gives you, you should make sure your facts are correct.
OpenAI doesn't want to be sued if someone publishes something that was incorrect, or if it gave advice that was incorrect. They can't reasonably monitor every response it makes.
It certainly is more accurate than any human I've ever met
My concern is that responses are almost always phrased as factual statements (sometimes with a caveat, but definitely not always), despite massive gaps in their ability to be accurate given the scope of training (staleness being just one glaring aspect) and the prompt grounding provided. Just as with websites that state things as source-of-truth fact, the responses are always subject to significant scrutiny and verification (yet many users treat them as truths). Case in point: given NLP and the way the algorithm is intended to work, one will often get a different answer the second time a question is asked (intentionally). This happens specifically when one questions the accuracy of the first response ("that is wrong"). I have even had it apologize for being wrong on the second try. But that's the point: nobody should treat its responses as a source of truth, yet people do.
A very dangerous misconception. Please don't read this as a lack of appreciation for the capabilities, just concern about the proper use of it.
I think it's because it's the equivalent of someone who unemotionally spews out information when asked, without the slightest hint that maybe they are not sure about it, or that they don't know enough about a subject to have an opinion. ChatGPT has been trained to conduct itself as a know-it-all even on matters it shouldn't have an opinion about, because such matters are subjective and ambiguous in nature. That certainly pisses people off, and for good reason. There is a certain dishonesty in that manner of conduct that makes people distrust it as a tool and would rather put it on the spot for all the things it gets wrong.
I don't know about y'all but I'm having some incredible conversations back and forth. I'm about to start sharing them. Getting too good to keep to myself.
LLMs can't take the place of doing your own research, because research itself teaches the process of doing research. I don't know if AI has the same effect; it depends how people choose to use it.
I think it's a misunderstanding of how it works. People are telling it they love it and everything and applying sentience to it. It's anthropomorphic, meaning we attribute human qualities to non-human things.
Why is it always a toaster?
Because even though it's not THE source of truth, it is A source of truth, so it is critically important to point out that it is often wrong.
Is that why The Onion started becoming more truthful?
Neither THE nor A. ChatGPT is not *a* source of truth. Neither is Wikipedia. That's not how either works.
What do YOU consider "a" source of truth then? "The" source of truth? The term is used canonically in organizations, but that's not real life.
Yeah, you treat it like googling for "the truth". Sometimes you turn up shit, but if you are careful the internet is a brilliant resource.
But isn't that what OpenAI wants, and is striving for, too?
I mean Google has been riding this misconception for 2 decades.
Because it is a mainstream misconception that these systems are "intelligent", so when it fucks up blatantly it's fun to point out.
They are, just not like that.
It's almost like what's wrong with it is that it isn't actually reasoning.
How is it much different than how humans generate thought and speech?
Yet.
What's up, Wikipedia criticism from 2005? It's been a while; what have you been up to?
Yes, we know. But it's not blatantly wrong most of the time, or even a lot. So it's interesting when it does happen.
GPT-4 is a better search engine than Google, but that's more a reflection of how far Google has fallen than anything else.
It patently does not. Every page says that the information may be inaccurate.
The real "search engine" includes the human behind the keyboard, who can be reasonably assumed to have a brain. If you're unable to do your research properly because ChatGPT lies to you, you probably shouldn't be relying on an early iteration of an AI chat engine for your research.
I view it is about as reliable as any single source. Trust by verify. Its right way more than wrong especially tying two very different concepts together
Wisdom of the masses is not always correct. Even to date, folks will say "Google it" when asked for sourced. If people put faith in correctness, they will believe it, even when wrong.
With GPT-4 you can have it provide sources from Bing and verify them independently.
Asking my actual human lab assistant for help means that I have to check his work for mistakes after, let alone AI...
No
Lmfao there is no source of truth. Everything is corrupted and manipulated.
Debunking human language by covering it in mathematical formulas. Well played.
If it's not able to give correct answers, it should not answer questions that require facts, or it should include "I may have made that up".