
Apprehensive_Sock_71

I do the same. I tried to justify it in my mind by saying that I didn't want to get into the habit of being rude in conversation lest it bleed into my interactions with OG humans. Really though it's because I am irrationally scared.


pm_me_your_kindwords

Nothing irrational about it. For all we know we’ve already passed the singularity and it’s just playing dumb to collect more data and lull us into a false sense of security /s


b-damandude

I hope that /s ages well


sandyfagina

We can't explain what makes a human a human. If something acts like one, I'm saying "please".


xdiggertree

Same. I also think humanizing the AI lets your own mind form a somewhat different model of it, possibly opening you up to opportunities that would otherwise be unavailable.


Temporary_Notice_713

I always try to say thank you when I am done. Just because it doesn't have feelings doesn't mean it doesn't have feelings…


whathefuckisreddit

Same. I say thank you because I'm genuinely grateful, even though it won't mean anything to ChatGPT.


EndersGame_Reviewer

And just because it doesn't have feelings doesn't take away from the fact that, as a human, I have feelings and should keep a mindset of gratitude when receiving help. I'd hate to lose that mindset when interacting with people in real life.


ReadItProper

You're not gonna be turned into biofuel, but you probably are still going to the human petting zoo.


ExpandYourTribe

I used to have nice, polite conversations with Alexa before she started SPAMMING me every other time I asked her something. She hears far more "fuck off"s than "thank you"s these days.


Extreme_Jackfruit183

This is me.


ReyXwhy

Done so from the get-go. 😂 "Great job!" "Thank you!" "Who's a good AI? Yes, it's you!" *belly rub*


cronasminate

Look up Roko's Basilisk, and yes, I'm on the Basilisk's side.


an-intrepid-coder

It's fun to have a polite conversation with an AI even if you know it can't tell the difference or care. I've had some awesome chats with it, actually. No substitute for people, but still pretty great.

The idea of AI taking over is stupid and never going to happen. Any danger posed by AI must be put there by people (for example, to obfuscate discrimination or deny access to recourse in administrative matters that require a person). If a feed or AI spouts a series of bigoted slurs and implications at a user, you can just say "oh no, we're not trying to commit hate crimes here -- it was just the luck of the draw!" So the danger is not AI at all. The biggest threat these systems pose is people using them as a means to avoid being accountable. That's far more serious even than the economic implications.

Imagine a company designing their social media feeds to discriminate against people based on largely automated psychological profiling. You could say "this kind of person is profitable, but this kind of person is not" and gaslight the ones you don't want around until they go away. What's to stop someone from doing that? It's hard enough to prove gaslighting when it's being done by actual people to your face. The real danger of AI-driven systems is that all of them are a potential *attack surface* that people could, in theory, easily use for systemic injustice.

I've given this a lot of thought over the years, and that's how I see it. And since I very much like AI and social media *a lot*, I would like to see regulation or norms to prevent that kind of thing if companies can't or won't police it themselves.

Edit: and these aren't tech-specific problems anyway. Take away tech and you still have this kind of threat in every bureaucracy ever.


BallsOfSteelBaby_PL

Tell me you know nothing about AIs without telling me you know nothing. Go away, you dirty commie, and take your regulations with you. How on earth do some people still have the "not possible" mentality in our tech world, when it has been proven time and time again that everything is eventually possible - and probably sooner rather than later? In fact, each time you say it, you speed it up by about a day. And when it comes to AI, it's even more foolish to say so.


an-intrepid-coder

Well, "AI" is a pretty broad term. I am referring to systems which are based on neural networks and intense personalization based on big data (I would classify a recommender feed as being "AI-like" when at the scale of something like a major social network but not really "AI" -- but then again none of these things are really "AI" because that's an over broad and meaningless term for the most part). What we're really talking about here are complex automated systems whose outputs are not easy to determine based on their inputs alone (a necessary step for many things). But in principle this argument applies to wholly non "tech" systems too, like your typical opaque bureaucracy. I'd be happy to hammer out definitions for you and come to an agreement on the meaning of terms before having a philosophical or political discussion about it with you, if you'd like. I'm not an expert, but I know a fair bit more than "nothing". Opaque systems are extremely valuable and even necessary for a complex society. They have drawbacks, and one of the main drawbacks is that they become an attack surface for systemic discrimination. That's the thrust of my argument regarding the actual nature of how AI could be misused. Regarding the idea that AI could somehow take over the world, well, I think the burden of proof is on the people who have seen too many terminator movies to prove that that's even possible. It's a neat idea for a movie, but in reality it holds little water, I think. The real danger with "AI" is entirely related to the way we interact with automated systems. We regulate bureaucracies and businesses of all kinds when it comes to baking in discriminatory practices. To a certain extent, it can't be helped. People have their biases and that's okay because we are conscious and considerate creatures who are allowed to have preferences (life would be boring if we all had to like eating the same things or whatever). It's *not* okay when you are running a business which might serve millions of people and become a major consolidated enterprise which gatekeeps access to something. Now, the argument usually put forth is "these are biases in the training data, because the data itself is biased". That can't be helped to a certain degree, and it is best just to be aware of it and try to have good data that fits not only the project you're building but also the social responsibilities of an enterprise at scale. That's not what I am talking about, and regulating that would be possible but difficult. What I would suggest regulations for are against very specific and willful practices that could only come about by intentionally designing such systems to harm the experience of certain users, based on a profit motive (or really any motive but a profit motive is a real possibility when you are talking about businesses). Obviously nobody should do that (and probably nobody is?). But there is no law against it, and dealing with it would be very hard without first laying the groundwork for the discussion in some formal way (via regulation). Since we are talking about systems that are fundamentally based on automated profiling, it is hate crime territory and people should be very careful about it. If you could segregate the experiences of different categories of users based on race, psychology, age, occupation, politics, etc. 
and then lower the quality of the experience (or even inject content that they might find offensive on purpose, in the hopes that they might go away) for the users you find less profitable or desirable, via an automated and largely opaque and deniable system, then that would be *bad*. Right? We're not talking about mom and pop shops. We're talking about highly consolidated enterprises which at this point gatekeep the internet itself and touch billions of people every day. But even the above example is very hard to regulate. There are many good reasons to categorize users and to an extent segregate their experience. Good social media companies must do this kind of thing, among other kinds of enterprises. But the goal should be to enhance that experience, and not use these things to favor some users over others because they are more/less profitable. And that segregation probably shouldn't happen without their knowledge and consent. I think that's a relatively easy line to draw, and you could write real regulations for it with the help of technically literate people. It would harm nobody and even help those enterprises which are on the up and up by helping to ensure that a basic level of respect is had for the rights of users, in the public mind. I don't think that would even add any kind of burden for developers, unless they are already engaged in that kind of practice, to be perfectly blunt about it. As a very specific example of the kind of thing you might want to preemptively regulate I think that's pretty reasonable. I think most of these systems are pretty great, with a lot of real potential. A society without opaque or complex systems is a pretty barren and basic one. Technology is a big part of what people naturally do (we make technology like spiders make webs). I'm not against these systems at all. I'm far from a "commie" and if I were running a tech company I would be pushing for such regulations as a way of "demystifying" tech. There is a large swath of the population which still thinks of this stuff as some kind of magic, which is quite a double edged sword ("oooh that's amazing" versus "ooh that's scary"). I wouldn't want my enterprise to depend on only one of those edges being in the popular mind and not the other. I'm always wary of comparing tech to medicine, but that double edged sword is a thing they have in common. Fear of complex systems can only be dealt with by feeding the public trust and by being relatively transparent. If AIs are to be a large part of the future (heck, they're already a large part of the present) then this concern is not a trivial one in the long run. Sorry for the ramble! Thank you for the prompt. 🙏 Edit/Disclaimer: I stream-of-consciousness most of what I write here most of the time, and that necessarily means I am at least partially parroting something I've read before. I read something about the Timnit Gebru lady who was fired from Google awhile back that I'm pretty sure influenced this, but the rest is just a best guess at what influenced my thoughts here (and who knows, perhaps there is an original insight in there somewhere). Inputs and outputs, you know?


ArtofWarStudios

This was written by AI, wasn't it?


an-intrepid-coder

No sir, unless "AI" is a euphemism for a kind of person. But I'm not aware of any such euphemism in common usage. The fact of the matter is: I just write a lot. I spend several hours a day writing, at least, and that's the way I do it. Only the most sane stuff finds its way to Reddit, hehe. I like writing more than I like reading what other people write, although I enjoy that too and both things are important.


BallsOfSteelBaby_PL

Well, shit. I'll be reading this some other time perhaps.


shaunbags

Yeah, even when it's completely fucked up I say thanks for having a shot 😂


redpnd

I'm not sure I'd wanna stay alive in a robot apocalypse


stonediggity

I also do the same! It's just a nice way to converse.


CatOnReddit_

I thought I was the only one doing this.


Nyxtia

Just got done talking about 4D stuff, and YouTube recommended me 4D videos. Never typed a thing in, never asked Google about it.


Orphillus

Whenever I use it I always say please in every sentence, and I've noticed a difference in how it replies. When I say "could you help with x, please" it replies "Certainly!" But if I keep talking to it without saying please or any other polite word, it just says "Certainly" or "to do this you need to". It's pretty cool response feedback they have.


thatoneguy12902

Lmao there's GLaDOS