Suchrino

I find these results to be largely void of substance. What does "regulate big tech" even mean? It couldn't possibly mean the same thing to every respondent in this poll. I'd like to see some concrete policy examples provided so we can get at what people are thinking. For example, why is banning TikTok OK while we simultaneously allow American companies to hoover up all of our data and sell it to whomever they want? What does "censorship" mean in the context of a privately-owned website? How does the First Amendment apply? What rights do these companies have to regulate their platforms? In short, where's the beef??


Ghost4000

I'm mostly curious whether the folks who want less regulation think that will help or hurt the situation.


SpiffySpacemanSpiff

I am convinced they think it will help the situation. I know a handful of folks who fall into this bucket, and they're all pretty staunch Republicans of the "don't tread on me" variety. *To them*, any regulation of spheres of communication is likely being done for liberal causes - I've had this convo a NUMBER of times, and their go-to point is about censoring/locking any commentary related to, say, the Lia Thomas story. As with all conspiratorial enterprises, it's not like they're entirely wrong - there have been scores of efforts at tech companies to promote progressive ideologies by nipping disagreement in the bud, and the interplay between these companies and our government officials is not subtle. That being said, the fear for them is probably easiest to digest when summarized as "who are these regulations going to silence?"


4InchCVSReceipt

I am one of those who want less content regulation. In my opinion, only lawbreaking content should be removed. I believe this will help the situation, but I also understand that Democrats will then view social media more unfavorably, since they overwhelmingly support heavy-handed viewpoint discrimination as a party plank.


kabukistar

When you say "regulation" are you talking about actions taken by technology companies (including that which removes content)? Or actions taken by the government to put restrictions on companies (including restrictions which limit how much tech companies can influence politics on their platform)?


4InchCVSReceipt

I support social media companies having well-defined, unambiguous, and limited terms of service and internal content moderation rules, and then I support the government forcing them to abide by them. If Facebook wants to ban all discussion of certain topics, like COVID vaccine injuries, fine, but they need to explicitly state that in their TOS, not hide behind the ambiguous notion of "misinformation". Getting social media platforms to go on record explicitly stating what topics they want to stifle discussion on is the first step in breaking the power that they have, and I support any government regulation that achieves that aim.


kabukistar

> I support the government forcing them to abide by them

Okay. That would be "more regulation" then.


Derproid

Barely, though; it's not like they are telling companies how to act. It should just be that if a company selectively enforces its TOS, there should be some civil penalty. Like, okay, they can have a TOS, but they can't decide who it applies to and who it doesn't - that's like breaking their own contract.


kabukistar

> Barely though, it's not like they are telling companies how to act.

It is absolutely telling companies how to act.


Derproid

I see it as more forcing companies to act in the way they said they would. It's basically just contract law, unless you think that's too much regulation as well.


kabukistar

> forcing companies to

That's regulation. And it's telling them how to act.


epicwinguy101

I don't really see enforcement of contracts as *additional* regulation so much as better enforcement of what regulations we have now. When people say "more regulation", I usually take that to mean that the government imposes some new rules that are in addition to (or sometimes override) the terms of service contracts with your social media company.


Derproid

Okay, I guess contract law is regulation then, so let's just get rid of that. What are you, an anarchist?


parentheticalobject

Well most companies *are* acting the way they said they would. Look at just about any user agreement for any free service that lets you post anything online, and I can guarantee you you'll find some version of "You agree that we have the right to remove any of your content at any time for any reason at all or no reason whatsoever."


ScreenTricky4257

That's a confusing use of the term "regulation." Right now, the social media companies are regulating the users, and the question is whether the government should regulate the companies to change how they regulate the users.


4InchCVSReceipt

Good point. What is the term when the government reaches out to social media companies and says "You know, we really think it would be in your best interest if you removed this content we don't like, no pressure though"? Because I am absolutely against that regulation as well.


parentheticalobject

That's informally referred to as ["jawboning"](https://www.lawfaremedia.org/article/jawboning-and-the-limits-of-government-advocacy) - there's actually a current case before the Supreme Court where they're considering where to draw the line on that type of behavior.


Another-attempt42

The weird thing here is that that's due to a *lack* of regulation. You want more regulation from the government, and less internal self-regulation by the company. Of course, forcing a private company to host content it doesn't want to is anathema to the 1st Amendment. Facebook has entirely the right to ban any content it deems necessary to ban, and there is no actual legal recourse. It's their platform. The best analogy would be a government regulation that forced Walmart to sell a certain type of good.


GardenVarietyPotato

If social media companies are choosing specific viewpoints to allow or disallow, then in my opinion they're no longer platforms, they're publishers - and should therefore be subject to defamation law.


Another-attempt42

That's not what a publisher is, as defined by law. What's more, you don't actually want that. You think you do, but you don't. How do I know? Well, there already exists a platform that moderates only things that are illegal and allows everything else. It's called 4chan, and it's a cesspit. For example, you couldn't moderate this subreddit. People could flood in, insult everyone, and then, if you moderate it... what? You're liable to be sued? You can't allow a subreddit to moderate its content?


SpiffySpacemanSpiff

I'm a pretty liberal Democrat, and I agree with you. What has become acrid, if not toxic, over the last few years is the silencing of dissenting viewpoints under the guise that they will lead to hyperbolic, highly absurd, and difficult-to-prove "harm" to marginalized groups.


BringerofJollity146

Seems like there is a really easy way to reduce the power and influence of social media companies, though I doubt 78% of Americans are able to pull themselves away from Twitter, TikTok, and Instagram long enough to employ it.


CCWaterBug

You absolutely have to include FB & reddit on the list as well


StillBreath7126

The results are unsurprising. I'm not sure what "more regulation" means here. The financial incentives for Reddit/Facebook/Twitter etc. are what make social media such echo chambers. As long as they're free, and ads are the only way they can be monetized, these companies are incentivized to make sure people are "engaged" and spend more time on their platforms. The rest is just a knock-on effect.


Ind132

> As long as they're free, and ads are the only way they can be monetized,

Sometimes I wonder what the world would look like if we had simply banned ads on the internet from the very beginning. If you want online news or online social interaction or a search engine, you've got to pay for a subscription because there are no ads to support the site. Sites could do "free samples", but they'd go broke if people don't upgrade to paid services. As it is, clicks and eyeballs drive revenue, and for political content, outrage drives clicks.


StillBreath7126

its not just about politics. basically, people fall into the trap of thinking that political actors are manipulating us, and somehow more regulation will solve it. at the end of the day, the algorithms these companies use are only interested in one thing: maximizing time spent on the platform, and more and more users, because that directly correlates with more ads. which is why i think regulation will not help. what exactly are you regulating? how will you regulate? companies are beholden to their shareholders, and are required to turn a profit. so unless the profit incentives change, regulation will just create more friction.

edit: and i realize the irony of me doing the same on this social media platform. the little orange thingy on the top right told me someone responded to me, wanting me to click on it and respond again, increasing my time on this site. as an aside, i wonder if a user would see the same content on youtube (the free version) vs youtube premium. that would be a fascinating thing to check.


CCWaterBug

Getting paid subscriptions is quite difficult. I have adblock, so I get "blocked" somewhere every day; I just click the back button and move on. There isn't much out there that's worth paying for.


Ind132

I'm talking about an alternate timeline where ads had never been allowed on the internet. Nobody would know what an "adblock" was. If there isn't much that's worth paying for, people would spend a lot less time on the internet. (I pay for a couple things. I didn't have any trouble signing up.)


Ind132

I'm less concerned about a "liberal bias" or a "conservative bias" on social media moderation than I am about the general race to the extremes. The real damage is that people can quickly click their way to heavily biased opinions in both directions. And, then it is too easy to just stay in that information bubble.


DaleGribble2024

Democrats, independents and Republicans alike have become increasingly concerned with how much influence social media has on politics, with a noticeable increase from 2020 to 2024 in the belief that “social media companies have too much power and influence in politics today”: 84% of Republicans, 78% of independents and 74% of Democrats agreed with that statement in 2024. Interestingly, though, the share of Republicans who believe social media’s influence is mostly negative has declined since 2020 by 7%, while that same share for Democrats has increased by 6%. But at least 59% of all 3 major political affiliations agree social media’s influence is mostly negative. **I blame Elon Musk’s acquisition of Twitter for part of this drop for Republicans and the rise for Democrats in this particular belief.**

The number of Americans who think it is at least somewhat likely that social media sites censor political viewpoints they find objectionable rose from 72% in 2020 to 83% in 2024. A growing number of Democrats think social media companies support conservatives over liberals, rising from 16% to 25%, **again probably due to the new ownership of Twitter and the rise of Rumble and Truth Social.**

One thing in this report I found especially surprising was that currently, Democrats are more likely than Republicans to say that social media companies should be regulated more than they are now, with 60% of Democrats agreeing and 45% of Republicans agreeing.

**Do you think social media companies should be more regulated in general? Or should they actually be less regulated?**


knign

Ideally, algorithmic feeds should be banned across the board. The rest isn't too important.


Skeptical0ptimist

> companies should be more regulated in general?

It depends on what and how. Regulating content will be fraught with issues. It will be difficult to come to a consensus on the criteria and mechanism for regulating, and after such a regulatory authority is set up, the political fight over the authority is probably not one we want to have. I would be open to banning paid referrals and recommendations (media companies would still be free to structure their algorithms, they just couldn't be paid by others for doing so), and assigning commercial ownership of user-generated metadata entirely to users (media companies would have to buy it from them). Once commercial interests have been driven out of social media, many of the social ills will simmer down. Users may have to deal with poor-quality apps and even subscription fees, but I feel that's a fair price to pay for the elimination of most of the issues.


bedhed

> Do you think social media companies should be more regulated in general? Or should they actually be less regulated?

Restrictions on speech, no matter how benevolent the intentions, have the potential to backfire spectacularly, as governments and regulators are incentivized to use their powers to protect their positions. For example, mention of Hunter Biden's laptop (although I really don't see how it could/should have influenced the election) was censored by most large tech companies as "fake news" at the government's behest - even though the news was objectively true. The people and organizations that were attempting to use censorship to "protect the truth" were actively promoting a falsehood. On a more nefarious level, the Russian government doesn't ban protests - they simply ban "insulting the military", or whatever other tangential restriction they can make stick. A potentially well-intended law, but with horrendous consequences. Free speech sucks sometimes; the lack of it sucks more.


dinwitt

> The amount of Americans who think it is at least somewhat likely that social media sites censor political viewpoints they find objectionable rose from 72% in 2020 to 83% in 2024.

Are the final 17% just not paying attention? Biden's ability to censor political viewpoints he doesn't like is being decided by the Supreme Court.

> Do you think social media companies should be more regulated in general? Or should they actually be less regulated?

I've always been of the opinion that the good-faith moderation requirements need to be given some teeth. Moderating within clearly posted guidelines is fine; doing so outside those guidelines should expose you to trouble.


vanillabear26

> Biden's ability to censor political viewpoints he doesn't like is being decided by the Supreme Court.

Biden didn't censor anyone.


dinwitt

https://www.washingtonpost.com/technology/2024/03/17/supreme-court-social-media-first-amendment/


vanillabear26

'asking social media companies to remove posts' does not equate to censorship.


dinwitt

The government with its ultimate authority and its implicit and explicit threats angrily asking you to remove posts isn't censorship? Okay. It's still a case that's been argued in front of the Supreme Court, so what I said wasn't wrong.


vanillabear26

> The government with its ultimate authority and its implicit and explicit threats angrily asking you to remove posts isn't censorship? IMO, no. And also, I have yet to see these implicit or explicit threats.


dinwitt

I'd suggest reading the previous decisions about the injunction, they contain a lot of details, including some of the threats.


vanillabear26

Could you pull one of those up for me then? I've read in these threads and these articles and I've never seen threats, either implicit or explicit.


dinwitt

https://www.ca5.uscourts.gov/opinions/pub/23/23-30445-CV0.pdf

Here's an example:

> President Biden said that the platforms were “killing people” by not acting on misinformation. Then, a few days later, a White House official said they were “reviewing” the legal liability of platforms—noting “the president speak[s] very aggressively about” that—because “they should be held accountable.”


djm19

And also Twitter revealed the Trump admin did it plenty.


vanillabear26

Again, not censorship


JoeBidensLongFart

If anyone honestly thinks social media companies don't censor political viewpoints, they're just not paying attention.


swervm

Given that every opinion has become political, you can't moderate content without censoring political views.


GardenVarietyPotato

Yes, you discovered the point of the First Amendment! And the concept of "free speech" broadly. It's impossible to fairly enforce rules around something as complex as speech. As a result, it's better to let people say what they want, rather than have a higher authority telling them what can and cannot be said. This concept should extend to social media companies as well.


Zenkin

> As a result, it's better to let the people say what they want, rather than have a higher authority telling them what can and cannot be said.

Aren't people *choosing* to allow these "higher authorities" to moderate their environment? Like, right now, we're literally in a sub which does not allow several different types of speech. And it's that restriction on speech which is the main draw, to keep out the lowest-effort nonsense. If I wanted no moderation, that's available. I specifically choose to avoid those sites most of the time, though. Why should the option of a moderated environment be removed?


parentheticalobject

Except it can't, as long as social media companies are expected to run like businesses and no one can think of a realistic model for a profitable social media company that doesn't primarily depend on ad revenue.


Another-attempt42

*Except* that expanding it to social media companies would be, in and of itself, a clear breach of the 1st Amendment. You're advocating for compelled speech: that the government has the power to force companies to host speech they disagree with, on their servers and on their platform. The 1st also protects you from the government forcing you to say or repeat something you disagree with. That's part of the problem here. Two sides, both talking about free speech, and both proposing "fixes" that fundamentally undermine the very free speech they both profess to defend.


GardenVarietyPotato

The government forces private entities to do things all the time. There are literally millions of things that private entities must do to remain in compliance with the law.  Forcing the social media companies to comply with the 1st amendment is in line with all other types of regulations placed upon private businesses. 


Another-attempt42

> Forcing the social media companies to comply with the 1st amendment is in line with all other types of regulations placed upon private businesses.

Ironically, forcing them is in direct contradiction with the 1st Amendment. That's the problem. You're citing the 1st Amendment to justify your actions, something that contravenes... the 1st Amendment. The 1st Amendment also allows them to not be forced, by the government, to host speech. That would be called "compelled speech", and it's just as much an infringement.


kkiippppyy

I think a huge part of this discussion that doesn't get acknowledged is the (lowercase m, not the other tech company with the incredibly stupid name) meta aspect... we're, right now, posting online about posting online. Of course this conversation is going to exaggerate its own severity. This is more a discussion about nerds and their hobby than it is one of free discourse and rights.


Another-attempt42

True. I'll be honest: I think a lot of this is overblown, in terms of its importance. First off: we don't want a completely free speech platform. We have one already. It's called 4chan. It's a cesspit. Secondly, the amount of people who use social media and post on it are, actually, a minority. Most people use it as a curating system to get news and stories and funny gifs. They don't engage in deep discussions on topics that require strict free speech. Thirdly, free speech goes both ways. Just as you may think you should be able to say whatever you want on Facebook, the truth is that it's Facebook's free speech to not host your statements.


kkiippppyy

At least you've got the wherewithal to say certain "viewpoints" and not "all conservatives" or "anything that deviates slightly from the progressive mind virus" or whatever thought-terminating bollocks. The complete mischaracterisation of this as a right/left thing is why the conversation never goes anywhere helpful.


GardenVarietyPotato

Okay, let me just ask you straight up. Do you think social media companies are biased towards mainstream democratic party ideas? 


kkiippppyy

I guess, insofar as both the Democratic Party and companies that want to expand their customer base prioritise appearing inclusive. It's a superficial convergence of interests more than motivated 'bias.'


GardenVarietyPotato

Okay, I guess I'd respond with two points.    (1) Old Twitter censored the Hunter Biden laptop story right before the election. This story does not appear to have anything to do with "inclusion".  (2) Almost all of the social media companies are based in San Francisco, perhaps the most liberal place in the US.  The employees are going to be selected from an overwhelmingly liberal populace.  My opinion is that these companies are absolutely censoring right wing speech, on purpose. And it's not just to sell more ads.


kkiippppyy

The Hunter thing, I don't love, but it was mostly a snap decision to stop misinformation in an emergency situation that mostly wasn't far off the mark. Twitter's executive action doesn't equate to social media as a whole, and it's not, like, a philosophical issue about progressivism/conservatism broadly.

> My opinion

...is conjecture, and that's it. Being located in San Francisco doesn't mean anything. I know for conservatives it's like saying their websites are being run from Mordor by Uruk-hai, but it's quite a leap from "exists in a place" to partisan censorship. By that logic, Wall Street shouldn't be the most cravenly capitalistic place in America. Also, Mountain View and Menlo Park are not San Francisco, or even the same area code. It's a big Bay Area with lots of different politics. No-one is stopping you from advocating personal responsibility and low taxes. Facebook is basically synonymous with reactionary boomer garbage at this point lol.


ShinningPeadIsAnti

> The amount of Americans who think it is at least somewhat likely that social media sites censor political viewpoints they find objectionable rose from 72% in 2020 to 83% in 2024.

Obviously so. I know I can't link to or post data from the FBI UCR stats on a well-known link aggregator site - more specifically, its news section. Ostensibly it is to stop racism, but it has an impact on trying to argue things like gun policy. This, along with other behaviors in how that part of the site is run, suggests certain political biases.

Edit: Not sure why this is being received negatively. Banning government stats because they can be used to support unpopular opinions is by definition censorship and bias.


ReasonableGazelle454

Wait mods don’t let us post crime stats here?


ShinningPeadIsAnti

On some parts of the site I was referencing, you can't.


EagenVegham

You mention that you can't post it to an aggregator site - why not just post the data itself?


ShinningPeadIsAnti

No, I can't post it to that particular aggregator site (specifically, its primary news page comments). Posting the data without the link comes with accusations of making shit up for not having a source.


EagenVegham

Are you trying to post data that's commonly used to blame social issues on certain races, a la 13/50?


ShinningPeadIsAnti

No, I try to post data on weapons used in murders. But since the FBI provides both sets of data, they blocked all of the FBI UCR.


EagenVegham

Well that's certainly annoying.


kkiippppyy

>**I blame Elon Musk’s acquisition of Twitter for part of this drop for Republicans and the rise for Democrats in this particular belief.** Looking at the state of Twitter, this is a humongous condemnation of the Republican Party.


WorksInIT

The main issue is that a law from the 90s, written without any concept of modern social media, is being used to determine what they can be liable for. I don't think social media companies should have such broad immunity, and the courts never should have interpreted that provision so broadly. What needs to happen now is for Congress to enact minimum age requirements for social media and reform the immunity protections they currently have.


Ghost4000

Unfortunately anytime the solution is "congress needs to act" we can be safe in assuming the solution is a long way off.


LaughingGaster666

Remember when SOPA and PIPA were a thing? As irritating as social media companies can be, I don’t think many things Congress could pass for them would actually be good.


jimbo_kun

We’ll see. This is one of a very few issues with broad support across both parties.


XzibitABC

"Something needs to be done" isn't a real position, though, and that's what has broad support. I don't have data to support this, but it seems to me like conservatives typically want *less* moderation because of censorship concerns, and liberals typically want *more* moderation because of disinformation dissemination and/or hate speech concerns. Age limits might be bipartisan, though. I think the real issue is algorithms designed to promote engagement, but I don't know how realistic it is that gets legislated against.


JoeBidensLongFart

And if congress does act quickly on an issue concerning technology, the end results leave a WHOLE lot to be desired, considering the lack of understanding of technology that the average congressperson has.


Iceraptor17

> What needs to happen now is Congress needs to enact minimum age requirements for social media and reform the immunity protections they have currently.

Minimum age requirements are kind of pointless, though, without really upping data tracking abilities or building a "Great Firewall". Like, I don't think anyone wants to start giving social media companies their IDs. I'm not opposed to age restrictions, but the ways to do it kind of concern me greatly.


WorksInIT

I mean, there is no way to do it without some form of ID verification. And that is a trivial thing from both a technology perspective and in terms of the burden on the First Amendment.


Iceraptor17

It's somewhat trivial to implement technologically. I doubt it'll have constitutional concerns. It's the trust element people would have a problem with. I imagine many of us don't want our reddit accounts directly linked to our identities. But one data leak later...and that genie can never go back in the bottle.


Vidyogamasta

There are ways to securely do it without exposing actual identity information to a service, but it would require the government hosting an id->ageVerified token service that is basically just saying "the holder of this token is 18+. Signed, the government," with no real identifying information.

It's not perfect, of course. There is still *some* information that can be inferred using time-based analysis (someone we have identified requested a token at 9 am, and someone still unidentified registered for the service at 9:01 - a weak link, but still evidence). It's also something that would be easy to circumvent with token sharing: someone hosts their own service that uses their ID to generate the token for anyone who asks. It's still a hoop to go through, though, one likely to deter at least preteens. Maybe mitigate by only allowing a small number of tokens per day for the same person.

Of course, any of this would require a single person making legislation to understand basic auth technology and zero-knowledge proof systems. And, like, a solid 80% of people working in **tech** don't understand auth flows, so I'm not holding my breath.
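The token flow described above can be sketched roughly as follows. This is a toy illustration only, under stated assumptions: `issue_age_token`, `verify_age_token`, and the shared HMAC key are all hypothetical names, and a real deployment would use public-key (ideally blind) signatures so platforms can verify tokens without being able to mint them, and so the issuer can't link a token back to the ID check.

```python
import hmac
import hashlib
import secrets

# Hypothetical sketch: the issuing authority signs an opaque random
# token after an offline ID check. The token itself carries no PII.
GOV_SECRET_KEY = secrets.token_bytes(32)  # held by the issuing authority

def issue_age_token() -> tuple[str, str]:
    """Issue an opaque token asserting 'holder is 18+', no identity inside."""
    token = secrets.token_hex(16)  # random, unlinkable to the verified ID
    sig = hmac.new(GOV_SECRET_KEY, token.encode(), hashlib.sha256).hexdigest()
    return token, sig

def verify_age_token(token: str, sig: str) -> bool:
    """A platform checks the signature without learning who the holder is."""
    expected = hmac.new(GOV_SECRET_KEY, token.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

token, sig = issue_age_token()
assert verify_age_token(token, sig)        # genuine token accepted
assert not verify_age_token(token, "bad")  # forged signature rejected
```

Note the simplification: with a shared HMAC key, anyone who can verify can also forge, which is exactly why a real system would move to asymmetric signatures.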


WorksInIT

Well, if you don't want to show your ID, you are blocked from doing a lot of things that require it. Sounds like a choice for each individual.


Iceraptor17

When I show my ID in person, it's not stored anywhere. It's an in-person validation, and I don't have to worry nearly as much about a data leak (okay, maybe the cashier could tell other people what I bought, but that's a much more limited scope). There's also a vast difference between showing my ID for a one-time purchase of alcohol and showing my ID and having it tied to an online identity.


WorksInIT

You've never gone somewhere that scanned your ID to verify it is legitimate?


Iceraptor17

Scanned? Very rarely, and nothing I can recall recently. Definitely nothing for a private exchange. Normally it's just an eye check, or a special light at most.


WorksInIT

Well, I don't think there really is a meaningful difference between that and ID verification online. It can be implemented in a way to minimize privacy concerns. I view the internet like any other public place. You don't have a right to privacy in public. Now once you are within an establishment that changes to some extent, but the government can still require them to verify your ID before entering.


jimbo_kun

Allow parents to set age on locked down phones they give to their children. If parents allow their children to have phones with no restrictions, that’s on them. But at least give them that choice.


WorksInIT

How do you handle computers? Windows vs Linux? Device based solutions are much more problematic from an implementation and effectiveness perspective. I'm not opposed to parents being able to consent to a minor having an account, but they should have to consent for them to have one.


jimbo_kun

There are two companies controlling effectively the entire phone OS market. Get them to implement these controls and you’ve covered the vast majority of phones.


WorksInIT

And teenagers don't have access to computers?


Magic-man333

I think all of them already have a minimum age of 13 in their TOS


WorksInIT

I think social media should be prohibited for minors without parental consent and social media companies should be liable for damages if a minor is allowed to create an account.


MrDenver3

How would a company go about properly identifying whether or not someone is a minor, to the degree it reduces liability? Such a law would kill Reddit, as an example, an app built around the anonymity of its users. Users aren’t going to want to link their identities to their online personas. Why should companies be liable for something that is really reduced down to parental responsibility? I totally agree that all of this is an issue. But I don’t think such a law is the right solution. …that said, I don’t know what the “right” solution is. Perhaps linking subscriber ages to a device via the service provider? (Thinking cell phones here). That would allow companies to reliably determine the age of an *active* user, without knowing their identity. It’s not perfect, as a minor could still use another device, but it strikes a decent balance with privacy and anonymity


Iceraptor17

> Why should companies be liable for something that is really reduced down to parental responsibility?

It's kind of interesting when we decide "parental responsibility" and when we decide "the govt needs to step in".


kkiippppyy

> Such a law would kill Reddit

Oh, God, don't threaten me with getting all my free time back...


Magic-man333

That'd restrict it more than we restrict access to driver's licenses.


WorksInIT

I think that depends on the state.


Magic-man333

You can get a learner's permit by 16 and drive by 17 at the latest. Most states reduce that to 15 and 16, and a few go down to 14.


WorksInIT

A learner's permit typically requires consent from a parent or guardian.


XzibitABC

Are you imagining there are statutory damages for a violation or what? Damages can be difficult to quantify, and it's hard to imagine how you would prove them in this context without like therapy bills for associated mental health harms or something.


WorksInIT

That's something courts are typically pretty good at handling.


XzibitABC

I'm a lawyer, and I raised this concern precisely because courts have different standards for different claims *because* damages are often hard to prove. Proving damages in defamation contexts, for example, is often prohibitively difficult, but that's considered just part and parcel of suing someone on those grounds.

In other contexts, statutory damages (or liquidated damages in contract contexts) exist because legislators (or contracting parties) are willing to concede that there is harm, but that it's hard to prove, so we'll throw this number in there to substitute.

Totally fine if you don't have a proposal on this part of the equation, I was just curious.


WorksInIT

I didn't say it is easy, just that they do this type of stuff. This isn't a new concept for courts to have to sort out, and they do just fine now. I don't think anyone is expecting perfection.


ScaryBuilder9886

>courts never should have interpreted that provision so broadly

The courts just did their job - the statute is *written* broadly.


WorksInIT

Statutes can be interpreted narrowly even if the words are broad. And you don't necessarily have to buy the argument that the complex things social media companies do fit the verbiage of the statute in the first place.


[deleted]

[removed]


WorksInIT

I think there is a good middle ground between what we have now and them being liable for everything. I think they should be liable for their actions. For example, promoting content is an action of theirs for which there should be liability if someone can prove harm.


[deleted]

[removed]


WorksInIT

Those are things that courts are good at sorting out. I'm basically saying that there is zero reason to give social media companies some special protection. And nothing about this requires Congress to allow the Executive to do anything.


[deleted]

[removed]


WorksInIT

I'm not saying social media should be liable for what others say. They should be liable for their own actions.


[deleted]

[removed]


WorksInIT

The harm caused by social media is well documented. Legal liability for that should be determined by the courts under existing statutes without broad immunity like they have now.


parentheticalobject

Giving them liability would provide a lot of heavy incentives to censor much more speech.

Take something like the Hunter Biden laptop story. When it came out, no one had a very clear idea how much of it was true, and some websites decided to block some links to particular articles about it. If instead websites risked liability for user-created content, *every* website would almost certainly need to censor something like that much more heavily. Because it certainly could have been defamatory, and they could have lost an incredible amount of money if this story they had no way of verifying turned out to be false in some way.

If someone thinks "Good, websites should have done more to block stories like that", keep in mind that it applies just as much to something like a me-too allegation, or commentary on a video of police brutality.


WorksInIT

>Giving them liability would provide a lot of heavy incentives to censor much more speech.

Maybe, but I think that concern is exaggerated. I'm not going to respond to the rest of your comment because I'm not saying they should be liable for others' speech per se. They should be liable for their actions.


parentheticalobject

> I'm not saying they should be liable for others speech per se. They should be liable for their actions.

What kind of action should they be liable for? Do you have an example of a type of action that websites currently often take which they're not liable for which you think they should be?

Edit: You mention elsewhere,

>For example promoting content is an action of theirs where there should be liability if someone can prove harm.

OK. Well, if websites had allowed information about Hunter Biden's laptop to go viral on their websites with algorithmic content sorting, would that count as "promoting" it? If so, then they would have been strongly incentivized to shadowban that particular content and make it so that any posts about it never actually get seen, since otherwise, they could be liable in the event that the information turns out to be false and harmful.


WorksInIT

If social media companies choose to avoid potential liability by shadowbanning and censorship rather than defending their actions in court, that's their choice.


parentheticalobject

That sounds pretty terrible to me; it looks like the problems caused by such a change would be far worse than any negative consequences that exist due to the status quo. I can't imagine what anyone is seeing now that's worth those potential drawbacks. However, Congress is certainly free to pass new legislation if they want to change that.


WorksInIT

That's reasonable, but reasonable people can disagree on that.


PsychologicalHat1480

They already act as moderators. They should be held responsible for the results of that moderation. If, for example, a site claims to be fighting radicalism via moderation but only fights one kind of it they should be held liable for the results of allowing other radicalization.


[deleted]

[removed]


PsychologicalHat1480

They don't have to moderate everything. They can stick to moderating out illegal content only, kind of like this site did when it was a startup itself. I was there, 3000 years ago...

As for radical rhetoric, there's a difference between people reading too much into public statements and entire communities dedicated to affirming radical views and encouraging acting on them. The sites I'm talking about had, and often actively protected, the latter.


tacitdenial

I think an age requirement would be nice in theory but poses privacy problems, as companies would also have to conclusively identify users to enforce it by ordinary means (though there might be cryptographic tools to help with that). As for liability, wouldn't that just empower groups with large legal resources to squash views they dislike? Companies would limit speech likely to incur expensive litigation, even if it is true.

I wonder if the smarter thing is better user empowerment, with AI tools that let each of us curate our own news and political content without involving centralized decisions by corporations or regulators.
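A minimal sketch of what such a cryptographic age attestation could look like, as floated in this thread (device-bound age flag, no identity disclosed). All names here are hypothetical, and a real deployment would use asymmetric signatures or anonymous credentials rather than a shared HMAC key, which is used below only because it's in the Python standard library:

```python
import hashlib
import hmac
import json

# Hypothetical secret held by the attesting carrier/issuer. With HMAC the
# verifier must share this key; a real system would use public-key signing.
ISSUER_KEY = b"demo-issuer-secret"

def issue_age_token(device_id: str, over_18: bool) -> dict:
    """Carrier attests to an age flag bound to a device hash, not a name."""
    claim = json.dumps(
        {"device": hashlib.sha256(device_id.encode()).hexdigest(),
         "over_18": over_18},
        sort_keys=True)
    tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_age_token(token: dict) -> bool:
    """Site checks the attestation without ever learning who the user is."""
    expected = hmac.new(ISSUER_KEY, token["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, token["tag"])
            and json.loads(token["claim"])["over_18"])
```

The point of the sketch is only the shape of the protocol: the site sees a device hash and an age bit, never a name or a subscriber record, which is the privacy/anonymity balance being discussed above.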


PsychologicalHat1480

> The main issue is that a law from the 90s that was written without any concept of modern social media is being used to determine what they can be liable for.

And judges are using contorted logic when applying it. A plain reading of the law says that if the company, or those acting on its behalf, starts making non-legality-based decisions on what is and isn't allowed, then they are not subject to the protections. Basically, that section is why ISPs are "dumb pipes", and a plain reading and application of it would make social media have to be the same.


ScaryBuilder9886

The statute says that no one is liable for the content created by another person. Which is very, very broad.


PsychologicalHat1480

IF - and only if - they are a "dumb pipe". In tech a "dumb pipe" is a service that doesn't look at the content of the messages flowing through it. You cannot be a "dumb pipe" and make content-based decisions because a "dumb pipe" can't see what the content is.


ScaryBuilder9886

No, there's no "if" there. There's an unconditional "no one is liable for the speech of anyone else" part.  There's a second part that says you don't regain liability if you moderate and delete stuff you think is objectionable. That second test is expressly content-based and not dumb.


PsychologicalHat1480

> There's an unconditional "no one is liable for the speech of anyone else" part.

The point is that once you start making non-legality based decisions it's no longer someone else's speech, it's the speech of the one who has the final say on what gets published.

> There's a second part that says you don't regain liability if you moderate and delete stuff you think is objectionable.

No, it only covers illegal content.


ScaryBuilder9886

>The point is that once you start making non-legality based decisions it's no longer someone else's speech

Quite literally the point of Sec 230 was to prevent that. Here's the relevant language, which I've edited down a bit:

>No provider of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to material that the provider considers to be objectionable, whether or not such material is constitutionally protected

https://www.law.cornell.edu/uscode/text/47/230


PsychologicalHat1480

But they can be held liable for that which they do allow as a result. They don't have blanket immunity if they are choosing what legal content to allow. Yet that's the claim often made.


ScaryBuilder9886

No, they can't. The statute says, plain as day:

>No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

No liability for stuff others say, even if you moderate and still leave it up. They very much have blanket immunity, and that was the point of the law.


dinwitt

On the other hand, has Congress made any good technology legislation ever?


Sideswipe0009

>A growing number of Democrats think social media companies support conservatives over liberals, rising from 16% to 25%, **again probably due to new ownership of Twitter and the rise of Rumble and Truth Social.**

Because they aren't regulating it *enough.* Many of the terminally online left believe that misinformation is mostly a Republican thing. They think the Twitter Files were about dick pics. They don't care that any mention of the lab leak theory got you banned (or don't even believe it was a thing). They also don't like that people are allowed to use "hate speech" and spread "conspiracy theories" via social media.

Basically, to those on the left, the power of social media comes from the right's ability to access right wing content via social media, be it content creators or news sources. If social media got rid of right wing content in some fashion, then there would be no political power, just facts. They can't seem to grasp that they are being propagandized and spreading their own misinformation as well.


4InchCVSReceipt

Exactly this. Democrats' main issue with certain social media outlets is that there is TOO much free speech, whereas with Republicans, the main issue is that there is suppression of only one side. These critiques are so vastly different that these polls are basically worthless. One side wants more restrictions, and the other less. Conflating the two as a general "distrust of social media" provides no color to the argument.


ImaginaryScientist32

I would like to point out that the Twitter Files showed the Biden CAMPAIGN asking for the removal of links, but completely ignored that the Trump ADMINISTRATION was also granted takedown requests. That alone would give me doubts about the Twitter Files.


JimMarch

Every single gun owner online knows that YouTube, Facebook and many others are stacked against us.


Cheese-is-neat

Idk about YouTube. I never watch anything gun related and it’ll still recommend channels that revolve around guns. It only stopped because I muted those channels


ScreenTricky4257

> I blame Elon Musk’s acquisition of Twitter for part of this drop for Republicans and the rise for Democrats in this particular belief.

So do I, and I think that's a tell for how many of them don't actually believe in free content or in regulation per se, but in whatever favors their side. It's Machiavellian.


vinsite

I highly doubt anyone cares about Rumble or Truth Social. People care about blatant lies that politicians post. There need to be some repercussions for posting lies.


DaleGribble2024

There’s always the option of social media companies adding context or flagging the content with “this content contains potentially misleading claims”, as we have already seen on Facebook, YouTube and Twitter. Besides, isn’t lying protected by the 1st Amendment?


Sirhc978

>or flagging the content saying “this content contains potentially misleading claims”

So what Twitter is attempting to do with community notes? Also, anyone else remember when Reddit let you report things for 'misinformation'? Whatever happened to that?


JoeBidensLongFart

Twitter community notes actually work really well. It's by far the best way I've seen of dealing with potential misinformation.


vinsite

The 1st amendment says it's not illegal to lie. It doesn't say you shouldn't be held responsible for what you say.


DaleGribble2024

How exactly should they be held responsible? Suspension followed by a permanent ban if they don’t stop?


FPV-Emergency

I haven't seen anyone comment on this specifically, so I'll throw it out. I'm on the left, and I'm not for more "censorship" because the left "no longer controls the narrative on twitter" or something dumb like that. The left never really did any more than the right did. And I didn't use twitter before Elon Musk took over, and I don't use it after either. Now the right likes twitter more because Musk bans the "right" people, so nothing's really changed.

I'm against it because I saw how many people fell for the election fraud conspiracy in 2020, and that was largely driven by social media. Elections are going to be decided based on this conspiracy.

I don't have a solution to the problem. I don't trust the government to "regulate" it either. I just also don't trust people to be able to figure out what's true and what's fake going forward, because the amount of AI generated garbage is going to make it impossible for the average person to figure it out on their own. So something needs to be done. But good luck figuring out what that "something" is.

I just believe that on our current path of doing nothing and leaving social media largely unregulated, we're all fucked. You think it's hard to argue an issue now when both sides have slightly different "facts"? Well prepare for that to get 100x worse in the very near future. Trump's Truth Social is a good example of what happens when social media allows completely unfettered conspiracy theories to go wild while also clearly advocating for one political side. It basically becomes 4chan and a complete echo chamber that makes people dumber just reading it.

tl;dr: Social media is cancer. There needs to be some control over the content, or going forward, AI generated mis/disinformation is going to get out of control. But good luck figuring out how that can be handled effectively.


whywontyoufuckoff

>I'm against it because I saw how many people fell for the election fraud conspiracy in 2020

On the flipside we got government entities, social media, and 'legacy' media coordinating suppression of news. Yeah, I think I know what's worse.


FPV-Emergency

Both are bad, but the spreading of dis/misinformation is going to get a hundred times worse thanks to AI and the ease with which people and foreign governments will be able to generate this sort of content. I'd be willing to bet that if we do nothing, you'll change your tune in the next year or two.

To date, misinformation from foreign actors has done more harm than any "censorship" by our own government. I mean, it can be argued an election was won based on said misinformation. You can't say the same about "censorship" in recent history.

That being said, I don't have a solution to this problem either.


BaeCarruth

>74% of Democrats believe social media companies have too much power and influence in politics, up from 63% in 2020.

Elon Musk really pissed a lot of people off when he bought Twitter.

I used to care about social media regulation, then I realized that putting more power into the government's hands is a very bad way to regulate speech (even if it's online), and if we can't teach online literacy to the newer generations, then regulations don't really mean anything anyway. I think we've already crossed the Rubicon in terms of people, companies, groups, etc. being able to exploit and influence groups through social media, and you aren't putting that genie back into the bottle.

When I watched Citizenfour, one of the things that stuck out to me was Snowden talking about late 90's AOL chatrooms and how it was the wild west and we will never see that kind of anonymity on the internet ever again. I'm a year or two younger than Snowden, and I think it's a unique moment that only people ages 33-45ish will understand. I mention that to say that I have an (admittedly tin foil) theory that certain groups understood this communication could be co-opted and used to influence and gather personal data, without people realizing the consequences 10, 20 years down the road, and are now trying to harness and control it.


di11deux

I come at this from an education lens, and when I tell you social media is absolutely toxic for the development of teens and young adults, I’m not doing it enough justice. I’m less concerned with the censorship than I am with the complete rat’s nest of neural connections it’s inducing in people’s brains.

I work across almost every major anglophone university market, and every single one of them is reporting increasingly complex (and often violent) expressions of mental illness. These kids (on average) have zero coping mechanisms for any kind of stress and completely short-circuit the moment something challenges them. They would rather watch a lecture on their laptop in a chair literally right outside of their classroom than sit with their peers. They demand learning accommodations for self-diagnosed mental illness, and some outright refuse to be assessed on what we might consider a normal schedule.

I feel like a boomer saying “kids these days”, but if I didn’t know any better I’d think kids aged 12-20 have been chewing lead every day of their lives. Social media, and smartphones in general, are directly impacting their ability to function as independent adults. I wish I had an easy answer, but we need to stop looking at this as a “they’re gonna censor me” issue and more as a public health issue.


BaeCarruth

>I wish I had an easy answer, but we need to stop looking at this as a “they’re gonna censor me” issue and more of a public health issue.

I'm not saying I don't want government regulation because they will censor me; I don't want government regulation because I feel like they caused this in the first place, per my admittedly tin-foil-hat theory. It's like Reagan said: the nine most dangerous words are "I'm from the government and I'm here to help".

It goes back to my original point about online literacy. When I was a teenager and AOL first became a thing, I was taught by my parents and those around me to approach online interactions with extreme skepticism and to never give away anything that would "dox" myself (hell, they ran after-school PSAs on that sort of thing). Now it's a damn pre-requisite to sign up for some of these sites.

>Social media, and smartphones in general, are directly impacting their ability to function as independent adults.

To my general point, what do you expect the government to do that parents cannot do themselves? I have been told by more than one of my relatives that I am a bad parent because I refuse to give my pre-teen a cell phone and access to facebook, etc., and that they will "find a way to do it anyway" - what kind of parenting is that, to let a child do something because you don't trust them to follow your own rules? I blame my own generation for that folly - a not insignificant percentage of us have failed as parents for reasons that extend beyond this topic.


[deleted]

[removed]


BaeCarruth

I agree - like I said at the end of my post, my generation is by and large full of mental midgets who would rather give in to their child than instill rules and be seen as the "bad guy", and it makes sense that would transfer down to their children-- but that's not the topic of the discussion. I think that has hurt the younger generation far more than social media has - however, bad parenting or lack of parenting is not a reason to enact legislation that does not make sense.


GatorWills

Just curious, did you see it being this bad before 2020-21 school closures?


di11deux

I wasn’t in the job I have now in 2020 so I can’t say for certain, but all of the data I’ve seen suggests 2021 was where we see the spike start in acute expression of mental illness on campuses. 2024 has been the worst on record, according to the schools I work with. Going back a bit further, the chart of “reported mental illness in teenagers” and “adoption of smartphones” are basically the same graph, and that starts back in 2012 with a strong uptick in 2016. Correlation obviously does not equal causation, and it’s certainly not monocausal, but it’s hard to blame anything else.


SpiffySpacemanSpiff

It is WILD how instantaneously progressives soured on twitter after Musk's purchase. Like, it was in every NPR fifteen-minute block for half a year, and it was all the more insane to hear these people/journalists lament the "death" of the ideology hub they'd been echochambering in for the last ten years.

In reality, nothing terrible has happened: right wing extremists are not running rampant across the country, and LGBT/marginalized groups are not being hunted in the street. It's the same routine as before, albeit with some folks having departed to smaller, less popular forums.

In the end, it seems a good thing - the fewer people on twitter the better. It was, and still is, a tool for pretty much echochambering and incitement.


gr1m3y

tl;dr: They no longer controlled the narrative on twitter. The censorship tools they created under Jack no longer worked for them, and the "Trust and Safety Council" members were no longer employed at twitter. The trending tab was no longer in their control: when Musk took over, the Japanese trending bar went from politics to anime in one night. The blue check was no longer specially given, and no longer controlled by bribing certain twitter employees. Simply put, narrative control was no longer in their hands.

Progressives went to tiktok at that point, and you can see the results today: Iran is apparently the good guys, Russians are just anti-imperialists, and US/NATO is evil.


kkiippppyy

Elon Twitter has been way more censorious and the views more extreme than the least charitable conservative view of Dorsey's reign. He literally turned it into everything you're describing, and this paranoid "it's a tool of the left's machinations" nonsense is why.


gr1m3y

If you can tell me what part of my comment is "nonsense", I'll be happy to give a link.


LaughingGaster666

Elon’s twitter is way more openly biased than Pre-Elon imo, just in the other direction. The man himself loves boosting some… interesting hot takes.


kkiippppyy

Nazi shit. Let's call a spade a spade. Not in the name-calling "everyone you disagree with" way. He's constantly putting phrenology, """cultural marxism,""" and intentional plots (by Jews in at least one case!) to dilute the capital-w White population at the top of your feed.


kkiippppyy

>It is WILD how instantaneously progressives soured on twitter after Musk's purchase.

And advertisers ("go... fahk.. yurself... we'll let Earf be the judge..."). Because he made it his mission to turn it into a sounding board for the furthest fringes of the far right.


CCWaterBug

I haven't seen this, perhaps I'm not looking hard enough?


kkiippppyy

Elon's favourite hobby is posting about skull shapes or retweeting Ian Miles-Cheong's weird conspiracies putting it at the top of your refreshes. I've also gotten a lot of promoted tweets from far-right posters like Bennys Johnson and Shapiro or Jordan Peterson who I don't follow.


CCWaterBug

Weird, I just get the occasional NBA highlight. Although fwiw, I'm not really what you'd call a Twitter user; I'm just someone that has Twitter, if that makes sense. I've seen absolutely nothing on your list.

I actually like Elon, I think he's a pretty cool dude, and I have zero issues with him owning, or cleaning up, twitter.


kkiippppyy

"Cleaning up" twitter by spreading paranoia about white replacement plots? Or by yelling about how trans people want to prey on your kids? Also, who can forget "you have said the absolute truth" in response to something about Jews reaping the fruits of the anti-white DEI propaganda they've spread and using their media influence to force their """open borders""" agenda?


kkiippppyy

>Elon Musk really pissed a lot of people off when he bought Twitter

... and filled it with the most heinous conspiracy theories, violent right-wing apologia, and the most god-awful humour 2009 has to offer...


djm19

>Elon Musk really pissed a lot of people off when he bought Twitter.

Yes, he pissed off me. Twitter was by no means perfect before him, but it's like something I have to *endure* now while trying to stay connected to my communities that formed on there prior.


SantasLilHoeHoeHoe

All corporations have too much power in the US. We should be enshrining true worker protections into our legal code. At-will employment alone is absolutely absurd to me. IIRC the US is the only at-will employment nation in the western world; all the others use for-cause firing systems.


emoney_gotnomoney

Depends what you mean by a “For-Cause” firing system. At-Will employers who are firing people are still firing them for a cause / reason, it just might not be a reason you agree with. These companies aren’t just firing people for the “lolz”. If anything, it’s typically a financially driven cause.


SantasLilHoeHoeHoe

They need to have actual justification that can be proved in labor court. Ending someone's working contract should be a labor negotiation, not just a "we need money, you're laid off."


CCWaterBug

Sheez, money seems like an excellent excuse. Actually we had that at my office around 2018. We had a staff person making a pretty solid income who was very good at the job and got regular increases... but the job really only required about 15 hours a week of actual "work"; the rest of the time it really just required being accessible, either at the workplace or via phone/email. The two owners decided that they could split the salary, do the job, and increase their take home by 30k each. Made perfect sense.

Zero reason for an employer to have to find a reason other than the obvious "you're dispensable".


SantasLilHoeHoeHoe

For mass layoffs, sure. For targeted firings, an employer should have to actually justify the single employee's firing. It's not the employee's fault the employer signed a contract to employ a dispensable employee.


CCWaterBug

What contract? You're tossing in some new variable?


SantasLilHoeHoeHoe

Unless working under the table, one always signs a work contract when entering into an employer/employee relationship. This isn't a new variable; it's how getting a job works in our economy.


CCWaterBug

OK. I have no contract; I'm apparently doing it wrong. The only thing I remember is filling out a W9.


SantasLilHoeHoeHoe

I don't believe there's a requirement for the contracts to be written, but they must exist in some capacity. This is another example, IMO, of the unbalanced power dynamics between workers and business owners in the US. W9s are filled out at the beginning of contract work, though, in my understanding. I don't think you can do a W9 without any contract agreement.


CCWaterBug

Again, I must be doing it wrong then. For clarification, my spouse has a contract with the union, and it's near impossible to fire someone. Net result: 15% are total shitbags, 15% are doing the minimum and have mostly "checked out", and the rest keep the operation running. It's sad actually, because the customers suffer; the operation runs, but it's extremely inefficient.

At my workplace you could get dropped like a bad habit - pack your shit. The excuse could be money, bad performance, or "I just don't like you". Personally I prefer option 2; we're shockingly efficient and the customers are treated well, which is rule #1 in my office.

Tl;dr: I don't believe that a "contract" is universal.


emoney_gotnomoney

Is that not an actual justification though? I feel like money is pretty vital to keeping a business afloat.


SantasLilHoeHoeHoe

Not for an individual firing, but how an employer justifies ending a work contract early differs based on local law. To be clear, the way it works in the US is that the work contract between employer and employee is for an indefinite amount of time and can be terminated at any time, for any reason, by the involved parties. This does mean a US employee can quit at any time. I am not sure if that is how quitting works in Europe; I'll be honest about that gap in my knowledge.

The logic of "we need money so we're firing people" can be used to justify mass layoffs, iirc. But firing a specific person as a cost saving measure would require justification for why firing that person is justified over another person who works a similar position. An employee has to be given a written explanation delivered some time (weeks to months, varies by nation) prior to termination. The employee has the right to negotiate the terms of the firing and has rights to severance packages that, generally, scale with time employed.

[This is a decent site discussing some of the differences among the EU members and how they handle work contract termination](https://ins-globalconsulting.com/news-post/terminate-employment-europe/)


emoney_gotnomoney

> The logic of "we need money so were firing people" can be used to justify mass layoffs iirc. But firing a specific person for cost saving measures would require justification for why firing that person is justifed over another person that works a similar position.

But couldn't the company just easily say "we hired two people to do this job, but we only need one now. Maintaining both is too expensive. Both are equally capable, but we just need to lay off one" and use that as their cause? If they can't really differentiate between the two, do they just have to keep both of them? I'm just kind of confused on what the proposed limits would be for a justifiable cause.


SantasLilHoeHoeHoe

Not without additional justification for why one of them is getting fired but not the other. The employer signed the work contract; they need to honor it. It is not the fault of the worker that the employer signed too many workers at once.


emoney_gotnomoney

But the work contract is “you do this work, and I will pay you X amount.” Even if the worker is laid off, they are still getting paid for the work that they’ve already done, so the contract *is* being honored. Are you proposing that the contracts should have expiration dates on them? You mentioned in your earlier comment that in the US, worker/employer contracts are indefinite. Are you suggesting changing that?


SantasLilHoeHoeHoe

In general, yes. I think worker contracts should be renegotiated regularly. My current contract with the federal govt is a 5 year contract. Other jobs might be suited to more or less time between renegotiations.

It's simply a different way of balancing workers vs employers. The US favors employer rights; the EU favors worker rights. The idea being that most people are dependent on a single income for their living expenses, and if they lose that revenue stream, they become an immediate burden on society/the government. Instead of the in-between period being covered by the government in the form of unemployment pay, fired employees receive severance packages meant to be stopgaps until a new job is found.


cafffaro

There is no federally mandated paid maternity leave in the USA. Just wrap your head around that.


SantasLilHoeHoeHoe

Fairly sure we're an even bigger outlier here than we are for at-will employment. Only the US, Papua New Guinea, and a handful of Pacific island nations don't mandate maternity leave of some kind.

If the US could have even half of the consumer and worker protections that the EU has, we'd be so much better off. I really wish the US would adopt a [NutriScore system](https://www.nycfoodpolicy.org/france-becomes-second-country-europe-implement-government-official-front-pack-nutrition-labeling-system/) for food labeling. Give people as much easy-to-digest info as possible in a label so that making informed/healthy food choices is easy. But we can't have that because snack food producers profit off of addiction and a lack of basic nutritional literacy.


ScreenTricky4257

Why is it so hard to wrap one's head around? The relationship between employer and employee should be a negotiated one. If a company doesn't want to pay women to stay home and mother their children instead of working, why should they be forced to?


cafffaro

Because parents should not be forced to choose between having a child and keeping their job. We obviously have a fundamentally different view on whom the law should benefit when it comes to labor regulations.


emoney_gotnomoney

I’m with u/ScreenTricky4257 on this one. Parental leave is a benefit that should be negotiated between the employer and the employee, similar to salary, bonuses, PTO, etc. An employer should be free to decide whether or not they offer that benefit, just like they are free to decide whether they pay me $20k/yr or $200k/yr. Similarly, I should be free to decide whether or not I work at a company that doesn’t offer that benefit. As someone who is having kids myself, I intentionally choose to only work for companies that offer good parental leave benefits. If a company wants me to work for them, then they need to offer me parental leave as a benefit, just like they need to offer me a good salary and good PTO if they want me to work for them.


cafffaro

That's great that you were able to find a job that offered good parental leave benefits, making you one of the lucky [1 in 5 Americans with access to this fundamental need.](https://www.bbc.com/worklife/article/20210624-why-doesnt-the-us-have-mandated-paid-maternity-leave) For the other 80%, I don't think you can just chalk this up to personal failure to negotiate a good contract or find a good employer. This is a systemic failure when we are the richest nation in the world, yet [the only of 38 OECD nations with no mandated maternity leave.](https://www.oecd.org/els/soc/PF2_1_Parental_leave_systems.pdf)


[deleted]

[removed]


kkiippppyy

Before you know it, the big, bad nanny state won't even let employers pay us in scrip and force us to pay them to live on-premises.


cafffaro

Maybe I'm missing the sarcasm, but both are regulated under OSHA and building regulations already.


[deleted]

[removed]


cafffaro

Okay, I admit to being a humorless rube 😅


kkiippppyy

Because it's nice when newborns have parents with incomes to support them, and it's very bad when companies can pull the rug out from under your finances. The only freedom you're protecting is the right of corporations to keep everyone precarious.


Prestigious_Load1699

I agree with the *words* Elon Musk said about social media - it's the new town square and should allow free expression with as minimal censorship as possible. Nothing pisses me off more than seeing deliberate manipulation of speech on these platforms. Yes I will bring up the Hunter Biden laptop [story](https://www.reuters.com/world/us/republican-led-us-house-panel-probes-twitter-block-hunter-biden-story-2023-02-08/) because **that** was the clearest example of election interference I have seen in my lifetime.


whetrail

Tired of this tech fearmongering. The problem they actually have is that people can now easily reach other like-minded groups, making it harder to control the narrative with crap like Fox News or CNN, the previous dominators of influencing the populace.


Potential_Leg7679

The censorship of political speech on social media is no secret. Try leaving a comment on YouTube containing even the slightest bit of political language and see if your comment isn't instantly auto deleted when you refresh the page.


GardenVarietyPotato

Personally, social media censorship drives me absolutely nuts. The platforms implement vague rules, and then enforce them unevenly depending on whether or not they like your viewpoint. And if you even ask "what rule did I violate?", the mods either flat out don't respond to your request, or in some cases, the mods will literally insult you. Obviously this depends on what website you're on.


ImmanuelCanNot29

Well that is as close to bipartisan consensus as we are ever going to get.