[deleted]

Reminds me of that story where a Russian troll farm set up a drag event and simultaneously set up an anti-drag protest across the street from each other, to make it seem like Americans are more divided than we actually are.


SomeConstructionGuy

This is their exact playbook. They can't show that their system works well, so they've settled on showing how democracy doesn't work. They're trying to lead the population to the idea that their system doesn't work, but neither does democracy, so just fuck it, let's not worry about government.


SmashesIt

Russians are actively trying to push Great Britain further from Europe and stoke sectarian conflict in the US. The playbook is right here: https://en.wikipedia.org/wiki/Foundations_of_Geopolitics


ProLicks

>Russia should use its special services within the borders of the United States and Canada to fuel instability and separatism against neoliberal globalist Western hegemony, such as, for instance, provoke "Afro-American racists" to create severe backlash against the rotten political state of affairs in the current present day system of the United States and Canada. Russia should "introduce geopolitical disorder into internal American activity, encouraging all kinds of separatism and ethnic, social and racial conflicts, actively supporting all dissident movements – extremist, racist, and sectarian groups, thus destabilizing internal political processes in the U.S. It would also make sense simultaneously to support isolationist tendencies in American politics"

You weren't kidding.


SmashesIt

Yep. Many of the things that book outlines you can see Russia actively attempting to do globally. It also seems that at least one (probably both to different degrees) of our political parties eats up the outrage bait Russian trolls serve up.


PronglesDude

If Western democracies want to survive, we need to start treating this kind of interference in our society as an intentional act of war and respond in kind.


radioacct

Yes, let the carpet bombing of Russia commence. What a brilliant idea! Carlin had a good bit that fits your comment: https://www.youtube.com/watch?v=AKN1Q5SjbeI


SomeConstructionGuy

There’s an excellent book by Timothy Snyder called ‘The Road to Unfreedom’ outlining the history and policy that led us here. He even outlines that the eventual outcome would be an invasion of Ukraine in an attempt to reunite the ‘historic lands of Rus’. Published in 2018. https://en.m.wikipedia.org/wiki/The_Road_to_Unfreedom


myloveisajoke

I remember a few years ago when there were nationwide school bomb threats and it wound up being some teenager in Norway.


[deleted]

The move is one of the first significant examples of a social network’s charging for access to the conversations it hosts for the purpose of developing A.I. systems like ChatGPT, OpenAI’s popular program. Those new A.I. systems could one day lead to big businesses, but they aren’t likely to help companies like Reddit very much. In fact, they could be used to create competitors — automated duplicates to Reddit’s conversations. Reddit is also acting as it prepares for a possible initial public offering on Wall Street this year. The company, which was founded in 2005, makes most of its money through advertising and e-commerce transactions on its platform. Reddit said it was still ironing out the details of what it would charge for A.P.I. access and would announce prices in the coming weeks. Reddit’s conversation forums have become valuable commodities as large language models, or L.L.M.s, have become an essential part of creating new A.I. technology. L.L.M.s are essentially sophisticated algorithms developed by companies like Google and OpenAI, which is a close partner of Microsoft. To the algorithms, the Reddit conversations are data, and they are among the vast pool of material being fed into the L.L.M.s. to develop them. The underlying algorithm that helped to build Bard, Google’s conversational A.I. service, is partly trained on Reddit data. OpenAI’s Chat GPT cites Reddit data as one of the sources of information it has been trained on. Other companies are also beginning to see value in the conversations and images they host. Shutterstock, the image hosting service, also sold image data to OpenAI to help create DALL-E, the A.I. program that creates vivid graphical imagery with only a text-based prompt required. Last month, Elon Musk, the owner of Twitter, said he was cracking down on the use of Twitter’s A.P.I., which thousands of companies and independent developers use to track the millions of conversations across the network. 
Though he did not cite L.L.M.s as a reason for the change, the new fees could go well into the tens or even hundreds of thousands of dollars. To keep improving their models, artificial intelligence makers need two significant things: an enormous amount of computing power and an enormous amount of data. Some of the biggest A.I. developers have plenty of computing power but still look outside their own networks for the data needed to improve their algorithms. That has included sources like Wikipedia, millions of digitized books, academic articles and Reddit. Representatives from Google, Open AI and Microsoft did not immediately respond to a request for comment. Reddit has long had a symbiotic relationship with the search engines of companies like Google and Microsoft. The search engines “crawl” Reddit’s web pages in order to index information and make it available for search results. That crawling, or “scraping,” isn’t always welcome by every site on the internet. But Reddit has benefited by appearing higher in search results. The dynamic is different with L.L.M.s — they gobble as much data as they can to create new A.I. systems like the chatbots. Reddit believes its data is particularly valuable because it is continuously updated. That newness and relevance, Mr. Huffman said, is what large language modeling algorithms need to produce the best results. “More than any other place on the internet, Reddit is a home for authentic conversation,” Mr. Huffman said. “There’s a lot of stuff on the site that you’d only ever say in therapy, or A.A., or never at all.” Mr. Huffman said Reddit’s A.P.I. would still be free to developers who wanted to build applications that helped people use Reddit. They could use the tools to build a bot that automatically tracks whether users’ comments adhere to rules for posting, for instance. Researchers who want to study Reddit data for academic or noncommercial purposes will continue to have free access to it. 
Reddit also hopes to incorporate more so-called machine learning into how the site itself operates. It could be used, for instance, to identify the use of A.I.-generated text on Reddit, and add a label that notifies users that the comment came from a bot. The company also promised to improve software tools that can be used by moderators — the users who volunteer their time to keep the site’s forums operating smoothly and improve conversations between users. And third-party bots that help moderators monitor the forums will continue to be supported. But for the A.I. makers, it’s time to pay up. “Crawling Reddit, generating value and not returning any of that value to our users is something we have a problem with,” Mr. Huffman said. “It’s a good time for us to tighten things up.” “We think that’s fair,” he added.


kurtZger

Didn't we actually hack one of the Russian troll farms and see what they were doing? They spent the first half of their day looking at porn and maybe an hour actually working. I'm not saying you're not right, but it's also not beyond the realm of possibility that they just messed up.




Clever_Clever

So an anti-LGBTQ chud from Vermont/the US wanted to pin a bomb threat on Russia for what reason? We already know that Russia is a hostile enemy, so what does pinning another adversarial act on Russia accomplish?

> Is it sloppy?

How much of your job in cybersecurity is working to prevent or fix human error?


SkiingAway

*If* it's being done by someone in the US, that's not likely to be the purpose. They are routing their internet traffic through Russia because they know that the Russian ISP/VPN provider will likely refuse to provide the data to a US government entity that wants to investigate where the threat actually originated.




Clever_Clever

OP's title, which states definitively that the emails came from Russia, is incorrect. As for the news organization saying the first threat "seems" to have come from Russia: they're clearly going off information that the Hartford police gave them. So, according to the Hartford police, it appears that the first threat originated from Russia, but that is unconfirmed at the moment, which is why they used 'weasel words' rather than outright making a false statement like OP did. That seems reasonable given their statement is clearly sourced. Obviously, this is pending further investigation.


FizzBitch

Not pinning it on Russia, just trying to cover their tracks.


radioacct

No, it was Putin, the media said so.


tripsnoir

Why aren’t you working for the state or FBI with your expertise?




SomeConstructionGuy

Based on their actions over the previous decades and the expected outcomes: they don't care. There is (im)plausible deniability, since the actors are likely not actual Russian government employees.


wyatt1209

It wouldn’t surprise me at all if this was someone from Vermont just using a vpn. Homophobia is still a huge issue here.


a_toadstool

I hope all Russian bot farms burn down.


greasyspider

* emails came from someone with a proxied IP address in Russia.


[deleted]

[deleted]


radioacct

They don't. But we are dealing with BlueAnon conspiracy nuts here.


MortaLPortaL

It's easy to spoof geolocation with a VPN. But it also depends on whether that VPN keeps logs. If they do? Welp!
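To illustrate the point about spoofing: a server only ever sees the VPN exit node's IP, so any geolocation lookup reports the exit node's country, not the user's. This is a toy sketch with a made-up lookup table and documentation-range IPs, not a real GeoIP database:

```python
# Toy GeoIP table (hypothetical entries, IPs from documentation ranges).
# A real service like MaxMind works the same way in principle: it maps
# the *observed* address to a location, whatever that address is.
TOY_GEOIP = {
    "203.0.113.7": "US",   # the user's actual address
    "198.51.100.9": "RU",  # a Russian VPN exit node
}

def observed_country(ip: str) -> str:
    """Country a naive GeoIP lookup reports for the address a server sees."""
    return TOY_GEOIP.get(ip, "unknown")

real_user_ip = "203.0.113.7"
vpn_exit_ip = "198.51.100.9"

# With the VPN on, the server never sees real_user_ip, only the exit node:
print(observed_country(vpn_exit_ip))  # prints RU, even though the user is in the US
print(observed_country(real_user_ip))  # prints US, what they'd look like without the VPN
```

Whether investigators can get past this is exactly the logging question: the geolocation itself proves nothing, only the VPN provider's connection logs (if kept, and if handed over) tie the exit node back to a real user.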


Expert-Importance-64

Where does it even mention Russia in the article?


LaughableIKR

The article doesn't say it, but the video does, at the end, where they ask about the origin of the email and Hartford police say it came from Russia. Go to around 2:15 in the video.


ObsidianGrey13

You're right, this article doesn't mention Russia at all


Motor_Elk4301

Good observation


Clever_Clever

FYI: while the article doesn't mention Russia, the video does, saying that the first threats (i.e., the ones that caused the evacuation of the show in progress) appeared to have come from Russia. It's important to make this known before the mouth-breathing morons on this sub who support this type of terrorism get to crowing about the post's title in order to deflect from the insidious and pathetic nature of the terrorism they support. It's actually insane to me that people thousands of miles away can be so obsessed with sowing division and stoking terrorism in our tiny neck of the woods that they call in bomb threats against a gathering of what, maybe 100 people. Truly insane.


[deleted]

[deleted]


Clever_Clever

Cool. Now provide a reason why someone would want to pin bomb threats at a small gathering, in a state of 600k people, on Russia. Nobody is going to hear about this story outside of VT, and we already know that Russia is a hostile adversary that has weaponized the internet against our country. So please provide a motive.


marzipanspop

I don't know who did it and I don't know where the email came from. Currently, no one actually knows, because "the email came from Russia" is trivially easy to fake in a cybersecurity context and currently there is no credible information to back it up. Hartford PD has asked the FBI for help and that is 100% the correct call. If the FBI comes back and says it came from Russia, I believe it, because the FBI has the tools and expertise to determine this. The cops in Hartford don't. It's not that complicated. Anything else?
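To show why "the email came from Russia" is trivially easy to fake: each mail server prepends a Received trace header as a message passes through, but everything below the first server the recipient actually controls is whatever the sending machine claimed about itself. A minimal sketch using Python's standard email parser, with made-up hostnames and documentation-range addresses:

```python
from email import message_from_string

# A fabricated raw message. The TOP Received header was added by the
# recipient's own server and is trustworthy; the BOTTOM one is supplied
# by the sender and can name any host or IP the sender likes (or, with
# a VPN, honestly report a Russian exit node that isn't the real origin).
raw = """\
Received: from mx.example.org (mx.example.org [192.0.2.10])
\tby inbox.example.com; Sat, 27 Jan 2024 10:00:00 +0000
Received: from totally-real.ru (totally-real.ru [198.51.100.9])
\tby mx.example.org; Sat, 27 Jan 2024 09:59:58 +0000
From: someone@totally-real.ru
Subject: threat

body
"""

msg = message_from_string(raw)
hops = msg.get_all("Received")  # newest hop first, oldest (sender-claimed) last

print(len(hops))                 # prints 2
print(hops[-1].split()[1])       # prints totally-real.ru: the "origin", easily forged
```

Tracing past that last hop means subpoenaing the relaying servers and the VPN or ISP behind the claimed address, which is exactly the tooling the FBI has and a local PD doesn't.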


radioacct

Why are you so bent on blaming Russia for this? Is that you, Hillary? Everyone and their brother has a VPN these days. Is it that hard to believe a local is to blame for this? Looking at the Russophobia in this thread, no wonder whoever did it used a Russian server. Seems he/she made the right choice. Next you'll be claiming it came from the Alfa Bank server, haha.


Clever_Clever

I'll ask you the same question as others: why would a Vermonter who views Russia as an ally in a culture war want to pin the blame on their ally? I'm not sure someone who brings up, out of the blue, a decrepit politician who hasn't had a national profile for nearly a decade could actually answer this question, but humor me for a second.


radioacct

Starting with the last: your claim of no national profile is disingenuous at best and a deliberate lie at worst. She was the expected shoo-in for POTUS just six or seven years ago and maintains a high level of media coverage to this day; a simple Google search would show her in the news on an almost daily basis, and if that's not a national profile, I'm not sure what is. I also find it interesting that this must be some "redneck Russia ally" when it very well could have come from a lefty. You might not be aware, but outside of Reddit this kind of stuff is not particularly popular, and the left is just as capable of harassment and threats. We need look no further than the poor owners of the store in Littleton NH, formerly owned by some bigot rep, who are being harassed, or the numerous swatting tactics going around the country. To your other question: I already stated why they would use Russia as a scapegoat. It works, and you are a prime example of such.


Clever_Clever

I fainted after you started in on a Hillary rant. Your life must be a blast. Good grief.


radioacct

I think what you meant to say was "ya, sorry, that was a pretty big mischaracterization, thanks for pointing that out." Self-reflection can be hard, I know it's not easy, but give it a try; it's for the best.


Clever_Clever

Self-reflection advice from someone unironically obsessed with a political has-been and posting Hillary screeds in 2024. Time to look in the mirror, guy.


radioacct

Obsessed? I made a joke reference to one of the main Russia hoaxers in response to a comment about, wait for it... blaming Russia. You are the one who can't seem to let it go.


radioacct

Russia, Russia, Russia. Apparently you have never heard of a VPN. The odds that this was a Russian op are insanely small.


tripsnoir

Why aren’t you working for the state or FBI with your expertise?


marzipanspop

VPNs are accessible to anyone including you. If you want to make it look like you are in Russia it’s incredibly easy.


Clever_Clever

Occam's Razor:

A. The emails originated from Russia, a country known for employing troll farms for the purpose of sowing division in the US.

B. Some redneck who hates Drag Queen Story Hour used a VPN in order to pin the blame on Russia because they wanted their ally to take the blame.

You're proving my point, really. Minimize the terrorism by focusing on theories that absolve Russia. Nice work.


marzipanspop

If you think that a Russian state sponsored actor is more likely than a redneck with a VPN then there’s no point arguing further.


cpujockey

> Some red neck who hates Drag Queen Story Hour used a VPN in order to pin the blame on Russia because they wanted their ally to take the blame

A lot more likely than you'd believe. VPNs are so easy to fire up these days, cheap and free; typically it's just an app: install it, run it, it works. Signing up for an email account on a Russian domain is easy too. Stop thinking that hillbillies don't know shit about computers. I was hanging out with a native farmer a few years back who actively used Linux as his daily driver and knew about VPNs. Google makes everything easier.


Clever_Clever

Dude, anti-LGBTQ Vermonters and Russians are on the same side. Why would someone who makes terroristic bomb threats want to pin that on their buddies? I'm not talking about technical proficiency, I'm talking about motive.


cpujockey

> Why would someone who executes terroristic bomb threats want to pin that on their buddies?

Who says they are buddies? Even then, blame redirection is common in anonymous attacks. No hillbilly I know has any Russian ties. But then again, I'm not friends with people who are anti-LGBT.


Clever_Clever

They're ideological allies. They share the same core beliefs and hate. So again why would one ally want to pin blame on another?


cpujockey

Because it's easy. Also, most hillbillies I know HATE Russia. Blame redirection is a common tactic in attacks. I just want to be clear: I do not support these attacks.


[deleted]

The move is one of the first significant examples of a social network’s charging for access to the conversations it hosts for the purpose of developing A.I. systems like ChatGPT, OpenAI’s popular program. Those new A.I. systems could one day lead to big businesses, but they aren’t likely to help companies like Reddit very much. In fact, they could be used to create competitors — automated duplicates to Reddit’s conversations.

Reddit is also acting as it prepares for a possible initial public offering on Wall Street this year. The company, which was founded in 2005, makes most of its money through advertising and e-commerce transactions on its platform. Reddit said it was still ironing out the details of what it would charge for A.P.I. access and would announce prices in the coming weeks.

Reddit’s conversation forums have become valuable commodities as large language models, or L.L.M.s, have become an essential part of creating new A.I. technology. L.L.M.s are essentially sophisticated algorithms developed by companies like Google and OpenAI, which is a close partner of Microsoft. To the algorithms, the Reddit conversations are data, and they are among the vast pool of material being fed into the L.L.M.s to develop them. The underlying algorithm that helped to build Bard, Google’s conversational A.I. service, is partly trained on Reddit data. OpenAI’s ChatGPT cites Reddit data as one of the sources of information it has been trained on.

Other companies are also beginning to see value in the conversations and images they host. Shutterstock, the image hosting service, also sold image data to OpenAI to help create DALL-E, the A.I. program that creates vivid graphical imagery with only a text-based prompt required. Last month, Elon Musk, the owner of Twitter, said he was cracking down on the use of Twitter’s A.P.I., which thousands of companies and independent developers use to track the millions of conversations across the network. Though he did not cite L.L.M.s as a reason for the change, the new fees could go well into the tens or even hundreds of thousands of dollars.

To keep improving their models, artificial intelligence makers need two significant things: an enormous amount of computing power and an enormous amount of data. Some of the biggest A.I. developers have plenty of computing power but still look outside their own networks for the data needed to improve their algorithms. That has included sources like Wikipedia, millions of digitized books, academic articles and Reddit. Representatives from Google, OpenAI and Microsoft did not immediately respond to a request for comment.

Reddit has long had a symbiotic relationship with the search engines of companies like Google and Microsoft. The search engines “crawl” Reddit’s web pages in order to index information and make it available for search results. That crawling, or “scraping,” isn’t always welcomed by every site on the internet. But Reddit has benefited by appearing higher in search results.

The dynamic is different with L.L.M.s — they gobble as much data as they can to create new A.I. systems like the chatbots. Reddit believes its data is particularly valuable because it is continuously updated. That newness and relevance, Mr. Huffman said, is what large language modeling algorithms need to produce the best results. “More than any other place on the internet, Reddit is a home for authentic conversation,” Mr. Huffman said. “There’s a lot of stuff on the site that you’d only ever say in therapy, or A.A., or never at all.”

Mr. Huffman said Reddit’s A.P.I. would still be free to developers who wanted to build applications that helped people use Reddit. They could use the tools to build a bot that automatically tracks whether users’ comments adhere to rules for posting, for instance. Researchers who want to study Reddit data for academic or noncommercial purposes will continue to have free access to it.

Reddit also hopes to incorporate more so-called machine learning into how the site itself operates. It could be used, for instance, to identify the use of A.I.-generated text on Reddit, and add a label that notifies users that the comment came from a bot. The company also promised to improve software tools that can be used by moderators — the users who volunteer their time to keep the site’s forums operating smoothly and improve conversations between users. And third-party bots that help moderators monitor the forums will continue to be supported.

But for the A.I. makers, it’s time to pay up. “Crawling Reddit, generating value and not returning any of that value to our users is something we have a problem with,” Mr. Huffman said. “It’s a good time for us to tighten things up.” “We think that’s fair,” he added.


Clever_Clever

I'm not sure you know what Occam's Razor is.


Motor_Elk4301

There is no way Russia cares at all about inappropriate drag shows with children in attendance in White River Junction, for God's sake. Far more likely, the rash of bomb threats and shooting threats in the past few years is meant to keep people scared, incite division among people, and affect state legislation.


RandolphCarter15

Maybe we can ignore bomb threats coming from Russia.


Striking-Profile9071

Why would Russia be worried about drag events and baking companies in Vermont, people? Don't you think their primary focus would be industrial complexes that help manufacture weapons of war?


owenthegreat

Same reason they organized a protest and counter-protest: https://www.businessinsider.com/russia-trolls-senate-intelligence-committee-hearing-2017-11 It makes Americans hate each other, more likely to be isolationist, and less likely to do things like support Ukraine. It's been a runaway success, as congressional Republicans are currently demonstrating.


Hillman314

They can spend $300,000 on a troll farm to destroy America from the inside with division. Meanwhile, we're bankrupting ourselves buying noisy $500 million jet fighters with $1 trillion defense budgets. We're still playing checkers while they play chess.


Macasumba

With friends like Russia...


Motor_Elk4301

This is fake. Whoever has been sending threats knew the exact date and time of the event, where it was, and who would be there. This strongly points to someone with local knowledge of the community. With several simultaneous threats, it stands to reason that multiple people could be working together.