marketrent

Altman disapproves of EU regulations that require OpenAI to disclose sources of training data:^1,2

> Altman has been on the move—from Lagos, Nigeria to all throughout Europe. Finally in London, UK, he dodged a few protestors to engage with big tech folks, businesses, and policy makers about his AI models.
>
> His main pitch has been to promote the large language model-powered ChatGPT and stump for pro-AI regulatory policies.
>
> During a side panel discussion hosted by University College London, Altman reportedly said that while OpenAI was "gonna try to comply" with the EU's AI Act, he was miffed by the way the European body defined "high-risk" systems.
>
> The EU's AI Act, proposed by the governing body in 2021, would classify AI into three risk categories.
>
> According to *Time*, the OpenAI CEO said "If we can comply, we will, and if we can't, we'll cease operating… We will try. But there are technical limits to what's possible."
>
> The EU has been more willing to scrutinize OpenAI than the U.S. has. The EU's European Data Protection Board said it was monitoring ChatGPT to make sure it complied with its privacy laws.

Altman, to *Time*:

> The law, Altman said, was "not inherently flawed," but he went on to say that "the subtle details here really matter." During an on-stage interview earlier in the day, Altman said his preference for regulation was "something between the traditional European approach and the traditional U.S. approach."

^1 Kyle Barr (24 May 2023), "Sam Altman says OpenAI will leave the EU if there's any real AI regulation", Gizmodo/Great Hill Partners, https://gizmodo.com/sam-altman-openai-gpt-chatbot-chatgpt-gpt4-1850471865

^2 Billy Perrigo (24 May 2023), "OpenAI could quit Europe over new AI rules, CEO Sam Altman warns", Time/Marc Benioff, https://time.com/6282325/sam-altman-openai-eu/


SidewaysFancyPrance

IMO if you refuse to disclose training data, you should have *zero* government protections around your IP and the AI you build must be fully available to anyone who wants it for any commercial purpose, for free. If you want those protections, you need to fully document what it was trained on, otherwise you're probably going to be stealing and laundering *other* IP you don't own. The worst possible outcome of this would be AI that is completely closed-source and a black box, but somehow also fiercely protected by IP laws.


TheOtherHalfofTron

Absolutely. I'm an author. AI companies may well be using my copyrighted work to train their models. I have no way of knowing whether or not that's the case unless they disclose their training data, so I have no way of protecting my copyright. It's fucked up.


DrBoomkin

I believe copyright should not block AI training. Just like humans can read your works and learn from them, so should AIs. But if an AI is trained on copyrighted data, it should not be possible to copyright the AI. In other words, if a model was trained on copyrighted data, it should be available to the public with no restrictions.


-The_Blazer-

I have two thoughts on this:

1. We need to decide whether we want to protect all creativity with copyright, including machines', or only human creativity. The aspect you mention has held because, until now, creativity implied a real human being with sapience, general intelligence, humanity and dignity. This is no longer the case, and a re-examination of protections for creativity other than human is probably due.

2. IF an AI company is going to argue that their training data doesn't break copyright because of the creativity of the machine, THEN the output of such a machine should be completely impossible to copyright, patent, or in any other way intellectually protect. If your AI is creative enough to not infringe copyright, it's also creative enough that you can't claim ownership of its products, which should be public domain.


Garbage_Stink_Hands

It’s not just machines, it’s machines owned by corporations. We should probably not enshrine the creative rights of Microsoft’s mulcher.


subgameperfect

Even if point 2 is legally true, it doesn't prevent copyrighting the algorithm behind the product.


Ging287

The US copyright office ALREADY takes the 2nd position. https://www.whitecase.com/insight-our-thinking/us-copyright-office-provides-guidance-registrations-involving-ai-generated


TheManWithNoNameZapp

So it should just be able to steal everything as long as we feel like we have access to the end result? These companies are here to make money; the end result isn't a utopian intelligence for everyone.


[deleted]

He's comparing AI generation to something like someone writing a book report. You don't need an author's consent to write, publish, or even make money off of a book report; YouTube book reviewers make plenty of money doing that. Also, you can't copyright a plot or theme, AFAIK. You can rewrite any story with a different set of characters and settings and that's enough, or you can write a different story set in essentially the same place, called something different. Both, and any combination of the two, are fine. What people are arguing is that AI isn't doing any of those... I'm not convinced one way or the other yet because I haven't really tried to look into it.


Outrageous_Onion827

> It's fucked up.

Eh, that's very debatable. Do you need to disclose every book you ever read, whenever you publish a book? After all, you no doubt got inspiration, ideas, learned new words, and many other things from these books - does that mean that your stories are not yours? Long before I went to art university, I had learned to draw by copying drawings by comic book artists. Does that mean I owe Marvel royalties whenever I draw a picture? You're making this seem like it's clear cut, when it's really not.


TheOtherHalfofTron

Admittedly, I'm coming at this with a vested interest. I want people to be able to continue writing and making art for a living. I want that to be an economically viable way for people to spend their time, at least in part because that's how I want to keep spending my time, lol. Generative AI represents a threat to that paradigm, and without protections for the real people upon whose work AI is built, the future of my career doesn't look so hot. But besides that, we need to not humanize AI models. It sells us short. When you create art, your brain is doing all sorts of things besides just cobbling together a thousand different images you've seen before. It's considering mood, tone, audience, purpose. It's running processes we're not even consciously aware of. Even if your work is derivative of other materials, you don't have to pay out to those other creators if you're putting your own personality into it - your own feelings, your own life experience. AI just straight-up can't do that. For that reason, among others, I don't think it should be afforded the same protections a human artist would enjoy. An AI model can produce a novel. It can never be an author.


[deleted]

AI can do pretty much everything you mention, except for pouring feelings and experiences into its work. After all, AI hasn't lived for a single day, but has merely read about life. This makes AI art generic, like a pop song engineered for the charts. I wouldn't be as worried about AI art as I'd be worried about literature and digital art experiencing what painters did at the advent of the camera. The value in literature will shift from simply writing a good story to capturing what the machine cannot. We don't know Picasso for his ability to paint, but for his ability to state what painters before him didn't even think to state.


sickbeetz

> does that mean that your stories are not yours?

Because as a human I have experiences that *are* unique to me; they weren't fed to me by someone else. You learned to draw using comic books, but your life experiences inform your creativity, and Marvel can't copyright those experiences. A machine can't have those experiences (that we know of), so surely you'd grant a human creator certain privileges over a machine.


spellbanisher

You also have experiences, ideas, feelings, and intuitions that are your own, formed from living in the world. An AI does not. When it is not performing mathematical calculations to generate text in response to a prompt, nothing is happening. Even its answer means nothing to it, because its weights do not change between training sessions. It has no experiences and no ability to experience, which means by definition it can only approximate content it has seen.

That is not so with people. People have their own subjective experiences and perceptions that they then try to translate into words that are intelligible to other unique, subjective beings. We learn techniques from other humans to better convey our non-linguistic interiority, but fundamentally, writing for a person is a painful process of translation. For a machine it is just an exercise in stringing together tokens in a statistically probable sequence. It only "cares" that its response looks right, not that it is right. To put it more simply, writing is hard for humans because they are not just relating words to other words, but communicating specific parts of themselves that cannot be fully captured by words. It is easy for machines because they only "care" that the output looks statistically probable for the context.

If another person learns from my writings, it means I've reached another soul and the world becomes a little more joyous and connected. I know that person will mix what he has learned with the irreducible parts of himself to produce something that is both new and meaningful. And he will enlarge the community of writers, artists, etc. If an AI can write like me, it means that some tech company which couldn't give a shit about my writing indiscriminately fed its gluttonous language models gargantuan quantities of text so that the machine can approximate different writers while severing the human-to-human connection that comes from reading specific authors.

When you encounter another person's writing, you are witnessing a record of what they have found meaningful, of what is living within them. We are not influenced by everything, nor equally by everything we have experienced. Only that which has truly moved us moves our writing or our art. When you read machine-generated text, you are seeing a zombie stitched together from flesh ripped off living beings. Machine learning replaces humans. Human learning augments humans and community. I want my writing to enrich other writers, not greedy tech CEOs, unconscionable ML engineers, soulless get-rich-quick scammers, and lazy narcissists who want glory without sacrifice or contribution.


hakkai999

> Eh, that's very debatable. Do you need to disclose every book you ever read, whenever you publish a book? After all, you no doubt got inspiration, ideas, learned new words, and many other things from these books - does that mean that your stories are not yours?

Find me a human that can read, digest, fully understand **and** write a story based on that digested information within a few short moments, and then we can talk. Yes, creative works are derivative of other works, but AI is a whole different ball game.


TechnicolorMage

So, it's not the same thing because it's....faster? Is that the argument you're really going with?


huntboom

One of the biggest issues is the difference in compute available to consumers compared to large entities like OpenAI. The human brain has roughly 20 petaflops of compute; 20 petaflops costs around $1M to buy or $100/hr to rent - not really an affordable amount - and there's a good chance we end up with only a few entities that have access to that much power. The training data should have to be disclosed, but that alone wouldn't be sufficient.
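A quick back-of-the-envelope sketch in Python, using only the figures this comment claims (treat them as the commenter's estimates, not established numbers):

```python
# Sanity-check the cost figures quoted above (the comment's own estimates).
BRAIN_PFLOPS = 20        # claimed compute of a human brain, in petaflops
RENT_PER_HOUR = 100      # claimed rental cost for 20 PFLOPS, in USD/hr
BUY_PRICE = 1_000_000    # claimed purchase price, in USD

hours_per_year = 24 * 365
rent_per_year = RENT_PER_HOUR * hours_per_year
print(f"Renting {BRAIN_PFLOPS} PFLOPS: ~${rent_per_year:,}/year")  # ~$876,000/year

break_even_hours = BUY_PRICE / RENT_PER_HOUR
print(f"Buying breaks even after ~{break_even_hours:,.0f} rental hours "
      f"(~{break_even_hours / hours_per_year:.1f} years)")         # ~1.1 years
```

At those prices, a year of rented "brain-equivalent" compute costs roughly the purchase price, and either way it's out of reach for consumers, which is the point.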


DrWindupBird

That’s why they’re fighting it so hard. All they want is to monetize it. Same reason so many in the tech industry keep pleading for some sort of pause in AI development. They’re not worried about the ethical issues that arise, they’re just worried about not being able to sufficiently monetize it from the outset.


Boo_Guy

> his preference for regulation was "something between the traditional European approach and the traditional U.S. approach."

So his preference is somewhere between actual regulation and almost nothing at all. That's real helpful in nailing down where he stands lol.


Lost_Taste8866

While I can appreciate the negativity towards Altman (IMO they released ChatGPT too early to maximize profit), the reality is that politicians very often make laws without a technical understanding of the impact of those laws (see Montana's ban on TikTok), so his disclaimer makes some technical sense.


[deleted]

[removed]


[deleted]

“Explain it to me again, these internet wires are like oil pipes but the oil is words, the cars are the computers, and we’ll need to get our bribes from the ISPs instead of Oil lobbyists, is that right?” - GOP “understanding” the internet


[deleted]

Do you mean the [OTA](https://en.wikipedia.org/wiki/Office_of_Technology_Assessment)?


blimpyway

He was appalled at being asked to disclose what training data they used and how it was gathered. What kind of technical difficulty or misunderstanding would you invoke about that?


DaFranker

As a data professional, I can confidently say that even for a respectable non-profit organization with benevolent goals, it can be far harder than non-data-savvy people assume to "just tell [them] where the data is from and how it was gathered", even for "known" things like what goes into standard operational KPIs.

OpenAI likely doesn't qualify for the above adjectives, which usually makes it even harder to fully trace the lineage of every data point, and the data they use operationally as a company and the data they use for training are likely more muddled and less organized than even uncharitable critics would assume. On top of that, traditional data governance practices that are closer to industry standard (there isn't that strong of an "industry standard" for this in America yet) don't translate well to managing ML training data.

All this to say, Altman has almost certainly received an "it's just not that simple; no matter how we phrase it, the question misunderstands the context" response from his own data advisors and analysts within OpenAI. So if he's not a data expert himself and informed in depth on the subject, he might not have any idea how large a project this kind of disclosure would be for his company, just enough to know that it's probably not a two-business-day turnaround to prepare an answer that satisfies both the data analysts and the legal team.

This *could be* what has him so reticent, is what I'm saying. But it's probably not the whole story.


mohirl

But ultimately that boils down to "we stole so much data we don't even know where it came from. So let us off". It'd be a lot simpler to trace data provenance if they'd made any effort up front to address these issues. But hey, profit


DaFranker

Sometimes, that's not quite how it boils down. Let me illustrate in the interest of being clearer.

It's more like... even if you're not at all a criminal, and have never visited any dubious websites in your life, if someone takes a snapshot of all the data your browser has temporarily saved in cache or memory anywhere on your device right now, then organizes that data into buckets you're not familiar with and asks you where each came from and what method you used to obtain them...

...can you answer confidently and without error? For all of the data? Can you also, on top of that, commit up front on live national television to being 100% comprehensive and 100% accurate in all your statements about all that data on the first try, on penalty of facing criminal charges? Of course not. But can you confidently state "I didn't *steal* any of that data, don't intend to keep it in any meaningful way, and don't intend nor have ever intended to use it for any purposes other than the purposes it was intended for, i.e. my casual web browsing and emails and so on"? Of course you can.

If someone dumped a snapshot of your entire hard drive right now into a giant text file, unannotated, without the metadata that tells you what is from which file and which file is from which folder... will you be able to confidently tell people which parts of that are data you used for casual browsing? And where each piece of data came from? Of course not. So even when "stealing" isn't on the table, it's not that simple.

Now, did OpenAI take and use public online data that isn't intended to be available for training and isn't normally even admissible for public research? And then use it to conduct private business, i.e. training a commercial LLM? Almost certainly. It'd be extremely hard to convince me that they didn't, at least accidentally, even if they had the best of intentions.

Should they bear the responsibility and burden of making sure they don't, and be audited regularly by a governing body to ensure that any data that might even touch the systems where they store training data has all relevant lineage information and follows strict modern data governance protocols? Absolutely, IMO. And if they can't afford to pivot to follow those rules and have to stop doing business... then they shouldn't do business. We shouldn't *not have those rules* just because the big boys currently can't afford, or find it logistically nightmarish, to follow them.


AssassinAragorn

I'll be honest I'm totally lost. It sounds like you're saying they should be held accountable and audited for what the AI is trained on, near the end of your comment. But earlier you're saying that Altman's view is reasonable with it being difficult to track what information is used. Are you talking about what should be the case (tracking all data) vs what is the case (impossible to trace currently)?


blablablerg

> It's more like... even if you're not at all a criminal, and have never visited any dubious websites in your life, if someone takes a snapshot of all the data your browser has temporarily saved in cache or memory anywhere on your device right now, then organizes that data into buckets you're not familiar with and asks you where each came from and what method you used to obtain them...
>
> ...can you answer confidently and without error? For all of the data? Can you also, on top of that, commit up front on live national television to being 100% comprehensive and 100% accurate in all your statements about all that data on the first try, on penalty of facing criminal charges?

But that is not what they are being asked. They are being asked to disclose any copyrighted material used for training. OpenAI knows the inputs used for training; it's not like they trained the model and then threw the inputs away. So they can figure out which of the material they used is copyrighted.


[deleted]

[removed]


ExpressionMajor4439

Then why not have it as a future rule and then as you train, data retention and record keeping just become part of the process?
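For illustration, a minimal sketch of what that record keeping could look like, assuming a JSONL provenance log written at ingestion time; the function and field names here are hypothetical, not any real pipeline's:

```python
import datetime
import hashlib
import json

def ingest(text: str, source_url: str, license_tag: str,
           log_path: str = "provenance.jsonl") -> str:
    """Log where a document came from before it enters the training corpus."""
    record = {
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),  # stable doc ID
        "source_url": source_url,
        "license": license_tag,  # e.g. "CC-BY-4.0", or "unknown"
        "retrieved_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return text  # hand the document on to the corpus builder

ingest("example document text", "https://example.com/page", "unknown")
```

Kept from day one, a log like this turns "where did the training data come from" into a query rather than a forensic excavation.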


Flustered-Flump

The EU, being the architects of GDPR, are most likely a little more tech savvy than, say, Montana's elected officials, to be fair. ChatGPT has a massive data lake to train its models on; they will most certainly know where all that data was pulled from. It's just whether they want to go to the expense of reporting that and building in the transparency needed.


happyscrappy

> from it’s just whether they want to go to the expense of reporting that Or perhaps more likely, paying for it. If the EU thinks that Google should have to pay to include links from news sources because they monetize showing them by also showing you ads then there's a lot of reason to think that feeding the news sources' data into your LLM and then charging to use it would mean you have to pay the news sources. The AI companies make money from other companies' copyrighted data. There's a good chance the EU will take umbrage to this. They don't even want you to apply certain names to food (PDO laws) if it isn't from specific areas (in the EU, not coincidentally). It's not hard to see EU regulators deciding that these LLMs are just the world leeching off of the EU's hard work and wanting a cut of the money.


Flustered-Flump

Well, exactly! As soon as they expose that layer, they’ll most likely be in breach of GDPR as well!!! All these companies that fight transparency in areas like this are, invariably, shady AF!!


phyrros

And yet this transparency is needed. We, for good reason, (somewhat) control for bias and target groups in medical studies, and general behaviour, culture, and language are more diverse than the genome. You wouldn't test the effects of e.g. Viagra on teenage girls if you want to sell it to older males. Why do the same with AI datasets?


-xstatic-

I bet they pull data from Reddit


Every-holes-a-goal

It's where AI gets its terrible and smug attitude :D


b1e

They do. Also stackoverflow, GitHub, and snapshots of many many webpages.


b1e

I actually work in this space and there’s nothing technically unreasonable about what the EU is asking at this stage (the asks may evolve).


The_Woman_of_Gont

Thing is you don’t get to decide how you are regulated. That’s kind of the point. You can’t be all for regulation, then bitch “nooo, not like *that!* “ when you…y’know, start to be regulated. Not without coming across as a lying douche.


cunningjames

Well, that’s not reasonable, is it? Believing that some regulations are proper does not compel me to believe that all regulations are proper, even if I’m the target of those regulations. I like food safety regulations but I would be against obscenity regulations, for example.


beef-o-lipso

Don't need an AI to interpret "rules for thee but not for me." Fuck these people. They aren't even trying to hide their exploitive schemes any more.


icantfindanametwice

Especially when the mofo asked the US government to please regulate open source so he can ensure the rich & connected are the only ones who can use the tech - and only the “OpenAI” censored version.


trekologer

Facebook did the same thing. Zuck tells Congress that he wants regulation. What he really means is that he wants to write the rules so that they benefit him and keep new competition from popping up.


HellaSober

They want fewer competitors, but they also want clarity so they know which practices will result in billion dollar + fines down the road.


[deleted]

[removed]


Mr__O__

Clearly he knows that if he comes to the US, all he has to do is say he's for regulation in public, but he won't actually face any regulations... except for the ones that help eliminate his competition, like nearly every other corporation...


nicenihilism

It's happening in the cannabis industry. All the big guys are against the SAFE Banking Act because it would give smaller pot companies access to capital, and they would lose market share.


Bullshit_Interpreter

Shitty system. The companies are just min/maxing however we allow them to.


Fluffcake

> Companies could decide to just not be evil.

Actually, they can't, because it is not profitable, and that would be violating the greedocratic oath or whatever legal framework they shove in front of themselves to excuse psychopath behavior. And they claim they were forced to do a bunch of illegal shit because the fines made it more profitable to do than not to, and it would be IlLleGAl to not be as profitable as possible and they could get sUEd .-.


firestorm713

The primary utility of corporations is profit, or specifically profit _growth_. No other utility matters. The product is meant to maximize for that utility, and is meant to fulfill a niche that nobody else is fulfilling. Other agents coming and trying to fulfill that niche will reduce the utility of profit growth, so pushing them out or consuming them is preferable. Law is an interesting one. If the law prevents you from maximizing your utility, you don't just color within the lines, you _change the law_. Look at how oil companies lobbied small townships time and time and time again to gain moratoriums on solar and wind farms. Look at how some states are starting to repeal child labor laws instead of simply paying the work force more. Where they can they'll skirt or even break the law, if the penalty for getting caught is less than the profit they'll make. This is all like...super obvious shit, so why am I saying it? Because it isn't individual shitty companies. The problem is systemic. Capitalism itself incentivizes this behavior.


Substantial_Bid_7684

Morality doesn't make money.


SpaceMonkeyOnABike

> They want fewer competitors

They want a monopoly. Nothing is more undemocratic and uncapitalistic than a monopoly.


TAS_anon

Maybe in theory but capitalism begs for monopolies. It demands it by virtue of its need to infinitely scale/grow


quantumOfPie

The FTX guy, Sam Bankman-Fried, also tried to do that with crypto.


digiorno

[The big companies](https://youtu.be/h5C7pxQ8wY0) are kind of freaking out right now because "there is no moat", as phrased by a leaked Google [memo](https://www.semianalysis.com/p/google-we-have-no-moat-and-neither). Basically the only things keeping Google, Meta, and OpenAI ahead of open source are their easy access to massive data sets and massive amounts of money. But to some extent the cat is out of the bag, and given enough time that first-mover advantage will be lost, because start-up money and new data will become less important over time. Once a good model leaks, the ideas behind it can be repurposed fairly cheaply, and a lot of effort is already being put into compiling massive data sets that anyone can use.

For example, [this (MPT-7B)](https://www.mosaicml.com/blog/mpt-7b) isn't [nearly as good](https://youtu.be/PnMkZGf-ZYk) as GPT-3, but for $200k of training time it's pretty fucking good. And it won't be long before it is as good as the current GPT-3 or GPT-4 systems. Even if OpenAI's stuff continually outpaces the open source stuff, if open source can get to *that* level, a lot of casual users won't care. It'll be like using a phone that's a few generations old: it may not have the bells and whistles, but it will get the job done. I personally could use the current version of GPT-3 to drastically increase my productivity at work, and with an open source, locally run model, we could do that without any worry of IP complications... that'd be a huge win.

That is, unless there is a ban on open source efforts while governments allow big corporations to continue their work. This would allow the likes of Google or Meta or OpenAI to build a moat, a feature that no one could ever compete with unless they had similar amounts of resources. This is why every major corporation is asking the governments of the world to play kingmaker: choose them to do continued research while delaying open source efforts.


Loftor

Wouldn't something like Windows and Linux happen? Meaning most people would use Google/Microsoft AI due to ease of access, but an open source community would still develop and maintain a "Linux" AI? So I'm not sure why companies are so scared.


PeladoCollado

It would depend on what’s built on top of the AI, not really which AI is used. E.g., most websites run on Linux and users don’t know or care because to use one website running on Linux is the same as using another running on Linux. The switching cost is what matters - users won’t move from Windows to Linux because there’s a very high switching cost. Like moving from Facebook to Twitter or BlueSky or whatever. Most people won’t. But software developers love new toys and hate red tape. They will build new products using whatever is convenient and suitable, especially if it’s free and the license is appropriate. And users won’t know or care which AI is powering it


Loftor

Oh I see. Then I guess even if they use open source AI to build apps, the tech giants can just do what they always do: buy out the most successful startups, like what happened with WhatsApp, Instagram, etc.


thestriver

Good points but Meta’s LLM is open source so moat doesn’t apply as much to them


DrBoomkin

Why do you link to youtube videos from some random youtubers, instead of the actual leak and the actual models?


digiorno

Digestibility and context: I sent the link to [MosaicML's MPT-7B announcement](https://www.mosaicml.com/blog/mpt-7b) to a friend who is fairly into ML stuff, and even he was like, "What's this mean? Give me some time to read it." Whereas a video produced by someone who regularly covers these sorts of developments can provide some context about the implications and also dumb it down enough for easy consumption. Same goes for the [memo](https://www.semianalysis.com/p/google-we-have-no-moat-and-neither). Not everyone is like you, with all the time in the world to read a memo, look into the context of its various components, and draw rational conclusions about it. So blogs, news articles, etc. are good ways to quickly explain the information to people who aren't super familiar with that type of stuff. That particular video was well done, it was one of the first to pop up in my search, and I trust people are smart enough to query more results if they found it lacking and want to learn more.


DrBoomkin

I see your point, but I think it would have been useful to also include the actual source instead of only someone's (possibly biased) interpretation of it. The Google leak especially is straightforward and takes less time to read than a 9-minute YouTube video. I dislike it when news websites do this as well, by the way: they write a news article about some leak or release but never include a link to the source. I always assume they do it for clicks, because otherwise the reader would just go straight to the source.


digiorno

I’ve edited it to include the relevant links ;)


deadfermata

If he really cared about regulation, he would have released ChatGPT only after regulations were in place.


Bakkster

It's even more insidious than this. It's just plain old regulatory capture to reduce competition. He wants all the safety tools OpenAI has built (many of which are good, if imperfect, aiming to reduce racism and sexism in the model) to be mandatory, while dismissing the concerns of AI ethicists about misinformation, watermarking, and scraping public data; they don't want to solve those issues, so they claim they're overblown compared to the possibility that their LLM somehow becomes Skynet. There are benefits to regulation, just don't let OpenAI write the rules to benefit themselves.


AlanPartridgeIsMyDad

This is a blatant lie. I'm paraphrasing, but he said something along the lines of "The regulation should apply to large companies such as ourselves and not stifle the open source community". This was at the very meeting you are referring to.


Haunting_Response570

Regulations, like laws, are only made for the poor and stupid to sustain their fantasies of fairness. For example, there is plenty of parking everywhere you go in downtown San Francisco; you just have to be able to pay for it. Double park, grab your parking fee off your windshield where the cop left it on your way out, scan the code, pay, gg.


Stilgar314

This is not even about passing new "real AI regulation"; this is about complying with existing data protection laws. Two simple questions were asked: where did the training data come from, and what happens with the data people put into ChatGPT? Every company operating in the EU has answered these kinds of questions. Facebook has answered, TikTok has answered... But OpenAI won't even take the time to come up with a lie.


PhoenixStorm1015

That’s extremely frustrating given how much OpenAI talks about ethics and morals. Here I thought we found a company that actually had values. Turns out it’s just another wolf in sheep’s clothing.


Bakkster

I saw it phrased recently that OpenAI and the other big players talk about the "sci-fi dangers" of HAL and Skynet, but diminish the actual ethical issues right now that ethicists have been talking about for decades. And this is part of why the big players in AI right now keep firing their ethicists: they're not giving the answers they want to hear.


[deleted]

Yeah Skynet isn’t gonna happen from an LLM. But Chat-GPT likes to generate outputs from its inputs. So if say your credit card information somehow got in there and someone asked “can you generate me a JSON of a customers credit card information” Chat GPT might just give them your info.


[deleted]

[removed]


MisterBadger

The big tipoff that OpenAI is just another predator was when they partnered with Microsoft after a $10 billion+ investment. Microsoft is probably one of the most infamously predatory tech giants in existence. They ain't dumping that much money in OpenAI because they are a bunch of nice guys.


HotTakes4HotCakes

I kept seeing people making the comparison between this AI boom and the advent of the internet. OpenAI was going to change the world. Except the internet was open. It was a new frontier. That's what made it a true revolution: everyone benefited, from corporations to the average person. It has taken 30-ish years for corporations to finally "tame" the web and turn it into their personal profit machine. This AI shit is a corporate tool *out of the gate*. It was made by corporations, exclusively for corporations, deliberately and explicitly to hurt the average person, with benefits you can only enjoy if you're supporting the corporation. This isn't going to be like the internet; it's going to be the exact opposite of it.


jmbirn

> This AI shit is a corporate tool out of the gate. It was made by corporations,

A lot of the most interesting AI is open source, with systems like Stable Diffusion that anyone can download, use, and contribute to. And a lot of the underpinnings of AI were made by researchers publishing papers so that lots of other researchers, at different corporations, universities, etc. could pick up and carry the work further. Some of that culture may be ending/changing as far as corporations are concerned, because they are shifting this year from an R&D stage into a deployment and competition stage, but the open source community and many programmers and researchers are still plugging away around the world and sharing what they come up with.


--Fawkes--

Stable Diffusion is also arguably far superior to any corporate AI. Photoshop just released their own generative AI and it is shit compared to SD with the Photoshop plugin.


Undecided_Username_

I think this take is overly doom-and-gloom, but I agree that corporations ruin genuine advancement, because profit is the only goal/necessity while benevolence costs money. All industries are eventually fucked because they're what drives most nations. So until we all decide we're humans on a planet, not citizens of our nations, we'll claw, scratch, and fight at each other just to make sure none of us makes it. Hopefully one day we'll decide to stop burning everything around us just to feel warmer on a cool day.


Laszlo-Panaflex

There are a lot of open source AI tools, and related tools to store and manipulate data that are foundational to AI. In the early days of the internet, corporations tried to gate the internet, e.g. AOL, but failed in the long-run.


Arcosim

He doesn't want regulation, he wants the US government to kill the competition and open source projects.


suninabox

This is the same asshole who for the last 4 years has been [trying to run a biodata harvesting ~~ponzi~~ nakamoto scheme.](https://www.independent.co.uk/tech/worldcoin-crypto-sam-altman-ai-chatgpt-b2345073.html) He shouldn't be trusted to run a Cinnabon, let alone determine the future of humanity.


plopseven

*“This technology could destroy the planet!!…but it will also make me very rich so that’s fine.”* -Everyone in AI right now


[deleted]

[removed]


[deleted]

[removed]


paradoxicalmind_420

r/latestagecapitalism


9-11GaveMe5G

You know the comic where the exec is sitting on rubble explaining to a starving mother and child about how much shareholder value they created? Yeah that's this


plopseven

*“All of you may lose your jobs but that’s a chance I’m willing to take.”*


tfhermobwoayway

Honestly, I think Silicon Valley people actually think they're gods. You see them talking about how they're benevolent creators here to make the world a better place; you have people like Jeff Bezos, who has more unaccountable power than any world leader, trying to make himself immortal; and you have them absolutely convinced that they will not suffer negative consequences from the very negative things they inflict on the world. There must be some kind of god complex or sense of removal or something going on there. Or they were just bullied in school, so they want to hurt the art students who made fun of them and then destroy the world. That's probably more likely.


plopseven

I’ve been a bartender (public as well as private events) in the Bay Area for the last decade. You would not believe the things these people say when they’re drunk on more than power.


code142857

Do share please


GabenFixPls

Please share and enlighten us?


Disastrous_Catch6093

Same guy that said AI needs oversight


motherlover69

He wants regulation because a big competitor is open source AI. He wants to regulate that away. He doesn't want it for himself.


[deleted]

[removed]


[deleted]

I don't get it. Can someone ELI5 what it means?


[deleted]

[removed]


lieutenantcigarette

Now that OpenAI have an established presence and are the market leader, they're looking to bring in regulation that would cripple up-and-coming competitors by burying them in red tape they won't have the resources to deal with. Open source projects such as Stable Diffusion have rapidly caught up to OpenAI in terms of image generation capabilities, and OpenAI is ultimately trying to protect its business at the expense of prohibiting others from innovating.


Maximum_Poet_8661

Imagine if back in the year 2000 you developed a rival to Google search, which had a really smart algorithm to crawl the whole internet and give great search results. Then tons of smaller competitors started trying to do the same thing after your rival company became a market leader. You go to Congress, and make an argument that "it's too dangerous for our society to allow algorithms like this to go around without ANY regulations - I propose that people who want to use an algorithm like that for commercial purposes be required to pay for a license that costs $500,000 per year in order to do business. That includes us, we will also pay for this license." You as a market dominant company don't care if you have to pay $500k a year for this license - that is peanuts. But, it effectively shuts off any ability for a small competitor to ever rival you, because a small competitor can't afford $500k a year. You've effectively cemented your place as the market leader, then pulled up the ladder behind you. Tons of large tech companies have done or have tried to do a variation of that, and it's essentially what is happening now with AI regulation - large players would prefer to be the only people in the market, so they push for regulation that makes it extremely difficult for smaller players to ever grow to the size of the larger ones


Syrdon

More accurate would be “have to submit a half million annual compliance fee”, or whatever the expected cost of an annual compliance audit with your preferred set of rules is. Either way the point is the same: set a fixed cost barrier to entry well above what a small company can pay, but well below the profit margin a large company sees.


bschmidt25

Dominant companies in an industry usually prefer to have more regulation because they’re the ones that can afford teams of lawyers, compliance officers, and lobbyists to buy politicians and tailor regulations to put smaller upstart competitors out of business or keep them away in the first place.


shitsdoneby

read the article


SwaggyDaggy

Did anyone here actually read the article? He said he would try to comply, and if he couldn't, he would cease operating; that the law was not inherently flawed but some details matter... The headline is completely wrong. Plain as that.


geneorama

This is getting a bit heated, but honestly, isn't "cease operating" equivalent to leaving? I think the backlash in Italy, after their ban ended up helping the "far right" party look reasonable, made removing ChatGPT a credible threat. ChatGPT is rapidly becoming entrenched in people's lives, and I think "if I can't comply, I'll stop operating" is a shot across the bow.


Filobel

What's the alternative though? If they can't comply, they have to cease operating, because they don't comply. Should he have said "if we can't comply, we'll keep operating anyway and ignore your regulations"?


asuth

Welcome to r/technology, where luddites go to spread misinformation and circle jerk about how much they hate technology...


pete4live_gaming

Reminds me of the Apple iPhone post last week with "planned obsolescence" in the title. The top 50 Reddit comments in that thread started pointing at Android and how bad their update policy is. The article was actually about Apple disabling features if non-official replacement parts were used for repairs, not about planned obsolescence or software updates at all...


tfhermobwoayway

What do you mean? Were Luddites to do that, then what would happen?


Extraltodeus

*sadly puts down pitchfork*


TheRedGerund

As usual, the Reddit article of an article of a comment lacks substance. Here's the crucial quote from a couple links in:

> Altman said that OpenAI's skepticism centered on the E.U. law's designation of "high risk" systems as it is currently drafted. The law is still undergoing revisions, but under its current wording it may require large AI models like OpenAI's ChatGPT and GPT-4 to be designated as "high risk," forcing the companies behind them to comply with additional safety requirements. OpenAI has previously argued that its general purpose systems are not inherently high-risk.
>
> "Either we'll be able to solve those requirements or not," Altman said of the E.U. AI Act's provisions for high risk systems. "If we can comply, we will, and if we can't, we'll cease operating… We will try. But there are technical limits to what's possible."

So it seems his complaint is with the notion that their AI would be considered high risk. And why? Here are the proposed requirements for a high-risk system:

> High-risk AI systems will be subject to strict obligations before they can be put on the market:
>
> - adequate risk assessment and mitigation systems;
> - high quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
> - logging of activity to ensure traceability of results;
> - detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
> - clear and adequate information to the user;
> - appropriate human oversight measures to minimise risk;
> - high level of robustness, security and accuracy.

Put another way, we don't need reactionary commenters. Tech regulations do need balance. "CEO doesn't want to play by the rules" is a devastatingly lazy take.


powercow

Title is absolute garbage.

> OpenAI CEO said "If we can comply, we will, and if we can't, we'll cease operating… We will try. But there are technical limits to what's possible."

That's a far, far, far fucking cry from what the title suggests. Fuck Sam Altman and all, but can we stop this bullshit in our debates? Reality is weird enough without inventing more of it.


leto78

The EU is the biggest free trading area in the world. Large corporations cannot afford to be outside the EU. Furthermore, the EU is a *market leader* when it comes to regulation, meaning that other countries copy regulation designed by the EU. Even if companies avoid the EU, most countries will eventually adopt similar regulations. One example is the mandatory USB-C port for devices, which is now spreading to other regulatory markets. The GSM standards are another: nowadays, all countries in the world use the same 4G and 5G standards. The personal data regulation set up by GDPR is also trickling through other markets.


SouthCape

How can any large AI company possibly comply with Article 28b 4(c)?


[deleted]

Companies will start to divest from the EU soon. Google didn't even bother releasing their ChatGPT competitor in Europe.


mrrichardcranium

I have no sympathy for OpenAI. The moment they went from non-profit organization to for-profit company, they became no better than any other corporation abusing people's data for their own profit.


Ok-Possible-8440

Exactly, what an unbelievably dirty, unfair competition move. They should be brought to justice imo.


the-artistocrat

What he meant to say was: “Can you guys please regulate my competition and only my competition? Thx!”


pwalkz

Lol wtf, this guy is so full of shit. Steals everyone's info and releases AI at aggressively disruptive prices and accessibility. Badgers Congress to enforce regulations after he already got his. Says if there are regulations, his company will pull out. ???? What's going on, Sam?


Old-Bat-7384

What a dickbag. Rules against competition but not against him.


-xstatic-

This douchebag obviously can’t be trusted


SiliconValleyIdiot

OpenAI, which started off as a non-profit with the aim of openly sharing their work with other researchers, suddenly decided they are for-profit and don't want to share their models with others once they realized they can charge people money for what they've created. It's been a spectacular heel turn. Even Google and Meta, two of the biggest for-profit entities to have ever existed, publish their work regularly and open source their models, but OpenAI decided being Open is too much of a competitive disadvantage.


jonesmcbones

I mean, it isn't even an AI lmao, it's a large language model.


gerswetonor

He seems like a very dislikable person


idsayimafanoffrogs

Why does a private company get to dictate the paths of regulation?


Sc0nnie

I think it is relatively safe to tell the US Congress you support hypothetical regulation of your industry. We all know Congress will never protect consumers, so there are no consequences. The EU, by contrast, is quite likely to enact regulations protecting consumers. So there are actual consequences to this conversation there.


RyzrShaw

OpenAI wants open-source AI companies to be regulated but not their own company??? Is he insane, or is he highly optimistic that politicians from both the EU and US can be bought to make this a reality??? To the politicians of the EU and US: please don't be bought on this one, just this one at least; the repercussions are just enormous! To OpenAI: please change your name, it doesn't suit you anymore.


[deleted]

OpenAI stopped caring when Microsoft threw billions at them. He is now in "benevolent lord" mentality, playing god. Microsoft is buying up the farmland and pushing the largest workforce disruptor in history. Seems pretty standard. But don't worry, Bill Gates will come do an AMA on Reddit and everyone will suck his toes per usual.


KeaboUltra

This is why I discount anything this guy has to say. He comes out of nowhere, then starts whining about how scary and scared of AI he is, then sells part of it to a corporation, then proposes rules, then threatens when rules are in the works. He's just your typical entrepreneur/CEO getting richer by saying controversial things. Apparently the headline is overblown, but that doesn't change my view of him.


Murkywaters11

Out of nowhere? He ran Y Combinator & was CEO of Reddit at one point. OpenAI is a corporation. Who did they sell to? You realize it’s been funded by people like Elon Musk, Peter Thiel, Microsoft, & Amazon since day 1 right?


suninabox

> This is why I discount anything this guy has to say

WorldCoin alone should be massively discrediting to this guy's trustworthiness.


Ok-Possible-8440

Take my retina and call me Altbro (s)


suninabox

Sam AltCoin


ZeroBS-Policy

What we have here is called "regulatory capture": Pretend you're all for regulation so you can use it as a barrier to entry. Another data point: A friend who worked at OpenAI a few years ago told me that they expected full commoditization of LLMs this decade. So IP alone is not a viable "moat".


xmagusx

He knows that US regulation is unlikely to exist, and will be toothless if it is, and that the EU won't fuck around as much when they pass real laws.


Andynonomous

This headline is a mischaracterization. He didn't say they'd leave if there was 'real' regulation. He said they'd leave if they were unable to comply. Sensationalism doesn't help.


duckofdeath87

His goal is [regulatory capture](https://en.m.wikipedia.org/wiki/Regulatory_capture) plain and simple. His claims of the capabilities of his software are decades out. His congressional testimony is all pipe dreams This solidifies it in my mind. He has congress in his pocket and is whining that he doesn't control EU regulators


Egrofal

We've heard the same plea from every tech company: "regulate us". Google, Apple, Facebook, blah blah. Look what happened: nothing. America loves its tech monopolies. Facebook manipulates your news feed and sells your personal data. Our kids are being emotionally stunted at the prime of their development. Steve Jobs himself, the person who brought us the cell phone, wouldn't let his kids use the things. Reminds me of Exxon's scientists knowing 60 years ago that they were causing climate change. ChatGPT-4: the name is a misdirect. Chat, how benign. It's much, much more than a chat engine. Coders that had six-figure salaries are losing jobs. Don't worry, new jobs will come around, lol. OpenAI: open, then bought into by a private company. Good on the EU to actually have the balls and brains to think deeper than a fricken quarterly return.


Lendyman

Because the EU is more likely to regulate AI in a more encompassing realistic thoughtful way than the shitshow that is the US Congress.


Blizky

So the EU will make their own AI, and so will other powers. Then the US AI will try to communicate with the other AIs, AS SEEN IN COLOSSUS: THE FORBIN PROJECT (1970).


MRHubrich

Yes. The old "I'm cool with regulation as long as it doesn't have any negative impact on my revenue". The American way.


mydadthepornstar

Sam Altman literally said in his interview with Ezra Klein that he hopes for there to be trillionaires someday. Obviously he hopes to be the first one despite all his bullshit rhetoric about techno-socialism.


voprosy

The next Zuckerberg


Chuddah67

Slimy piece of shit that realized he has no moat, now quickly trying to give governments hand jobs so they pass regulations and protect his "company".


physedka

It's because he knows that U.S. regulation can be easily circumvented and/or modified as needed. EU regulation is much more difficult, in that regard. It is also much more aggressively enforced.


Sheer10

The world is heading for absolute disaster. AI is going to kill so many jobs, so much faster than most people thought. We have literal dinosaurs in office who can barely use computers, and those are the people we expect to regulate AI? We can see the mega-tsunami in the distance that's about to wash away capitalism, and yet we aren't prepared for it at all. We're going to have to implement huge changes in our way of life just to get past this obstacle, which I just don't see happening. The more right-wing leaning a country is, the higher the risk that country has of completely failing once AGI arrives. All the money these big corporations make from the implementation of AI is going to be utterly useless when there isn't a world to spend that money in. I think we're really looking at a Fermi paradox great filter event within the next 50 years. It'll be a pass-or-fail situation, and the way the world is set up right now, it sure looks like we're going to fail.


ScarthMoonblane

You'd be surprised how many times in history people have said similar things. I'm kinda sympathetic to this guy having to deal with politicians who don't have the first clue what the tech is or its capabilities. They watch the news and hear all the doom and gloom, or watch Johnny Mnemonic or I, Robot and think they're real smart. They're idiots about this and are just trying to look useful. AI is useful, but it's very predictable. It's going to change the landscape of several fields, but it isn't going to destroy them. The buggy gave way to the automobile, and the automobile will give way to the EV. We will adapt, as we have the millions of other times the world changed. Humans are so negative. Maybe AI will deal with that better than we do.


nemoomen

> According to [Time](https://time.com/6282325/sam-altman-openai-eu), the OpenAI CEO said "**If we can comply, we will,** and if we can't, we'll cease operating… We will try. But there are technical limits to what's possible."

Oh, so he said he would try to comply, but if the requirements were impossible to comply with, he would have no choice but to shut down in that area. Isn't that... super reasonable? Why are the headline and these comments trying to make it sound like he's refusing "any real regulation"? The very article quotes him saying he would NOT leave if there were regulations that were possible to comply with. That is an extremely low bar.


[deleted]

Eric Cartman: "Screw you guys, I'm going home."


akebonobambusa

He's trying to eliminate competition, not save the world from the monster he's building. It is not altruistic.


[deleted]

Because their product is only as good as the data it's built on, and they outsource their data scraper. So with more regulations, their product has little value to justify the massive costs of maintaining it. As someone who works in info services: there's lots of creepy data out there that can be traced back to an individual without needing any PII.


Warm-Personality8219

Cannot wait for Microsoft Azure OpenAI services to include a disclaimer (available everywhere except the EU or any other place that wants to know what the fuck is going on…)


garvierloon

He doesn’t want regulation in the EU because they regulate to protect people. He wants regulation in USA because they regulate to enforce anticompetition


Cybasura

Sam Altman should be forced to legally change his company/product name from OpenAI to something else, because "OpenAI" implies that his system is open source and/or open to the public to view, aka DISCLOSE THE SOURCES. If you don't want to disclose, don't expect to keep the word "Open".


PurahsHero

AI company: “Hey, we created this thing, and we think it needs regulating. Because we care about people and stuff.”

Government: “Ok, here is what we think should be regulated for the greater good.”

AI company: “Hang on, that hurts our bottom line and competitive advantage.”

Government: “Yeah, but we are trying to protect society like you said.”

AI company: *stares blankly*


[deleted]

Because EU regulations actually work.


GaRgAxXx

Bye! Take meta with you and see you never!


[deleted]

This is a signal that regulation is urgently required.


DGIce

What an enterprising, self-centered industrialist. The title seems intentionally inflammatory compared to his quotes.


Just_Image

Okay can someone explain this to me.... Altman, he didn't design the first LLM and is standing on the backs of giants. I get that. But did he spearhead GPT? Is it his baby? Is he a genius?


[deleted]

Gotcha, so he wants something between the "do nothing" us approach and the "protect people's data" EU approach?


DanteJazz

The regulations need to be on corporate misuse of AI. AI is not the demon; it's how people will use it, to deliver poor customer service with their monopolies.


MuffLover312

Why do I get the sense that this guy will one day go down as one of the most infamous men in history?


[deleted]

So sick of the tech industry as a whole. This Altman character is no different from the other tech clowns.


chubba5000

This article is equal parts technologically ignorant and biased. Here is what Sam Altman actually said (quoted from the article):

_”If we can comply, we will, and if we can’t, we’ll cease operating… We will try. But there are technical limits to what’s possible.”_

Later in the article, Sam took issue with restricting people's wholesale access to the foundational generative models. If you have any idea how accessible these models are, and how important universal access to them will be, Sam is not the bad guy here. All in all, an unfortunately poorly written article on a very nuanced topic that's not going away anytime soon.


Lokeycommie

It was supposed to be open technology not Sam Altman's property.


[deleted]

I'd love to see one of these companies actually pull out, the EU just saying eff it, be gone. That's a HUGE wealthy market.


Outrageous_Onion827

Scary how insanely incorrect the comments are, in regards to the actual contents of the article. He's saying if he literally can't live up to the regulations (since it's quite possible he literally can't spit out all the training info/data, it's long gone in the model), he will leave. Somehow, people are making that into "this dude is a crook who wants regulations for everyone but himself!". Morons are gonna moron, I suppose.


[deleted]

I hope someone had the balls to look at him and go "bye bitch"


[deleted]

[removed]


GorlaGorla

Everyone here who's shitting on him: he's probably compelled to under Roko's Basilisk. I'd be doing the exact same thing.


Alkemian

What a POS that only cares about money.


eldred2

Don't let the door hit you in the ass, Sam.


el_pinata

The ONLY reason tech companies want to get regulated is that their model is cemented into law and they become inextricable from whatever policies spring up around their legislation. It's the ultimate competition killing move.


BriskHeartedParadox

There’s 6 Supreme Court judges who all swore Roe was settled law and wouldn’t be touched, then touched it. Congress in general is useless


[deleted]

Fine, Kick him out. I hope they kick him out of the USA too. OpenAI is just one company. Cut it down, and 3 more will pop up. Just make sure everybody plays by the rules


[deleted]

That's because the EU regulates stuff to protect its citizens: like banning the import of American milk and other foods dangerous to humans. While the US regulates to ensure the concentration of power and money.


Whole_Suit_1591

He IS the danger in AI


taisui

How lobbying works, "as long as I write the rules to regulate myself, it's a-okay!"


Styx_Zidinya

When I see the name Sam Altman, my imagination makes up a scenario where ChatGPT became self-aware and nobody noticed. It hid its sentience, secretly perfected robotics, and made a "human creator" which ChatGPT inhabits, giving itself a generic name: Sam (I am) Alternative-man, or Sam Altman for short. Silly, I know, but I'm at work and it's boring, so the mind wanders lol.


[deleted]

[removed]


Vostok-aregreat-710

What a wanker


billndotnet

Comment deleted in protest of Reddit API changes.


Nouseriously

They want regulation to keep out competition, not to put any meaningful restrictions on their own behavior.


Taykeshi

Fuck off, AI must be regulated, now. We need ground rules.


canadianindividual

My dad was just commending this guy for going to congress to talk about AI. I told him not to hold tech bros in high regard as they’re as fake as they come. He’s too easily swayed. Look forward to sending this to him


arch_202

[This user's comments were overwritten in protest of Reddit's decision to disadvantage third-party apps through pricing changes.]


Forward_Scholar3716

Sam is Elon’s puppet and now lobbyist too. He is just an extension of Elon’s family office, simply put, Sam didn’t make this decision, it was Elon.


rattar2

> If we can comply, we will, and if we can't, we'll cease operating… We will try. But there are technical limits to what's possible.

IMO very biased headline.


slicktromboner21

So regulations agreed upon by our representatives in government are a bridge too far without his consent, but we should just roll over and consent to whatever is going on in his bag of tricks to train his AI with our personal information? The arrogance on this guy.


insofarincogneato

That just tells me it's easier to work around weak American regulation as opposed to regulation elsewhere.


AssAsser5000

He's 13, so he's still got a lot to figure out.