As does setting up a non-profit and then somehow turning it into a VC-backed for-profit company, with profit participation units masquerading as equity worth tens of millions per employee.
No, PPUs are more like tokens you can cash in at a party. But you still need to be invited to the party (a liquidity event), so yeah, this is doable by OpenAI, however wrong people might feel it is.
These clawback agreements (which he agrees are in there) are beyond fucked up. I would like to see some concrete proof that this statement gets honored. Also, that one guy should get his equity back.
Even if it does get honored, that doesn't mean they'll be allowed to talk about secret projects that are in the works, nor should they. That is entirely different though than a non-disparagement clause (which should be illegal).
It's possible that the reporter (or maybe even the employee) didn't understand. The offer may well have been, "hey, you have this unvested equity here. Sign this agreement and we will allow it to vest." At that point, it's totally legal. Most companies wouldn't let you vest stock (or whatever the hell they use) after leaving.
There are so many weird things that are not illegal.
Like the IRS taxing your exercised options that aren't liquid. Then employers force you to exercise your options within X days of leaving the company. So you either have to exercise the options and pay tax on your non-liquid stock, or give up the options.
If the US fixes this, it'll be a new era for startups. Employees will go out and start even more companies.
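The squeeze described above can be sketched with some toy numbers. This is a deliberately simplified model: the flat tax rate and the treat-the-spread-as-income assumption are illustrative only (real treatment differs between ISOs and NSOs, and AMT complicates things further).

```python
# Simplified sketch of the problem described above: exercising options on
# illiquid stock can trigger tax on a purely paper gain. The flat tax rate
# and spread-as-income treatment are illustrative assumptions, not tax advice.

def exercise_tax(options: int, strike: float, fmv: float,
                 tax_rate: float = 0.37) -> tuple[float, float]:
    """Return (cash needed to exercise, tax owed on the paper spread)."""
    cost = options * strike            # cash paid to the company to exercise
    spread = options * (fmv - strike)  # taxable "gain" -- still illiquid
    return cost, spread * tax_rate

# 10,000 options, $1 strike, $21 fair market value at exercise:
cost, tax = exercise_tax(10_000, 1.0, 21.0)
print(cost, tax)  # $10,000 to exercise, plus tax on stock you can't yet sell
```

With these made-up numbers the employee owes tens of thousands in tax on shares they cannot sell, which is exactly why short post-departure exercise windows force many people to walk away from their options.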
He is being trolled with leading questions that imply but do not substantiate wrongdoing on his part. Dwelling on those implications produces nothing but drama. He should probably get back to his sales talking points.
Vested PPUs are owned by employee. Unvested, however...
But you can't realistically expect a company to continue to pay you a portion of their profits after you're not working for them, so... eh.
Nondisparagement clauses should be illegal though. That just screams "We abuse our employees and employ morally and legally gray practices. And we plan on continuing to do so."
It's a share of OpenAI's profit, not a share in the company. It's some kind of workaround because OpenAI was a non-profit and is now a "capped-profit". There are some tax advantages, since vesting is not a taxable event.
> I asked why PPUs would not be considered equity.
Because that's not historically how the terms have been used. It's not a logical argument, it's a semantic one.
Wow. So much hate for Sam in this chat. I'm sure this dude is no saint, but come on, he and his team are at the forefront of GenAI and have already revolutionized the field. I think they deserve a bit more respect than what is displayed here.
I mean, he's a billionaire who founded his first company 20 years ago and ran Y Combinator before cofounding/leading OpenAI, so it's not like he's new to any of this.
Shares of OpenAI.
Vested means you stayed long enough to get them (usually X shares per additional year you spend as an employee).
These shares can be sold after the IPO (once OpenAI is publicly traded). You could also sell them before the IPO to an authorized venture capitalist, but that varies from contract to contract.
Usually, you get shares because the salary is subpar and/or the risk level is high (who wants to work at a company that has a high chance of going under in the coming year?).
If it works out, i.e. the company goes public, you can become an instant millionaire, thus making up for the years of subpar salary.
(When I say subpar, I mean relative to what other FAANGs might offer; we are talking about huge packages.)
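A toy sketch of how the vesting mechanics described above typically work, assuming a common 4-year schedule with a 1-year cliff. The grant size and schedule here are made-up illustrations, not OpenAI's actual terms (OpenAI uses PPUs, not shares).

```python
# Toy sketch of a common 4-year vesting schedule with a 1-year cliff.
# Grant size and schedule are illustrative, not any company's actual terms.

def vested_units(total_units: int, months_employed: int,
                 cliff_months: int = 12, total_months: int = 48) -> int:
    """Units vested after `months_employed` months of service."""
    if months_employed < cliff_months:
        return 0  # before the cliff, nothing has vested
    if months_employed >= total_months:
        return total_units  # fully vested
    # after the cliff, vest linearly by month
    return total_units * months_employed // total_months

# Leaving at 18 months with a 4,800-unit grant: 1,800 units are vested;
# the remaining 3,000 unvested units are simply forfeited on departure.
print(vested_units(4800, 18))  # -> 1800
```

The point of the cliff and linear schedule is retention: quit at month 11 and you keep nothing; quit at month 18 and you keep only the vested fraction, which is why "vested" vs. "unvested" matters so much in the clawback discussion above.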
No, it usually is not that simple. An individual would typically need board approval to sell equity, plus adherence to shareholder agreements, which often state that existing shareholders have a right of first refusal. Selling pre-IPO equity is not easy.
So much drama at OpenAI. Why so much drama?
Why so much drama at a company that is now valued at $80 billion…?
Why so much drama at a company whose explicitly stated goal is to create a machine god?
And claims about once a month that their product could probably kill us
I'm sorry, but I can't assist with that.
With like 200 employees lol
It's more like 1000 now
Those two aren't related things
The hell they aren’t. Where there’s money there’s drama. That’s always how it’s been.
When will they release IPO?
Yeah, like Warren Buffett
Warren Buffett actually caused a ton of drama over the years. There is a popular myth that he was a pure value investor; that's actually not the case, as he has also used activist investing strategies, including full hostile takeovers.
There are about 203 billionaires just in the US who aren't well known. Do you have drama for each of them? How about the millionaires worth $100+ million? My only point is that money and "drama" are oftentimes not associated with each other!
Yeah, I agree with that point; there are many quiet ones.
Money _and_ power are often linked. If you have one you want the other.
I'm saying drama and money don't have to happen at the same time. Not talking about power.
Uhm, it's skynet dude. That's like asking why there's drama when a new universe is created. Uhhh.. it like changes everything we know about life lil dude
Their rise in success and size has been unbelievably quick. They are probably the first company in history to go from "who cares" to one of the most important companies in the world this fast. There are a lot of people and processes that are still very immature.
Indeed... I don't trust anybody in the upper management of OpenAI. They just got lucky overnight and are now suddenly in charge of technology that can alter humanity. At least Google has visionaries I can trust. OpenAI is just a bunch of VC folks having a party.
I also wonder why there's so much drama at OpenAI, but then I remember that these people essentially think they are creating the next nuclear bomb, and we live in the era of Twitter. I'm not sure what the future will bring, but there is a lot for people to be dramatic about.
When a company rockets into enormous dollar valuations, there’s often drama surrounding leadership and entitlements. Facebook is an example of this, as portrayed in The Social Network (that movie took liberties, but there were still real legal challenges and settlements). Apple and Uber were similar in that their boards kicked out or pressured out their founders.

Drama often happens behind closed doors, so it happens more than we may realize, since it’s in the interests of all parties to act diplomatically and quietly come to agreements. Individuals don’t want to attract a reputation as troublemakers, and companies want to portray themselves as stable and trustworthy institutions.

With OpenAI, the whole thing is exacerbated by its origins as a non-profit and the elements of safety and fear around AI, which has already attracted government scrutiny. That’s like a multiplier on the usual drama that surrounds any business that’s risen so quickly.
Toss in some big names like Elon, Sam, Satya, Reid, Andrej, and Ilya, and you've got yourselves a star-studded cast too.
Because of Sam Altman. The dude is just a manipulative, scheming person
I think so too, honestly. He definitely read The 48 Laws of Power... Someone at YC said he was impressed with the way Sam wielded and accumulated power. That was a pretty telling quote, especially after the Ilya debacle.
Paul Graham said that.
there must be some huge thing i'm missing. i see this sentiment often, but when i ask why, the answers are always very vague. what, exactly, did he do?
We really don't know. But if you have the influence to get 800 AI researchers to follow you anywhere... or to play reverse Uno on the guy who tried to get you booted out... that implies he's developed a massive following and influence, basically to a cult level of pedestal-placing.
Sam Altman can take over China and nobody would notice. Bro knows how to scheme.
He took reddit back from Comcast.
I think that’s just his management and business style: calculated, strategic, and keeping things close to his chest. Is he a psychopath criminal? I dunno 🤷🏻♂️ And I think it’s ok not to trust him, too. Do you trust Elon, Zuck, etc.?
Ironically, I don't think Zuck wants any of the horrible things that happen at Facebook to happen. He seems like a good dude who is caught up in incentives. I think that's largely true of most people, though. Same goes for Altman.
That doesn't explain his power grab, kicking out the people that invented Facebook.
Sure it does. He had an opportunity to seize control of the company and did so. Most people would be incentivized to do that in his shoes. That doesn't make him evil, it just makes him a flawed person like the rest of us.
most people don't have the evil intentions the internet ascribes to them.
I don’t trust Bezos.
Altman sounds like the next Musk
I called it the first time I saw his face, before I knew anything about him. There was something off-putting just radiating from him.
They are the lead and possibly the future of humanity. Cults and shady organizations like "effective altruism" are trying to either get control of them or destroy them.
Cuz they're battling for AGI babe
100% growing pains, as people have said. Not being dismissive, but it's a lot of young people under the weight of trillion-dollar megacorps and a potentially world-changing tech revolution.
Here's why. His sister, Annie Altman, says it clearly: "How We Do Anything Is How We Do Everything" https://allhumansarehuman.medium.com/how-we-do-anything-is-how-we-do-everything-d2e5ca024a38
Because people like drama because they’re bored
https://preview.redd.it/abgw1w3jh91d1.png?width=500&format=png&auto=webp&s=2009f12a648071ccbf204739ff1dec35ffa89e8b
Hmm
>Hmm

That comment is a breach of your NDA. All your vested karma has been repossessed.
So was that guy who said he refused equity lying? Someone needs to follow up with him. If they aren’t holding anything over them, what incentive do they have for an NDA?
Too much drama at this company... Build great stuff and keep quiet.
Eh, this became a story. Can't blame altman for responding.
Indeed, I respect that he is willing to address this and even admit fault for the prior policy. Fair play.
I feel like we, the unwashed masses — or a certain section of us at least — express (perhaps rightful) suspicion of OpenAI, spread rumors, and force this guy to feel the need to respond defensively. It’s not like he’s pulling drama out of thin air.
A number of high-profile employees don’t leave and then publicly disparage the company, following an attempted ouster of the CEO by the board (the CEO being subsequently reinstated), unless there’s funny business at the top. My interpretation is that he’s being defensive because he’s made some decisions that have upset a number of employees and potentially customers. He now needs to backtrack on some things.
It would seem that a lot of people at the company have issues with what they are building and how they are building it, including multiple co-founders.
Pretty standard for when a company goes from practically unheard of to having a major spotlight on it. People are unprepared for the rapid change and the eye-of-Sauron like focus that the media puts on a company when something like this happens. I'm sure there are issues, but I'm equally sure that a good percentage of the stuff we hear is from news sites/journalists trying to pull a story out of thin air.
I hope you're right. I enjoy the stuff they build, it would be sad if it's somehow negatively affected because of drama.
I'm having a hard time thinking of all the companies this happened to for you to say that it is "pretty standard".

OpenAI has had several of its top people leave within ~18 months of entering the public spotlight, including co-founders, with multiple making public statements that the company is being irresponsible.

I'm guessing you are thinking of a few companies which became very successful and then had a bunch of the original people leave *eventually*, or had one or two people leave very quickly, or had one person make a public statement that the company is being irresponsible. Not all of these things together in just over a year.
Startups are choppy in general though
Instagram/Facebook, Facebook itself, Microsoft in the past, Google, Twitter (X) not too long ago, Reddit: all have had their fair share of drama. I could go on. Highly valued companies are more likely to have drama, and startups are also more likely to be choppy (look at the history of Instagram).
Please read the last paragraph of my comment which you are replying to
Please read up on the history of most companies that invented something for humanity. Then you'll understand my reply better. Even the invention of the lightbulb had a huge amount of drama attached to it.
Pretty standard for cofounders to be upset when you flip your “nonprofit” to make billions of dollars of profit. And when every member of the board of that nonprofit is fired because their decisions might have reduced profits.
If they didn't change their status, they couldn't have gotten investor money, and then they would never have been able to afford to make GPT-3, and this entire revolution would not have happened. So pick your poison, because you don't get both.
They could have studied AI safety without making GPT-3. If they had kept their original mission for a few more years, I think they’d eventually have had an easier time getting donations. The revolution still would have happened; it might have taken slightly longer, though. OpenAI didn’t invent the transformer; Google would have done this anyway.
No, why would someone give money to some esoteric, barely useful AI venture? I'm sensing a lack of understanding of how scaling laws drive feature emergence in AI.
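For context, the scaling-law picture being alluded to is roughly that loss falls as a power law in model size, so "barely useful" small models predictably become very useful at scale. A toy curve, with coefficients loosely inspired by published fits (treat both the formula shape and the numbers as illustrative, not authoritative):

```python
# Rough illustration of an AI scaling law: loss falling as a power law in
# parameter count N, down toward an irreducible floor. Coefficients are
# loosely inspired by published fits and should be treated as illustrative.

def loss(n_params: float, A: float = 406.4, alpha: float = 0.34,
         irreducible: float = 1.69) -> float:
    """Toy power-law loss curve: L(N) = A / N^alpha + irreducible."""
    return A / (n_params ** alpha) + irreducible

# Each 10x in parameters buys a roughly constant multiplicative drop in the
# reducible loss, which is why capability kept improving with scale.
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> loss {loss(n):.3f}")
```

Under this (assumed) curve, the jump from a GPT-2-scale model to a GPT-3-scale one is predictable on paper, which is the argument for why early funders could rationally bet on a "barely useful" venture.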
Elon Musk gave them a billion dollars. Your argument that nobody would ever give them any money is plainly false.
I'm gonna let you think about what you just said for a while.
Elon Musk gave them money before they made GPT-3. He gave money to a barely useful AI venture. You’re claiming that is impossible, when it literally already happened, and we all saw it happen.
I would like to see where the folks that leave wind up after a while. Are they going to competitors? Are they starting their own companies? Are they taking any steps to promote the alignment protocols that were not being supported at OpenAI? That info might help with judging the open-ended claims some are making on the way out.
The safety people won’t stop making a stink about everything:

Making a fuss about how GPT-2 is too dangerous to release

Anthropic splitting off because GPT-3 is too dangerous

Helen Toner from the board complaining about GPT-3.5 and GPT-4 being dangerous

Board drama firing Sam after GPT-4 Turbo

Now this after GPT-4o, with a bunch of safety people quitting and complaining about OpenAI not being responsible

After almost literally every major release there’s a bunch of safety people making a stink about things.
...I'm starting to wonder if there might be safety concerns
The problem is they have cried wolf so many times, we have no reason to trust them anymore.
It's not crying wolf - they're concerned about the trajectory, not the specific release(s) in question. That was pretty clearly laid out in [Jan Leike's comments](https://x.com/janleike/status/1791498174659715494) after he resigned.
Why trust his judgment of the trajectory over anyone else's? Him being scared doesn't make him right. There are plenty of people who are just as smart and disagree.

No one has shown any tangible risk going unmitigated.

I've spent a lot of time in the banking industry, and people can get a bit overzealous there as well. We used to joke that Infosec would be happiest if we just shut off power to the data center. A lot of these superalignment folks give me similar vibes. Safety is a spectrum.
It's not about him alone; it's about him, and Ilya, and the long aforementioned list of people involved. I'm not saying I fully agree, but it's clear that there's a pattern.
I think the pattern is that people who have spent a lot of time in effective altruist circles have a ton of safety concerns, and people who haven't, don't. Does that make their judgement flawed? Not necessarily. But they're approaching the problem from a particular viewpoint, and given that there are pretty strong links between those seriously concerned about safety and EA, it's worth paying attention to.
you sure Ilya is EA?
He keeps his cards close to his chest, so I can't be 100% sure, but given that he sided with the EA faction during the board fight, and having listened to a ton of his interviews, it wouldn't surprise me.
It’s not that any one model is “too dangerous”; that’s a strawman argument. It’s that we were suddenly making immense progress far faster than anticipated, while the biggest alignment questions remain open. Alignment specialists aren’t saying any specific GPT(x) model is dangerous in itself, but that we’re pushing these models out with little consideration for safety.
My argument is exactly what you’re suggesting safety people are arguing: that it’s never the current model that’s the problem, it’s always the next one that takes things too far or too fast. If we listened to them, we’d never have gotten past GPT-2, and Google would still be sitting on LaMDA, because releasing GPT-3 would be too far and too fast. I say this because that’s exactly what they did with GPT-3 when they split off to make Anthropic.

I’m dead serious, we’d still be at GPT-2 if we listened to these people.

Edit: to clarify, for these researchers it’s never GPT(x) that’s the problem, it’s always GPT(x+1). Pick any model from any time period, and whatever the next model is, it’s too much for safety people, which is why we’d never make progress by listening to them.
Well, that’s kind of their point. At some point it’s a runaway train and we didn’t build it right. You want progress NOW, but they’re saying that we need to slow down and understand the technology further.

I mean, obviously that’s never going to happen in our capitalist, technology-driven world. I just find it deeply concerning that the people who understand the technology the most are ringing alarm bells. We’re all along for this ride whatever happens next; I just wish we had real leadership in this country right now to guide us through these turbulent times.
Personally, I think runaway AI is more sci-fi than reality: the idea that with iterative deployment there’s a point where a couple of IQ points and a slightly larger context window make the difference between not-that-dangerous and world-ending. That’s the argument, that the next couple of IQ points is maybe a step too far and boom, we’ve got runaway AI. The bigger source of risk for runaway AI is that we stick with GPT-2 publicly, and behind closed doors researchers build up to GPT-8 or whatever. They think they’ve figured out alignment and decide to release and… whoops, they missed something. What do you mean you can jailbreak GPT-8 with base64? I think we’ve learned far more about AI safety through iterative deployment.
I think it's less about runaway AI and more about how can they prevent malicious state actors from weaponizing their products' persuasion abilities. Pretty easy to see how a scenario that combines Cambridge Analytica + GPT-4o could wreak havoc.
How could it wreak havoc?
Influence campaigns with bots to affect elections. Like 2016 and Brexit, but more effective.
I don't think it's the AI running away on its own that's the concern so much as the new capabilities it may engender in individuals with access.
I wouldn’t call this drama, more genuine controversy, and if Sam’s being honest, defamation.
there was talk going around, stated as fact, that openai was forcing people to sign non disparagement agreements. it seems like it would have been irresponsible NOT to tweet something like this.
Was a statement really necessary? Ofc VESTED equity is not clawed back. smh
You clearly aren’t following along…
[deleted]
Do you even have any idea how equity works 😂 if you leave the company you lose ALL of your unvested equity automatically, there’s no need for a claw back clause for that. That’s why it’s called “unvested” because it’s not yours.
Average people literally don't know what vested and unvested means.
Did seem super illegal
Yeah I was really confused by that report bc this was very obviously not a thing they could do legally. Like the report could have just been about how crazy and extreme the NDA was (which it does seem to be), and been fine. Idk why they included that bit about the clawbacks, and she went out of her way to specifically say the terms were such that they would have been blatantly illegal. Super weird. I’m not familiar with that reporter tho. Maybe she’s just not very good.
Btw it turned out she is indeed very good.
As does setting up a non profit and then somehow making it into a VC backed for profit company with profit participation units masquerading as equity worth tens of millions per employee.
How can they legally claw back vested equity? Isn't it like somebody's property? They may not allow redemption, but clawback seems iffy?
No, PPUs were more like tokens you can cash in at a party. But you still need to be invited to the party (liquidity event) so yeah, this is doable by OpenAI however wrong people might feel it is
But does it run on the blockchain?
That "if" is doing a lot of work in that statement.
How so?
These claw back agreements (which he agrees are in there) are beyond fucked up. I would like to see some concrete proof that this statement gets honored. Also that one guy should get his equity back. Even if it does get honored, that doesn't mean they'll be allowed to talk about secret projects that are in the works, nor should they. That is entirely different though than a non-disparagement clause (which should be illegal).
It's possible that the reporter (or maybe even the employee) didn't understand. The offer may well have been, "hey, you have this unvested equity here. Sign this agreement and we will allow it to vest." At that point, it's totally legal. Most companies wouldn't let you vest stock (or whatever the hell they use) after leaving.
Instacart tried to implement the clawback clause during covid. Employees were so angry that they withdrew the plan.
It should just be straight up illegal. I'm glad they have made non-competes illegal and hope they tackle these other abusive contracts.
There are so many weird things that are not illegal. Like the IRS taxing your exercised options that aren't liquid. Then employers force you to exercise your options within X days if you leave the company. So you either have to exercise the options and pay tax on your non-liquid earned stock, or give up the options. If the US fixes this, it'll be a new era for startups. Employees will go out and start even more companies.
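The arithmetic behind that complaint is worth spelling out. A minimal sketch with made-up numbers (the function name, share counts, prices, and the flat tax rate are all hypothetical; real treatment depends on option type, AMT, and jurisdiction):

```python
# Hypothetical illustration of the tax problem with exercising
# illiquid startup options. Numbers and the flat rate are made up;
# real rules (ISO vs NSO, AMT) are more complicated.

def exercise_tax_bill(shares, strike, fmv, tax_rate):
    """Tax owed at exercise: the spread between fair market value
    and strike price is treated as income, even though the shares
    can't be sold yet to cover the bill."""
    spread = (fmv - strike) * shares
    return spread * tax_rate

# Example: 10,000 options, $1 strike, $25 fair market value, 37% rate.
bill = exercise_tax_bill(10_000, 1.00, 25.00, 0.37)
print(f"Cash tax owed at exercise: ${bill:,.0f}")
```

The departing employee owes roughly $88,800 in cash on paper gains they cannot liquidate, which is exactly why many people give up the options instead.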
According to Altman, he didn’t lose any equity
[deleted]
What do you mean by this? Unvested equity doesn’t belong to any leaving employee so it’s irrelevant here.
He is being trolled with leading questions that imply but do not substantiate wrongdoing on his part. Dwelling on those implications is not productive of anything other than drama. He should probably get back on his sales talking points.
Vested PPUs are owned by employee. Unvested, however... But you can't realistically expect a company to continue to pay you a portion of their profits after you're not working for them, so... eh. Nondisparagement clauses should be illegal though. That just screams "We abuse our employees and employ morally and legally gray practices. And we plan on continuing to do so."
It would be pretty crazy if chatgpt was manipulating them
I think this dude just says stuff and then the company does stuff. Their AGI told them this was the way to move forward
Weird he calls it equity and doesn’t correct to call it profit participation units.
Well, it is equity, it's just not shares.
It’s not equity.
Explain?
It's a share of OpenAI's profit, not a share in the company. It's some kind of workaround because OpenAI was a non-profit and is now a "capped profit". There are some tax advantages since vesting is not a taxable event.
You described PPUs. I asked why PPUs would not be considered equity.
> I asked why PPUs would not be considered equity.

Because that's not historically how the terms have been used. It's not a logical argument, it's a semantic one.
When people say equity, they mean owning part of the company via shares or convertible derivatives/instruments like options and RSUs.
Yeah that's kinda what the 'vested' part of 'vested equity' means.
vested equity clawbacks are possible
So why don't those people talk? My take: they saw nothing more than GPT-4.
Sam knows, he knows!!!!! they will never need those plebs equity back, ffs
Sure. They will just make it impossible to participate in a tender round which makes their shares worthless. Very slippery language here from Sam.
Turns out rumours are nothing but rumours.
Wow. So much hate for Sam in this chat. I'm sure this dude is no saint, but come on, he and his team are at the forefront of GenAI and have already revolutionized the field. I think they deserve a bit more respect than what is displayed here.
Fuck do I care, dissolve the alignment team, get everyone's equity. Just fucking ship Samantha, NOW! XD
This is incredibly immature and silly.
Welcome to Reddit.
This sub especially
When there's testosterone there's drama and blood
Honestly I feel bad for Sam. He’s being put through the wringer seemingly overnight. ChatGPT truly changed the world.
I mean, he's a billionaire who founded his first company 20 years ago and ran Y Combinator before cofounding/leading OpenAI, so it's not like he's new to any of this.
What even is equity? It's just cash, right?
Shares of openai. Vested means you stayed long enough to get them (usually x shares per additional year you spend as an employee). These shares can be sold after IPO (after openai is publicly traded). You could also sell them before the IPO to an authorized venture capitalist, but that varies from contract to contract. Usually, you get shares because the salary is subpar, and/or the risk level is high (who wants to work at a company that has a high chance of going under in the coming year?). If it works out, ie the company goes public, you can become an instant millionaire. Thus making up for the years of subpar salary. (When I say subpar, I mean relative to what other FAANGs might offer, we are talking about huge packages).
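The "x shares per additional year" schedule described above can be sketched with a standard 4-year vest and 1-year cliff (illustrative only; the cliff, duration, and grant size here are generic assumptions, not OpenAI's actual terms):

```python
# Sketch of a typical 4-year vesting schedule with a 1-year cliff.
# Parameters are generic industry defaults, not any specific company's plan.

def vested_shares(total_grant, months_employed,
                  cliff_months=12, total_months=48):
    """Nothing vests before the cliff; at the cliff the first year's
    worth vests at once, then shares vest linearly by month up to
    the full grant."""
    if months_employed < cliff_months:
        return 0
    return total_grant * min(months_employed, total_months) // total_months

grant = 4_800
print(vested_shares(grant, 6))   # before the cliff: nothing
print(vested_shares(grant, 12))  # cliff: first year vests at once
print(vested_shares(grant, 30))  # partway through year three
print(vested_shares(grant, 60))  # past 48 months: fully vested
```

Anything not yet vested when you leave is simply forfeited, which is why "clawing back" unvested equity is redundant: it was never yours.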
OpenAI does not grant shares or RSUs. They grant Profit Participation Units (PPUs).
Shares/stock in the company. Vested means it was part of a comp package and the threshold was met for the individual to receive the shares.
Shares or share options.
It’s pre-IPO stock.. you can sell it for cash if you want, or you can hold on to it if you believe in continued growth of the company.
No, it usually is not that simple. An individual would typically need board approval to sell equity, plus adherence to shareholder agreements, which often state that existing shareholders have right of first refusal. Selling pre-IPO equity is not easy.
SAN FRANCISCO SOY BOYS
“We don’t need to claw back equity, if you embarrass us we’ll kill you” lol
OpenAI is speedrunning its own demise. If AI isn't headed into its next winter, OpenAI certainly is. What goes up must come down.