- cross-posted to:
- technology@lemmy.world
It turns out there’s a very clear reason for that. I have seen the extremely restrictive off-boarding agreement, containing nondisclosure and non-disparagement provisions, that former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.
Utterly insane.
Move to Europe, that NDA isn’t legal here, which makes the whole thing void.
First rule of OpenAI is: “What is OpenAI?”
Except… this sort of contract is no longer considered legal in the United States.
I’m really looking forward to the lawsuits, to be honest.
Edit: lol wow, derp. The article I linked is about non-competes, not NDAs. That said, companies often write NDAs that are legally questionable, and as was mentioned (and linked) further down the thread, the NLRB has ruled that NDAs that effectively force employees to broadly surrender their labor law rights are unenforceable.
I may be missing information, but I thought the only major change recently was that non-compete agreements were made effectively illegal, but I don’t believe there was anything that affected non-disclosure agreements and non-disparagement agreements.
They’re not talking about that, they’re talking about this from last year:
Sure major senior leaders are resigning, and, yes the guy looks like he just woke up in a dumpster, but you’ve got to understand every imaginary thing anyone is thinking that AI can do will, like, totally happen. Totally. Probably tomorrow!
This is the best summary I could come up with:
“Her,” tweeted OpenAI CEO Sam Altman, referencing the movie in which a man falls in love with an AI assistant voiced by Scarlett Johansson.
But the product release of ChatGPT 4o was quickly overshadowed by much bigger news out of OpenAI: the resignation of the company’s co-founder and chief scientist, Ilya Sutskever, who also led its superalignment team, as well as that of his co-team leader Jan Leike (who we put on the Future Perfect 50 list last year).
Sutskever publicly regretted his actions and backed Altman’s return, but he’s been mostly absent from the company since, even as other members of OpenAI’s policy, alignment, and safety teams have departed.
His resignation message was simply: “I resigned.” After several days of fervent speculation, he expanded on this on Friday morning, explaining that he was worried OpenAI had shifted away from a safety-focused culture.
All of this is highly ironic for a company that initially advertised itself as OpenAI — that is, as committed in its mission statements to building powerful systems in a transparent and accountable manner.
“Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems,” a recruitment page for Leike and Sutskever’s team at OpenAI states.
The original article contains 1,423 words, the summary contains 211 words. Saved 85%. I’m a bot and I’m open source!
Removed by mod
The labor law violations, probably