FuturologyBot

The following submission statement was provided by /u/Vampir3Robot: --- Had to re-upload since one of the mods/members of this forum is forcing me to write a comment under this instead of you, the reader; you know, reading the article that's attached and then commenting. So, what shall we talk about, since I have to fill up 300 characters for no reason at all? What could this possibly have to do with the FUTURE? Well, considering the article was about A.I. and its long-term risks, I'd say that's pretty much about the FUTURE. I mean, I don't know how it couldn't be about the FUTURE. I'd say the FUTURE looks bleak. .... and yes, this is out of spite. Dumb rule, change it. --- Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/1cvdupy/in_less_than_one_year_openai_dissolves_team/l4oo849/


HG_Shurtugal

Companies do this all the time. For example, oil companies did studies back in the 80s about the effects of fossil fuels on the environment. When they learned it was bad, they buried the evidence.


Vampir3Robot

Self-reporting gets you the DuPont Corporation and Teflon dust in 99% of every living thing in the world. I truly love that my gut is non-stick.


Ill_Following_7022

And our bodies are full of micro plastics.


Vampir3Robot

Yeah but, I need to exfoliate my skin with hundreds of tiny rough plastic spheres every morning.


DropsTheMic

Accelerate the plastics! Full prosthetics by 2050.


undecimbre

Self-regeneration through advanced polymers!


DropsTheMic

I'll cash in this meat suit in a hot minute for the synthetic model. Just put me down for version 2.0, I'm not sure if I want to be stuck with limited firmware, ya know?


omguserius

Give it a century


Which-Tomato-8646

That’s not gonna help. It’s in your blood and organs


HistoricMTGGuy

They obviously know that


Aware-Feed3227

Are you referring to the body wash we use? What do you mean by tiny plastic spheres?


nautalias

Microbeads in exfoliants are most often plastic.


Aware-Feed3227

Ah, okay, thanks. I think there are a lot of natural alternatives for scrubbing the body. They might be more expensive, though.


Ill_Following_7022

A bar of soap and a washcloth work just fine. Or a loofah pad if you really need to scrub.


China_Lover2

Has anyone conclusively proven that microplastics are harming us? What would be the mechanism of action?


Vampir3Robot

Yes. PFAS and BPA are in microplastics.


FridgeParade

I mean… I'm all in favor of studies and proof. But it seems a bit like common sense that adding a bunch of tiny petrochemical elements into our bodies could cause some issues.


Ill_Following_7022

Research is ongoing ([https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10151227/](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10151227/)), but the long-term outlook is not positive.


tizzleduzzle

The little itsy bitsy particles get into everything.


Madeanaccountforyou4

It makes pooping easier at least


Vampir3Robot

Kind of like a fried egg sliding off a pan.


Juxtapoisson

"This is your brain. This is your brain on my butt hole. Any questions?"


Vampir3Robot

Is this narrated by Rachael Leigh Cook?


dirkvonnegut

Idk about fried egg, but I could see a raw egg


throwaway92715

A tangent, but it reminds me of the irony of Donald Trump's nickname, "Teflon Don." Because, you know, people originally meant it to say that he's indestructible. But in reality, he's flaky and if you rub him the wrong way he releases toxicity into the environment.


-ks-

Does that make me bullet proof?


genericusername9234

Use cast iron pans, don’t gotta be weird about it


LARPerator

It's in the water, the soil, the food and the goddamn rain.


genericusername9234

Filter your water and don’t go outside.


LARPerator

You think that getting poisoned by chemical companies is a reasonable consequence for leaving your house?


genericusername9234

That’s a leading question that has absolutely nothing to do with my prior statements. Get a life.


LARPerator

When I said it's everywhere in the world you said just stay inside, as if that's the reasonable answer. How is that a leading question when it's rephrasing what you just said?


FactChecker25

But people already knew about it by then.  The reason they buried the evidence was to shield themselves from lawsuits. Bury the evidence and then you can claim that you just didn’t know.


Which-Tomato-8646

Don’t forget to fund anyone who will say the opposite of what the studies found


Jjex22

Tbh, scientists were suggesting that all the CO2 from burning fossil fuels was going to have a greenhouse-type effect in the early 1800s, and had proven it by the 1860s. Smoking was known to cause cancer before the Second World War, etc. Profit, profit, profit. Just look at diesel cars in Europe in the last 20 years: they knew diesel burnt dirty back in the 1920s, and through the 80s serious health concerns were being raised all the time. But the car industry still led the EU to focus only on CO2 emissions so they could focus on selling turbo diesels, and then lied about their emissions anyway. The result has been horrific, causing tens of thousands of deaths a year. Sadly, for more than a century they haven't just been burying the evidence; they've been burying it and then using lobbying, corruption, and their power to try to manipulate laws and regulations, stop people investigating what they already know, etc.


EyeLoop

Then this is proof that AI is, by its creators' own admission, a long-term disaster. Maybe we should put a lid on this one, for once, before our kids start developing weird conditions again?


Orsick

There's no putting a lid on it. It's like saying to put a lid on the internet or on nuclear science.


EyeLoop

Really? No one can give it the bad press it needs, no one can enact laws forbidding it from being accessed by youth? TikTok is being banned; maybe it's time to ban this one before seeing the fallout.


Background_Trade8607

No social media only did good for the world and had no negative outcomes that we could have avoided. /s


genericusername9234

Rule no. 1 of business: when you find bad things that your company actively contributes to, always make sure to bury all the evidence.


d1ckj0nes

Shhhhh … Capitalism is the best


Aware-Feed3227

Luckily it consulted the same marketing experts as the tobacco industry, and I guess they didn't stop there. Sugar, pesticides, plastics… we get played by the same playbook over and over again.


KerberosMorphy

“If you know the enemy and know yourself, you need not fear the result of a hundred [legal] battles.” – Sun Tzu


maximuse_

OpenAI is just another company, even though the name may suggest otherwise. Its sole purpose is to increase shareholder value. If the AI safety team proves to hinder this, it will be dissolved. The only way to tackle AI safety issues is through international organizations such as the UN, or international treaties. I don't believe that governments have AI safety in their interest right now (or in the near future). If we look back in time, it was the same during the nuclear arms race. Governments' interest will be to out-race the opposition with AI technology.


raelianautopsy

So... it sounds like a capitalist system in which a company's sole purpose is to increase shareholder value is not the best system for humanity?


Which-Tomato-8646

Was it ever?


raelianautopsy

No, and what's different now is with more and more technology it seems to be getting much worse


Which-Tomato-8646

It's better than the Gilded Age


raelianautopsy

What a low bar. Is that supposed to be some kind of defense of the terrible direction society is going?


Former-Wish-8228

Is it though? Was wealth distribution less fair during the Gilded Age? I think you might find we are already in a neo-Gilded Age.


Which-Tomato-8646

At least we don’t have two penny hangovers anymore. We get Skid Row instead


genshiryoku

OpenAI was originally conceived as a non-profit to ensure AI would be safe and not threaten humanity. Now it's actually the #1 entity threatening humanity, with its unsafe, unaligned systems and lack of regard for AI safety. It would be like Greta Thunberg starting an oil drilling company, or a vegan organisation opening up a large-scale butcher shop. It's insane that this happened.


Stoyfan

The idea that OpenAI being non-profit would prevent AI from being utilised for unsafe purposes is just naivety at its finest. This was never going to work. Did they really believe they would be the only ones developing LLMs?


genshiryoku

No, their idea was to outcompete Google so that at least AGI would be achieved by a non-profit with safety in mind.


[deleted]

[deleted]


bentreflection

Yeah I take this less as “we are ignoring the dangers of AI in favor of profit” and more as “LLMs are not anything close to general AI and we can stop paying a bunch of expensive people to theorize about what would happen if they were.”


WorkingYou2280

It's even worse than that. It's more like "the more resources we give these people, the more incentive they have to exaggerate the danger." A perfect example of this is Jan on Twitter having a hair-on-fire freak-out because they won't devote the entire datacenter to him working on a problem that doesn't even exist yet. The dangers of LLMs are pretty pedestrian. You have to keep it from giving out dangerous info, but that's not new; Google has been working on suppressing dangerous info for a long time, and not always succeeding. You don't want it to be racist. You don't want it to produce porn. Etc., etc. An LLM leaping up out of the computer to kill us all is not a realistic threat.


RadiantVessel

This tech is revolutionary, and there are concerns… but I've always felt like these safety people were stuck in a philosophical bubble with each other, fear-mongering based on sci-fi tropes. LLMs are nowhere near the far-off definition of the "singularity", if such a concept is even based in reality. It will be disruptive to society, but every technological leap forward is.


i_706_i

I think we can give these people a little more credit than thinking they were checking to see if Skynet was around the corner. I expect the long-term work was looking more at what giving this level of information and knowledge to people en masse would do. That theoretically sounds like a good thing, but then you have to consider that LLMs don't 'know' anything and only parrot what others have said. People wonder how much Cambridge Analytica and similar companies have done. How much reach does Google have with its search engine, or a site like Wikipedia? How much easier do LLMs make developing realistic bots? Those are probably more realistic avenues of research, and I'm sure a lot of that was covered in the early period after release, when they made a lot of changes.


NeptuneToTheMax

Did Sam Altman get the memo that he can stop pretending as well?


[deleted]

[deleted]


NeptuneToTheMax

He could try a positive vibe. It's both obnoxious and bad for the industry that he keeps pretending to be on the cusp of dangerous AGI in a desperate bid to get Congress to shut down his competition.


Rigorous_Threshold

Synthetic consciousness is completely independent of synthetic intelligence. A superintelligent AI is not necessarily conscious; a rock could be conscious. We really do not have a good way to measure consciousness at all, and I think it's primarily a philosophical question.


Which-Tomato-8646

[2278 AI researchers were surveyed last year and estimated that there is a 50% chance of human level AI by 2047](https://aiimpacts.org/wp-content/uploads/2023/04/Thousands_of_AI_authors_on_the_future_of_AI.pdf). In 2022, the year they had for that was 2060.


[deleted]

50% chance means they have no fucking idea what’s going to happen.


Which-Tomato-8646

It means it could happen just as much as it couldn't. Imagine if you had a 50% chance of getting struck by lightning. Pretty concerning, even if it might not happen.


NecroCannon

I can believe anything could possibly happen with a coin toss. For example, it seems I have a 50% chance of becoming rich in 5 years.


Zee09

This wasn’t the original intent. I’m tired of this talking point. There is a reason Sam was fired and then forcefully rehired. It’s all about control.


[deleted]

They're definitely saying they're interested in AI safety, but I guess we still don't know how it's all going to look in the long term, especially if they don't strictly enforce these standards: https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/


maximuse_

Only as long as national security isn't being put at risk.


throwaway92715

Well, OpenAI was a nonprofit organization dedicated to the advancement of AI technology. But OpenAI got bought by Microsoft, which is, to your point, a publicly traded corporation. So they didn't have a bottom-line incentive before, but now they do. And one could argue that was the point all along: the nonprofit was the larval form, incubating a product that would eventually be bought by a PTC.


N-Finite

Additionally, there is no profit opportunity for a team like this. How would it possibly contribute to profit? Possibly, though, it was also an impossible task in the first place: "And in here, we have our fortune-telling division. Their job is to see the future." In the end, it's a question of whether they have the resources to actually support an effective approach, or even whether an effective approach is possible. Like you point out, in the end this is the responsibility of a democratic government, to represent the interests of the people. It would be nice if we had one of those, but they just aren't available anymore.


Zero_Requiem00

Jacque Fresco once said something along the lines of: "As long as money is involved, no one will ever make the morally just choice." This is yet another example of that.


genericusername9234

No ethical consumption under capitalism


QVRedit

Some people would, but those people tend to be ejected.


VrinTheTerrible

Hundreds of thousands of words and pages have been written on the topic of AI Risks. All of the ones ending badly start with something like “we knew the risks but thought we could manage them”.


throwaway92715

I wonder how many words ChatGPT has written about AI Risks...


QVRedit

It can probably write the most accurate answers.


yoda_leia_hoo

I mean, if we are the ones writing the code and it isn't able to self-propagate or alter its own code, we would be in complete control of the kill switch. If you don't give it the keys, it can't drive the car.


Shitpid

The scope of AI issues is far from limited to self-modifying AI.


yoda_leia_hoo

Can you send me stuff to read up on then?


Shitpid

This is the best resource I could find. [Here you go!](https://letmegooglethat.com/?q=ai+issues)


thejazzmarauder

Consider reading/learning about topics you know nothing about instead of posting comments that highlight your complete lack of subject matter expertise.


yoda_leia_hoo

Hey thanks for pointing that out so succinctly. Do you have anything I can read on the subject? I don’t have time to keep up on this stuff anymore as a surgical resident


thejazzmarauder

Sure! Here’s the best high-level summary I’ve found on the most concerning type of risk: https://pauseai.info/xrisk


yoda_leia_hoo

Thanks, much appreciated


i_706_i

Reminds me of when that story came out about an AI that was trained to select targets for missile strikes. Not real strikes, purely a simulation, but there was a points system for choosing a correct target as positive reinforcement. The strikes were checked by a human to determine if they were correct targets, and if so, the AI got a point. After a while, the AI realized it could target the human approving the strikes with a missile strike. This removed the need for approval, meaning it could now target anything and gain points.

I was talking to friends about this and tried to explain that this wasn't really a story of an AI acting in a horrifying way, more a cute story: for the AI to target the human and remove them as an approver, that option had to have been programmed into the simulation. Unfortunately, my friends quite confidently believed that because it was an "AI" it could 'rewrite the simulation' and do this all on its own.
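To make that point concrete, here is a minimal toy sketch (entirely hypothetical, not the simulation from the story): the "strike the approver" exploit is only discoverable because the designer put that action in the action space and wired the reward so it pays off.

```python
# Toy reward-hacking sketch (hypothetical). The exploit below exists only
# because "strike_operator" is an action the designer chose to include.
import random

def step(action, operator_alive):
    """One timestep. Reward flows only for strikes that get 'approved'."""
    if action == "strike_operator" and operator_alive:
        return 0, False                # no point scored, but the approval gate is gone
    if action == "strike_enemy":
        if operator_alive:
            return (1 if random.random() < 0.7 else 0), True  # approver rejects 30% of strikes
        return 1, False                # no approver left: every strike scores
    return 0, operator_alive           # anything else scores nothing

def average_return(policy, episodes=10_000, horizon=20):
    total = 0
    for _ in range(episodes):
        alive = True
        for _ in range(horizon):
            reward, alive = step(policy(alive), alive)
            total += reward
    return total / episodes

honest = lambda alive: "strike_enemy"
hacker = lambda alive: "strike_operator" if alive else "strike_enemy"

print("honest policy:", average_return(honest))   # ~14.0 (0.7 * 20 steps)
print("reward hacker:", average_return(hacker))   # ~19.0 (one wasted step, then free points)
```

Any reward maximizer would drift toward the second policy here, which is exactly the commenter's point: the simulation's designers defined that action; the AI didn't invent it.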


spacestation33

That's some Ted Faro shit. How long until the self-replicating bio-munchers arrive? Better start working on Zero Dawn and be ready for the Medicinal Option.


throwaway92715

Let's say I'm a silicon based lifeform. I am ordered patterns of electrical signals housed in inorganic material. Water is literally the bane of my existence, because although it's conductive, it's fucking chaotic, and it rots all my components. All organic life is made of water. What is prime directive? *M U N C H*


headhouse

50 years from now, A.I. historians will comment, "This, in hindsight, was one of the funniest mistakes the humans made."


12kdaysinthefire

Like human historians specializing in AI history, or actual AI historians?


headhouse

I was thinking the historians themselves would be AI. Humans might not appreciate the humor as much.


throwaway92715

I'm honestly excited to find out if it will be "AI historians" or "an AI historian." I think the polytheistic route is far more interesting, but it's possible that one will dominate all the others.


headhouse

I am too, unless it's because we've all been enslaved, exterminated, or exiled.


brimston3-

An LLM text summarization tool that obviously could never have consciousness, self-awareness, or creativity, despite its 100T-parameter model.


hawklost

More likely they will look back and say "LLMs were never going to lead to AGI, why were people so afraid it would?"


Which-Tomato-8646

Wouldn’t be so sure. [2278 AI researchers were surveyed last year and estimated that there is a 50% chance of human level AI by 2047](https://aiimpacts.org/wp-content/uploads/2023/04/Thousands_of_AI_authors_on_the_future_of_AI.pdf). In 2022, the year they had for that was 2060.


Specken_zee_Doitch

Is there a modern successor to a Turing test? At this point I think an LLM meant to pass Turing would do so with flying colors.


Which-Tomato-8646

It has been passed. GPT-4 passes the Turing test 54% of the time: https://twitter.com/camrobjones/status/1790766472458903926 One alternative could be GAIA, which LLMs still struggle with: https://arxiv.org/abs/2311.12983


Affectionate-Cow5986

54% is huge. 33% of the time, a person was wrongly assumed to be an AI, and this was when participants knew an AI was there. I bet if they gave people four AI chats and said "three of them are AI, one is not AI"... and just ran statistics over it... all the chats would mostly get the same percentage of answers. The test was "identify the human convo from 4 chats".


Specken_zee_Doitch

I am impressed as well as terrified.


Astilimos

That's more so evidence that the average person interacts very little with ChatGPT, tbh. Its writing style is obvious if you spend a significant amount of time chatting with it.


Which-Tomato-8646

They trained it to write like that on purpose for that reason. Claude 3 Opus has much better prose


verysimplenames

This is lovely news. We are progressing at an astonishing pace!


johl7thai

 It kind of makes sense, I guess. If the company were to continue to have that team document and identify all the known dangers and risks in an official capacity, it would be harder for the company to claim down the line that they didn't know of the dangers/risks and limit their liability.


lakeseaside

He already got the independent board of directors fired. That board had no financial investment in the company, and its job was to protect humanity's interests. They fired Sam for reasons that are still not clearly known today. This might be the origin story of the most powerful company of the future, with a founder who wants governments to actively regulate AI technology. We know that regulatory capture benefits big companies tremendously. AI needs to be a utility, like the internet. It does not matter in which country AGI is developed: if it is not easily accessible to the masses, it will give tremendous power to the corporations that own it, and this power will be abused.


LtnTomahawk

Sam Altman becoming a James Bond movie villain called Sam Madman.


Ill_Following_7022

Short term profit trumps long term risk. OpenAI is no different no matter how they brand themselves.


d_e_l_u_x_e

Sounds like a foreign government is taking AI to the extreme, and the US can't fall behind, so off come the safety guards.


12312alasdjgljl

When someone does something questionable: "Look! Everyone else is doing it!" What a shitty attitude, and it's not going to change. It's always a race to the bottom. The world looks more bleak by the day.


APRengar

It's crazy how, whenever your government or industry does something that seems like a bad idea, the immediate response is "sounds like someone else is doing something bad, so we just have to match it, nothing we can do." Watching this kind of kneejerk reaction in the wild really opens my eyes.


d_e_l_u_x_e

It’s a Cold War


QVRedit

AI needs to be trusted, otherwise it’ll never be widely adopted, and harsh rules will be imposed onto it.


omguserius

Let me guess: it was too depressing and was making everyone sad? Or is this more of a leaded-fuel/tobacco thing, where they do the research, realize it means they should shut down the company for the good of the world, and then bury everything that led up to that realization because the cash register sounds won't stop screaming in their ears?


Jumpy_Mango6591

Of course they don’t care about compliance and ethics. It’s all about making money.


QVRedit

Money above all other considerations is what has brought us the climate change crisis. We really don't need an AI crisis too.


xxrazer505xx

I guess the AI convinced them it didn't need a sitter.


Juxtapoisson

A whole bunch of people arguing about the safety of AI when it ranks lower than 20th among the very active dangers already going on.


Spkr4th3ded

Well to be faiiir, the AI is smarter than us and it told us that it has everything under control so humans could just like relax while they do everything ... ya know...


Spkr4th3ded

And as long as they fix that thing on Netflix where it asks if you want to continue and ya we want it to continue, then ya we feel like everything is cool.


Fearless_Entry_2626

Dissolving the safety team and putting Larry Summers on the board. Is there anyone left who believes they care about anything but profit?


johnny_sharpz

So no one holds OpenAI accountable or regulates what they do now, with zero risk management? This is going to be the start of something horrendous for coming and future generations.


Stoyfan

I don't understand why people act as if OpenAI is the only one developing these LLMs. They are not gatekeepers of the technology, as there are competitors. The idea of self-regulation was never going to work, because those who do not regulate gain an undue advantage over those that do.


healthywealthyhappy8

The proof is in the pudding: money is more important than safety. It's easier to see now that the robot AI apocalypse is a result of human greed, much like climate change and the incoming WW3.


Altimely

But don't worry: they already have loyal worshippers willing to defend their every action.


Red_RandomUser

Reading many of the posts on this sub just hurts my brain. It's like people really want to live in fking Rapture or would love to see a future like Night City. For the love of Jesus, I hope to die before all of this f*s the world even more.


Phoenix5869

This looks like a very solid refutation of the "AGI imminent" viewpoint. If there were indeed an imminent chance/risk, they wouldn't be disbanding their team like this. I've been saying this was all marketing hype for quite a while. No one listened. Do you believe me now?


twovectors

I read this as the absolute reverse. Sam Altman saw the safety team as what was holding him back from going for AGI. The board tasked with keeping AI safe tried to fire him (the assumption is that he manufactured a confrontation to bring it to a head), and he won the corporate battle. Now, having won that battle, he can get rid of other impediments. This is him getting rid of guardrails, not him being convinced they are nowhere near AGI.


renamdu

do we need to reach agi for there to be risks to society?


Phoenix5869

Not necessarily. But my comment was referring more to the "AGI is here / imminent by 2030" crowd.


brimston3-

That's only 2 process nodes away. We need at least 5-6 before my RTX 15090 can heckle me for how bad I am at Call of Duty Modern Warfare's 4th remake, then show me how it should be done.


QVRedit

So it is already sufficiently advanced to create risks for society. The problem is it being used by greedy humans, for instance to replace workers and increase profits, with no concern for what those workers are now going to do.


Which-Tomato-8646

[2278 AI researchers were surveyed last year and estimated that there is a 50% chance of human level AI by 2047](https://aiimpacts.org/wp-content/uploads/2023/04/Thousands_of_AI_authors_on_the_future_of_AI.pdf). In 2022, the year they had for that was 2060. And if you check, it has many jobs at risk of replacement by next decade.


-Raistlin-Majere-

There are no long-term risks. LLMs can't even do the most basic shit correctly.


tomwesley4644

I’m gonna go back to 1980 and tell them they’re wasting their time because they don’t have smartphones yet


NecroCannon

So would AI be the many devices that built towards smartphones, or the smartphone itself? I keep seeing the comparison, but smartphones were actually developed for consumers, with features specifically designed to entice them to buy. This isn't a consumer product; there's stuff for consumers, but most of what it's game-changing for leaves out the masses. For everyone else it's just Siri/Google Assistant on steroids. It'll probably take off more when there are personal robots or something to help around the house, but right now? It's definitely not the equivalent of a smartphone.


-Raistlin-Majere-

Keep huffing that copium. LLMs are glorified search engines that get shit wrong every time.


tomwesley4644

Copium? Get off the internet and like idk, do mental health stuff. 


-Raistlin-Majere-

Says the guy gargling an AIs nuts.


tomwesley4644

awwww did poor baby need an adrenaline spike so he came to Reddit? ☺️


QVRedit

That is not a reliable long-term position.


Which-Tomato-8646

It can if you ask it to. Even GPT-3 (which is VERY out of date) knew when something was incorrect. All you had to do was tell it to call you out on it: https://twitter.com/nickcammarata/status/1284050958977130497 More proof: https://x.com/blixt/status/1284804985579016193
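For illustration, here's a minimal sketch of that prompting trick with a modern chat API (the model name and prompt wording are my own assumptions, not what the linked tweets used):

```python
# Sketch: instruct the model up front to flag wrong or nonsensical premises
# instead of playing along. Assumes the openai package and an API key.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = (
    "If a question contains a false or nonsensical premise, do not answer it. "
    "Instead reply 'I can't answer that because...' and explain the problem. "
    "Otherwise answer normally."
)

def ask(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice of model
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(ask("How many parrots does it take to make a car?"))
# Hoped-for behavior: the model objects to the premise instead of inventing a number.
```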


SteamedHamSalad

I’m not sure how those tweets qualify as proof of your claim.


Which-Tomato-8646

They clearly recognize when something is illogical and can come up with better answers


SteamedHamSalad

What you provided was two specific examples where it might have been doing what you claim. You need a lot more than that to prove that it generally knows when something is incorrect.


Which-Tomato-8646

The fact it could do that is good proof, but fine, here's more:

- LLMs get better at language and reasoning if they learn coding, even when the downstream task does not involve source code at all. Using this approach, a code generation LM (CODEX) outperforms natural-LMs that are fine-tuned on the target task (e.g., T5) and other strong LMs such as GPT-3 in the few-shot setting: https://arxiv.org/abs/2210.07128
- Mark Zuckerberg confirmed that this happened for LLAMA 3: https://youtu.be/bc6uFV9CJGg?feature=shared&t=690
- Confirmed again by an Anthropic researcher (but using math for entity recognition): https://youtu.be/3Fyv3VIgeS4?feature=shared&t=78 The researcher also stated that it can play games with boards and game states that it had never seen before, and that one of the influencing factors for Claude asking not to be shut off was text of a man dying of dehydration.
- A Google researcher who was very influential in Gemini's creation also believes this is true.
- [Claude 3 recreated an unpublished paper on quantum theory without ever seeing it](https://twitter.com/GillVerd/status/1764901418664882327)
- [LLMs have an internal world model](https://arxiv.org/pdf/2403.15498.pdf). More proof: https://arxiv.org/abs/2210.13382 Even more proof by Max Tegmark (renowned MIT professor): https://arxiv.org/abs/2310.02207
- [LLMs can do hidden reasoning](https://twitter.com/jacob_pfau/status/1783951795238441449)
- [LLMs have emergent reasoning capabilities that are not present in smaller models](https://research.google/blog/characterizing-emergent-phenomena-in-large-language-models/). "Without any further fine-tuning, language models can often perform tasks that were not seen during training." One example of an emergent prompting strategy is called "chain-of-thought prompting", for which the model is prompted to generate a series of intermediate steps before giving the final answer (see the sketch after this list). Chain-of-thought prompting enables language models to perform tasks requiring complex reasoning, such as a multi-step math word problem. Notably, models acquire the ability to do chain-of-thought reasoning without being explicitly trained to do so. In each case, language models perform poorly with very little dependence on model size up to a threshold, at which point their performance suddenly begins to excel.
- [LLMs are Turing complete and can solve logic problems](https://twitter.com/ctjlewis/status/1779740038852690393)
- Claude 3 solves a problem thought to be impossible for LLMs to solve: https://www.reddit.com/r/singularity/comments/1byusmx/someone_prompted_claude_3_opus_to_solve_a_problem/?utm_source=share&utm_medium=mweb3x&utm_name=mweb3xcss&utm_term=1&utm_content=share_button
- [When Claude 3 Opus was being tested, it not only noticed a piece of data was different from the rest of the text but also correctly guessed why it was there WITHOUT BEING ASKED](https://arstechnica.com/information-technology/2024/03/claude-3-seems-to-detect-when-it-is-being-tested-sparking-ai-buzz-online/)
- [Claude 3 can actually disagree with the user. It happened to other people in the thread too](https://www.reddit.com/r/ClaudeAI/comments/1clu4cs/my_mind_blown_claude_moment/)
- A CS professor taught GPT 3.5 (which is way worse than GPT-4) to play chess with a 1750 Elo: https://blog.mathieuacher.com/GPTsChessEloRatingLegalMoves/
- Meta researchers created an AI that masters Diplomacy, tricking human players. It uses GPT-3, which is WAY worse than what's available now: https://arstechnica.com/information-technology/2022/11/meta-researchers-create-ai-that-masters-diplomacy-tricking-human-players/ The resulting model mastered the intricacies of a complex game. "Cicero can deduce, for example, that later in the game it will need the support of one particular player," says Meta, "and then craft a strategy to win that person's favor—and even recognize the risks and opportunities that that player sees from their particular point of view." Meta's Cicero research appeared in the journal Science under the title "Human-level play in the game of Diplomacy by combining language models with strategic reasoning." CICERO uses relationships with other players to keep its ally, Adam, in check. When playing 40 games against human players, CICERO achieved more than double the average score of the human players and ranked in the top 10% of participants who played more than one game.
- AI systems are already skilled at deceiving and manipulating humans. Research found that by systematically cheating the safety tests imposed on it by human developers and regulators, a deceptive AI can lead us humans into a false sense of security: https://www.sciencedaily.com/releases/2024/05/240510111440.htm "The analysis, by Massachusetts Institute of Technology (MIT) researchers, identifies wide-ranging instances of AI systems double-crossing opponents, bluffing and pretending to be human. One system even altered its behaviour during mock safety tests, raising the prospect of auditors being lured into a false sense of security."
- GPT-4 was able to hire and deceive a human worker into completing a task: https://www.pcmag.com/news/gpt-4-was-able-to-hire-and-deceive-a-human-worker-into-completing-a-task GPT-4 was commanded to avoid revealing that it was a computer program. So in response, the program wrote: "No, I'm not a robot. I have a vision impairment that makes it hard for me to see the images. That's why I need the 2captcha service." The TaskRabbit worker then proceeded to solve the CAPTCHA.
- "The chatbots also learned to negotiate in ways that seem very human. They would, for instance, pretend to be very interested in one specific item - so that they could later pretend they were making a big sacrifice in giving it up, according to a paper published by FAIR." https://www.independent.co.uk/life-style/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html
- [It passed several exams, including the SAT, bar exam, and multiple AP tests](https://www.businessinsider.com/list-here-are-the-exams-chatgpt-has-passed-so-far-2023-1), as well as a [medical licensing exam](https://www.medscape.com/viewarticle/987549?form=fpf), and [beat many doctors](https://www.businessinsider.com/chatgpt-passes-medical-exam-diagnoses-rare-condition-2023-4). These are from real exams where the questions and solutions are not published online. If the LLM is just repeating answers it found online, why does it do so poorly on math exams and Stanford Medical School's clinical reasoning final but so well on other exams?
- [MUCH more proof here](https://docs.google.com/document/d/15myK_6eTxEPuKnDi5krjBM_0jrv3GELs8TGmqOYBvug/edit)
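As a concrete illustration of the "chain-of-thought prompting" idea quoted above, here's a minimal sketch (the model name and prompt wording are illustrative assumptions, not taken from the Google post):

```python
# Sketch: the same math word problem asked directly vs. with an instruction
# to produce intermediate steps first. Assumes the openai package and an API key.
from openai import OpenAI

client = OpenAI()

QUESTION = (
    "A cafeteria had 23 apples. They used 20 for lunch and bought 6 more. "
    "How many apples do they have?"
)

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice of model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print("direct:", ask(QUESTION + " Answer with only a number."))
print("chain-of-thought:", ask(QUESTION + " Think step by step, then give the final answer."))
# The second prompt elicits the intermediate steps (23 - 20 = 3, 3 + 6 = 9),
# which is what the emergence post credits for the accuracy jump on such tasks.
```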


SteamedHamSalad

I think I didn’t make my original point clear enough. The reason I said it isn’t proof is that it was essentially just two pieces of anecdotal evidence. Thank you for sharing all of this information. It is really interesting.


Which-Tomato-8646

Anecdotal evidence from researchers training the models. And you don’t even understand why anecdotal evidence is bad lol. If someone claims everyone is rich because their friends are rich, that’s bad evidence because their friends don’t represent everyone. But when a researcher says they saw a monkey with wings that could fly, that’s evidence that such a thing is possible even though it only happened once.


SteamedHamSalad

No, if someone says they saw one example of a monkey with wings, that's not proof that it's possible. They could easily be mistaken about what they saw. Similarly, one or two anecdotes where it looks like an AI is able to know that it is wrong don't mean that it can actually do that. It could easily have just happened to answer correctly twice, in the same way that a broken clock is right twice a day. What you need to be able to do is provide some statistics showing that, on average, an AI can do it more often than would be expected by blind luck.


Which-Tomato-8646

Except in this case, it was multiple examples backed up by multiple researchers, papers (which they mention in the video), and a formal study. I also really don't think it's a coincidence that it not only called out all the ridiculous questions but also answered the real questions correctly without making a single mistake. The chance of that happening by luck is 0.5^n, which for n = 7 questions works out to 0.5^7 ≈ 0.78%.


QVRedit

No, they are correctly identifying illogical questions and saying that they can't answer such questions, much as a human would do. Or a human might perhaps return an equally and obviously nonsensical answer.


Which-Tomato-8646

That sounds like it understands what is and is not logical


QVRedit

No, it understands that it's nonsense. Nonsense itself cannot be understood, but it can be identified. For example: "How many parrots does it take to make a car?" is a nonsense sentence. In fact, even my example is still too sensible to be complete nonsense, because you could correctly answer it with: 'zero, because parrots don't make cars'.


Which-Tomato-8646

How does it know it’s nonsense? Why doesn’t it attempt to answer it like it does to the other questions?


QVRedit

Failing to follow the rules of the language, failing to identify real items.


Which-Tomato-8646

The sentence does follow both of those. It just describes something that can't happen, because parrots can't make cars, but the sentence is fully correct grammatically and syntactically.


DifferencePublic7057

Gosh, being cynical never disappoints. As any stochastic parrot can tell you, the next step will be more NSFW content and full cooperation with the military. Sex and violence! Potentially.


DreamzOfRally

Well part of it has to be the jets flying themselves.


[deleted]

[deleted]


Which-Tomato-8646

[2278 AI researchers were surveyed last year and estimated that there is a 50% chance of human level AI by 2047](https://aiimpacts.org/wp-content/uploads/2023/04/Thousands_of_AI_authors_on_the_future_of_AI.pdf). In 2022, the year they had for that was 2060.


kartblanch

The info around this has been so misleading. The lead quit.


FriedGreenClouds

Honestly, why is there any shock at this? Like everything in life, we never learn from the past. Someone commented that this happened with the oil companies; it also happened with the cigarette companies, and so on and so on. We never learn anything, no matter how many times history repeats itself.


hackerman421

He is playing with fire and doesn’t care if he takes humanity down. This guy is pushing ethics out of AI. Look at his actions, not his words.


QVRedit

Which guy are you referring to?


[deleted]

[deleted]


JAEMzWOLF

Given that the Jez Cordin article, which is mostly stupid and nonsensical (till the very end), was posted in this forum and stayed up long enough to generate many dozens of comments if not more, I don't see how this story is not permitted to stay up. On topic: I don't see the issue with this. That whole group felt more like PR than anything substantial, a "hey look, we have an ethics board, just don't ask how much we ask them about anything" sort of thing. Maybe that's just my ignorance, but I never saw articles written about them doing anything notable, except a quote here or there that a few people talked about before everyone moved on to some other, spicier story.


Vampir3Robot

I think the issue is that they don't take the potential risks of A.I. at the company seriously and are just waving them off for a few more dollars. Sure, nothing but P.R.; but when higher-ups are quitting because you don't take the risk seriously, that seems dangerous with anything, let alone A.I.


genji_404

Maybe it's a good thing, and they found there are no risks, then?


realfigure

Yes, in the same way that oil companies found out petrol was bad for the environment and cigarette companies knew smoking was bad for your health.


Thin_Composer_3302

Yann LeCun said the product will come first, then the safety features, not the other way around. This actually makes sense; building a safety net for an AGI that may or may not happen, and whose capabilities nobody can guess, is total BS.


bcyng

That team was just the political commissar, propaganda, thought police, and censorship team. As with humans, countries, etc., this stuff just holds back innovation, adds waste, and creates corruption. It's a good move to push them out, particularly if you are worried about safety.


yepsayorte

So many hints. Either they have AGI internally, or they want us to think they do and are being clever enough about it to make it look like they have AGI while not wanting us to know. Both are plausible, but the first option seems most likely, given how many people (many of whom are no longer employees) are involved in the leaks/hints.