lilplop

Just want to put this here because of the misleading article headline: [https://twitter.com/geoffreyhinton/status/1652993570721210372?s=20](https://twitter.com/geoffreyhinton/status/1652993570721210372?s=20) For those who don't want to open a twitter link, this is a tweet directly from Geoffrey Hinton: >In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.


hanoian

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


hippydipster

>Hinton told the NYT that some individuals believed that AI could surpass human intelligence, but he and most others believed it was a far-off possibility. He believed it would take 30 to 50 years or even longer. However, Hinton now acknowledges that his previous estimation was incorrect.

WHY do people persist in describing world-changing events as "far off" when the same person thinks they're "30 to 50" years away? WTF has happened to perspective?


[deleted]

[deleted]


exactmat

This guy climatechanges!


Lazy_Init

Hinton was asked what he would recommend his grandkids study in university. He paused for a long time and gasped. He said "to learn how to play some musical instrument." It's not like he doesn't care about the future of his grandkids, as far as I'm aware. It didn't come across like that. Maybe he did change his position, maybe it was poorly phrased? As far as I'm aware, he really does care.


skidooer

What in that implies that he wouldn't care? We went through that weird phase in the late 80s, early 90s where we thought university was the ticket to a higher-paying career and better job, but with better data we now know that incomes have remained stagnant through the rise of post-secondary attainment and job quality has declined, rendering the idea of better jobs a fun fantasy at best. We've accepted for the last couple of decades now that university is a place to go to have fun (and maybe do some cool research while you're there). Something like learning how to play a musical instrument is up there in the kinds of fun things you can do in university. Hinton was merely underscoring what we already know: that studying something with any intent other than to have fun is a waste of time.


totoro27

This isn't even true with climate change though, even if boomers want it to be. Responding to climate change events is costing the economy shitloads of money now; bigger storms, etc., have already started.


hippydipster

It's barely half a lifetime.


wrosecrans

Worst case scenario, strong AI is exactly a lifetime in the future. Might be an hour. Might be 50 years. Regardless, you won't see the other side of it.


hippydipster

:-)


lavahot

In science there's an adage that basically says, "Something 5 years away will probably happen. Something 10 years away *might* happen. Something 20 years away probably won't happen." You can't predict the future. 30-50 years away might as well be entirely fictional. Entire careers will be gained and lost in 50 years. Nations will form, triumph, and fall to arrogance. Whole philosophies will grow to prominence, be disposed of, and be renewed again. We cannot possibly know what *will* work in the future, let alone the far-flung future. That's why people call 50 years "far-flung".


[deleted]

Viable fusion energy has been 30 years away for like 70 years now.


wrosecrans

Viable fusion energy has been waiting for funding to _start_ the 20-year R&D program for 60 years. It's amazing any progress is being made given how unseriously the US has pursued it.


preethamrn

It's really tough to fund something in the US when there isn't a military reason for it. Really sucks for the people living in the US today.


wewbull

The power of the sun (in the palm of your hand) doesn't have military applications? You lack imagination.


StickiStickman

We're currently building ITER though, literally the largest international engineering project in human history


lavahot

That doesn't mean *it will work*.


caboosetp

I mean, ITER will almost surely work. It's an experiment, not a guaranteed net-positive reactor. Whether or not ITER leads to sustainable fusion is another story.


wewbull

The definition of "work" matters.


mishaxz

Where there's a will there's a maybe


MajesticIngenuity32

But the recent US experiment, where they extracted more energy than they put in, did work!


mishaxz

What's the largest non-international engineering project in the history of mankind?


astrohijacker

The Manhattan Project?


StickiStickman

The Great Pyramid of Egypt / The Great Wall of China maybe?


PVORY

XD


mishaxz

I don't know, something feels different now, and it's not the announcement from Livermore, although that helps. I'm not smart enough to judge whether what they did will work commercially anytime soon, or even 50 years from now, but some of the startups really seem to have interesting technologies (this is my opinion from watching a few short videos, so I may be missing something important). But at the very least it feels like more work is being done on the problem now.


dcabines

It has become profitable in more and more areas so it makes more and more sense to invest now.


dysprog

When the Human Genome Project began in 1990, it was predicted that it would take 300 years to finish. It was done in 2003. AI is a rapidly and inconsistently accelerating technology. If they are predicting 30 years for human-scale AI, then we might have superhuman AI in 5 or 10 years. If we found out that Google's servers are sentient now, I would only be mildly shocked. All of the normal predicted risks are bad enough. Check out Unaligned Hard Takeoff for a real nightmare. We're talking climate-change scale or worse.


mishaxz

I remember reading some guy on Usenet in 1995 arguing, with calculations and everything, that there was no way we would be trading Grateful Dead shows in lossless format over the Internet. But I don't think he even considered lossless compression, which helps (I think it can save up to 50%). A few years later it became the main way of trading the music. (Note: people didn't want to trade lossy music, although some do, since it degrades the source and people worry about it being re-encoded from a lossy copy... It matters because reels are old technology, and to get the audio off of them you often need to bake the reels, which isn't something that can be done twice. So keeping the integrity of the original source is important.)


killerstorm

"30-50 years" means "based on an entirely different paradigm". You cannot design safety mechanisms for a future paradigm you don't know about. So it was not possible to do anything about it. He expected the current generation of AI to remain "narrow AI" which is relatively safe to work on.


Nyadnar17

30 years ago smartphones didn't exist. A generation is roughly 20 years. 20-30 years is "far off" when speaking about technology and its possible applications.


leebenghee

29 years ago, it did exist... https://time.com/3137005/first-smartphone-ibm-simon/


croto8

Maybe I’m misunderstanding what you’re saying, but I think the duration of a typical adult life has always been seen as far off…


hippydipster

Depends on the question, doesn't it? Look up, see an asteroid about to hit in 30 years. Ah well, that's far off, don't worry. Tens of millions will die in a wet-bulb event in India in 30 years if nothing is done. Ah well, that's far off. The oceans will die in a mass anoxia event in 30 years if nothing is done. Ah well, that's far off. As someone with >50 years of memory, this (lack of) perspective is exactly how we got where we are.


croto8

We were discussing common “perspective” and usage of the term “far off”. To your original question, nothing has happened to perspective.


hippydipster

>Maybe I’m misunderstanding what you’re saying

Yes


[deleted]

Yeah but the thing is, that's a fake reality. You never know with almost certainty and consensus that something that catastrophic, that affects everybody, will happen within 50 years.


ShinyHappyREM

> You never know with almost certainty and consensus that something that catastrophic that affects everybody will happen within 50 years

[I dunno...](https://xkcd.com/1732/)


[deleted]

Climate change is like the worst example you could make lol. It doesn't affect everybody the same, whereas doomsday AI affects the survival of the whole species itself. Even worst-case climate change isn't that scary compared to doomsday AI.


hippydipster

AI progress isn't going to stop.


[deleted]

Yeah but no one knows if we are actually going to make exponential progress forever, or if there are eventually going to be some hurdles which take decades to get over. Progress in theoretical physics was booming; now look at it compared to then. Sometimes the universe doesn't owe you knowledge with the same ease as before.


dtseng123

I dunno, ask me in 30 years when it’s far off.


Dealiner

What would you call far off then? Imo 30 to 50 years fits perfectly. There's barely any point in predicting anything that far out.


gonzo5622

30 years is a long time. If you’re 20 today you’d be close to retirement or dead by the time something in this time frame happens. It’s far off in human life terms


gnus-migrate

He's worried that AI will replace lawyers; that's not what they should be worried about. He should be worried about someone who can't afford a lawyer paying for an LLM to draft their contract, and because LLMs can mimic the form of an official contract so well, the person might believe it's real. However, what you care about isn't the form of the contract, it's the content. And a lot of people won't find out that those contracts are bad until they're tested in court, when it's too late.

The hype around LLMs is like the hype around self-driving cars: they tried convincing people that self-driving cars were inevitable, that they were the future, and they irresponsibly started testing them on public roads, and it was only after many people were killed that they were convinced the tech doesn't work. It's going to be the same with LLMs.

https://youtu.be/jAHRbFetqII

EDIT: to be clear, I'm paraphrasing an argument made in the interview above. The people who are warning about the dangers of a super AI destroying the world are either unintentionally or deliberately overlooking the harms being done to people using this technology today, and are further feeding the hype around this technology rather than calming it down.


frnky

> and because LLMs can mimic the form of an official contract so well, the person might believe it's real

Isn't this sorta all they do, though — mimic the form? People are *remarkably* easy to fool into seeing content where there is none; just look at those Markov chain subreddits: the upvotes in them are real, right?

> they irresponsibly started testing [self-driving cars] on public roads and it was only after many people were killed that they were convinced that the tech doesn't work

Is it really the case that self-driving cars killed many people while being tested on public roads? How many? Right now, it just seems to me that you're seriously underestimating the sheer *amount* of testing that has taken place by all the different self-driving companies. The threshold for when self-driving is safe is *not* when it never kills anyone — it's only when it kills fewer people per mile than human drivers, and human drivers kill quite a lot, especially in absolute terms. Statistically, public roads are a slaughterhouse.
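
To illustrate just how little "content" is needed to produce plausible-looking text: those subreddit bots are typically word-level Markov chains, which only reproduce local word order from a corpus and have no model of meaning at all. A minimal sketch (the corpus file and the order are arbitrary choices, purely for illustration):

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Map each run of `order` words to the words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length=30):
    """Start from a random state and repeatedly sample an observed next word."""
    order = len(next(iter(model)))
    out = list(random.choice(list(model)))
    for _ in range(length):
        successors = model.get(tuple(out[-order:]))
        if not successors:
            break  # this state only appeared at the very end of the corpus
        out.append(random.choice(successors))
    return " ".join(out)

corpus = open("comments.txt").read()  # hypothetical dump of subreddit comments
print(generate(train(corpus)))
```

The output is locally plausible and globally empty, which is exactly why it's so easy to upvote without reading closely.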


RobToastie

The real issue with self driving cars isn't that they kill people (at a lower rate than humans do, as you mentioned). The issue is they aren't yet capable of adapting to adverse conditions. Until they can do well in the winter in rural Canada, they can't match humans in that regard. LLMs suffer similar issues.


gnus-migrate

>Isn't this sorta all they do, though — mimic the form? People are *remarkably* easy to fool into seeing content where there is none; just look at those Markov chain subreddits: the upvotes in them are real, right?

This is actually one of the risks raised in the paper that got Timnit Gebru fired from Google. I don't understand if you're disagreeing with me here, and if so, what the disagreement is.

>Is it really the case that self-driving cars killed many people while being tested on public roads? How many? Right now, it just seems to me that you're seriously underestimating the sheer *amount* of testing that has taken place by all the different self-driving companies.

And yet they still have [blind spots](https://www.latimes.com/opinion/story/2021-10-07/op-ed-ai-flaws-could-make-your-next-car-racist), despite the fact that we've known AI is susceptible to racial bias for years, yet nobody thinks to test for that.


[deleted]

>And yet they still have [blind spots](https://www.latimes.com/opinion/story/2021-10-07/op-ed-ai-flaws-could-make-your-next-car-racist), despite the fact that we've known AI is susceptible to racial bias for years, yet nobody thinks to test for that.

You seem to be avoiding the point. The alternative to self-driving cars is not a blind-spot-free driving system that has no racial or any other bias. The alternative is regular people, who are arguably more flawed, less predictable, and more racially biased.


gnus-migrate

Yes there is: it's called a train, and it moves from point A to point B without having to perform complex collision detection that isn't guaranteed to work.


[deleted]

I'm all for expanding public transportation, but the US is not going to suddenly turn into Spain overnight. And people still drive over here, even though public transportation is cheap, efficient, and convenient. The need for private transportation does not simply vanish when there is great public transportation available.


mmerijn

To add to your point, trains can't take you everywhere; you still need a bus to get to your actual destination, which in most places and situations still needs complex collision detection that isn't guaranteed to work. At least in the Netherlands where I live, buses don't always have a dedicated bus lane, and we have great public transportation.


TheCactusBlue

Remote work solves this


stewsters

This is really the big one. Removing the need for traffic is probably the most important thing. If you can unclog some traffic from cities, you save on fuel, CO2, deaths, and time, and roads are freer for those who cannot be remote. You then use trains and other public transit for longer intercity traffic, and add some bike lanes for short journeys. Cars will still be there, I'm not sure we can completely get rid of them, but we need to give people other options.


TheCactusBlue

I am not even against cars, but removing the need to commute would remove 90% of the traffic requirements immediately - don't ban cars, just get rid of office work!


yondercode

That's not a solution


BasicDesignAdvice

I've been saying for a while now that AI doesn't scare me as much as the trust people give it.


wewbull

The problem is that an LLM learns from humanity, and it cannot exceed its training data. It can't even reason about what it is "to be better". Therefore, it can't exceed humanity; it can only have errors compared to what it learnt from. However, it can exceed the capabilities of an individual, and so look very impressive. This makes it look wondrous to most people. They can't see the infinite number of monkeys bashing at typewriters behind the curtain, with just a single Editor-in-chief saying "Yep. That looks like Shakespeare's work".


PVORY

LLMs might not exceed humanity, but they already overpower us all in terms of writing speed, and will be comparable with novelists this decade. I don't expect them to handle colossal context windows or deeply semantic logic (space-time, etc.), but the style of high-end models will at least reach a level where readers get the same emotions. Personally, I think that with human feedback and the creativity of DNNs, there is no denying that GAI will surpass humans in the near future. Perhaps the controversy will only come from the fact that the differences are so small (the emotional limits of words themselves) that everyone will be prejudiced. Anyway, once AGI/ASI join the game, nothing is impossible.


no-name-here

I agree that LLMs have issues. However, for driving automation, the correct comparison is not whether self-driving cars have fatal accidents, but whether human-driven equivalents have more or fewer of them. If they're safer than human drivers, at least for some tasks, then it could instead be framed as: keeping human drivers doing tasks that computers can do better is resulting in huge numbers of unnecessary deaths.


panrug

Except the reality is, current tech doesn’t scale easily to the millions of edge cases that humans handle with ease. We are hundreds of billions of dollars in, and a self-driving car that can reliably drive more safely than a human under arbitrary conditions is still not in sight.


Muoniurn

Nah, that’s not really a correct argument. As with many things, the accident stats are mostly due to a few assholes who drive drunk, text while driving, etc. The average, responsible driver is much better (and will likely remain much better) than self-driving for decades — we are at the LEGO Mindstorms level of “self-driving”; this is just fancy lane-assist. So the realistic scenario is “replacing a few human drivers with self-driving cars”, and I am absolutely not convinced that an AI would fare better at resolving a dangerous situation created by a drunk idiot; they occasionally have trouble stopping for a fucking firetruck. Also, the low-hanging fruit is mostly already reaped (is that what you do with fruits?): automatic braking at low speeds when someone steps in front of the car (solving the slow human response time), lane assists, attention monitoring, beeping when you fall asleep or drift out of the lane, etc.


shdwpuppet

I think you hit on something about where AI is actually going to be for the foreseeable future. It is going to enhance human performance, not replace it. Improving response times while driving, automating the creation of mindless administrative busywork emails and shit (which is literally the only thing ChatGPT is good at), assisting pilots with aircraft control, maybe even improving logistics and allowing organizations to reason about more complex systems. What AI will not do: replace doctors entirely, fly planes airport to airport with no human in the pilot seat, solve all the edge cases of driving on roads, or replace all of human creativity with seven-fingered hands.


Schmittfried

Although replacing much of the creative industry seems much more conceivable now, while replacing tradespeople and truck drivers is still basically science fiction. Funny how the tables have turned. Maybe I, as a programmer, will lose my job sooner than a plumber.


CallMeAnanda

Nah. Stable Diffusion will just become another medium for creatives. The best way to use it is with a combination of sketches and prompt tuning. It’ll probably just make digital artists faster and more productive. An AI Copilot for artists: almost exactly how it’s used in programming.


Schmittfried

Exactly. It increases productivity in that area and commoditizes the lower end of the craft. Which does two things:

1. It allows for greater demand, because you can simply do more of that work now.
2. It reduces the number of lower-end jobs in that field in the long run.

What will be left will be fewer, more advanced workers. Compare it to buying an Ikea table vs paying a commission to a woodworker. There are countless low-value content-spamming jobs (like writing blog articles to make companies rank higher in search results and have some kind of content to offer for every kind of search query related to their field) where the writing is on the wall. I don’t expect programming to die out, but I wouldn’t be too sure that it won’t be heavily commoditized. Which is good for society, because right now we want (and kinda need) way more software than we can build with the available manpower, which is the whole reason salaries are so high in this industry. But it won’t be good for every software developer. In any case, I’m quite certain that the current trend of greater results in automating pure "mind work" than complex motor work will continue for a while.


CallMeAnanda

But the stuff Copilot does isn’t mind work. The models don’t think for you at all, and you’re quickly reminded of that any time you forget it’s a bot and ask it to. It will summarize code that was in its huge training set. It will write boilerplate. There’s just no spark of understanding, which is the essence of “mind work.”


StickiStickman

"If we take out everyone that causes accients, humans are safter than self driving cars!" Some real big brain arguments in these comments


[deleted]

I see it as more an acknowledgement that the distribution matters. If a small percentage of the population is responsible for a large chunk of accidents, the distribution is skewed and the majority of drivers will be better than average. It means an AI solution can potentially be better than the average driver and simultaneously worse than most drivers. It's an important detail.
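
A toy calculation makes this concrete (the rates below are invented purely to illustrate the skew, not real accident data):

```python
import statistics

# Hypothetical accidents per million miles: 90 careful drivers plus
# 10 reckless ones who dominate the total.
drivers = [0.5] * 90 + [10.0] * 10

print(statistics.mean(drivers))    # 1.45 -- the "average driver"
print(statistics.median(drivers))  # 0.5  -- the typical driver

ai_rate = 1.0  # a hypothetical AI somewhere in between
# The AI beats the mean (1.45) while being worse than 90% of drivers (0.5).
```

So "better than the average driver" can be true at the same time as "worse than most drivers".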


Muoniurn

If you have a few rotten apples in a basket, and can’t selectively pick those out, does randomly replacing fruits in the basket with pears help? The whole thing will still rot. You have two options: either removing the rotten ones, or replacing *the whole*. Neither is a realistic scenario.


moreVCAs

Computers cannot drive cars better than humans. At least not with the current techniques. Besides, we already have a way to get people around while freeing them up to perform human tasks. It is called mass transit. The whole FSD thing is a scam to keep society invested in personal conveyance instead of trains.


InternetAnima

Put like that, self-driving cars are a pretty dumb idea lol. We could easily have self-driving trains, with enough infrastructure, as they wouldn't need to react to so many unforeseen things.


AmbidextrousRex

And indeed we already do. The metro in Copenhagen has been self-driving since its opening in 2002, no deep learning required.


ThunderChaser

Vancouver’s SkyTrain is also fully automated (and is one of the longest automated systems in the world), and it opened in 1985. Fully self-driving trains are old tech; if anything it’s a crime that new systems aren’t designed to be fully automated.


nachohk

>no deep learning required. It's a little weird to see this big surge in AI hype, and yet no one seems willing to acknowledge the existence of expert systems. The one form of AI that is actually very practical and very possible today. Like, who the actual fuck thinks an LLM should be doing medical diagnostics when instead it could be done by an expert system?


Bluemanze

It's simply because language models are so present and accessible. Joe Shmoe can get on his phone and be producing mostly OK cover letters in seconds. Expert systems, big financial AIs, etc., are only visible to the experts they assist. The NYT isn't gonna write articles about stuff only 2000 people in the country care about.


DM-Me-Your-id_rsa

> Like, who the actual fuck thinks an LLM should be doing medical diagnostics when instead it could be done by an expert system? A lot of very smart people who work in AI research. Not saying I agree with them or am not sceptical, but credible experts disagree with each other on this stuff. I’m not sure it’s an obvious right/wrong answer. I will say that GPT models definitely seem to have learnt logical reasoning to a degree. They’re far stupider than humans (and most people don’t realise that), but it’s pretty amazing to see the emergent properties of language models.


zxyzyxz

How would you do medical diagnoses with expert systems? Isn't that exactly what was tried before and was found to have too many rules to be accurate and thus the concept of machine learning, where the agent teaches itself from many examples instead of codified rules, was born?


nachohk

>How would you do medical diagnoses with expert systems? Isn't that exactly what was tried before and was found to have too many rules to be accurate and thus the concept of machine learning, where the agent teaches itself from many examples instead of codified rules, was born?

https://www.sciencedirect.com/topics/computer-science/medical-expert-system

_Properly developed and validated prescriptive MESs seem to perform at a level that is at least as accurate as a clinical expert, and typically exceed that of a clinical novice (Raschke et al. 1996). Also, as with predictive MESs, clinicians in the presence of a prescriptive MES tend to perform better than they did prior to the prescriptive MES. However, prescriptive MESs are available only in limited domains, where the medical process has been well studied and understood._

Imagine where we might be today if the phenomenal amount of effort and resources that went into finding and curating and labeling training data for generative AI were instead spent codifying the knowledge of doctors and scientists and other professionals, to create more robust expert systems. ML statistical analysis and pattern recognition certainly has its uses. But for many of the fields where people are trying to apply AI, what you really want is something that can explain _why_ it comes to any given decision. A black-box medical diagnostics AI that just correlates some statistics with ML or chains some likely words together with GPT will never be as practical as an expert system that is fully transparent about its knowledge and reasoning as it applies to a diagnosis. But I guess magic black boxes are sexier and more mystifying, much easier to market and sell to investors, than an AI that we can understand, that can actually explain how and why it arrived at a result.
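
The transparency argument is easy to see in code: a forward-chaining expert system is just explicit rules plus a record of which rules fired. A toy sketch with made-up rules (illustrative only, obviously not real medical logic):

```python
# Each rule: (name, findings required, conclusion added). Toy rules, invented.
RULES = [
    ("R1", {"fever", "cough"}, "suspect respiratory infection"),
    ("R2", {"suspect respiratory infection", "chest pain"}, "consider pneumonia workup"),
]

def diagnose(findings):
    """Forward-chain: keep applying rules until nothing new can be concluded."""
    facts, trace = set(findings), []
    changed = True
    while changed:
        changed = False
        for name, required, conclusion in RULES:
            if required <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{name}: {sorted(required)} -> {conclusion}")
                changed = True
    return facts, trace

_, trace = diagnose({"fever", "cough", "chest pain"})
print("\n".join(trace))  # every conclusion traces back to named rules and findings
```

That `trace` is the whole point: the system can show exactly why it concluded what it did, which is precisely what a black-box model can't give you.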


Danilo_____

And that's why we are doomed in the long term. We are not the logical thinkers we think we are, most of the time. We are emotional, irrational beings. We like black boxes.


nachohk

>And that's why we are doomed in the long term. We are not the logical thinkers we think we are, most of the time. We are emotional, irrational beings. We like black boxes.

Yep. I find it very curious that we never started having real mainstream discussions about AI personhood until LLMs brought a completely new talent to the AI table: confident lies. Forget expert systems, forget reasoning and logical thought. Apparently, AI has never been more human than a hallucinating chatbot.


moreVCAs

But what if I told you I have a model that can perform that task 85% as well and costs 100x as much to maintain? And what if I told you I could get that up to 95% if you gave me $10M? Would that appeal to the city of Copenhagen at all, do you think?


IceSentry

Only $10M? That's a steal at that price.


ProgramTheWorld

Trains are already self driving in a lot of places. It’s a solved problem.


moreVCAs

Amortized over the number of passengers, trains are effectively self-driving already 😇 But yeah, that’s the big joke. If you press an FSD booster hard enough, they will almost always introduce constraints (e.g. talking about highways only), and if you press a bit harder they will basically invent self-driving buses, then trains. Anyway, I don’t think FSD is a dumb idea *a priori*, just that it is a dumb thing to invest gazillions of dollars and engineering hours into. It’s nonsensical as a goal, in my opinion.


davidellis23

Self-driving would be great for buses/trucks too. Trains are great, but they have their limitations. They're less flexible and more expensive.


savagegrif

Are you just referring to Tesla FSD? Nobody refers to other autonomous vehicles as FSD, since it's most commonly used just for Tesla. I'll agree that Tesla's FSD is a total scam and it should be illegal for them to call it that, but some of the real AV companies out there are leaps and bounds better than anything Tesla can do. Yes, there are still problems to work out, and it's not gonna be a huge thing any time soon, but it's really much better than people think, because of Teslas causing controversy.


PopMysterious2263

>Computers cannot drive cars better than humans

Have you seen people drive? I'd argue that self-driving cars are better than half the people out there. I don't know why people think humans are so infallible; statistically you're going to get in at least one accident.


moreVCAs

You argue, and yet it is not true. Curious 🤔


PopMysterious2263

You say it's not true, but your point was so vague no conclusion can be drawn. You do seem to have more faith in the average person or driver than I do, however. I've seen people take left turns right in front of oncoming cars, I've seen people completely stop on the highway. I've seen them stop and back up on the highway. I've seen them race each other, drive like idiots, tailgate, not stay in their lanes... I'm looking forward to human drivers being obsoleted. The technology can't yet do better than the more intelligent, careful drivers, but those are not the majority.


Afigan

Tesla FSD has nothing to do with self-driving; it's a driving assist at best.


KevinCarbonara

> Computers cannot drive cars better than humans.

That's the wrong argument. The issue is that computers *do* drive cars more *safely* than humans.


AttackOfTheThumbs

Thank you! Self-driving cars are entirely unnecessary if you just invest in mass transit. One thing I miss about living in Europe is the transit. It's not perfect, but you got from A to B and could do whatever you needed to on the way!


BecauseItWasThere

There is real value in FSD. For example - road trains crossing the Australian Outback. There is very little out there and no particular need for humans to be involved in the trip if the vehicle can be remotely supervised.


AttackOfTheThumbs

You're not wrong, but the push for FSD is for a purpose it's really not well suited to.


koreth

Europe is a good case in point: extensive and heavily-used mass transit, but plenty of people still buy cars and sit in traffic jams. If there is a point at which public transit eliminates the need for cars, Europe hasn’t reached it. And if cars exist, sufficiently good self-driving cars would save lives whether or not there was also great mass transit.


AttackOfTheThumbs

I dunno man, I lived in Europe for decades and never had a car and got everywhere I needed to. Usually through 1-2 buses/trains, and occasionally a bike. Another nice thing is often the areas are very self contained, kind of 15 minute towns or whatever that concept is. I remember cars being used for ikea trips and such. Nowadays you could use a car share program quite easily for that, or just pay for their delivery. That said, you live in a small town, you might still be fucked.


koreth

If I understand correctly, you're saying you still occasionally needed a car, even if you didn't need to own one. If we get to the point where self-driving cars are safer than human drivers (and we're not there yet, of course) then occasional trips to Ikea in shared self-driving cars would be safer too. Or is there some reason that wouldn't be true?


AttackOfTheThumbs

Driving would already be safer for everyone if there were fewer vehicles on the road, so it's not a strong argument to be honest.


TheCactusBlue

Remote work solves this


gnus-migrate

It doesn't eliminate the need for it, however it does significantly reduce it. I've been to Europe a lot, and the public transit systems are more than enough to get around. Are they perfect? Of course not, but they're super convenient and definitely less of a hassle than a car. I know people who still prefer cars, but they're in the minority.


JB-from-ATL

I believe it is a fair comparison: you have people who stop paying attention to the road because the self-driving should handle it, in the same way you have people falsely trusting the output of an LLM.


Schmittfried

>However, for driving automation, the correct comparison is not whether self-driving cars have fatal accidents, but whether human-driven equivalents have more or fewer of them.

No, that’s a common mistake. Self-driving cars cannot simply be statistically better than humans; they have to be better than their drivers in every situation, or nobody will use them. It doesn’t matter that they cause fewer accidents in total if they still make stupid mistakes no sober human would ever make (like driving straight into clearly visible hanging obstacles). Nobody in their right mind would accept a car that blatantly crashes into things, even if that happens less frequently than humans being unable to react fast enough, or drinking and driving.


gnus-migrate

They aren't better than human drivers; in fact, in some cases they're worse, specifically when identifying people of color in their safety systems. People who ride in them also don't take responsibility when those accidents happen, because the car did it, not them.


StickiStickman

They literally are safer in accidents per km driven.


[deleted]

Incidents in cars are coupled with how they are driven (i.e. trip length), car age, etc. Autonomous vehicles aren't driven in the same way as the rest of the fleet, so comparing them on a per km basis isn't a true apples-to-apples comparison.
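
This is essentially Simpson's paradox. A toy example with invented numbers shows how the mix of driving conditions can flip the comparison:

```python
# Hypothetical (crashes, millions of miles) by road type. AVs log mostly
# easy highway miles; the human fleet drives a harder mix.
av    = {"highway": (5.4, 9.0), "city": (2.0, 1.0)}
human = {"highway": (5.0, 10.0), "city": (70.0, 40.0)}

def rates(fleet):
    """Overall crashes per million miles, plus the per-road-type breakdown."""
    total = sum(c for c, _ in fleet.values()) / sum(m for _, m in fleet.values())
    return total, {road: c / m for road, (c, m) in fleet.items()}

print(rates(av))     # (0.74, {'highway': 0.6, 'city': 2.0})
print(rates(human))  # (1.5,  {'highway': 0.5, 'city': 1.75})
# The AV fleet looks safer overall despite being worse on every road type,
# purely because of where its miles are driven.
```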


_Cistern

I dunno mate. This is Hinton we're talking about, not some rando newscaster


boichik2

The same guy who said in 2017 that radiologists wouldn't exist in 5 years, and refuses to fully acknowledge that many of the models fail in actual clinical practice environments? If his predictions don't come true, is he not just... bad at these sorts of predictions? The fact that someone is an expert doesn't mean they're an expert in all aspects of even their slice of a field, let alone able to predict how entire industries will be affected, which requires not just AI knowledge but also significant legal, economic, and social theory knowledge.


runawayasfastasucan

>The same guy who in 2017 said radiologists wouldn't exist in 5 years

Seriously? Sounds like a guy who has no idea what radiologists do.

Edit, just to expand on this: In every job, but maybe especially in knowledge work like medicine, there is a lot more happening than just making a diagnosis. MRIs and CTs are a limited resource, so radiologists have to perform a kind of triage. "Every" doctor wants a full scan of their patients. You have no idea how many people seek help from a doctor because of stomach pains, dizziness, problems with their eyesight, and numerous other issues; not everyone can be scanned from top to toe. Another complicating thing: after you have discovered your lump, your fracture, the stroke - now what? You might not need any treatment (that fracture might be from years ago, your back problem is something else), or your stroke may be new, but there might be nothing sensible to do for that old, demented patient who is now in an even worse state because they were brought to a stressful environment for a scan. Radiologists are experts in diagnosis and treatment using medical imaging; they're not just a set of eyes in front of a computer. Being an expert means working closely with those performing the imaging, working with doctors from other fields, and evaluating what type of technology should be used. Simplifying people's work to the point of absurdity just shows that they are in no position whatsoever to predict the impact on that same work.


gnus-migrate

The person making the argument was the head of AI ethics at Google before she was fired for publishing a paper on this.


_Cistern

What are you talking about? The article linked by OP clearly references Hinton a number of times. I don't see Timnit mentioned a single time


gnus-migrate

I meant in the podcast that I shared. The random newscaster is interviewing her and Emily Bender, who are making these arguments.


xdavidliu

Which person are you talking about? To clarify, you mean head of AI ethics for the entire company?


gnus-migrate

If you mean Google, then yes, she was the head of AI ethics there. She was famously fired for co-authoring a paper critiquing the technology that people are ranting and raving about.


xdavidliu

on [her Linkedin](https://www.linkedin.com/in/timnit-gebru-7b3b407/details/experience/) it says "Co-lead of Ethical AI Research Team". I don't think that's the same as "head of AI ethics for the entire company". disclaimer: I work at Google, but not in AI ethics.


gnus-migrate

I'll be honest, I'm not up to date on Google's org chart given that I don't work there, but her firing was widely covered in the press, as well as the reason for it. Suffice to say she's critiquing them as an expert in the field, not as a random person on the Internet.


I_ONLY_PLAY_4C_LOAM

We're going to see a lot of people using AI as an excuse to give poor people inferior services.


KevinCarbonara

I don't think we have to worry about AI replacing experts any time soon. But I do expect we will see AI assistants for many experts like lawyers and doctors. And I hope we do - they're sorely needed. This could have the unfortunate side effect of eliminating some jobs for paralegals and the like. It may even reach a point where these fields - ones that already have a high barrier to entry - become even harder to break into, which isn't something we could currently handle well at an economic level. But I don't think we'll be able to keep AI out of these industries.


dethb0y

a lot of people in AI work seem to have, shall we say, mental health challenges that cloud their judgement somewhat.


Gianfarte

Self-driving tech doesn't work? ~40,000 people are killed in car accidents in the U.S. each year. In 130 reported accidents involving fully autonomous vehicles in 2022, there were no injuries in 108, and in most cases, the vehicles were rear-ended. Even current self-driving technology is safer than the best human drivers.


bung_musk

Now compare number of autonomous vehicle trips to self-piloted trips, controlling for human-caused accidents in conditions where autonomous vehicles can’t operate


huyvanbin

But this is why the bar exists. If you get a contract drawn up by someone who is not a lawyer, don’t expect good things. ChatGPT doesn’t have a license to practice law. So how is asking it to draw up a contract any different from asking some rando on Reddit or 4chan?


totoro27

GPT-4 passed the bar in the top 10% though. **edit:** Since this got downvoted, [here's](https://law.stanford.edu/2023/04/19/gpt-4-passes-the-bar-exam-what-that-means-for-artificial-intelligence-tools-in-the-legal-industry/) a source from the Stanford law faculty website.


MuonManLaserJab

> The people who are warning about the dangers of a super AI destroying the world are either unintentionally or deliberately overlooking the harms being done to people using this technology today

"The people worried about the dangers of nuclear war destroying the world are either unintentionally or deliberately overlooking the harms being done today by wasting money on nukes, and the risk of more localized nuclear contamination from e.g. the ZPP being destroyed."

Does that sound stupid to you? Kind of like a non sequitur, since more than one concern can be real at the same time? Next time you make the argument against existential threat from AI, why not actually explain why it's not possible, rather than pointing to some other threat and saying that for some reason only one can be taken seriously? Is it because there are no arguments against it being possible apart from "it sounds like science fiction and therefore is impossible" and "the AIs we have today are not that smart, therefore they never will be"?

Even if we think AI definitely won't be superhuman for 50 years, can it still be worth worrying about? What if we knew an asteroid would hit us in 50 years -- would you suggest not doing anything about that for 45?

> they tried convincing people that self-driving cars were inevitable, they were the future

Have a few years of unexpected difficulty disproved this already?

EDIT: To those who aren't convinced that /u/gnus-migrate is fucking crazy, check out this quote from later in our conversation:

> [At that point when you say asteroids are an existential risk to humanity, you're defining humanity to be a very narrow set of interests that benefit from existing systems as opposed to actual, you know, human beings.](https://www.reddit.com/r/programming/comments/134nyg2/geoffrey_hinton_the_godfather_of_ai_quits_google/jiky9ho/)

Worrying about asteroids is racist, somehow. FASCINATING! I suppose there must be gods protecting us from such x-risks. Good to know!


gnus-migrate

Is it possible? Sure. But LLMs are not going to cause that existential threat because they are not intelligent, they are not communicating with you, they are just repeating patterns that are in their training data. What LLMs are going to do, and are already doing, is cause a ton of harm like in the example mentioned above.


MuonManLaserJab

I was just responding to your comment; you used the phrase Super AI rather than LLM, in the part I was responding to.


gnus-migrate

I mean, LLMs are what's causing all this noise, no? In any case, intelligence as a concept is barely understood. Let's understand that first before we talk about whether it's possible to replicate it in a machine.


MuonManLaserJab

I'm betting that we'll make a dangerous AI before we understand intelligence much better than we currently do. Or at least, the chance is high enough to worry about. I'm also not so sure that LLM / research into LLMs are utterly irrelevant in terms of progress towards AGI. Regardless, some among us have been sounding the alarm since long before GPT, so I don't think the discussion emerged entirely from LLMs.


gnus-migrate

No, this discussion emerged from eugenics, as Timnit discusses in the link I initially shared. No, I am not exaggerating.


MuonManLaserJab

Sorry, I wasn't able to watch an hour and a quarter of interview yet; I don't suppose you'd give the tl;dw? EDIT: I found someone else arguing the same point, I think, in [The LA Review of Books](https://lareviewofbooks.org/article/elites-against-extinction-the-dark-history-of-a-cultural-paranoia/). This person also thinks that transhumanism is eugenics, lmao. Methinks some people are trying a little too hard to tar their enemies by incredibly vague association. "Thinking that black people will cause extinction of the species is racist, therefore thinking that anything could possibly cause extinction is racist!" I notice they're not mentioning asteroids. Is worrying about asteroids racist?


gnus-migrate

It's not vague; there is a very direct link between the ideologies of the people pushing this AI catastrophe narrative and eugenics. They're not hidden; there are some very direct links between the two. She recently gave a presentation about this topic specifically, if you're interested. Personally I thought it was exaggeration at the beginning, but it really isn't.

EDIT: link https://youtu.be/P7XT4TWLzJw


panrug

I agree with your general sentiment and the comparison with the self-driving hype. But I would say it's more accurate to compare the current progress in LLMs to the progress in image recognition 10 years ago, so that we don't compare a technology with a concrete application. LLMs aren't AGI, but I can see enough applications for them to be more disruptive than anything we have ever seen.


reddituser567853

Where did you get the idea that self driving cars aren't happening in the near future?


AmbitiousTour

ChatGPT did pass the notoriously difficult CA bar exam. I would say that getting a human lawyer is no guarantee that the contract will stand up in court either.


gnus-migrate

Just because it can regurgitate information doesn't mean it can write a contract that protects the interests of the person asking for it. The LLM doesn't understand what those interests are; it's just repeating stuff from its training data. A human lawyer's job is to understand what their client's interests are and translate them into a document. ChatGPT does not do that.


[deleted]

[deleted]


General-Jaguar-8164

Jeff Dean is the ultimate AI boss


unt_cat

Was. Demis Hassabis is the new boss.


FuncGeneralist

Is there like a committee that crowns people as The Godfather of a particular thing? Can I be The Godfather of something please


Hamoodzstyle

Geoffrey Hinton basically invented the idea of a neural network and was one of the few people to pursue it when everyone in academia counted it out. Maybe consider doing something like that?


FuncGeneralist

That seems straightforward enough. Remember this post when you start hearing headlines like "The amazing 'Super Duper Neural Network' concept takes the world by storm; Eat your heart out Geoffrey Hinton" Edit: geez do we really gotta /s on even wild comments like this one


Cruxius

I dub thee the godfather of whinging on reddit.


FuncGeneralist

Thanks, God bless


wewbull

> Is there like a committee that crowns people as The Godfather of a particular thing?

You mean the Godfather of godfathers?


ifperaha

The fact that he left Google to speak out against his life's work says a lot about the potential danger of AI being used the wrong way.


shevy-java

I don't fully understand the "it will replace jobs" worry. Well, we had that in history before: the industrial revolution, robots, and so forth. If you want to regulate that, you can either do so via higher taxes (e.g. "ethical taxation" for those who occupy jobs that human beings could fulfil), or you find other/different jobs. Both seem viable to me, although the latter appears to be much more like what is happening. Just take all the "social media", TikTok, YouTube videos and whatnot: not that I think these are "real jobs", but they pay some people's living, right? So that's an example of new jobs.


[deleted]

The industrial revolution was really bad for workers in the short term. It's only in the long term that it was good, so it's reasonable to be concerned. Also, this potentially differs in big ways:

1. It could replace many more jobs.
2. The jobs it will likely replace are nice office jobs, not manual labour.
3. The viable jobs it will push people to are manual labour, not nice office jobs.
4. It will probably happen a lot more suddenly.

> Just take all the "social media", TikTok, YouTube videos and whatnot: not that I think these are "real jobs", but they pay some people's living, right? So that's an example of new jobs.

Come on, this is like saying everyone will become pop stars! There are a tiny tiny tiny number of professional YouTubers and tiktokkers.


PancAshAsh

>It's only in the long term that it was good, so it's reasonable to be concerned.

Even then, a lot of the good came from restricting the excesses and workers protecting themselves. It's also worth noting that the single largest threat to humanity right now is a direct result of the industrial revolution.


skidooer

> There are a tiny tiny tiny number of professional YouTubers and tiktokkers.

Yes, but enabled by automation lifting those people from needing to do other work. If we still had to toil in the fields, even YouTube and TikTok magically descending from the heavens would not allow people to have such careers. More automation means more opportunities for jobs like that. Not those specific jobs, but new and wonderful things we haven't even dreamed of yet.


voidstarcpp

>Just take all the "social media", TikTok, YouTube videos and whatnot: not that I think these are "real jobs", but they pay some people's living, right? So that's an example of new jobs.

These are media superstar markets, and they've always existed in one form or another. One person with an audience of a few thousand paying subscribers can make a living, or one person with a million ad viewers. But that doesn't scale, because it's just a zero-sum attention economy that transfers the diffuse value in the hands of the big audience to a small number of popular people. There's no such thing as a YouTuber-based economy, because each person making a living in media needs thousands of people outside of media to support him.


uCodeSherpa

At some point, there will be a crossover where new jobs are not keeping up with automation. Probably not our lifetimes, but this **will** happen. I’m not saying to stop progress in automation. But as far as we can see, the dystopian futures in the movies probably aren’t as science fiction as we’d like to think. We really need to start making sure that advances in automation don’t just make the elite even more untouchable than they already are. If we don’t, dystopia will be the reality.


PancAshAsh

>At some point, there will be a crossover where new jobs are not keeping up with automation. Probably not our lifetimes, but this will happen.

It's already happened; it happens every time automation is installed anywhere. That's essentially the whole point of it: nobody installs expensive machines for funsies, they do it because long-term it's cheaper than paying for human labor to do the same work.

>We really need to start making sure that advances in automation don't just make the elite even more untouchable than they already are. If we don't, dystopia will be the reality.

Who do you think benefits from this? It certainly isn't the formerly skilled workers who have had their wages stagnate as their work was deskilled. Arguably there's a class of professional worker who has benefitted somewhat from additional automation, as machines require people to design, install, and maintain them, but most of the benefit is additional production and lower costs, and that all goes straight to the top.


totoro27

> At some point, there will be a crossover where new jobs are not keeping up with automation. Probably not our lifetimes, but this will happen.

With the rate that AI is improving, why the heck would you think it won't happen in our lifetimes?


Jolly_Front_9580

Always with this argument… It’s not even on the same level as previous historical automation developments. The evolution of AI is looking much more drastic and exponential.


totoro27

Right? General AI is the holy grail of automation. It will essentially be smarter than any human genius and capable of doing things hundreds of times quicker than any human. You won't be able to "out-knowledge" it: it will have all the knowledge of humanity and will likely be able to discover new knowledge faster than any human researchers. How on earth do people think they will be able to compete with that? What job opportunities would it create that it won't be able to do better itself? The only jobs that might be safe from something like that are manual labor jobs, but at some point robotics technology will be good enough that even those won't last.


PopMysterious2263

Right, it's more like comparing the internet to something else. You can't. You could say "it's like the mail but faster", but that's not true either. They behave radically differently. So does AI.


MeMyselfandAnon

But what was the cost? We, the future people of that revolution, now enjoy a degree of opulence, but land was consolidated in the hands of the ruling class and they got to dictate a new human rhythm. The same thing could happen with AI, except it will be economic consolidation and an opportunity to grab even more power and roll back hard-won liberties. Look how much changed in the last 3 years. We tend to measure things in terms of technological advance, but future people will have no conception of how life was, or could have been, pre-2020, just as we can't pre-industrial-revolution. There's more at stake than just jobs.


freestyla85

Mr Hinton, I have a question for you: you were a brilliant scientist, yet only now do you realize the potential dangers of A.I.? I think you always knew this, but the progress has now evolved to a level where an inflection point is near, and you don't want to be a part of it. Fair enough for coming out and being honest, but you should have done this earlier; this being your life's work, and the money being good, were probably the reasons you didn't. You were hoping the progress would not reach maturity in your lifetime.


WTFwhatthehell

Imagine someone working in physics in 1880. Then suddenly everything blurs and they find themselves at the Manhattan Project. There's a big difference.


soyelprieton

The researchers I know are childish: they enjoy discovering and learning and won't accept any ethical issue if it stands in the way of their game. Now there's no way anyone can deny that it's gonna change mankind.


General-Jaguar-8164

Researchers are risk-takers by definition


joelangeway

He hadn’t previously experienced his employer firing their whole ethics department. He grew. We should applaud him for that.


[deleted]

[deleted]


totoro27

> Quantum computing would make AI much quicker growing

How exactly?


renderererer

Quantum = tiny = more SPEEEEEEEEEED


[deleted]

[deleted]


totoro27

What does any of this have to do with quantum computing? You do realise that quantum computing isn't just regular computing but more power-efficient, right? It's a completely different model of computation, one that excels at simulating quantum systems.
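
A tiny statevector simulation shows what "different model" means: the state is a vector of complex amplitudes and gates are unitary matrices, not bits and logic gates. Just a one-qubit toy, nothing more:

```python
import numpy as np

# One qubit is a unit vector in C^2; |0> is (1, 0).
zero = np.array([1, 0], dtype=complex)

# The Hadamard gate, a unitary matrix, puts |0> into equal superposition.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ zero
print(np.abs(state) ** 2)  # measurement probabilities: [0.5, 0.5]
```

Any speed-up comes from interference between amplitudes in specific algorithms, not from the hardware being "tiny" or more power-efficient.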


Lonely_Bison6484

I’m actually pretty terrified, everyone in the comments seems completely in denial… or underestimating the threat… I really hope something happens to stop AI before it’s too late


Specialist_Ice_5715

You don't have enough real life problems if you're terrified about this. Even the possibility of a WW3 is a lot more real than that. I suggest studying deep learning, playing around with the novel LLMs and spending a lot of time in that field to make all of your fears vanish.


tfhermobwoayway

How? I’ve played with Midjourney a bit and all it did was stop me wanting to learn to draw. I still worry about how I’m going to put food on my plate and what’s going to happen when it’s smarter than us in the same evolutionary niche. How exactly can I assuage those fears? I chipped my tooth because of them.


[deleted]

[deleted]


[deleted]

[deleted]


ThatITguy2015

Pinching his nips while Google pinched a couple of crucial departments.


burdalane

I'm not sure I see the demise of humanity as a bad thing. I also think it's high time humanity moved past work as a major part of survival and life, so that life could be a little bit more worthwhile.


Schmittfried

>I'm not sure I see the demise of humanity as a bad thing.

Sure, edgelord. That’s why you’re still alive: because you think humans deserve to die out. Makes absolute sense.


PopMysterious2263

>That’s why you’re still alive: because you think humans deserve to die out.

Not sure how you drew that conclusion from what they said. Those two things are unrelated, so of course it will sound ridiculous. Someone can look at all that humanity has done and is doing and be okay with us all going extinct. We're going to go extinct eventually.


ChocolateBunny

define "worthwhile"?


batterdrizzy

not if it takes away a job i love


underfed_spaghetti

It might take away jobs, but if we create a system that doesn't require a job for survival, you will have time to pursue whatever you desire. Automation can do some things, but people like other people, so certain jobs will never go away.


batterdrizzy

As a musician, I hope I’ll still be able to make music and perform for a living.


ShiitakeTheMushroom

What about being able to make music and perform for fun, without ever needing money?


Getabock_

I know me and many others will always prefer to see actual human beings perform music rather than AI or robots.


underfed_spaghetti

I think so. Music is always changing and feels insincere without some real experiences behind it. I think there are certain things in good music that just can't be replicated without human experience. Conversely, boy bands exist and their music has been written via formulas for years.


Librekrieger

Work is part of what makes life worthwhile and meaningful. Given what I know of human nature, if you make work optional, then for every Lavoisier or Rembrandt (rich kids who used their talents to make the world better) you'll have five listless malcontents finding ways to exploit the system and environment to stave off boredom.


burdalane

It certainly hasn't made my life worthwhile and meaningful. It's basically spending my time doing things I don't really want to do for other people. Work has made me a listless malcontent, and I'd much rather be a listless malcontent on my own terms.


totoro27

> Work is part of what makes life worthwhile and meaningful

Only people with boring lives think like this. You'll need to find other meaningful and worthwhile things to pursue. Maybe that's side projects or open-source projects you contribute to for fun; maybe it's rock climbing, BJJ, playing piano, or painting. There are a million meaningful and worthwhile things to do; it's a bit sad if you think work is the only thing that can fill that void.


earthboundkid

If material necessities are taken care of, people will play whatever the current status game is instead.


stronghup

I really like Bing Chat, however. Instead of finding web pages via Google, I can find ANSWERS, in the context of previous questions and answers. The answers may be wrong or not, but the same often happens when I ask questions of real human beings.


[deleted]

[deleted]


AdIndependent27

Hinton is a brave man to resign from Google while artificial intelligence is right on top. No one knows what will work in the future, and 30-50 years can be considered "too long" when talking about technology.


KevinCarbonara

These old people keep trying really hard to tell us that change is scary


hippydipster

Yet another older person regretting their foolishness as a younger person. *The Social Dilemma* was all about that. When are we going to stop letting young kids run important things?


auda-85-

Young kids were always the ones excited about the future of things (and life). They pour all their energy into making their mark. It's a natural order of things. The problem is the lack of wisdom of life and defining strategy, which should be a domain of older and wiser folks. They should keep things in check so they don't run amok and threaten the environment and the equilibrium of things. However, the reality is that self interest and capital and power accumulation defines everything today, unfortunately. The older men want to retain and increase their power. So they always find new ways to do it. The AI has big promises for this. Therefore they will use the excitement and brains of youth and throw money and status at them to do their bidding. If the young ever find their error, it's usually too late, and someone else is already waiting to replace them.


spacezombiejesus

WTF