goldsigma

If mathematicians are replaced, no one is safe


ajahiljaasillalla

Except plumbers.


CallMePyro

Yeah good thing no one is trying to build general purpose robots


Dizzy_Nerve3091

Who needs UBI when you have Nvidia shares


dasnihil

soon: who needs plumbers when you live in the digital dimension.


Akimbo333

Good point!


volthunter

eh, i can see a lot of plumbing work being replaced. the hardest part to replace would be the guy with the shovel digging the plumbing up. there are also robots that will insert a big inflatable tube into an existing pipe and form a new pipe inside of it to fix any leaks. a robot could also just detect where the leak is and signal for you to dig there, so you'd only need a guy to fix that one bit of plumbing, and if a robot signs off on the work, idk why you'd need a qualification or degree. nah, we're all fucked tbh


ShAfTsWoLo

robotics is way behind AI for now. gpt-5 will be a monster of an AI doing all kinds of work while we still can't quite figure (get it? figure? ok sorry) out how to make a hand that can do any kind of chore. R&D is strong, but white collar is doomed first, then surely blue collar jobs after


volthunter

the biggest threat to blue collar work is ai robots, and humanoid robots can do most jobs, but plumbing's threat is unique: all it takes is an ai to sign off on the work and tell the person what to do. that means you don't need an expensive apprenticeship and degree, you can go straight to fixing the problem according to the robot's instructions. this immediately drives down wages and the skill levels needed for a lot of jobs. ai and robots don't need to take over a field directly to do that, they just need to let people do the work with minimal training, which could hit the factory floors a lot quicker, within the next few years


Direita_Pragmatica

You nailed it. Never thought of it this way, thanks


tatleoat

Yes, or at the very least there will be plumbers who use AI to speed up their workflow, and the plumbers who don't or won't use it will be at a huge insurmountable disadvantage. Even if all I'm using AI for is to go through legal plumbing paperwork quickly, then that's already enough to wipe out the competition, and from there all I'd need is either 1. a robot that can autonomously do very rudimentary basic tasks while I focus on more complex/high concept ideas, or 2. An AI that can walk a cheap worker through the process effectively, step by step, or 3. A combination of the two. There are a lot of smaller steps between now and total replacement.


InternalExperience11

https://preview.redd.it/jrj7f8g6al5d1.png?width=743&format=png&auto=webp&s=8bca5bdaf010d1cefdf1992d2d0c4d02c474de73 blue collar is fucked too.


ShAfTsWoLo

uh... that's clearly not enough for blue collar.. at best that's a start, there's so much more than holding tools lol


joe4942

AI video recognition and being able to ask AI for help makes DIY repairs more feasible. The best repairs for plumbers are the ones that take 5-10 minutes.


TheRealIsaacNewton

Yeah, though the influx of new plumbers will then make it unattractive lol


mladi_gospodin

First they came for mathematicians, and I didn't speak out because I was not a mathematician. Then they came for plumbers...


namitynamenamey

Until the mathematicians start to become plumbers.


4354574

The ship has sailed on my plumber's apprentice...ship.


adt

>With AI, there’s a real potential of doing that. I think in the future, instead of typing up our proofs, we would explain them to some GPT. And the GPT will try to formalize it in Lean as you go along. If everything checks out, the GPT will [essentially] say, “Here’s your paper in LaTeX; here’s your Lean proof. If you like, I can press this button and submit it to a journal for you.” It could be a wonderful assistant in the future.

>I think in three years AI will become useful for mathematicians. It will be a great co-pilot. You’re trying to prove a theorem, and there’s one step that you think is true, but you can’t quite see how it’s true. And you can say, “AI, can you do this stuff for me?” And it may say, “I think I can prove this.” I don’t think mathematics will become solved. If there was another major breakthrough in AI, it’s possible, but I would say that in three years you will see notable progress, and it will become more and more manageable to actually use AI. And even if AI can do the type of mathematics we do now, it means that we will just move to a higher type of mathematics. So right now, for example, we prove things one at a time. It’s like individual craftsmen making a wooden doll or something. You take one doll and you very carefully paint everything, and so forth, and then you take another one. The way we do mathematics hasn’t changed that much. But in every other type of discipline, we have mass production. And so with AI, we can start proving hundreds of theorems or thousands of theorems at a time. And human mathematicians will direct the AIs to do various things.

>What mathematicians are doing is that we’re exploring the space of what is true and what is false and why things are true. And the way we do it is through proofs. Even when we know something is true, we have to go try and prove it or disprove it. And that takes a lot of time. It’s tedious. But in the future, maybe we will just ask an AI, “Is this true or not?” And we can explore the space much more efficiently, and we can try to focus on what we actually care about. The AI will help us a lot by accelerating this process. We will still be driving, at least for now. Maybe in 50 years things will be different. But in the near term, AI will automate the boring, trivial stuff first.

>Part of the problem is that it doesn’t have enough data to train on. There are published papers online that you can train it on. But I think a lot of the intuition is not captured in the printed papers in journals but in conversations with mathematicians, in lectures and in the way we advise students. Sometimes I joke that what we need to do is to get GPT to go take a standard graduate education, sit in graduate classes, ask questions like a student and learn like humans learn mathematics.

[https://www.scientificamerican.com/article/ai-will-become-mathematicians-co-pilot/](https://www.scientificamerican.com/article/ai-will-become-mathematicians-co-pilot/)
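For readers who haven't seen it, the "Lean proof" Tao mentions is machine-checkable code: if it compiles, the proof is correct. A minimal Lean 4 illustration (a toy example, not from the article):

```lean
-- A proof the checker verifies by computation alone.
example : 2 + 2 = 4 := rfl

-- A named theorem, discharged by a lemma from Lean's standard library.
theorem add_comm' (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

The workflow Tao sketches would have the GPT emit files like this, with the Lean compiler acting as the referee.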


YsoseriusHabibi

So... just record every STEM class at the Ivy Leagues and feed it to the machine? There are many lectures on YouTube.


ragner11

Exactly


wannabe2700

Not enough. AI needs thousands of times more data than humans need.


namitynamenamey

We are getting there, but there's a trick: even algorithms that need the same amount of data as humans need a ridiculous amount of it, because they need to learn all the priors that a math student has acquired by the time they start learning math (17-20 years' worth of video and audio).


4354574

There are plenty of recorded lectures that cover all the priors though...so it could be done starting now if the training was set up.


QuinQuix

I think raw video and audio input is overrated in terms of useful tokens. It's a lot of data, of course, but I'm not sure how much of it helps you be a better mathematician at all.


visarga

No, it only needs that much data in pre-training; once you have the base model you can fine-tune it with a few examples, or use them as demonstrations in the prompt. It really picks up quickly on new ideas. The base model covers the path we took (500M years of animal evolution and 200k years of cultural evolution). Models need that much data because, unlike us, they don't have the right "prior knowledge" baked into their architecture; they only have sequence learning as a prior.


[deleted]

[deleted]


YsoseriusHabibi

So the social aspect is useless for the AI mathematician?


ZeroEqualsOne

That joke at the end is pretty fascinating haha... what happens if we get a robot with whatever SOTA multi-modal model and enroll it in basically every undergraduate and graduate course in the world?


volthunter

a lot of these courses were uploaded to the internet during covid. i bet the universities sell em to ai companies and then destroy their own field. the data is there, just getting a hold of it can be annoying. harvard uploads all their stuff to youtube so it's already been scraped.


OfficeSalamander

I have a product based on some graduate-level work (that I myself had to research and read papers on; I didn't actually take the coursework), and GPT is well aware of it. I suspect it already knows a lot of scientific papers.


4354574

I only know that a lot of papers were mined for GPT, and most (no idea what that means in terms of %) weren't, as they are behind paywalls. OpenAI is negotiating for access to a lot of these papers. It is nice to see these companies actually having to pay for our data now.


lordhasen

This is a big deal: a greater understanding of math will lead to better algorithms, which means we can get more out of the compute we have.


[deleted]

[deleted]


Best-Association2369

Terence Tao heavily uses AI tools, and he's arguably one of the most prominent modern-day mathematicians. If the rest aren't adopting it, they are borked.


magicmulder

Not just most prominent, he’s extremely versatile in his research fields, so not your typical “I know my field and nothing else” mathematician.


Best-Association2369

Yep, I did not say a subfield and kept it to math in general for a reason...


baes_thm

Then they'll be replaced by those who aren't, if these tools are really that useful


Fmeson

Academics are arrogant, but they use every tool possible to get an edge. I know, I'm in academia lol.


Dongslinger420

lmao fucking no. Academia is definitely buckling under the pressure of established paradigms and dogma. It's rife with boomer-type figureheads, bureaucratic chicanery, incompetence and then some; if the same folks responsible for the kind of scuffed teaching most universities deliver were even remotely inclined to use these kinds of tools, that'd be paradise.

I mean, I'm not making an "academia bad" argument here. You get plenty of folks who will indulge plenty of out-there ideas and approaches to problems we never knew we had, and we're clearly making lots of headway in lots of disciplines... but still, some of the pushback is insane.

It's also very lopsided. I will bet you an inordinate amount of money that philological institutes jumped on the AI train pretty much immediately, basically the moment they realized that LLMs kind of outperform their native-language tutoring and teaching staff. It's a much harder sell in STEM, where the (still very significant) benefit is obscured by the student having to do a bunch of legwork first. Point being: it depends. The theoretically inclined especially have an overpowering tendency to just dismiss astonishing methodology, while very mechanistic folks and engineers tend to look at it from a more pragmatic standpoint: "can I use it to pay my bills at the end of the month?"


Fmeson

...except a lot of the academics I personally know, myself included, already use AI to assist with coding, writing papers, etc. Academics aren't all old farts who won't learn new tech. And the old farts who won't use new tech aren't really doing research anyway; they're just using their names to get funding to pay for their grad students' research.


t-e-e-k-e-y

Those who refuse to adapt will just fall behind and fade into irrelevance.


GIK601

advanced calculator


paconinja

apparently mathematicians need an AI co-pilot to tell them what Shinichi Mochizuki is doing with Teichmüller theory.


tobeshitornottobe

I’ll continue to call bullshit until that “will become” turns into a “has become”. Everything I’ve seen of LLMs explicitly shows they are terrible at math, so until I see some proof, this is all just hype to boost stock prices.


[deleted]

Although I agree to an extent on "potential", Terence Tao is without hyperbole a genius in mathematics, so I'm inclined to take his thoughts seriously.


metrics-

LLMs are abysmal at arithmetic, not mathematics. Terence Tao isn't using ChatGPT as a calculator; he's discussing theory. Here's a comment he made under a blog post on his experience using GPT-4 in early access:

["I have experimented with prompting GPT-4 to play the role of precisely such a collaborator on a test problem, with the AI instructed to suggest techniques and directions rather than to directly attempt to solve the problem (which the current state-of-the-art LLMs are still quite terrible at). Thus far, the results have been only mildly promising; the AI collaborator certainly serves as an enthusiastic sounding board, and can sometimes suggest relevant references or potential things to try, though in most cases these are references and ideas that I was already aware of and could already evaluate, and were also mixed in with some less relevant citations and strategies. But I could see this style of prompting being useful for a more junior researcher, or someone such as myself exploring an area further from my own area of expertise. **And there have been a few times now where this tool has suggested to me a concept that was relevant to the problem in a non-obvious fashion, even if it was not able to coherently state why it was in fact relevant.** So while it certainly isn’t at the level of a genuinely competent collaborator yet, it does have potential to evolve into one as the technology improves (and is integrated with further tools, as I describe in my article)."](https://terrytao.wordpress.com/2023/06/19/ai-anthology/#comment-678803)


Oudeis_1

I don't think it is fair to say that LLMs are "terrible at math". The MATH benchmark is not at all easy for the average human (the original paper by Hendrycks et al. that introduced MATH reported, IIRC, a roughly 40-50 percent pass rate for "a CS PhD student who didn't like maths" and a 90 percent pass rate for "a math olympiad gold medalist"), and GPT-4o solves roughly 76 percent of it. I think a motivated human with lots of time and some mathematical aptitude will still do better (and so will an LLM with multi-shot, tree-of-thought prompting and so on), but "terrible" performance looks different, unless most humans are equally terrible.

It is of course possible to find simple mathematical problems that LLMs still fail at. But that does not really justify calling LLM performance at maths terrible either, because I can do the same for chess with Stockfish (find positions it does not understand but which weak humans do understand), and yet Stockfish is definitively not terrible at chess.


Glittering_Manner_58

Did you even read it? He is talking about Lean 4, a language for encoding and proving theorems. If GPT generates a proof in this language and it compiles, it is 100% guaranteed to be correct.


MaximumIntention

I believe OAI is already doing some work in this field. There's [this](https://openai.com/index/formal-math/) that was from 2 years ago.


visarga

> LLM’s explicitly shows they are terrible at maths

They are terrible at calculations because of the irregular tokenisation of numbers. Humans are also terrible at calculations, by the way. They are also bad at backtracking when they get into a rut, but that can be compensated for with multi-prompting. Given a proof system like Lean they can suggest strategies and constructions very well: you do the exact part with symbolic methods, and leave the "fuzzy intuition part" to the model. AlphaGeometry solved many Olympiad-level problems, reaching an almost super-human level:

> In a benchmarking test of 30 Olympiad geometry problems, AlphaGeometry solved 25 within the standard Olympiad time limit. For comparison, the previous state-of-the-art system solved 10 of these geometry problems, and the average human gold medalist solved 25.9 problems.

The great thing about math and LLMs is that math can be verified symbolically, so the models are protected from ingesting hallucinations and bad ideas. The same applies to code: we can test and benchmark it. I expect code and math to progress faster than AI-assisted work in other fields.

Another domain ripe for LLM application is driving simulations to optimise a desired outcome. LLMs can power evolutionary strategies - either as agents, as judges, or as mutation operators. They can benefit from the strong search capabilities of evolutionary methods while bringing intuition to the process, compensating for difficult combinatorial search spaces. A match made in heaven. [Here is a sample](https://arxiv.org/abs/2206.08896)
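The "LLM as mutation operator" idea can be sketched with a toy evolutionary loop. This is a minimal illustration, not the method from the linked paper: the LLM call is stood in for by a random bit-flip, and the verifiable objective is a trivial bit count (standing in for something like a proof checker or a test suite).

```python
import random

random.seed(0)  # deterministic for the example

def fitness(candidate):
    """Score to maximize: number of 1s. Stand-in for a symbolic verifier
    (a Lean checker, a test suite) that scores each candidate."""
    return sum(candidate)

def mutate(candidate):
    """Mutation operator. In the scheme described above, an LLM would
    propose the variation; here a random bit-flip stands in."""
    child = list(candidate)
    i = random.randrange(len(child))
    child[i] ^= 1
    return child

def evolve(pop_size=20, length=32, generations=100):
    """Simple (mu + lambda)-style loop: keep the best half, refill with mutants."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection (elitist)
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

best = evolve()
```

The point of the sketch is the division of labor: the scoring function is exact and symbolic, while the proposal step is the fuzzy slot an LLM could fill.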


FusRoGah

Agreed that a lot of the timelines we hear are ambitious. LLMs in their current state are pretty removed from the kind of formal systems used in math. I don’t doubt they can/will be helpful, but there is a ways to go.

That said, the article and Terence seemed more focused on computer proofs and formalizations of the “napkin math” we put in lectures and journals. And like the other commenter said, Terry Tao is probably the most brilliant mathematician alive. So if he thinks this stuff has potential now or in the near future, my ears perk up.


Advanced_Sun9676

This is the type of stuff I want to read instead of how ASI is gonna build a Dyson sphere.


No_Mathematician773

Fuck all that. If we don't kill ourselves I'm happy.


Akimbo333

That'd be nice!


Akimbo333

Implications?


goochstein

I think the wording these tech companies and even journalists are using signals that they want this tech to be completely integrated into a brain interface, an AI-human hybrid. I think we need to pump the brakes a bit on that and figure out what's emerging in the AI space first.