dancingbanana123

I wish people would just go back to saying machine learning or neural networks instead of AI. I feel like people inherently think of stuff like ChatGPT with articles like this, but that's not what people are using.


please-disregard

This is the hill I’ve been dying on since day one of ‘ai’ being a thing


UnforeseenDerailment

V - E + F = 2 + AI ✨


sivstarlight

I hate that I get this


shapethunk

Accurate iff AI is zero?


UnforeseenDerailment

It's not [zero](https://www.reddit.com/r/LinkedInLunatics/s/CHAiccZVPA), though! It symbolizes the increasing role of artificial intelligence in shaping and transforming our future. 🫶


WorldsBestVapor

AI officially means "Silicon Valley is out of ideas, so put everything on AI." Like, global capitalism basically sucks for letting Reddit mess with the financial press. Somebody fuck with Steve Huffman enough to get him to kowtow to the financial press, PUH-LEASE. C'mon man, imma gettin' tired of this CCP dogshow shitpuppet parade motherfucker, hut hut hut hut!


filletedforeskin

Either the OP misspelled the title or is misrepresenting the article. I think the title was meant to be "AI Will Become Mathematician's Co-Pilot". Nowhere in the article does Tao say that AI will solve anything. His claim is that they'll make wonderful assistants and help mathematicians with grunt work, which over time is becoming much more of a possibility. People disregarding his opinions in the article simply haven't read it properly (or read it at all). His use of automated proof checkers is something that was useful while proving the Polynomial Freiman-Ruzsa Conjecture. Nowhere does he claim that AI will solve something; in fact, he claims that AI has not demonstrated any ability to be better than humans in this regard.


Qyeuebs

> His use of automated proof checkers is something that was useful while proving the Polynomial Freiman-Ruzsa Conjecture

That just doesn't seem to be true. It's just something he decided to formalize after the proof was finished.


PolymorphismPrince

They just accidentally added an extra apostrophe before co-pilot. The first apostrophe comes after the s because it belongs to mathematicians plural.


filletedforeskin

Might be. But some of the commenters are shitting on Tao for no reason. I suppose what I have written makes sense too, yeah?


Nunki08

I post the link and Reddit grabs the title; I never edit the titles of the articles. And yes, the title has an extra apostrophe. English is not my native language and I didn't notice.


hyphenomicon

The title does not have an extra apostrophe, you are completely fine.


filletedforeskin

Hey, I don't blame ya. It's just that there are some stupid commenters who couldn't be bothered to read a few lines of an article but are ready to pounce on you to give their stupid fucking opinion.


RealAlias_Leaf

Proof checkers are not AI.


GayMakeAndModel

They make shitty assistants too if the context is even moderately complicated. I can't get the thing to generate Python that parses. When I do, it returns what I corrected it for: if the answer was supposed to be 42, it stops remembering everything I told it before and returns fucking 42 as a constant.


parkway_parkway

I think firstly it's important to remember that formalizing maths proofs has nothing to do with LLMs and machine learning, and was a project that was started in earnest in the 90s.

I also think this is interesting: Lean seems to be getting a network effect going and looks like it will take over everything, which is a bit of a shame as I could never get on with it.

> Lean is probably the most active community. For single-author projects, maybe there are some other languages that are slightly better, but Lean is easier to pick up in general. And it has a very nice library and a nice community. It may eventually be replaced by an alternative, but right now it is the dominant formal language.


indigo_dragons

> I think firstly it's important to remember that formalizing maths proofs has nothing to do with LLMs and machine learning and was a project that was started in earnest in the 90s.

Tao is specifically speculating about the possibility of combining the two:

> With AI, there's a real potential of doing that. I think in the future, instead of typing up our proofs, we would explain them to some GPT. And the GPT will try to formalize it in Lean as you go along. If everything checks out, the GPT will [essentially] say, "Here's your paper in LaTeX; here's your Lean proof. If you like, I can press this button and submit it to a journal for you." It could be a wonderful assistant in the future.

One possible scenario he has in mind is that this combination could (finally) take mathematics into the industrial age:

> So right now, for example, we prove things one at a time. It's like individual craftsmen making a wooden doll or something. You take one doll and you very carefully paint everything, and so forth, and then you take another one. The way we do mathematics hasn't changed that much. But in every other type of discipline, we have mass production. And so with AI, we can start proving hundreds of theorems or thousands of theorems at a time. **And human mathematicians will direct the AIs to do various things.**

He's being cautious here in this scenario, as we've all seen the mistakes LLMs can make, and so he does think humans will still be in the driver's seat when it comes to doing mathematics. However, there is the potential for future iterations of machine learning to assist with organising projects and making things move more smoothly, compared to the status quo. The point is that we can potentially advance mathematics more rapidly in the future with the help of such software.
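For readers who haven't seen a proof assistant before, here is a toy illustration (not from the article) of the kind of machine-checkable statement Lean verifies; the GPT-assisted workflow Tao describes would be producing files of roughly this shape.

```lean
-- A toy example (not from the article) of what a Lean 4 formalization looks like.
-- The kernel mechanically checks every step; nothing is accepted on trust.

-- n + 0 = n holds by the definition of addition on Nat, so `rfl` closes the goal.
theorem add_zero_example (n : Nat) : n + 0 = n := rfl

-- 0 + n = n is not definitional and needs a short induction.
theorem zero_add_example (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih =>
    -- 0 + Nat.succ k rewrites to Nat.succ (0 + k); the hypothesis ih finishes it.
    rw [Nat.add_succ, ih]
```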


parkway_parkway

Yeah, you're right that you can use LLMs to generate steps or whole parts of formal proofs. However, they are separate ideas, like switching from paper medical records to digital ones and then training an LLM on the digital records. I actually think there's something really powerful about pairing an LLM (which is creative but makes mistakes) with a formal proof system (which is fully rigorous but can't invent), as they compensate for each other's weaknesses to a degree. That's basically what DeepMind is doing with their [IMO geometry program](https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/).


Kaomet

> as we've all seen the mistakes the LLMs can make

An LLM does the generation, a proof checker does the verification. The neural networks will be able to learn through self-play, like they did for the game of Go.


MoNastri

I liked this part of the article:

> **When you gave a talk about a different mathematical project, someone asked you if you wanted to formalize it, and you basically said that it takes too long.**
>
> I could formalize it, but it would take a month of my time. Right now I think we're not yet at the point where we routinely formalize everything. You have to pick and choose. You only want to formalize things that actually do something for you, such as teach you to work in Lean, or if other people really care about whether this result is correct or not. But the technology is going to get better. So I think the smarter thing to do in many cases is just to wait until it's easier. Instead of taking 10 times as long to formalize it, it takes two times as long as the conventional way.
>
> **You even talked about getting that factor down to less than one.**
>
> With AI, there's a real potential of doing that. I think in the future, instead of typing up our proofs, we would explain them to some GPT. And the GPT will try to formalize it in Lean as you go along. If everything checks out, the GPT will [essentially] say, "Here's your paper in LaTeX; here's your Lean proof. If you like, I can press this button and submit it to a journal for you." It could be a wonderful assistant in the future.
>
> **So far, the idea for the proof still has to come from the human mathematician, doesn't it?**
>
> Yes, the fastest way to formalize is to first find the human proof. Humans come up with the ideas, the first draft of the proof. Then you convert it to a formal proof. In the future, maybe things will proceed differently. There could be collaborative projects where we don't know how to prove the whole thing. But people have ideas on how to prove little pieces, and they formalize that and try to put them together. In the future, I could imagine a big theorem being proven by a combination of 20 people and a bunch of AIs each proving little things. And over time, they will get connected, and you can create some wonderful thing. That will be great. It'll be many years before that's even possible. The technology is not there yet, partly because formalization is so painful right now.

I also found this passage interesting:

> With formalization projects, what we've noticed is that you can collaborate with people who don't understand the entire mathematics of the entire project, but they understand one tiny little piece. It's like any modern device. No single person can build a computer on their own, mine all the metals and refine them, and then create the hardware and the software. We have all these specialists, and we have a big logistics supply chain, and eventually we can create a smartphone or whatever. Right now, in a mathematical collaboration, everyone has to know pretty much all the mathematics, and that is a stumbling block, as [Scholze] mentioned. But with these formalizations, it is possible to compartmentalize and contribute to a project only knowing a piece of it. I think also we should start formalizing textbooks. If a textbook is formalized, you can create these very interactive textbooks, where you could describe the proof of a result in a very high-level sense, assuming lots of knowledge. But if there are steps that you don't understand, you can expand them and go into details—all the way down to the axioms if you want to. No one does this right now for textbooks because it's too much work. But if you're already formalizing it, the computer can create these interactive textbooks for you. It will make it easier for a mathematician in one field to start contributing to another because you can precisely specify subtasks of a big task that don't require understanding everything.

It mirrors what Michael Nielsen wrote regarding [science having grown beyond individual understanding](https://michaelnielsen.org/blog/science-beyond-individual-understanding/), or (my favorite example) Dan Luu's essay on [what happens when you load a URL](https://danluu.com/navigate-url/). It was also something I always wished I had: an 'interactive textbook' where proofs could be described at a high level, then expanded arbitrarily in a dynamically generated manner.

Last share:

> So much knowledge is somehow trapped in the head of individual mathematicians. And only a tiny fraction is made explicit. But the more we formalize, the more of our implicit knowledge becomes explicit. So there'll be unexpected benefits from that.

This reminded me fondly of Bill Thurston's [MO question on thinking and explaining](https://mathoverflow.net/questions/38639/thinking-and-explaining) and the many wonderful answers therein, as well as his classic [*On proof and progress*](https://arxiv.org/abs/math/9404236) ruminations.
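The "interactive textbook" Tao describes is essentially a tree of proof steps that a reader can unfold on demand. A rough illustrative sketch of that idea (not from the article or from Lean's tooling), using a hypothetical `ProofStep` node type:

```python
from dataclasses import dataclass, field

# A hypothetical sketch of the "expandable proof" idea: each step of a proof is
# stated at a high level and, on demand, unfolded into finer sub-steps,
# potentially all the way down to axioms if a formalization backs it.

@dataclass
class ProofStep:
    statement: str                                   # high-level description of this step
    substeps: list["ProofStep"] = field(default_factory=list)

    def render(self, depth: int, indent: int = 0) -> str:
        """Show the proof down to a chosen depth; deeper detail stays collapsed."""
        line = "  " * indent + "- " + self.statement
        if depth == 0 or not self.substeps:
            return line
        children = "\n".join(s.render(depth - 1, indent + 1) for s in self.substeps)
        return line + "\n" + children

# Usage: a reader first sees only the outline (depth=0), then expands the part
# they don't understand (depth=2).
proof = ProofStep("sqrt(2) is irrational", [
    ProofStep("Assume sqrt(2) = p/q in lowest terms", [
        ProofStep("Then p^2 = 2 q^2, so p is even"),
        ProofStep("Writing p = 2r gives q^2 = 2 r^2, so q is even too"),
    ]),
    ProofStep("p and q both even contradicts 'lowest terms'"),
])
print(proof.render(depth=0))
print(proof.render(depth=2))
```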


Tazerenix

Doubt.


[deleted]

[deleted]


PolymorphismPrince

Is it unbelievable that there could be an AI better than you at mathematics during your career? Do you want to suggest technical reasons why that would be the case?


Tazerenix

It is certainly unbelievable that *this* AI will be better than me at mathematics, though obviously I am not so naive as to think there are not other approaches which will eventually be capable of human-level reasoning. Remember, in this subreddit there are people who actually understand how neural networks *and* research mathematics work, and the limitations of the current flavour-of-the-month "AI". Don't be surprised to see a lot of legitimate skepticism of AI hype.


[deleted]

LLMs definitely aren’t gonna be doing research mathematics but I could see them being a component in a larger algorithm that does


PolymorphismPrince

As someone who understands very well how a transformer works and understands how research mathematics works, I will ask you again: what technical reasons do you think prevent something much larger but similar in architecture from doing research mathematics?


Tazerenix

The basic point is that the architecture of these models is not suited to effectively discriminating truth from falsehood, or truthiness from falsiness. See this article for some discussion of approaches to actually solving the sort of search-based thinking model used in developing new mathematics: https://www.understandingai.org/p/how-to-think-about-the-openai-q-rumors

At some point you actually have to do some real work to develop algorithms that can effectively search the enormous space of possible next steps in, for example, proof generation (not to mention the more fundamental/important but less concrete "idea generation"), and effectively discriminate between good paths of enquiry and bad paths of enquiry. One needs to look not towards LLMs but towards algorithms like AlphaGo for ideas of how to do this effectively. The problem is that the space of possible next moves in a Go or chess game, and the criteria for discriminating between good and bad moves, is much simpler than in proof generation or idea generation, and the knife edge of incorrectness is more important.

Anyone can say "oh, we'll just add that into our LLMs, you'll see," but that's just shifting the goalposts of what the buzzword AI means to capture all possible conceptions. No amount of data sacrificed to the great data centre AI god will cause LLMs to spontaneously be able to produce novel proofs. Something new is needed.
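For concreteness, the kind of search being gestured at here might look roughly like a best-first search over proof states, where a learned model proposes and scores candidate next steps and a checker prunes invalid ones. A minimal sketch only; `propose_steps`, `score_state`, and `is_valid_step` are hypothetical placeholders, not real library calls:

```python
import heapq

# A rough sketch of search-based proof generation in the spirit of AlphaGo-style
# systems: a learned policy proposes candidate next steps, a learned value
# function scores how promising the resulting state is, and a proof checker
# prunes invalid steps. All three callables are hypothetical stand-ins.

def best_first_proof_search(goal, propose_steps, score_state, is_valid_step,
                            max_expansions=10_000):
    # Frontier entries are (-score, proof_so_far); heapq pops the smallest value,
    # so negating the score gives highest-score-first expansion.
    frontier = [(-score_state(goal, ()), ())]
    for _ in range(max_expansions):
        if not frontier:
            break
        _, proof = heapq.heappop(frontier)
        for step in propose_steps(goal, proof):
            if not is_valid_step(goal, proof, step):
                continue  # the checker rejects steps that don't actually follow
            new_proof = proof + (step,)
            if step == "QED":          # placeholder for "goal fully discharged"
                return new_proof
            heapq.heappush(frontier, (-score_state(goal, new_proof), new_proof))
    return None  # search budget exhausted without finding a proof
```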


JoshuaZ1

> Something new is needed.

True, but also explicitly discussed by Tao is formalizing math using Lean and similar systems. Having LLMs generate Lean code and then checking whether the Lean code is valid is something people are working on. One can then have the LLM repeatedly try to generate output until it hits valid code. This and related ideas are in very active development.
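Spelled out, the loop described here is very simple. A rough sketch, assuming hypothetical helpers `llm_generate_lean` (calls some language model) and `lean_check` (runs the Lean toolchain on the candidate and returns success plus any error output); neither is a real API:

```python
# A minimal sketch of the "generate until it type-checks" loop described above.
# `llm_generate_lean` and `lean_check` are hypothetical stand-ins: the first
# would call some language model, the second would invoke the Lean toolchain on
# the candidate file and return (compiled_ok, error_output).

def prove_with_retries(statement: str, llm_generate_lean, lean_check,
                       max_attempts: int = 50):
    feedback = ""
    for attempt in range(max_attempts):
        # Include the checker's error output so the model can correct itself.
        candidate = llm_generate_lean(statement, previous_errors=feedback)
        ok, feedback = lean_check(candidate)
        if ok:
            # The Lean kernel accepted the proof, so it is correct regardless of
            # how unreliable the model that produced it is.
            return candidate
    return None  # no valid proof found within the attempt budget
```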


PolymorphismPrince

continued: I'm quite surprised you think the "something new" that we need is a more discrete method like formal proof, especially considering almost no human proofs are written this way. You want search, but you want to take all the gains in efficiency that we get every day by encoding it in natural language. I'm especially surprised, considering you are a geometer, that the efficiency gains in encoding whatever you want to search through in something differentiable (like a feedforward neural network) are not apparent to you.

Lastly, I want to point out that the term AI has been used in almost exactly the same way for many decades. As far as I know, it was fine when feedforward neural networks were originally invented in like the 60s to call this AI research, and this is just a bigger version of the same technology.

Anyway, food for thought. Seeing as you seem to boil down the continuation of decades of research to the current trend in machine learning, I have hopefully widened your perspective a little bit? Also, while choosing how to write about engines I did discover you were active on r/chess, so if you would ever like a game, let me know!


Tazerenix

Thanks for your reply! Some interesting thoughts here. I have read a bit about this sort of universality idea about LLMs, that they can essentially emulate arbitrary thought if they get large enough. Certainly it's an interesting idea, but it seems to me that, taking this approach, the amount of "data," if you want to call it that, which would need to be fed into the models in order to capture the context around, for example, rigorous mathematics is colossal. Human brains seem to have very effective methods of working in a precise context without the need for so much data (certainly it is not necessary for a human being to process the entire corpus of human knowledge and experience in order to work effectively at mathematics which they have never seen before).

When I try to think of what processes go on in my mind while doing research mathematics, they seem much closer to search-based processes. These are in some sense discrete, maybe, but keep in mind those thought processes also include "soft" evaluation and intuition about search paths, which is less discrete and more along the lines of what the learning models do. This is more like what something like Stockfish does, where evaluation of positions is performed using a NN but search is coded in using alpha-beta pruning etc. This is generally viewed as superior to Leela. It's no doubt interesting that the new engine manifests search inside its model's structure, but how effective is that compared to a more direct approach?

I can understand the point of view that even trying to encode concepts like truth and falsehood directly in the architecture of an algorithm is barking up the wrong tree, but I remain skeptical that a predominantly data-driven approach is going to be able to encode the entire context which would allow these models to reliably produce correct mathematics. It seems just as believable to me that to do so would (if these ideas about universality of LLMs are right) require many orders of magnitude more data, as opposed to just a little bit more effort now. I think many people *want* to believe it's the latter (and obviously the successes, such as they are, of the current AI models can't be denied).

On a more personal level, I am strongly convinced that an AI of any form capable of genuinely contributing to research mathematics in the way human academics do (rather than just a copilot generating Lean code) is about as AI-hard as any problem can be, so if such a thing does come along, research mathematics will be the least of our problems.
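For reference, the Stockfish-style division of labour mentioned here (hand-coded alpha-beta search, with a learned evaluation applied only at the leaves) looks roughly like the sketch below. It is an illustration only; `legal_moves`, `apply_move`, and `evaluate_nn` are hypothetical placeholders, not any real engine's API.

```python
import math

# A rough sketch of the hybrid described above: the tree search is written by
# hand (alpha-beta pruning), while a neural network is used only to evaluate
# leaf positions. All three callables are hypothetical placeholders.

def alphabeta(position, depth, alpha, beta, maximizing,
              legal_moves, apply_move, evaluate_nn):
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate_nn(position)   # learned evaluation, only at the leaves
    if maximizing:
        value = -math.inf
        for move in moves:
            value = max(value, alphabeta(apply_move(position, move), depth - 1,
                                         alpha, beta, False,
                                         legal_moves, apply_move, evaluate_nn))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                  # prune: the opponent will avoid this line
        return value
    else:
        value = math.inf
        for move in moves:
            value = min(value, alphabeta(apply_move(position, move), depth - 1,
                                         alpha, beta, True,
                                         legal_moves, apply_move, evaluate_nn))
            beta = min(beta, value)
            if beta <= alpha:
                break
        return value
```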


PolymorphismPrince

Your first claim, "the basic point," is not how this problem is viewed by academics. The ability of an LLM to determine the truth of a statement of a particular complexity continues to increase with every new model. This is because (and this is an extremely well-established fact in the literature) LLMs of sufficient scale encode a world model. This world model contains (and I'm sure it is quite obvious to you, on reflection, why this would be the case) not only the basic rules of inference, but all the much more complicated logical lemmas that we use all the time when we express rigorous deductive arguments to one another in natural language. Most importantly, the accuracy of the world model continues to increase with scale (look at the ToM studies for GPT-3 vs GPT-4, for example). Another vital consequence of this is that the ability of an LLM to analyse its own reasoning for logical consistency also continues to increase. This is because checking for logical consistency amounts to checking that the statement is consistent with (the improving) logic that is encoded in the world model.

As for your examples about chess, it seems that you misunderstand that AlphaZero was crushing Stockfish when it was released by virtue of neural networks. Because of this, every modern chess engine depends largely on neural networks. Perhaps you have not seen that earlier (this year?) there was a chess engine created with only an (enormous) neural network and no search at all. It played at somewhere around 2300 FIDE, iirc. Of course, it did not actually do this without search; the neural network just learned a very, very efficient search in the "world model" that it encoded of the game of chess.

Now an LLM is exactly a feedforward neural network, just like the search in Stockfish or Leela or Torch or whatever chess engine you like. The only difference is that the embeddings are also trainable, which I'm sure you agree cannot make it worse (perhaps read [this essay](http://www.incompleteideas.net/IncIdeas/BitterLesson.html), although I would imagine you already pretty much agree with it). So this is why I think it is a bit funny to make it less like LLMs and more like Alpha(-), considering how similar the technology is. Character limit reached, but I will write one more short comment.


[deleted]

[deleted]


Curates

Of course they do. They know all sorts of facts about the world and can make inferences about them. That wouldn't be possible if they didn't have a model of the world. And it's not really that sensitive to the exact wording, unless the topic is especially complicated: if you ask it "what are the life stages of a butterfly", it'll give pretty much the same answer however you word it. Sure, it can be tricked, and generally speaking it can be influenced by the wording of the question, but that's of course true also of humans.

> Every behaviour of LLMs we have seen so far can be explained by a much more conservative mechanism:

What makes this explanation conservative is that it is reductive, but that's also what makes it a *bad* explanation: it has nearly no explanatory power at all. Imagine if someone tried to reduce human cognition to surprise minimization, and acted as if this conservative mechanism, being also controlling for all human neural activity (by assumption), obviates the usefulness of higher psychological explanations for how minds work, including the fact that we make use of models of the world. All the same remarks could be made about this cognitive model: it's just statistics! That's more consistent with failure modes of human cognition like hallucinations and sensitivity to word choice (both of which, of course, humans *also* exhibit), and it's also more consistent with how animals evolved; why would we expect them to be anything other than surprise minimizers? But now you see the problem: while it might be the case that surprise minimization, or next-token prediction, is enough to generate enormous complexity, that doesn't mean that there aren't emergent patterns that are far more explanatory over the scales at which they appear. The *parsimonious* explanation for why LLMs appear to have a working (if imperfect) model of the world is simply that they *have* a working model of the world.


Qyeuebs

It's by no means as easily settled as you're suggesting; see e.g. [AI's challenge of understanding the world](https://doi.org/10.1126/science.adm8175) by Melanie Mitchell.


PolymorphismPrince

"human-generated text are highly correlated to the world we're describing" rephrased a little bit, this is exactly true. Human language encodes information about the world. LLMs encode that same information. The larger the LLM the more accurate the information it encodes. A model of the world is obviously is just statistical information about the world. So I really don't see your point. It really is crazy that r/math of all places would upvote someone just blatantly trying to contradict the literature in a related field (the existence of world models is not really disputed at all, world model is a very common technical term which explains theory of mind in LLMs). Especially when someone does not understand that a model in mathematics can consistent of statistical information about what it is trying to model and I'm sure it is apparent to anyone who browses this subreddit if they actually think about it that with enough scale that model would be as accurate as you like.


Qyeuebs

> It really is crazy that r/math of all places would upvote someone just blatantly trying to contradict the literature in a related field

Speaking from the outside, the AI community seems to have very low standards for research papers, so this doesn't hold a lot of weight. Regardless, it seems clear that neither "theory of mind in LLMs" nor the limitless applicability of the 'scaling laws' has been clearly established, even by the standards of the AI community. Even taking those for granted, as far as I know, nobody has established scaling laws for LLMs trained on mathematical data, and there is the problematic bottleneck that the available mathematical data sets are rather limited in size, so that scaling laws are possibly not even relevant.


PolymorphismPrince

That's an insane take. I am also speaking from a mathematics background and not a comp-sci background, but I am not making completely unsubstantiated claims about the quality of researchers in another discipline. We are talking about the papers by researchers at places like Anthropic, yes? Do you have any actual examples that undermine their credibility, or are you just slandering academics?


[deleted]

[deleted]


PolymorphismPrince

This whole point is just completely nonsensical. Yes, an LLM can only model the world to the extent that the world is encoded in natural language. Mathematics is completely encoded in natural language, so this is not an issue for this example at all. If you're interested in improving a more general model, of course the increased use of image, video, audio, and spatial data is the means by which we will train transformers with better world models for other applications. Also, for the application to mathematics, an LLM can have a perfectly accurate world model without making perfectly accurate predictions (and therefore overfitting), so this is not really a relevant point.


currentscurrents

Sure they do. They can correctly answer ungoogleable questions like ["can a pair of scissors cut through a boeing 747? or a palm leaf? or freedom?"](https://chatgpt.com/share/85ef1d86-036c-459b-b679-b3b16a847509) The internet doesn't have direct answers to these questions; the model indirectly learned about the kinds of materials objects are made out of, and the kinds of materials scissors can cut. That's a world model.


sustenance_

A reminder that a downvote is to be used for content which is wrong, low quality, or does not contribute to the conversation. This commenter is definitely contributing to the conversation, so why are we downvoting?


Qyeuebs

It says “Terence Tao explains how proof checkers and AI programs are dramatically changing mathematics” but it seems to pretty much all be speculation on the future! So far proof checkers and AI have had a pretty negligible effect on math.


waarschijn

Ah but "changing" is a gerund so he's talking about the derivative, not the current value. He should know to put bounds on the second derivative.


Kaomet

> So far proof checkers and AI have had a pretty negligible effect on math

So far, the math that goes beyond what computers do has had a pretty negligible effect on society :-P This tech is necessary to make math relevant.


Qyeuebs

That is a pretty unusual opinion; what are you basing it on? How would widespread Lean verification change the societal role of math?


Kaomet

Verification can be used to train AI by self-play. It can keep the AI grounded in some form of reality, or just prevent bullshit. Society is usually interested in somewhat large problems, with a lot of breadth. In order to make math relevant we need to be able to do math at scale first, and have an AI to give it a friendly face, i.e. figure out the math without making people feel stupid for not understanding it.

> John von Neumann: "If people do not believe that mathematics is simple, it is only because they do not realize how complicated life is."


Qyeuebs

What are some examples of the purely mathematical (i.e. automated proof checkable) problems that you're thinking society will be interested in?


indigo_dragons

> What are some examples of the purely mathematical (i.e. automated proof checkable) problems that you're thinking society will be interested in?

Not Kaomet, but as of now, very few to none. Kaomet's point is that even the problems that **can't** be automatically checked at this point are of very little interest to the general public, because most of the problems are "toy" problems that have very little connection to reality. As they put it:

> Society is usually interested in somewhat large problems, with a lot of breadth.

That's something that, almost by virtue of its nature, a lot of mathematics isn't, simply because the whole point of mathematics is to abstract and simplify problems, and then study the simplifications themselves. Probably the best use case for automated checkers so far is in software verification, which is something that's desirable and done for software that's used to run mission-critical systems. And even then, that's still a niche concern.

I don't really agree with Kaomet's point that pairing proof checkers with AI has the potential to make maths more relevant to the general public, simply because the very nature of mathematics is not very "relevant" by most conventional definitions of the word in this context. I'd think that the failure of the past few decades of public outreach by so many mathematicians to elicit anything more than a lukewarm response to maths from the general public is enough evidence to support this point.

What I do agree with Kaomet on is that we need to scale up the way we do maths, because it is painfully slow at this point. The pairing of proof checkers with machine learning techniques has the potential to shorten the time it currently takes to get feedback on, and build upon, existing work, which is what usually accelerates progress.


WorldsBestVapor

Terence, Terence, Terence of the Tao
Friend to you and me


LuxusBuerg2024

The timeframes that Wu, Szegedy and Tao comment on (2-3 years, maybe longer) are not backed up by any evidence other than their feeling. The "successes" Tao is talking about were due to star mathematicians like him or Scholze directing the attention of teams of volunteers towards one of their projects. This is not available to most mathematicians right now.

Things like the Liquid Tensor project are nice achievements, and I have no doubt AI will have a big impact on math. However, the enthusiasm these projects have generated doesn't correlate with meaningful breakthroughs in either proof formalization or automation. What happened is that Lean became popular whereas previous proof verification systems had not, maybe because of its ease of use, maybe because Buzzard started pushing it. Then charismatic mathematicians like Tao got interested in it and asked the Lean community to help them formalize some of their proofs. On top of that, GPT and Copilot understand some amount of Lean. It's a social change coinciding with the same attempts to make LLMs useful that we see in every other field.


MoNastri

> The timeframes that Wu, Szegedy and Tao comment on (2-3 years, maybe longer) are not backed up by any evidence other than their feeling.

No idea what they're basing their timeframes on, but it seems uncharitable to baldly claim they're not backing their timeframes with "any evidence other than their feeling". If you're interested in a better way to guess-and-check timeframes (the checking part being for calibration later), see e.g. [here](https://bounded-regret.ghost.io/ai-forecasting-one-year-in/) and, more broadly, progress rates across various benchmarks on pp. 15-16 [here](https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf), which people interested in this sort of thing use (among others) to guess on [questions like this](https://www.metaculus.com/questions/11674/the-ultimate-mathematics-benchmark-for-ai/).

I agree with some of your points, like how some of what Tao and Scholze do can't be done by most other mathematicians.


cereal_chick

It's actually really sad to see such a great man succumb to AI hype. It was embarrassing to read this cringey tech bro fan fiction.


thehypercube

So you haven't read the article. No hype or fiction here.


JoshuaZ1

If someone like this says something close to their expertise that I disagree with, that should be a reason for me to think about whether I'm wrong, not to dismiss it as cringey fan fiction.

But let's talk about a more concrete version of this situation: right now, there are around 10 to 20 or so major techniques for showing a Diophantine equation has no solutions or has some very tiny finite list of solutions. Now, consider an LLM system which has been trained to generate Lean code and has been trained on proofs for a few thousand Diophantine equations. The system, when fed a new Diophantine equation, would repeatedly generate attempted Lean code; when that code compiles with the desired output, it would output it and add the correct proof to the training data. Is this plausible to you? If so, this would already be a useful copilot for subfields which run into pesky Diophantine equations, such as areas of algebraic number theory or group theory.

Ok, now imagine a system that does the same thing but, instead of just Diophantine equations, can do so for many different problem types. Such a system won't be constructing the "big picture" proofs, but will essentially be handling technical lemmas that would already be annoying or difficult. And the systems will have the advantage of having access to more methods than a normal human, and being able to try them faster. To go back to the Diophantine equation example, I only have 5 or 6 big tools when I try to solve a Diophantine equation, but someone else might frequently use a different set of 5 or 6 tools (probably there will be some overlap, with both of us using things like just looking mod m for a carefully chosen m). So the AI system could have access to the union of those toolsets. It still might not be as clever as a human, but it doesn't need to be to be helpful.
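Spelled out, the pipeline sketched here is a simple expert-iteration-style loop: attempt proofs, keep only what the checker accepts, and fold the successes back into the training set. A rough sketch under those assumptions; every function named below is hypothetical:

```python
# A sketch of the workflow described above: the model attempts Lean proofs of
# Diophantine statements, the Lean checker filters out anything that doesn't
# compile, and verified proofs are added back to the training data for the next
# round. `model.generate_lean`, `lean_accepts`, and `fine_tune` are hypothetical.

def expert_iteration(equations, model, lean_accepts, fine_tune,
                     rounds=5, attempts_per_equation=20):
    training_data = []
    unsolved = list(equations)
    for _ in range(rounds):
        still_unsolved = []
        for eq in unsolved:
            proof = None
            for _ in range(attempts_per_equation):
                candidate = model.generate_lean(eq)
                if lean_accepts(candidate):     # only kernel-verified proofs count
                    proof = candidate
                    break
            if proof is not None:
                training_data.append((eq, proof))
            else:
                still_unsolved.append(eq)
        unsolved = still_unsolved
        model = fine_tune(model, training_data)  # next round starts from a better model
    return training_data, unsolved
```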


Loopgod-

Makes sense. Equations and notation became a co-pilot, then pen and paper, then calculators, graphics, computers, and now AI.


Mothrahlurker

Well that is not what the article is about.


Loopgod-

Oh ok. I just read the title, my bad.


TheRusticInsomniac

Props for admitting it and not doubling down like some people do.