"We are entering uncharted territory of mathematics" — Terence Tao, mathematician, Fields Medalist

Terence Tao, one of the greatest living mathematicians, has his own view on artificial intelligence.

Disclaimer 1: this is a loose translation of an interview from The Atlantic. The translation was prepared by the editorial staff of "Technocracy". To avoid missing announcements of new materials, subscribe to "Voice of Technocracy", where we regularly cover news about AI, LLMs, and RAG, and share useful must-reads and current events.


Terence Tao, a mathematics professor at the University of California, Los Angeles, is a true superintelligence in real life. "The Mozart of mathematics," as he is sometimes called, is widely considered the greatest living mathematician. For his achievements and proofs he has received numerous awards, including the Fields Medal, the equivalent of the Nobel Prize in mathematics. Current AI is nowhere near his level.

But tech companies are trying to get there. The latest generation of artificial intelligence, even the almighty ChatGPT, was not designed for mathematical reasoning. These models are language-oriented: when you ask such a program an elementary math question, it does not understand and solve the equation or formulate a proof; it gives an answer based on which words are likely to appear in sequence.

For example, the original ChatGPT cannot add or multiply, but it has seen enough algebra examples to solve x + 2 = 4: "To solve the equation x + 2 = 4, subtract 2 from both sides..." However, OpenAI is now openly promoting a new line of "reasoning models," known collectively as the o1 series, for their ability to solve problems "like a human" and tackle complex mathematical and scientific tasks. If these models prove successful, they could radically change the slow, solitary work that Tao and his colleagues do.

After Tao published his impressions of o1 online, comparing it to a "mediocre but not entirely incompetent" graduate student, I wanted to learn more about his views on the potential of this technology. In a Zoom conversation last week, he described an unprecedented kind of "industrial-scale mathematics" supported by AI, in which AI, at least in the near future, will be less an independent creative collaborator than an assistant for implementing mathematicians' hypotheses and approaches. This new kind of mathematics, capable of opening up terra incognita of knowledge, will remain fundamentally human, given that humans and machines have entirely different capabilities, which should be seen as complementary rather than competing.

Matteo Wong: What was your first experience with ChatGPT?

Terence Tao: I tried playing with it almost immediately after it came out. I posed several complex mathematical problems, and it gave rather silly results. The English was coherent, it mentioned the right words, but there was very little depth. Early GPTs were not impressive at anything truly advanced. They were good for fun things: for example, explaining a mathematical topic as a poem or a children's story. They did that quite impressively.

Wong: OpenAI claims that o1 can "reason," but you compared the model to a "mediocre but not entirely incompetent" graduate student.

Tao: That initial formulation received wide publicity, but it was misinterpreted. I did not say that this tool is equivalent to a graduate student in every aspect of graduate education. I was interested in using these tools as research assistants. A research project consists of many tedious stages: you may have an idea and want to do the calculations, but you have to work everything out by hand.

Wong: So, it's a mediocre or incompetent research assistant.

Tao: Yes, it is equivalent in terms of performing the functions of such an assistant. But I envision a future where you conduct research by interacting with a chatbot. Suppose you have an idea; the chatbot runs with it and fills in all the details.

This is already happening in some other fields. AI conquered chess many years ago, but chess thrives today, because a good enough player can now guess which moves are promising in a given position and use chess engines to check them 20 moves ahead. I see something similar happening in mathematics over time: you have a project and ask, "What if I try this approach?" And instead of spending hours and hours making it work, you direct a GPT to do it for you.

With o1 you can do this. I gave it a task I knew how to solve and tried to guide the model. First I gave it a hint, and it ignored the hint and did something else, which didn't work. When I explained that, it apologized and said, "Okay, I'll do it your way." Then it followed my instructions reasonably well, got stuck again, and I had to correct it again. The model never came up with the most difficult steps on its own. It could do all the routine things, but it was very weak on imagination.

One of the key differences between graduate students and AI is that graduate students learn. You tell an AI that its approach doesn't work; it apologizes, perhaps temporarily corrects course, but sometimes it just reverts to what it tried before. And if you start a new session with the AI, you are back to square one. I am much more patient with graduate students, because I know that even if a graduate student doesn't fully solve a task, they have the potential to learn and self-correct.

Wong: According to OpenAI's description, o1 can recognize its mistakes, but you say that this is not the same as continuous learning, which actually makes mistakes useful for humans.

Tao: Yes, humans grow. These models are static: the feedback I give GPT-4 might become 0.00001 percent of the training data for GPT-5. But that is not the same as with a student.

AI and humans have such different models of learning and problem-solving that it is better to think of AI as a complementary way of solving problems. For many tasks, the most promising approach will be to use both AI and humans, each doing different things.

Wong: You have also said that computer programs could transform mathematics and make it easier for humans to collaborate with one another. How so? And does generative AI contribute anything to this?

Tao: Technically, they are not classified as AI, but proof assistants are useful computer tools that check whether a mathematical argument is correct or not. They allow for large-scale collaboration in mathematics. This is a very recent development.

Mathematics can be very fragile: if one step in a proof is incorrect, the entire argument can collapse. If you are doing a joint project with 100 people, you break the proof into 100 parts, and each contributes their part. But if they do not coordinate their actions with each other, the pieces may not fit. Because of this, it is very rare to see more than five people in one project.

With proof assistants, you don't need to trust the people you work with, because the program gives you a 100 percent guarantee. Then you can do mathematics on an industrial scale, which doesn't exist now: each person specializes in proving certain types of results, like links in a modern supply chain.

The problem is that these programs are very picky. You have to write your argument in a specialized language; you can't just write it in English. Perhaps AI will be able to translate from human language into that formal language. Translating from one language to another is almost exactly what large language models are made for. The dream is that you simply talk to a chatbot, explaining your proof, and the chatbot converts it into the proof system's language as you go.
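To give a sense of what such a specialized language looks like, here is a minimal sketch in Lean, one widely used proof assistant. The lemma is a trivial example of our own choosing, not taken from Tao's work; the point is only that both the claim and each proof step must be stated formally before the machine will certify them.

```lean
-- Minimal illustrative example: the statement and proof are written
-- in Lean's formal language, and the proof assistant mechanically
-- checks that every step is valid. Nothing can be left as informal English.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```

Translating a mathematician's informal English argument into statements like this is exactly the step Tao hopes chatbots can automate.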

Wong: So the chatbot is not a source of knowledge or ideas, but a way of interacting.

Tao: Yes, it can be a really useful glue.

Wong: What problems can this help solve?

Tao: The classic image of mathematics is that you pick some very difficult problem, and then one or two people, locked in an attic for seven years, just bang their heads against it. The kinds of problems you want to attack with AI are the opposite. The naive way to use AI is to feed it the hardest problem in mathematics. I don't think that will be very successful; besides, we already have people working on those problems.

What interests me most is mathematics that does not really exist yet. The project I launched just a few days ago is devoted to an area of mathematics called universal algebra, which studies which mathematical statements or equations can be derived from which others. In the past, people studied it like this: they picked one or two equations and studied them to death, like a craftsman making one toy at a time before moving on to the next. Now we have factories, and we can produce thousands of toys at once. My project involves a collection of about 4,000 equations, and the task is to find the connections between them. Each equation is relatively simple, but there are a million interconnections. Among these thousands of equations there are 10 points of light, 10 equations that have been well studied, and then a whole terra incognita.
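To make "finding connections between equations" concrete, here is a minimal Lean sketch of one such implication. The two laws below are simple examples of our own choosing, not necessarily among the project's 4,000 equations; the point is that an implication between two equational laws can be stated and machine-verified.

```lean
-- Illustrative sketch: a magma is a carrier type with one binary operation.
class Magma (α : Type) where
  op : α → α → α

infixl:65 " ⋄ " => Magma.op

-- One "connection" between two equational laws: any magma satisfying
-- x ⋄ y = y automatically satisfies (x ⋄ y) ⋄ z = z.
theorem law_implies_law {α : Type} [Magma α]
    (h : ∀ x y : α, x ⋄ y = y) :
    ∀ x y z : α, (x ⋄ y) ⋄ z = z := by
  intro x y z
  simp only [h]
```

Scaled up, this is the factory: thousands of such pairwise implications checked mechanically, rather than one handcrafted proof at a time.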

A similar transition has happened in other fields, such as genetics. In the past, studying the genome of one organism was an entire doctoral dissertation. Now we have gene-sequencing machines, and geneticists study entire populations. That lets you do different kinds of genetics. Instead of narrow, deep mathematics, where an expert works very hard on a narrow range of problems, you could have broad, crowdsourced problems attacked with AI; they may not be as deep, but they operate at a much larger scale. And this can become a very complementary way of obtaining mathematical knowledge.

Wong: This reminds me of how Google DeepMind's artificial intelligence program AlphaFold figured out how to predict the three-dimensional structure of proteins, something that long had to be done one protein at a time.

Tao: True, but that doesn't mean protein science is obsolete. You have to change the problems you study. A hundred and fifty years ago, mathematicians' main value lay in solving differential equations; now there are computer packages that do this automatically. Six hundred years ago, mathematicians were building tables of sines and cosines, which were needed for navigation; today computers generate them in seconds.

I'm not very interested in duplicating what humans are already good at. It seems inefficient. I think that at the frontier we will always need both humans and AI. They complement each other. AI is very good at turning billions of data points into one good answer. Humans are good at taking 10 observations and making really inspired guesses.
