The Imaginarium of Mr. Altman: a beautiful faraway?

Vegetarian pasta on the table, wine poured, loud music playing: anything to take his mind off the eventful Thanksgiving of 2023. OpenAI chief Sam Altman, feeling drained and burned out, had fled to his ranch in Napa. This was right after he almost lost control of the company that "holds the future of humanity in its hands."

However, everything worked out: after being fired, Altman was reinstated at the company five days later, and his face appeared on the cover of Time. Under his leadership, OpenAI went on shaking up the information space with new neural-network wonders.

Futurists and forecasters began making predictions about what is to come. Interestingly, it was once writers like Jules Verne and H. G. Wells who painted the future, describing incredible technologies that have since become reality. Today, however, the role of technological prophet is increasingly taken on by the top managers of IT companies, the people directly involved in creating that future.

Of course, their predictions may carry a shade of PR, since everyone sees the future through the prism of their own products. But when someone like Sam Altman speaks, it is worth listening. After all, the legions of visionaries and techno-evangelists rolling out neural-network forecasts do not have the insight available to the key figure of the most important AI company on the planet.

Sam Altman recently published an essay on his website, and it contains a number of points worth pondering. So, armed with the "hints" in the OpenAI chief's text, I will throw in some thoughts of my own.

Judging by the reviews, the techies seem dissatisfied: lots of filler and enthusiastic odes to the wonderful AI future that awaits us, and no specifications, no parameter counts, nothing precise or concrete. Pure literature! Indeed, this AI manifesto mostly follows the tradition of the science-fiction writers, with the one difference that it may contain hints about where OpenAI is heading.

The first thesis is about the emergence of an AI team of virtual experts in various fields who will work together. This is probably about developing the GPT-agent system: extensions that let the chatbot perform specific tasks. There is already an extensive catalog of plugins for various tasks, and chatbots can be connected to external services and to each other. OpenAI clearly has Apple-like ambitions (the App Store once revolutionized the mobile device industry): as the industry leader, it is building the ecosystem of a new market and has already invited many companies to create their own chatbots as agents inside ChatGPT.
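To make the "team of virtual experts" idea a bit more concrete, here is a minimal sketch of how such agent-like behaviour is usually wired up today: a chat model is given a set of tool descriptions and decides when to call them. It assumes the official OpenAI Python SDK (openai>=1.0) with an API key in the environment; the tool name get_weather and the model name are illustrative placeholders of mine, not anything from Altman's essay.

```python
# Minimal sketch of tool (function) calling with the OpenAI Python SDK.
# Assumptions: openai>=1.0 installed, OPENAI_API_KEY set in the environment;
# the tool and model names are illustrative.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",                      # hypothetical "expert" tool
        "description": "Return the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",                            # illustrative model name
    messages=[{"role": "user", "content": "What's the weather in Napa?"}],
    tools=tools,
)

# The model either answers directly or asks to call one of the declared tools;
# an agent framework would execute that call and feed the result back.
message = response.choices[0].message
if message.tool_calls:
    for call in message.tool_calls:
        print(call.function.name, call.function.arguments)
else:
    print(message.content)
```

An "AI team" in this sense is just many such tools (or other chatbots) declared to one orchestrating model, which is exactly the kind of ecosystem an app-store analogy implies.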

Altman goes on to write that children will have virtual tutors for any discipline, that healthcare will see similar improvements, and that it will become possible to create any software.

There is a nuance. The concept of virtual tutors looks promising, but right now the cost of AI compute is still quite high, so a full-fledged learning process run through ChatGPT may not pay off. Sam does not say how the cost of compute can be reduced, only that it definitely must be done: "If we do not create sufficient infrastructure, AI will become a very limited resource."

As for learning, AI has one important feature: you can ask the neural network as many stupid questions as you want! It sounds funny, but I am convinced that many have done this, writing requests to explain something it is embarrassing not to know in polite society =) Of course, you could also google it, but it is much more convenient to type your request, however clumsy, and the neural network will understand, not judge, and explain everything. This ability to answer silly (but actually important) questions gives ChatGPT an undeniable advantage over flesh-and-blood mentors.

Regarding healthcare: the Overton window, it seems, has not yet shifted to the point where a person would trust a neural network in serious cases. Yes, the media report that the diagnostic capabilities of neural networks are improving, and it is believed that people may well turn to a chatbot for an initial diagnosis. But it is also known that ChatGPT can still invent and fantasize, so there is reason to believe that even the human error of flesh-and-blood doctors will not deter visits to clinics. Another matter is that doctors themselves, especially the less qualified ones, can consult the same ChatGPT. Well, perhaps in that case it is for the better?

The "ability to create any software" through prompts is easy to believe in, although the programmers I know snort in disbelief. At the very least, making edits to already generated code can be problematic. Then again, I may be wrong; I can neither confirm nor deny it here. Programmers, what do you think?

By the way, Altman acknowledges that AI "can significantly change the labor market (both positively and negatively) in the coming years", but "most professions will change more slowly". So far, I can clearly see that AI does not threaten plumbers and plasterers in any way. The head of OpenAI also writes that new professions will appear that we do not even consider work today. Here one cannot help but recall the "new black" among online-course hustlers: prompt engineering. It is incredibly interesting to watch the ability to put thoughts into words and describe a request in detail being presented as know-how that must be learned (buy the course!), since this is supposedly the profession of the future. However, with the arrival of the o1-preview model there has been a trend toward simpler prompts, and neural networks themselves can be used to write prompts, so the idea of prompting as the profession of the future is under threat.
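Since the paragraph above mentions that neural networks can write prompts themselves, here is a small sketch of that "meta-prompting" pattern. It again assumes the OpenAI Python SDK; the system message wording and the model name are my own illustration, not OpenAI's recipe.

```python
# Sketch of "meta-prompting": ask the model to draft a detailed prompt
# from a one-line task description, then reuse that prompt for the real work.
# Assumes openai>=1.0 and OPENAI_API_KEY; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

task = "Summarize a quarterly sales report for a non-technical manager."

draft = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You write clear, detailed prompts for language models. "
                    "Given a task, produce a single prompt specifying role, "
                    "context, constraints and output format."},
        {"role": "user", "content": task},
    ],
)

generated_prompt = draft.choices[0].message.content
# The generated prompt can now be sent back to the model for the actual task.
print(generated_prompt)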

"Thanks to these new capabilities, we will be able to achieve universal prosperity," writes Sam, adding that with the development of AI it will become possible to discover all physical laws and tap virtually unlimited sources of energy. Alas, few people seem to believe this populist rhetoric. Already, AI is displacing a number of low-level professions, forcing many to "retrain as building managers." It is obvious that AI is the platform for a new technological arms race, and in the literal sense of "arms": artificial intelligence is obviously a dual-use technology. What weapons will be controlled through the ChatGPT API, or through a "military" analogue of ChatGPT, in the near future remains to be seen.

"Perhaps we will achieve superintelligence in a few thousand days; perhaps it will take longer, but I am sure we will get there." This phrase may contain the most important insight: artificial general intelligence (AGI) is inevitable. A few thousand days means roughly 5.5 years or more (2,000 days is about five and a half years). If there is more to this prediction than vague musing, then relatively soon we may see a technology that rivals human intelligence at many tasks without the need for special training. Sam Altman has long spoken about the inevitability of AGI. There are rumors that OpenAI's leadership was frightened by what the new AI could do and hastily fired Altman, but then reconsidered. And here it is unclear: either everything really is so revolutionary that Altman's colleagues fear a machine uprising, or the new AI models, instead of discovering all physical laws, may through "jailbreaks" hand out new recipes for bioweapons...

So, from this essay, the following development directions for the near future emerge:

  • Development of GPT agents and the creation of an App Store-like ecosystem, with chatbots becoming the new applications.

  • Development of more models capable of "self-reflection", like o1 (a toy sketch of such a loop follows this list).

  • These reflective models serving as the basis for mass-market proto-AGI.

  • In the meantime, infrastructure needs to be built, because without a large number of chips and data centers all this beauty will remain very far away.
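As promised under the "self-reflection" bullet, here is a toy sketch of the generate-critique-revise loop that such reasoning models are popularly associated with. This is not how o1 works internally (OpenAI has not published that); it is just a minimal self-refinement pattern built on an ordinary chat model, with an illustrative model name and prompt wording of my own.

```python
# Toy self-refinement loop: draft an answer, critique it, then revise it.
# This imitates "self-reflection" with plain chat completions; it is NOT
# a description of o1's internal reasoning. Assumes openai>=1.0.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"   # illustrative model name


def ask(messages):
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content


question = "Explain why the sky is blue in two sentences."

draft = ask([{"role": "user", "content": question}])
critique = ask([{"role": "user",
                 "content": f"Question: {question}\nAnswer: {draft}\n"
                            "List any factual or clarity problems with this answer."}])
final = ask([{"role": "user",
              "content": f"Question: {question}\nDraft: {draft}\n"
                         f"Critique: {critique}\nWrite an improved answer."}])
print(final)
```

The point of the bullet is that once such loops run reliably (and cheaply, which brings us back to the infrastructure item), they start to look like the mass-market proto-AGI the essay gestures at.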
