What is it like to create ChatGPT? An OpenAI employee talks about corporate culture

In recent weeks, OpenAI has been in the spotlight as Meta tries to lure away its top employees, offering record sums of up to 100 million dollars. The company has already lost more than a dozen leading specialists, despite Sam Altman's assertion that money can't buy everything and that OpenAI's culture and innovation are what keep employees there.

Against this backdrop, it's especially interesting to hear about OpenAI firsthand: just a day ago, Calvin French-Owen published an account of his time at the company. It's an impressive text, and I'll summarize only the part about corporate culture. Calvin, formerly CTO of Segment (a platform for collecting and analyzing customer data), worked on the Codex coding agent at OpenAI. He left the company amicably, explaining that after running his own startup he couldn't fully adapt to life at a large company.

Calvin worked at OpenAI for just over a year, during which the company grew from 1,000 to 3,000 employees. According to him, such rapid growth is felt everywhere: different teams have different working styles, and among managers it's almost impossible to find anyone still doing what they did 2-3 years ago.

A positive aspect is that initiative at OpenAI flows bottom-up: when Calvin first started, he asked for the roadmap for the next quarter and was told there wasn't one (now, he says, there is). New ideas come from everywhere, and the company has a strong meritocracy: leaders are chosen primarily for having good ideas and the ability to execute them, not for skill in corporate politics. Overall, OpenAI can easily change direction if new information shows that a different approach is better.

The downside of this approach is that many projects get duplicated. Calvin himself worked for a while on an alternative to ChatGPT Connectors, and the company had at least 3-4 alternative versions of Codex. This usually happens because small teams at OpenAI start new projects on their own, without anyone's approval. According to Calvin, OpenAI's spending on staff, offices, and software is tiny compared with what it spends on GPUs for training and running models.

Challenges arise when it's time to turn the numerous bottom-up initiatives into a full-fledged product. That calls for highly effective managers, and such specialists are always in short supply.

After working in the B2B sector at Segment, Calvin was surprised by the scrutiny OpenAI is under. Users on X run automated bots that monitor employees' accounts for new announcements, while the press regularly reports on things at OpenAI that haven't even been announced internally. It's hard even to simply say you work at OpenAI: whoever you're talking to will already have a preconceived opinion.

OpenAI is obsessed with secrecy. For a long time, Calvin couldn't tell almost anyone in detail what he was working on. But the reason isn't only the attention from the press and users. OpenAI aims to create AGI, and mistakes in such a technology carry enormous consequences. The product also has to work well for the hundreds of millions of people who ask ChatGPT all kinds of questions. On top of that, OpenAI is under constant scrutiny not only from its competitors Google and Anthropic, but also from the governments of the world's largest powers, which are paying ever more attention to AI.

Calvin compares OpenAI to Los Alamos during the Manhattan Project: at the start it was a group of specialists studying a cutting-edge technology. Quite unexpectedly, they created an application that went viral worldwide, and then came the ambition to sell the technology to governments and large companies. As a result, different worldviews coexist within the company, but usually the longer someone spends at OpenAI, the more they see it as a non-commercial project working for the good of humanity.

What the author especially values is that OpenAI doesn't just talk about the benefits of artificial intelligence; it gets things done. The company distributes its achievements fairly rather than hiding advanced models behind corporate contracts: anyone can open ChatGPT and ask whatever they're interested in (even on the free tier). New features are quickly added to the API, so even small teams can use them.

Calvin also touched on OpenAI's approach to safety. The company thinks primarily not about abstract "AI threat" scenarios (although there are specialists for that too), but about everyday problems: using models to create weapons, to harm people, for political manipulation, and so on. The author considers the approach to safety serious, but criticizes OpenAI for rarely publishing the results of this work openly; he believes doing so could help other AI developers.

In the end, the author calls OpenAI a frighteningly ambitious company that now competes in dozens of areas: scientific research on AI, products for programmers and everyday users, image generation, and much more. He also praises the leadership, naming Greg Brockman (President of OpenAI) and Sam Altman (CEO): according to Calvin, the bosses never hide in their offices and actively take part in work at all levels.

P.S. You can support me by subscribing to my channel "escaped neural network", where I write about AI from a creative angle.