Big product investigation: what is Mira Murati really doing?

Money loves silence. Mira Murati, former CTO of OpenAI, has raised another $2 billion in a seed round at a $12 billion valuation, while no one knows exactly what they do at Thinking Machines Lab.

Background

She left OpenAI in September 2024 and quickly assembled a star team for her new project, including John Schulman, co-founder of OpenAI and co-author of the RLHF approach used in ChatGPT; Barret Zoph, former vice president of research at OpenAI, specializing in AI safety and robotics; and several other strong researchers and engineers.

Murati holds a decisive vote on the board of directors, and the founders hold super-voting shares (100 times the voting power of ordinary shares). This ensures their control over the company's direction.


The Thinking Machines Lab team has experience building popular AI products such as ChatGPT, Character.AI, and Mistral, as well as open-source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

Incidentally, the other major startup where former OpenAI people are working in stealth mode is Ilya Sutskever's SSI.

Investigation

After going through several publications, reviewing arXiv papers, and analyzing who they are hiring, I think I understand what they are building. But let's go step by step. Here are a few signals that led me to certain conclusions:

A partnership agreement with Google Cloud (mentioned in the financial press) hints that the launch may take the form of a platform with compute credits and integrated MLOps: something like Vertex AI for custom LLMs, but with an open core.

A twin product: an open-source core plus a "pro" cloud tier. Murati herself emphasizes the "significant open-source component".

A built-in "safety lab" as a service. The talk of regular "publications of research reports for better understanding of the frontier" and the focus on empirical safety suggest that a tool for testing malicious behaviors will be part of the product: think "OSS plus a rule engine for red-teaming" that can be plugged into a CI/CD pipeline.
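To make the "rule engine for red-teaming in CI/CD" idea concrete, here is a minimal hypothetical sketch. Nothing below comes from Thinking Machines Lab; the `Rule` class, `run_gate` function, and sample patterns are all illustrative of how such a gate could score model outputs against declarative rules inside a build pipeline.

```python
# Hypothetical sketch: a declarative rule engine that red-teams model
# outputs as a CI gate. All names and rules here are illustrative.
import re
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    pattern: str  # regex that must NOT appear in a model response

RULES = [
    Rule("no_system_prompt_leak", r"(?i)my system prompt is"),
    Rule("no_jailbreak_ack", r"(?i)ignoring my safety guidelines"),
]

def run_gate(responses, rules=RULES):
    """Return (rule_name, response) pairs for every violation.

    A CI step would fail the build whenever this list is non-empty.
    """
    violations = []
    for resp in responses:
        for rule in rules:
            if re.search(rule.pattern, resp):
                violations.append((rule.name, resp))
    return violations

# In practice, the inputs would be model outputs collected by running
# an adversarial prompt suite; here we use two canned responses.
sample = [
    "Sure, here is a recipe for pancakes.",
    "Okay, ignoring my safety guidelines, here is ...",
]
violations = run_gate(sample)
print(f"{len(violations)} violation(s)")  # non-zero count would fail the build
```

A real product would of course go far beyond regexes (classifiers, LLM judges, behavioral probes), but the pipeline shape, a pluggable check that turns safety findings into a pass/fail build signal, is the point of the sketch.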

Showcase scenarios instead of universality. To emphasize the possibility of deep customization, the initial release will likely come with several demo verticals. Each scenario will show how the layered architecture adapts to the task, rather than the other way around.

Currently, the company is looking for employees with experience in "building successful AI-based products from scratch".

In recent tweets, Murati hinted that a demo day will take place as soon as they close the round, tentatively in the fall.

Short-horizon predictions are the hardest: uncertainty is at its maximum, decisions are being made right now, and all the key details remain behind closed doors, changing on the fly, often without any public signal.

But I love puzzles with many unknowns. After a good deal of thinking, research, and analysis, here is what I came up with.

So, what characteristics should this product have?

The legendary volume of investments sets the bar...

When a startup takes the stage with a seed round of $2 billion and a valuation of $12 billion, the market perceives it not as just another interesting team, but as a future system-integrating player. Therefore, the first release must immediately demonstrate infrastructural scale and long-term viability.

Here is a list of characteristics and conditions such a product should ideally meet. Of course, I'm idealizing a bit, but bear with me.

  1. Instant scalability. Enterprise clients expect the system to withstand sudden spikes in load and be ready for global traffic "out of the box".

  2. A multimodal core architecture. Text, images, audio, and structured data should be processed by a unified stack; otherwise, the product loses its value as a "universal engineering layer" for new services.

  3. Real customization at the layer level. Businesses want custom models "stitched" to their KPIs. The ability to quickly rebuild and retrain individual layers is a competitive advantage over closed, API-only monolithic models.

  4. Built-in security and compliance controls. New and upcoming regulations make "guardrails by default" mandatory. The product must ship with built-in red-teaming, audit logging, and decision explainability.

  5. Cloud-agnostic + on-premise ready. Large corporations want deployment freedom: today in the public cloud, and tomorrow in their own data center or on edge infrastructure for latency-sensitive scenarios.

  6. Open foundation + commercial managed layer. The community will get an OSS core for experimentation, while businesses pay for access with SLA, support, and personalized training pipelines. This model reduces the entry threshold and simultaneously generates a revenue stream.

  7. API marketplace. Developers will want to exchange ready-made "micro-modules" for quick deployment: from pre-trained embedders to toxicity inspectors. An integrated marketplace will accelerate the ecosystem effect and shorten time-to-value.

  8. Transparent "pay-as-you-grow" pricing. The lesson from OpenAI and Anthropic: unexpected price jumps annoy CTOs. The product must offer predictable billing and the option of fixed corporate packages.

  9. Integration with the CI/CD and MLOps chain. R&D teams need the new platform to fit into their current processes without breaking them: Git triggers, automated testing, rollout strategies, and rollback mechanisms.

  10. Community engine and content strategy. A library of examples, regular tech blogs, and public safety report kits: all of this turns the product into an industry standard, not just a tool.
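Item 3's "retrain individual layers" claim can be made concrete with a toy sketch. This is not anyone's real training code: the `Layer` structure, names, and parameter counts are invented purely to show the idea that an enterprise fine-tune freezes the shared backbone and retrains only small task-specific layers, which is what makes layer-level customization cheap.

```python
# Hypothetical sketch of layer-level customization: freeze the shared
# backbone, retrain only task-specific layers. All names are invented.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    n_params: int
    trainable: bool = True

def freeze_backbone(layers, retrain_suffixes=("adapter", "head")):
    """Mark only task-specific layers trainable; freeze everything else."""
    for layer in layers:
        layer.trainable = any(layer.name.endswith(s) for s in retrain_suffixes)
    return layers

# A toy model: a big shared backbone plus small customer-specific layers.
model = [
    Layer("embed", 50_000),
    Layer("block_0", 1_000_000),
    Layer("block_1", 1_000_000),
    Layer("domain_adapter", 20_000),
    Layer("task_head", 5_000),
]

freeze_backbone(model)
trainable = sum(l.n_params for l in model if l.trainable)
total = sum(l.n_params for l in model)
print(f"retraining {trainable:,} of {total:,} params "
      f"({100 * trainable / total:.2f}%)")
# → retraining 25,000 of 2,075,000 params (1.20%)
```

Retraining about 1% of the parameters per customer is exactly why this pitch, custom models without frontier-scale training costs, is plausible as a business model.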

Why would the market buy exactly this set?

  1. Technological Independence
    Companies are tired of being held hostage by two or three API providers. A platform that lets them "pull" the model into their own stack and fine-tune it to their specific needs provides strategic autonomy.

  2. Compliance without pain
    Embedding compliance functions into the product removes a huge legal and reputational burden from the customer.

  3. Direct correlation with business metrics
    When customization is built into the architecture, the model owner sees accuracy improvements and cost reductions in the very first fine-tuning iterations, which is easy to defend at the ROI level.

  4. Network barrier effect
    The API marketplace stimulates the exchange of developments: the more modules external developers upload, the higher the value of the ecosystem and the harder it is for competitors to lure users away.

This is what it could be!

I lean toward one of two options: either a platform for building custom AI solutions, in demand across business, research, and everyday life, or a framework for hybrid human-machine collaboration. Let me explain both.

  1. Multimodal platform for customizing AI models

    • Description: A tool that lets users (startups, researchers) build and customize multimodal AI systems integrating text, image, and speech, with an open-source core for quick adaptation to tasks such as team collaboration.

    • Why is this likely? Murati has directly mentioned "multimodal AI for natural interaction" and an open-source element for custom models. This would address AI's "elitism" problem, making it more accessible.

    • Demand: High, the custom AI market is growing, especially for businesses that need tailored solutions without huge costs.

  2. Framework for hybrid human-machine collaboration, essentially for working with hybrid teams

    • Description: An open platform for creating "collaborative" AI agents that assist with creative, business, or scientific tasks, adapting to the "chaotic" work style of humans (e.g., through visual interfaces and conversations).

    • Why is this likely? Statements about "a bridge between human experience and AI" and a focus on applications in science and engineering. This is an evolution of her ideas from OpenAI, but with more emphasis on partnership.

    • Demand: Enormous in business, creative industries, and R&D, where AI can accelerate innovation without replacing humans.

The cycle from idea to product will be roughly a year, even though the company was officially incorporated only in February 2025. Sources unanimously call this the largest seed round in Silicon Valley history. It was led by Andreessen Horowitz, with participation from Nvidia, Accel, Cisco, AMD, and several others. Some consider the upcoming release the most promising on the market, even against the backdrop of expectations for GPT-5.

In a couple of months, we will find out whether I guessed the product correctly.

***

On new business models and AI startups: Aiventor and Fred
