What do engineers at OpenAI, Microsoft, and AWS think about the future of AI: honest answers from the AI Engineer World's Fair 2025
Hello everyone! I spent three days at the AI Engineer World's Fair in San Francisco together with 3,000 of the world’s top AI engineers, CTOs of Fortune 500 companies, and startup founders. This is the third year of the conference, and it has become a place where leading AI labs, companies, and engineering teams showcase their latest work.
I talked to engineers and leaders from OpenAI, Microsoft, AWS, Pydantic, and YC startups. I want to share their candid opinions and key insights that will shape how we build AI systems in 2025 and beyond.
The conference had 18 tracks covering everything from MCP to reinforcement learning, AI, and robotics. The dominant theme was AI agents, but specifically agents ready for production use. The conversation shifted from "what agents can do" to "how we deploy them reliably and at scale."
If you prefer video format, you can watch them here and here on my channel. I'd be happy to connect with you through LinkedIn so you don't miss new updates from the AI world.
Part 1: What Engineers Are Saying — Q&A with the Frontline
Here are the most interesting moments from our conversations.
Question: Do you feel responsible for the jobs that your code will eliminate?
Samuel Colvin, creator and CEO of Pydantic:
"No, I don't think that's true. Programmers have been proudly sitting for the last 50 years replacing people, and we seemed to be perfectly fine with that. So I think it's a little bit of karma that now we might be replaced by written code.
Look, I don't think we're going to achieve AGI anytime soon. We have these things with huge memory, huge contextual windows. They're not very smart, but they have extremely large memory. They can write text that feels human. And we think they're smart because, until now, when we met people with vast knowledge, they were often smart. These things aren't smart.
I don't think AI will replace software development. But there is a huge amount of routine code that we all wrote. Some made careers writing it — no longer necessary for a person to write that. I don't regret it. I think it's good. It's exciting.
Right now, we have a massive need for writing more and more software. So I don't think many people will lose their jobs. At least, not yet."
Question: If AI makes every developer 10 times more productive, does that mean we need 10 times fewer developers?
Dominik Kundel, Developer Experience Specialist at OpenAI:
“I don’t think so. I think it will change the way we work.
For me personally, the reason to become a developer was always that I had a lot of ideas and I wanted to turn them into reality. I think a lot of developers feel that way. But for many, the job turned into fixing bugs and working through backlog tickets.
I discovered this myself working on the Agents SDK — I delegated a lot of tasks I didn’t want to do to agents, so they would run in the background and I could focus on the things that really matter, the things I want to shape my own way.
There’s so much demand for software, and I actually think this will just enable more people to build more things, instead of us building the same amount of things with fewer people.”
Question: What is overrated vs underrated in AI?
Banjo Obayomi, Senior Solutions Architect at AWS:
“Overrated — I think a lot of people are excited about agents, but don’t know how to use them properly. We’re seeing a lot of hype around agents, but we need to slow down a bit and think about what we’re trying to build.
Underrated — voice conversational AI. I think people made a lot of progress here, but I haven’t seen many cool applications. I’m waiting to see more conversational AI agents apps. So, it’s a mix.”
Shachar Azriel, Vice President at Baz:
“Overrated — vibe coding. I hope people won’t kill me when I leave the conference.
Underrated — making sure that the generated code really matches the team’s best practices, and that the code going into the product actually works.”
Samuel Colvin, Creator and CEO of Pydantic:
“Underrated — the importance of type safety; static typing will only grow. There are a few things happening:
First, in the JavaScript/TypeScript world there’s no debate. Nobody thinks you should write JavaScript; everyone writes TypeScript. I think the same will happen in Python. Second, type checking from Astral, giving us a really fast open-source static type checker and language server, will again move the needle for Python.
Third, if you have AI agents writing code, whether it’s cursor style, or more autonomous coding agents like Claude Code, they love static type checking, because they can check the accuracy of the code and catch a huge amount of errors.
As for overrated — I don’t think MCP is overrated. I’m a big fan. I think it’s really valuable. Today I talked about ‘MCP is all you need.’ It’s grown fast in adoption, and in some ways, people probably suggest using it for things it’s not suitable for. You have to be careful to use it for the right things, not for everything.”
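Colvin’s point about coding agents loving static types is easy to illustrate. Below is a minimal sketch (my example, not his) of the kind of slip an AI agent might produce, which a checker like mypy or pyright reports before the code ever runs:

```python
# Hypothetical example: two mistakes an AI agent might generate.
# A static type checker (mypy, pyright, Astral's checker) flags
# both of them before the code ever runs.

def total_price(prices: list[float], discount: float) -> float:
    """Sum prices and apply a fractional discount."""
    return sum(prices) * (1 - discount)

# Error 1: passing a string where a float is expected.
total = total_price([9.99, 4.50], discount="10%")  # checker: incompatible argument type

# Error 2: calling a method that doesn't exist on float.
rounded = total.round(2)  # checker: "float" has no attribute "round"
```

A coding agent can run the checker in a loop and fix both issues on its own, which is exactly why static typing pairs so well with autonomous code generation.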
Question: How much do you personally spend on AI tools per month?
Rene Brandel, Co-founder and CEO of Casco (Y Combinator X25):
“I’d have to count. I pay for Cursor, Crunchbase — not sure if that counts, but they now have AI search — and Warp, Gemini, Claude, OpenAI. I’d say the bill is about $1,000 at this point, the top versions of most of them.
We’re a startup, so we’re trying to build as quickly as possible. Our approach: when we need to do something quickly, we run the task on every platform at once, then go back and pick the best results. It works pretty well.
Here’s a startup idea: if you can build something that runs all of this at once for me and just gives results at the end, I’d pay for that too.”
Tanmai Gopal, Co-founder and CEO of Hasura and PromptQL:
“I would say we budgeted around $100–120 per person per month — that’s our budget. That’s how I like to think about it. I don’t know how much of that budget we’ve used, but that’s the rough number.
For pure AI tools overall, the budget is somewhere between $100K and $300K. Something like that.”
Question: What part of your work will be automated in a year?
Banjo Obayomi, Senior Solutions Architect at AWS:
"I would like some meetings to be replaced by automation. I'm trying to make an agent that can attend meetings for me, like a Banjo-agent for the first call. Because sometimes I come to a meeting, and they don't really know, they ask very basic questions. If the AI agent could do that for me, and I only joined when necessary, I think I would want that to be automated. And I am actively trying to build an agent to solve this for myself."
Shachar Azriel, Vice President at Baz:
"One of the things that annoys people — AI takes things away from them that they love doing. For sure. I draw and love to draw. So the fact that someone tells me I don't need to do this anymore and AI will generate it for me — I don't want that. I don't want AI to take that away from me. I would rather have it fold my laundry and clean my house instead."
Question: What advice would you give someone just starting in tech?
Jim Bennett, Lead Specialist at Galileo:
"AI is not going away. You need to learn it. Know about the tools. Don't be afraid of the tools, but understand — AI is just a tool, and it makes mistakes. So yeah, you can vibe-code an app, but that's not a production-ready app. Learn the tools, learn the basics too. When your vibe-coded app breaks, you'll understand enough to fix it."
Tanmai Gopal, Co-founder and CEO of Hasura and PromptQL:
"It's really important to go back to the first principles of computer science. It's really important — go back to them and get rid of the jargon.
A lot of development in the past 10–15 years has been about who knows the latest jargon at the right time. Who knows the latest tools, the latest frameworks. It's still there. Even at this conference — 'Oh, this is the latest tool, the latest technique, the latest framework, let's use it.' That needs to stop.
We need to understand the fundamental principles of how things work. Any work, any product you build that deepens that understanding of first principles is important. It's more important than ever, because everything on top will be done by AI. I don't even care what the latest frontend framework is right now, because I just say 'build this for me.' I don't care. It's not important anymore."
Priyanka Vergadia, Senior Director at Microsoft:
"You probably should spend more time getting first principles of what you'll take with you as a skill — the ability to learn, the ability to be creative, the ability to do business, and a skill set that’s more on the soft side — the ability to present yourself better. These things are skills that are never taught in school, and they fall into the category of soft skills that will become increasingly important in every job, even technical."
Question: What skill should every programmer learn in 2025?
Samuel Colvin, Creator and CEO of Pydantic:
“I think there are a lot of people who think they don’t need to learn the fundamental principles of good software development, which I’d broadly call computer science. I think that’s still incredibly valuable, even if you have an agent writing code for you. It’s very much like having an intern. So I wouldn’t point to any particular technology. I think everyone needs to use AI, but be cynical and assume it will make mistakes. Just like when you have an intern — it’s really valuable that they do work for you, but you’re not going to take it verbatim and assume it’s right.”
Question: What should kids starting out in tech study?
Darko Mesaros, Principal Advocate at AWS:
“There are two things. First — they need to learn how to learn, because that’s changing completely. In the past, you had to learn from books and put in effort. Today, you get knowledge for free, anything you want. I think the way kids learn, the way we all learn, has to change. We need to take advantage of these tools, but also make sure the knowledge becomes a core part of our own skills; otherwise we’re just collecting information that might be wrong in the future. So: learning how to learn, because the way of learning will be different. I don’t have the solution — teachers, you are the ones who have to come up with it.”
Controversial opinion about Python:
“Number two—kids should stop using Python. End of story. Stop using Python. Hear me out. There are so many better languages, not because I’m a language fan, but because if you’re going to use coding assistants to write your code, you should use a very safe language.
What I’m saying is — you should use a language that is compiled, statically typed, safe. The reason is, if you give all coding tasks to an AI assistant, you might as well give it a language that is more robust and less prone to runtime failures. That means you catch errors early, at compilation, at linting, instead of crashing in the middle of runtime. It’s a hot take, but I highly recommend it, because now you have the tools for it.”
Julie Gunderson, Director at Freeman & Forrest:
“You still need to learn to write, to think critically. We can use these models to change how we learn, but there are critical skills the models just can’t handle. And that piece of critical thinking, the piece of human interaction—you still need to focus on that.
But the other thing is adaptability. Everything is changing so fast now. You need—or maybe I should say, we need, maybe the new generation doesn’t need it as much—to step out of your comfort zone and be ready to try new things, and not be afraid.
When all else fails, learn a craft.”
Question: What is the biggest challenge in AI that needs to be solved?
Suman Debnath, Principal AI Advocate at AWS:
“There are many. To start, I personally see a lot of work happening around evaluation and observability. A year ago, we were very excited about these LLMs, and they could answer our questions. That’s a very general task. But now the models are so mature and can reason well, developers and clients want to know why they give a certain answer.
Observability and evaluation — how relevant is this answer to my request? I think these are the two things most clients, especially enterprises, look for, in addition to just a large model that generalizes well. There has been a lot of work here — Langfuse, and Arize, if you think about it. Everyone is working to help us get a clear understanding of what the model is responding with and how relevant it is to our use case.”
Darko Mesaros, Principal Advocate at AWS:
“I think the main challenge of AI right now is—we’ve given everyone this amazing hammer, and now everything looks like a nail. Everyone is trying to use generative AI for everything. Even things that should not use generative AI start using it. The reason is, it’s just so easy—just tell the model ‘rename my files’. Models shouldn’t be doing that. You can write something else that does this, since it’s much more cost-effective, energy-efficient, and faster.
I think educating people on how and where AI should exist in what you’re building is a critical element. Frameworks need to exist that even help generate code for specific tasks that AI should not do. For example, if you have an AI model that parses JSON into CSV, instead of parsing, it should just generate a piece of code that will do it forever. This is a bit more on the optimization side, both from the tooling perspective and the human one. And people should use AI for the things that AI should be used for.”
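To make Darko’s JSON-to-CSV example concrete, here is a minimal sketch of the deterministic code an agent could generate once and reuse forever, instead of burning model tokens on every conversion. The file names are hypothetical:

```python
# A one-time, deterministic converter of the kind Darko describes:
# generated once (by a human or an agent), then reused forever,
# instead of asking a model to transform every file.
import csv
import json
from pathlib import Path

def json_to_csv(src: Path, dst: Path) -> None:
    """Convert a JSON array of flat objects into a CSV file."""
    records = json.loads(src.read_text())
    if not records:
        dst.write_text("")
        return
    with dst.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(records[0]))
        writer.writeheader()
        writer.writerows(records)

json_to_csv(Path("orders.json"), Path("orders.csv"))  # hypothetical filenames
```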
Part 2: My Key Takeaways and Observations
After three days immersed in cutting-edge AI engineering, here are the key patterns I saw:
1. Specifications are becoming the product
Companies are putting their business logic into markdown files (claude.md files) that tell AI agents how their systems work, what their APIs do, and what their domain means. And these aren’t just prompts — they’re real specifications for AI-native companies. They version them, test them, improve them. Your advantage isn’t in the generated code. It’s in how precisely you express what needs to be built.
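For a sense of what such a spec can look like, here is a short hypothetical fragment (the service and rules are invented, not from any company at the conference):

```markdown
# claude.md (hypothetical example)

## What this service does
Billing API for subscription plans. Source of truth for invoices.

## Rules agents must follow
- All money values are integer cents, never floats.
- Every new endpoint needs an eval case in /evals before merge.
- Never call the payments provider directly; go through PaymentGateway.

## Domain terms
- "Plan": a recurring price attached to a product, billed monthly.
- "Proration": mid-cycle plan changes are credited by the day.
```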
2. Evals are mandatory, not optional
LLM outputs are probabilistic results that drift over time. What worked yesterday might fail tomorrow. Companies are attaching pass/fail checks to every pipeline step, every retrieval, every tool call. Think of evals like unit tests for traditional code. If you skip them, your agent gets unpredictable.
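A minimal sketch of what one such pass/fail check can look like, where call_model is a placeholder for your actual LLM client:

```python
# Minimal eval sketch: a hard pass/fail check pinned to one pipeline
# step, run in CI just like a unit test.

def call_model(prompt: str) -> str:
    # Placeholder: swap in your real LLM client call here.
    return "Refunds are accepted within a 30-day window of purchase."

def test_refund_policy_answer() -> None:
    answer = call_model("What is our refund window?").lower()
    # The answer must cite the 30-day window...
    assert "30-day" in answer or "30 day" in answer, f"missing policy: {answer}"
    # ...and must not invent a guarantee we don't offer.
    assert "lifetime" not in answer, f"hallucinated guarantee: {answer}"

test_refund_policy_answer()
```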
3. GPT-4-level intelligence is 100x cheaper than in 2023, but we use 20x more compute per request
- Million-token contexts
- Reasoning models outputting 10x more tokens
- Agentic workflows chaining dozens of calls
The pattern: as AI gets cheaper per token, we tend to use exponentially more tokens.
4. MCP creates a business-to-agent economy
Model Context Protocol allows AI agents to autonomously discover and use services. Amazon has joined the MCP steering committee. AI agents are becoming customers with budgets. If your product isn’t agent‑friendly, you’ll be invisible to automated buyers.
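Becoming agent-friendly can start small. Here is a minimal server sketch using the official MCP Python SDK’s FastMCP helper; the pricing tool is an invented example:

```python
# Minimal MCP server sketch using the official Python SDK
# (pip install mcp). The quote_price tool is a made-up example of
# a service an agent could discover and call autonomously.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("pricing-service")

@mcp.tool()
def quote_price(sku: str, quantity: int) -> float:
    """Return a total price for a SKU, so agent buyers can compare offers."""
    unit_prices = {"WIDGET-1": 9.99, "WIDGET-2": 24.00}  # toy catalog
    return unit_prices.get(sku, 0.0) * quantity

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```

Run it and any MCP-capable agent can discover quote_price and call it without custom integration work.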
5. Teams shrink while output explodes
- Solo engineers orchestrate what used to require entire departments
- 10-person startups compete with thousand-person companies
It’s not about having fewer people. It’s that each person becomes dramatically more capable through AI orchestration.
6. Models are turning into agents
We spent a year building scaffolding around LLMs—agent frameworks, routing logic, memory systems. Now much of that is moving into the models. New reasoning models plan, roll back, use tools, and maintain context internally. Focus on what’s uniquely yours—data, domain expertise, semantic layer—and let the models handle the agent logic.
7. Documentation is growing ever more important
AI agents can only use what they understand. Better documentation means better AI performance, happier customers, and more revenue. Companies are building semantic layers—structured domain knowledge that becomes training data, agent context, and operational guardrails. Your semantic layer is your moat.
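As an illustration, even a tiny structured slice of domain knowledge can serve all three roles (docs, agent context, guardrails). A minimal sketch in which the metric and table names are invented:

```python
# A tiny slice of a semantic layer: domain terms as structured data
# that can be rendered into documentation, agent context, or guardrails.
# The metric below is a made-up example.
SEMANTIC_LAYER = {
    "active_user": {
        "definition": "A user with at least one session in the last 28 days.",
        "source_table": "analytics.sessions",
        "owner": "growth-team",
        "guardrail": "Never compute this from the events table directly.",
    },
}

def as_agent_context(term: str) -> str:
    """Render one term into a line an agent can use as context."""
    entry = SEMANTIC_LAYER[term]
    return f"{term}: {entry['definition']} (source: {entry['source_table']})"

print(as_agent_context("active_user"))
```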
8. Execution speed is the only sustainable advantage
When AI can replicate features in hours and regenerate codebases from scratch, speed is what matters: how fast you ship, iterate, and incorporate feedback. Build for it with automated testing, continuous deployment, and tight feedback loops. Quality still comes first, but in the AI era you need both quality and speed.
Note: AI is a 6-year-old Einstein. It has a massive memory, but not necessarily deep understanding. Your role is to guide it correctly.
Hope you found this interesting. I have a lot more material from great conversations with people from OpenAI, Stanford, Microsoft, and others. Make sure to subscribe to my channel. I’d be happy to connect with you on LinkedIn.