OpenAI completes its corporate restructuring, and Sam Altman and Satya Nadella appear together to take questions on all of it.

Over more than an hour, the two covered many questions of public interest: the implications of OpenAI's new organizational structure, the next phase of Microsoft's partnership with OpenAI, and the future of AI.
For example:
- How can OpenAI, with $13 billion in 2025 revenue, commit to spending $1.4 trillion on computing power?
- What has Microsoft gained from its partnership with OpenAI?
- Nadella believes a shortage of electricity is a bigger threat than a shortage of GPUs.
- Is OpenAI’s IPO plan still undecided?
…
Below is the edited transcript of the conversation.
Microsoft and OpenAI’s Cooperation
Brad Gerstner: Microsoft began investing in 2019 and has invested approximately $13 to $14 billion in OpenAI so far, acquiring about a 27% stake (on a fully diluted basis), down from an initial one-third. With the new round of financing last year, your stake was somewhat diluted. Is this ratio correct?
Satya Nadella: Yes, roughly so. Before discussing our shareholding, I believe the most unique aspect of OpenAI is that one of the world’s largest non-profit organizations emerged from its restructuring process. Inside Microsoft, I often say we are proud to be associated with two of the world’s largest non-profit entities: the Bill & Melinda Gates Foundation and now the OpenAI Foundation. That is the real news. This was not the outcome we anticipated when we initially invested $1 billion; at that time, we did not view it as a potential 100x return investment case, but reality has proven otherwise. Nevertheless, we are very pleased to have been early investors and partners. Frankly, this fully demonstrates Sam’s vision and his team’s execution capabilities. They saw the potential of this technology early on and excelled in turning it into reality.
Sam Altman: I think this is truly an amazing partnership at every stage. As Satya said, when we started, we had no idea where things would lead. But I believe this will prove to be one of the greatest partnerships in tech history. Without Microsoft, and especially without Satya's firm conviction and decisive action back then, we would not have gotten to where we are today. At that time, hardly anyone else was willing to bet under such conditions. We knew nothing about the direction of the technology then; we simply believed in one principle: keep advancing deep learning. We believed that if we could achieve this, we would definitely find ways to create excellent products and generate immense value.
At the same time, as Satya mentioned, we also established a structure that we believe will become the world’s largest non-profit organization.
I really like this structure because it allows the value of the non-profit entity to continue growing while enabling its affiliated public-benefit company to obtain the capital needed for continued expansion. Without this structure and like-minded partners, the foundation’s value could not have reached its current scale.
It has been over six years since our initial cooperation. The achievements we have made in these six years are astonishing, and there will be even more in the future.
I sincerely hope Microsoft earns a trillion dollars from this investment, not just one hundred billion.
The Historic Non-Profit Organization
Brad Gerstner: In this restructuring, you mentioned an architecture where the upper layer is a non-profit organization and the lower layer is a Public Benefit Corporation (PBC).
The non-profit portion currently holds $130 billion worth of OpenAI stock, joining the ranks of the world’s largest non-profits upon its inception, with potential for further growth.
This $130 billion asset will be entirely used to ensure that AGI (Artificial General Intelligence) benefits all humanity. You also announced that the first $25 billion would be directed toward healthcare, AI safety, and resilience.
Can you talk about why you chose “healthcare” and “resilience” as these directions? And how do you ensure the foundation does not fall into the pitfalls of bias or inefficiency like many other non-profits?
Sam Altman: First, I believe the best way to create immense value for the world is what we are already doing—building powerful AI tools and making them accessible to everyone. I think corporate mechanisms are excellent. Many companies are bringing advanced AI to more people, creating astonishing results.
However, there are indeed areas where market mechanisms cannot fully drive outcomes aligned with humanity’s long-term interests; in these places, different approaches are needed to push progress forward.
At the same time, AI brings unprecedented new possibilities, such as accelerating scientific discovery at an extremely fast pace and achieving true automated research. Therefore, we decided that our primary investment areas would include healthcare: if AI can help cure numerous diseases and allow related data and knowledge to be widely shared, it will be a tremendous boon for all humanity.
Regarding AI “resilience”—I believe there will certainly be complex situations in future development processes where not all problems can be solved by enterprises alone.
Therefore, we hope to fund relevant work through the foundation, such as cybersecurity defense, AI safety research, and social impact studies, helping society navigate this period of technological change more smoothly.
We are very confident in the long-term positive impacts brought by AI, but we also clearly understand that the path ahead will not be entirely smooth.
Microsoft Secures 7-Year Exclusivity for GPT Series
Brad Gerstner: Let’s continue discussing cooperation details—regarding models and exclusivity. Sam, OpenAI’s frontier models are currently distributed via Azure, but for the next seven years until 2032, you cannot distribute these models on other major cloud platforms unless AGI is officially verified beforehand. However, you can still distribute open-source models, Sora, Agents, Codex, wearable device-related technologies, etc., on other platforms. In other words, ChatGPT or GPT-6 will not appear on Amazon’s or Google’s clouds, correct?
Sam Altman: Not exactly. First, we and Microsoft will continue to cooperate in many areas to create value together. We want to help Microsoft create value, and we also hope Microsoft helps us create value; such cooperation has already unfolded at many levels. We retained a good concept Satya previously proposed, "stateless APIs." These APIs run on Azure, and this part is not fully exclusive (that agreement is valid until 2030). Other products and models we will release on other platforms as well, which naturally aligns with Microsoft's interests too. So our products will appear in many places; some will run on Azure, where users can access them, and that is good for everyone.
Brad Gerstner: Then there is the revenue-sharing component. OpenAI still needs to pay a share of all revenues to Microsoft; this sharing agreement also lasts until 2032 or until AGI is verified. Assuming—for illustrative purposes—that the share ratio is 15%, if OpenAI’s revenue is $20 billion, it would pay Microsoft $3 billion, which counts as Azure revenue. Satya, is this understanding correct?
Satya Nadella: Yes, we do have a revenue-sharing agreement. As you said, it remains in effect until it expires or AGI is verified. Honestly, I am not sure whether that share ultimately gets booked under Azure or another segment; that is a good question for Amy, our CFO.
Brad Gerstner: Since both the exclusivity agreement and revenue sharing will terminate early once AGI is verified, it implies that recognizing AGI is a very significant matter. From what I understand, if OpenAI claims to have achieved AGI, an expert review committee would rule on it; you both would jointly select a “jury” to decide in a relatively short time whether AGI has indeed been realized. Satya, you said during yesterday’s earnings call that currently “no one is close to AGI,” and it won’t happen in the short term. You also mentioned the concept of “spikes and imbalances in intelligence.” But Sam, you seem more optimistic than he is. So the question is: Are you worried that we might really need to convene this “jury” within the next two or three years to judge whether we have reached AGI?
Sam Altman: I know you want to create some dramatic conflict between us. But I believe it is very necessary to establish a formal adjudication process for AGI. Future technological development will certainly see some unexpected twists; we will continue to maintain our good cooperative relationship and jointly understand and judge its direction of development.
Satya Nadella: Completely agree. This is also one of the reasons why we established this process. I have always firmly believed that intelligent capabilities will continuously improve, and our true goal is—how to put this intelligence into the hands of people and organizations so they can derive maximum benefit. This was also what initially attracted me to cooperate with OpenAI: their mission is to make intelligence benefit all humanity. We will continue down this path.
Sam Altman: Brad, even if we truly achieve “superintelligence” tomorrow, we still hope for Microsoft’s help in delivering products to people.
OpenAI’s $1.4 Trillion Compute Commitment
Brad Gerstner: Obviously, OpenAI is one of the fastest-growing companies in history. Satya, you said on this podcast last year that every technological paradigm shift gives birth to a new "Google," and this era's new "Google" is clearly OpenAI. Without Microsoft's bold bet back then, none of this would have happened. That said, external reports indicate your revenue for 2025 was approximately $13 billion. Meanwhile, Sam, in your livestream earlier this week, you mentioned a commitment to invest $1.4 trillion in computing power over the next four to five years—including a $500 billion investment in Nvidia, $300 billion each in AMD and Oracle, and $250 billion in Azure. So the biggest question on the market this past week has been: How can a company with $13 billion in revenue sign a $1.4 trillion expenditure commitment? You've also heard some skepticism.
Sam Altman: First, our actual revenue is far more than $13 billion. Second, Brad, if you really want to sell your OpenAI shares, I can help find buyers. There are many people who very much want to buy OpenAI stock now. I don’t think those people online who are making a fuss and worrying about our “compute spending” would actually rush in if they could buy OpenAI shares. So I feel that if you or other shareholders really want to sell shares, we can easily sell them quickly to those shouting the loudest on X (Twitter).
We do plan for revenue to continue growing rapidly—and it is growing fast right now.
We are making a forward-looking bet: believing it will continue to grow. Not just ChatGPT’s revenue; we will also become a significant AI cloud service provider, our consumer device business will become a meaningful and important segment, and additionally, technology that enables AI to automate scientific research will create immense value.
Sometimes I do think it might be quite interesting if we were a public company. Especially when people write absurd things like "OpenAI is about to go bankrupt," I really wish I could say to them, "Then go short our stock," and watch them get burned.
But back to the main point, our planning is very prudent. We clearly know the direction of technological capability evolution, what kinds of products we can build around these capabilities, and what kind of revenue they will bring. Of course, we might also mess up—that’s a risk we voluntarily assume.
But one thing is certain: If we do not secure enough compute power, we cannot produce such models, nor achieve the corresponding scale of revenue.
OpenAI’s Execution Capability
Satya Nadella: So far, whether as a partner or an investor, I have never seen OpenAI put forward a business plan that they did not then exceed in execution.
In a sense, it is truly astonishing. Their growth rate and business execution are, frankly, hard to believe. Everyone talks about OpenAI's success in usage metrics, but I believe their business execution is just as remarkable.
Brad Gerstner: A few weeks ago on CNBC, Greg Brockman said that if we can increase compute power tenfold, revenue might not grow tenfold, but it will certainly grow significantly. Last night you also mentioned that you are similarly constrained by compute; with more compute, growth would be higher. So Sam, please help us explain: How severely do you feel constrained by compute today? Do you think there might come a day, once infrastructure construction is completed in the next two or three years, when we are no longer limited by compute?
Compute: The Future of Demand
Sam Altman: We often discuss this issue—whether there is “enough” compute power. I believe the best way to understand it is to view it as “energy.” You can discuss energy demand at a certain price level, but you cannot talk about energy demand in isolation from price. If the cost of compute per unit of intelligence drops by 100 times tomorrow, usage would grow far more than 100-fold. There are currently many people who want to use compute for various tasks, but at current costs, it is not economically viable.
If compute becomes cheaper, entirely new demand will emerge. On the other hand, as models become smarter—if these models can cure cancer, discover new laws of physics, or drive vast numbers of humanoid robots to build space stations, no matter how crazy that sounds—people will also be willing to pay a higher price for “each unit of intelligence.” Therefore, when discussing compute capacity, one must consider the relationship between “unit cost” and “unit capability.” Discussing this without combining these two curves is essentially an ill-defined problem.
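Altman's point that compute demand cannot be discussed in isolation from price can be made concrete with a toy constant-elasticity demand curve. The elasticity value here (-1.4) is an illustrative assumption, not a figure from the conversation:

```python
# A toy constant-elasticity demand curve for compute. The elasticity value
# (-1.4) is an assumed illustration, not a number from the interview.

def demand(price, elasticity=-1.4, base_demand=1.0, base_price=1.0):
    """Q = Q0 * (P / P0) ** elasticity."""
    return base_demand * (price / base_price) ** elasticity

# If the cost per unit of intelligence drops 100x overnight...
multiplier = demand(0.01) / demand(1.0)
print(f"demand multiplier after a 100x price drop: {multiplier:.0f}x")

# Any elasticity below -1 means usage grows faster than price falls,
# so total compute spending rises even as unit prices collapse.
```

With these assumed numbers the multiplier comes out around 631x, i.e. usage grows far more than the 100x price drop, which is exactly the "usage would grow far more than 100-fold" claim in the conversation.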
Satya Nadella: If the value of intelligence is logarithmically related to compute power, then we must continuously improve efficiency. This means maximizing the number of tokens generated per dollar and per watt, along with the resulting socioeconomic value, while reducing costs. From an economic perspective, this describes exactly what Jevons Paradox outlines: you continuously lower costs and commodify intelligence itself, turning it into a true driver of global GDP growth.
Sam Altman: However, I believe that currently, the situation is closer to “intelligence being a logarithmic function of compute,” rather than the other way around. But perhaps in the future we will find better Scaling Laws; this area is still under exploration.
Brad Gerstner: Yesterday, we heard Microsoft and Google both state that their cloud business growth could have been faster if not for GPU supply constraints.
I also asked Jensen Huang on this show whether compute oversupply might occur in the next five years. He replied that it is almost impossible in the next two to three years. I think you two would agree with this assessment—although we cannot predict what will happen in five to seven years, at least for the next two to three years, compute oversupply is highly unlikely.
The Biggest Problem Is Not Compute Oversupply, But Insufficient Power and Construction Speed
Satya Nadella: I believe that in this particular sector, supply and demand cycles are almost impossible to predict. The true long-term trend is continuous growth. Frankly, the biggest problem we face right now is not compute oversupply but the pace of power and facility construction. If you cannot build data centers close to power sources quickly enough, then even with a pile of chips in hand, you may have nowhere to plug them in.
In fact, this is my current situation—the problem is not a shortage of chip supply, but a lack of available facility infrastructure for deployment. Therefore, some supply chain constraints are difficult to predict because demand changes so drastically. It’s not that we want to sit here complaining about “compute shortages,” but rather that we simply cannot accurately predict how high real demand will rise. Moreover, this is not just an issue for one country or a specific market; it is a global deployment challenge. Covering the entire world with compute infrastructure inevitably encounters various limitations. What we must do is find ways to navigate these constraints—and this path will certainly not be linear.
Sam Altman: There will come a day when compute definitely becomes oversupplied—whether that’s in two or three years, or five or six, I can’t say for sure—but it will happen, and likely multiple times. There are deep psychological factors behind this and “bubble cycles.” The supply chain is extremely complex; all sorts of strange things happen, and the technological landscape changes violently.
For example, if a new energy source with massive scale and extremely low cost suddenly comes online, companies that signed long-term contracts will be badly burned. Likewise, if the cost per unit of intelligence keeps dropping at an astonishing rate, say 40 times lower each year on average, that is a terrifying exponential from an infrastructure-construction perspective.
Of course, our bet is that as intelligence becomes cheaper, demand will continue to explode. But I do worry: if we keep pushing forward to the point where everyone can run a powerful personal AI model locally on a laptop, the economics will look very different, and some people will certainly get hurt in this cycle, just as has happened repeatedly in past waves of technological infrastructure.
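The "40 times lower each year" hypothetical that Altman raises compounds brutally fast; a short sketch shows why planners of multi-year infrastructure builds would find it terrifying (the 40x rate is his hypothetical, not a measured trend):

```python
# Compounding Altman's hypothetical cost decline: if the cost per unit of
# intelligence fell 40x per year, five years of decline would look like this.

annual_drop = 40
for years in range(1, 6):
    total = annual_drop ** years
    print(f"after {years} year(s): cost is 1/{total:,} of today's")
# Hardware amortized over a typical ~5-year schedule would be competing
# against a cost curve that has fallen roughly eight orders of magnitude.
```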
Brad Gerstner: Well said—you have to accept both truths simultaneously. We went through a similar bubble in 2000 and 2001, but the internet ultimately became far larger than anyone at the time predicted and brought deeper value to society.
Satya Nadella: Yes, I think there is one point Sam just mentioned that isn’t discussed enough outside: for example, OpenAI’s optimizations on the inference stack specifically for GPUs. We often talk about hardware performance improvements driven by Moore’s Law, but in reality, efficiency gains at the software level are what show stronger exponential growth.
OpenAI’s Consumer Devices
Sam Altman: There will come a day when we create an amazing consumer device capable of running a model close to GPT-5 or GPT-6 locally with low power consumption.
Brad Gerstner: That would indeed be a miracle. And I think that is precisely what makes those building large centralized compute centers uneasy.
You have discussed many times: compute needs to be distributed both at the edge and for globally distributed inference.
Satya Nadella: Yes, my own thinking focuses more on how to build a fleet of interchangeable compute resources. In the cloud infrastructure business, the two most critical things are actually quite simple: first, you need an efficient "token factory"; second, you must achieve high utilization. To achieve high utilization, you must be able to schedule many different AI workloads, including pre-training, mid-training, post-training, and reinforcement learning. Making compute resources fungible is therefore the core goal for every cloud provider.
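Nadella's argument that a fleet of interchangeable compute resources is what keeps utilization high can be sketched with a toy first-fit scheduler; the fleet size and job mix below are invented purely for illustration:

```python
# Toy illustration of the "fungible fleet" point: mixing workload types
# lets a scheduler keep utilization high. All sizes here are invented.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    gpus: int  # GPUs the job needs

def utilization(fleet_size: int, jobs: list[Job]) -> float:
    """Greedy first-fit: place each job if capacity remains, then report
    the fraction of the fleet that ends up busy."""
    used = 0
    for job in jobs:
        if used + job.gpus <= fleet_size:
            used += job.gpus
    return used / fleet_size

fleet = 1000
train_only = [Job("pretrain", 800), Job("pretrain", 800)]
mixed = [Job("pretrain", 800), Job("rl", 120), Job("inference", 50), Job("posttrain", 30)]

print(f"training-only mix: {utilization(fleet, train_only):.0%} busy")  # 80%
print(f"fungible mix:      {utilization(fleet, mixed):.0%} busy")       # 100%
```

With only monolithic training jobs, a second large job cannot fit and a fifth of the fleet idles; a fungible mix of smaller post-training, RL, and inference workloads backfills the gap.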
OpenAI’s IPO Plans
Brad Gerstner: Reuters reported yesterday that OpenAI may plan to go public in late 2026 or 2027?
Sam Altman: No, we have no specific plans or timelines. I know the media likes to write that, but actually, we have not set a date nor made an IPO decision. I just believe that in the long run, that might be a natural step for the company, nothing more.
Brad Gerstner: However, it seems to me that if your revenue exceeds $100 billion by 2028 or 2029, you would already meet the conditions for going public.
Sam Altman: What about 2027?
Brad Gerstner: Haha, 2027 is even better. Suppose you list then at the rumored $1 trillion valuation; let me lay out the simple math for our listeners.
Assuming your revenue is $100 billion and you list at a 10x revenue multiple, this is actually lower than Facebook’s listing multiple and also lower than many large consumer companies’ listings. This would mean a company valuation of $1 trillion. If only 10% to 20% of shares are publicly offered, it could raise $100 billion to $200 billion, which is sufficient to support your expansion and R&D plans. So you’re not opposed to going public?
Sam Altman: I would prefer the company to do this based on strong revenue growth, but yes, this is certainly a direction worth considering.
Brad Gerstner: I’ve always felt this is an incredibly important company. My own children have their small investment accounts and use ChatGPT every day. I hope ordinary investors also get the opportunity to buy shares in such an influential company.
Sam Altman: Honestly, that might be the most attractive reason for me personally regarding an IPO.
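Gerstner's back-of-envelope IPO arithmetic in this exchange can be checked directly; every input is his hypothetical from the conversation, not a forecast:

```python
# Checking Gerstner's back-of-envelope IPO math. All inputs are his
# hypotheticals from the conversation, not forecasts.

revenue = 100e9    # assumed annual revenue at listing
multiple = 10      # price-to-sales multiple (below Facebook's at IPO, per Gerstner)
valuation = revenue * multiple
print(f"implied valuation: ${valuation / 1e12:.1f} trillion")

for float_fraction in (0.10, 0.20):
    raised = valuation * float_fraction
    print(f"floating {float_fraction:.0%} of shares raises ${raised / 1e9:.0f} billion")
```

The numbers work out as stated: a $1 trillion implied valuation, and $100-200 billion raised from a 10-20% float.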
Regarding Breakthroughs in 2026
Brad Gerstner: Recently, your team has been talking about future new directions: larger-scale compute, ChatGPT-6 and versions further out, robotics, physical devices, scientific research.
Sam Altman: This year, I find the development of Codex (the AI coding model) most interesting. Next year, it might leap from handling “hours-long tasks” to being able to handle “days-long tasks,” allowing humans to create software at unprecedented speeds and in entirely new ways. I am very excited about this. And I believe this trend will also expand into other industries. I am more familiar with code, so changes there are easier for me to see, but this will truly reshape the boundaries of human creativity.
I hope that by 2026, AI can bring even a tiny scientific discovery. If we can start with small breakthroughs, we can gradually accumulate toward larger achievements in the future. It sounds crazy, but if AI can make an original scientific discovery in 2026, no matter how minor, it will be a major moment for human civilization. I am very much looking forward to this. Of course, robotics and entirely new forms of computing devices are also important. But my personal preference is: letting AI truly participate in scientific research. That means allowing intelligent systems to begin expanding the total amount of human knowledge—this matter is too important.
Satya Nadella: Yes, taking Codex as an example, the key lies in combining model capabilities with the interaction interface. ChatGPT's "magical" explosion happened because a suitable UI met a sufficiently powerful model. The current "coding agent" is forming a new paradigm: the AI autonomously executes long-duration tasks, and humans steer at critical junctures. Internally we call this macro-delegation and micro-steering. When this new kind of intelligence combines with a brand-new UI, it creates an entirely new form of human-computer interaction, which I believe may have an even greater impact than ChatGPT.
Sam Altman: This is also why I am excited about the new computing device forms we are developing. Because current computer architectures are simply not suited for this workflow. A UI like ChatGPT is actually imperfect. Imagine: you possess a device that stays by your side, it can complete tasks independently, receiving your “micro-guidance” when necessary, while deeply understanding your context and flow of life. That would be very cool.
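The "macro-delegation and micro-steering" loop Nadella describes can be sketched minimally: the agent works through a plan autonomously and pauses at checkpoints for human input. The task plan and steering callback below are invented for illustration:

```python
# Minimal sketch of macro-delegation (agent runs a long task plan on its
# own) with micro-steering (human input at checkpoints). Invented example.

def run_agent(plan, steer):
    """Execute each step; at steps marked as checkpoints, ask the human
    `steer` callback whether to continue or stop."""
    log = []
    for step, is_checkpoint in plan:
        log.append(f"done: {step}")
        if is_checkpoint and steer(step, log) == "stop":
            break
    return log

plan = [
    ("draft schema", False),
    ("write migration", True),   # checkpoint: human reviews before applying
    ("apply migration", False),
    ("run test suite", True),    # checkpoint: human reviews failures
]

# A human who approves everything lets all four steps complete:
log = run_agent(plan, steer=lambda step, log: "continue")
print(len(log), "steps completed")
```

The design point is that steering happens only at the checkpoints the plan marks, so the human's attention cost stays small while the agent does hours (or, per the conversation, eventually days) of work.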
Brad Gerstner: And neither of you has mentioned consumer use cases yet. I often think about how we hunt through hundreds of apps on our devices every day and fill out various forms; these interaction patterns have barely changed in 20 years. But if AI gives us an almost-free personal assistant that improves life for billions of people globally, whether ordering diapers for a child, booking hotels, or adjusting schedules, it would be the most mundane yet revolutionary change. When we move from "answering" to "remembering" and "acting," interacting naturally with AI through earbuds or other devices rather than staring at a glass screen, that is a truly stunning future.
Satya Nadella: I think this is exactly what Sam was just hinting at.
(Altman logs off)
Brad Gerstner: In 2019, you brought the idea of “investing $1 billion in OpenAI” to the board. Was it an instant agreement? Did you need to spend some effort convincing everyone?
Satya Nadella: Yes, looking back now, that journey was interesting. Actually, our relationship with OpenAI started much earlier—around 2016, Azure was one of OpenAI’s earliest sponsors.
At that time, they were primarily focused on reinforcement learning. I still remember that Dota 2 match running on Azure. Later, they shifted to other directions. At the time, I was quite interested in reinforcement learning, but honestly, this also validates your concept of a “prepared mind.” Since 1995, Microsoft has been obsessed with “natural language”—this was a core direction pushed internally by Bill Gates. After all, we are a company centered around coding and information work.
So, when Sam started talking about “text,” “natural language,” “Transformers,” and “Scaling Laws” in 2019, I thought: “Wow, this is really interesting.” The team’s direction aligned highly with our interests, so from that perspective, it was a “no-brainer” investment.
Of course, when you go to the boardroom and say, “I plan to put $1 billion into an entity we don’t fully understand—neither a profitable company nor a traditional non-profit,” there will definitely be debates.
Gates was initially skeptical, which was reasonable. But after seeing the GPT-4 demo, he was completely convinced.
He later publicly stated that it was the most impressive demo he had seen since Charles Simonyi showed him demos at Xerox PARC.
For me, the thought at the time was: "Let's give it a try." Later, when we saw the early results of Codex in GitHub Copilot, where code completion worked seamlessly, that was the moment I knew we could scale this from "1" to "10." To be honest, that first step was controversial, but it truly launched the entire AI era.
Afterward, whether it was OpenAI’s team execution or our own productization efforts on our side, both were astonishing.
If you look at the current portfolio—GitHub Copilot, ChatGPT, Microsoft 365 Copilot, and our consumer-facing Copilot—together these four constitute the largest AI product ecosystem in the world today. This is precisely what enables us to keep moving forward.
Brad Gerstner: I think many people don’t realize that your CTO, Kevin Scott—a former Google engineer—is actually based in Silicon Valley.
Keep in mind that at the time, Microsoft had already missed search and the mobile era. When you became CEO, cloud computing was nearly the next thing Microsoft was about to miss; you described it as "catching the last train out of the station." So I imagine you were determined to keep your "eyes and ears" in Silicon Valley so you wouldn't miss the next wave.
Kevin must have helped you significantly with this, right?
Satya Nadella: Absolutely correct. In fact, I would say Kevin’s conviction was decisive. He started as a skeptic—which is exactly the kind of person I pay attention to: “those who didn’t believe but then changed their minds and became excited.” That shift itself is a signal because it makes you ask, “Why? What changed your mind?” Kevin initially held back but eventually became a staunch supporter. Many of us were actually taught to believe that “there must be some algorithm that solves everything,” rather than “breakthroughs come from scaling and computing power.” But it turned out that Kevin’s firm belief—that “this is worth doing”—was one of the key driving forces behind all of this.
On the Value of Collaboration
Brad Gerstner: Today, that initial $1 billion investment is valued at approximately $130 billion, and as Sam mentioned, it could potentially reach $1 trillion in the future. However, this still understates the true value of Microsoft's partnership with OpenAI. Beyond the equity gains, Microsoft earns billions in profit annually from its revenue share with OpenAI and benefits from OpenAI's $250 billion commitment to purchase Azure compute capacity.
Furthermore, your exclusive distribution of APIs has generated massive sales—drawing many customers who were previously on AWS over to Azure. Can you talk about how you view these value dimensions? Particularly the strategic significance of exclusivity for Microsoft?
Satya Nadella: Of course. Setting aside the equity portion, the most critical strategic synergy is that OpenAI’s stateless API runs exclusively on Azure. This is a win-win for OpenAI, for us, and for customers. Enterprise clients building AI applications want APIs to be stateless, which they then combine with underlying compute, storage, and databases to form complete workloads. This is exactly where Azure integrates with OpenAI.
We are now even integrating Foundry (our AI application hosting platform). Suppose you are building an AI application; the key question becomes: “How do you ensure that AI evolution aligns with application logic?” This requires a full application server layer, which is precisely what we provide in Foundry.
On the other hand, another source of value for Microsoft is that we not only have exclusive access but also rights to use intellectual property (IP). Our agreement with OpenAI allows Microsoft to use frontier models royalty-free for the next seven years. In other words, if you are a Microsoft shareholder, it means we essentially get a state-of-the-art large model “for free.”
We can embed this model into products like GitHub, Microsoft 365, and Copilot, then fine-tune it using our own data to merge proprietary knowledge at the weight level. Therefore, we are very confident in the value creation AI brings—whether at the infrastructure (Azure) level or in high-value areas such as healthcare, knowledge work, programming, and security.
Brad Gerstner: Microsoft recently recognized its share of OpenAI's losses in its financial reports; reportedly about $4 billion last quarter. Do you think investors misunderstand this? They may be penalizing the stock because these losses hit earnings-per-share multiples, when in reality the long-term benefits and potential market-cap growth from the OpenAI partnership far exceed these short-term figures. What is your take?
Satya Nadella: That’s a good question. Our CFO Amy Hood adopts a “fully transparent” approach to handling this. Honestly, I am not an accounting expert, so I believe the best course of action is to disclose all information openly. This is why we now distinguish between GAAP and Non-GAAP financial data. At least in this way, investors can clearly see the actual earnings per share (EPS) and understand the full picture.
Because in my view, the matter is actually quite simple. Suppose you invest $13.5 billion; naturally, you might lose that $13.5 billion, right? But at least to my knowledge, you won’t lose more than $13.5 billion—that is your maximum risk exposure.
Of course, one could argue that our equity value is now around $135 billion. While this asset is liquid, we do not intend to sell it, so it carries a certain degree of risk as well.
However, I think what you are really asking about is something else—what is happening outside of these investments. For example, Azure’s growth. Would Azure have grown like this without the partnership with OpenAI? As you mentioned, how many customers migrated to Azure from other cloud platforms for the first time because of this?
This is where we truly benefit. And it’s not just reflected in Azure; it’s also evident in Microsoft 365. In fact, we used to wonder: after E5, what would be the next major growth driver for Microsoft 365? We have now found it: Copilot.
Its scale has surpassed any office suite we have ever launched. Whether in terms of penetration rate, adoption speed, or growth momentum, Copilot exceeds all of Microsoft’s achievements in digital office productivity over the past few decades.
So, we are currently very confident in the opportunity to create long-term value for shareholders. At the same time, we remain fully transparent so that outsiders can clearly see—whether it’s losses or investment situations. We follow accounting rules as prescribed and disclose all data publicly so everyone understands the actual situation.
Brad Gerstner: About a year ago, many headlines claimed that Microsoft was cutting back on AI infrastructure investments. Do you think this is a fair assessment, or perhaps a misunderstanding? These reports certainly existed at the time. Perhaps you were indeed more conservative and cautious then. However, during last night’s earnings call, Amy mentioned that Microsoft has actually been short on compute capacity and infrastructure for several quarters. She originally thought you would catch up, but you didn’t—because demand continued to grow. So my question is: Looking back now, was it too conservative? Now that you know this, how do you plan your roadmap going forward?
Satya Nadella: That’s a very good question. In fact, we realized something at the time—and I am glad we did—that we must build compute clusters capable of flexible scheduling (fungibility) throughout the entire AI lifecycle. This flexibility needs to apply not only across different regions but also across different chip generations.
Take Jensen Huang and the NVIDIA team as an example; their update speed can be described as “moving at light speed.” We are now introducing GB300 chips. You certainly don’t want to deploy a large batch of GB200s only to find that GB300s have already gone into full mass production.
Therefore, you must continuously modernize your clusters, distribute them globally, and ensure they can flexibly schedule resources for different workloads. At the same time, we are constantly optimizing at the software level.
That was the decision we made back then. Sometimes we had to say “no” to certain demands, including some from OpenAI. For instance, Sam might say, “Please help me build a dedicated training data center with thousands of megawatts in this location.” While that might make sense for OpenAI, it doesn’t align with Microsoft’s long-term global infrastructure layout.
So we chose to give them the flexibility to purchase compute resources from other vendors. Meanwhile, we maintained significant cooperation with OpenAI—more importantly, this allowed us to preserve flexibility and balance with other customers (including Microsoft’s own first-party businesses).
You need to understand that we do not want to face a shortage of compute capacity. Many investors focus too heavily on Azure’s growth numbers. But for me, the high-margin business is actually the Copilot series, including Security Copilot, GitHub Copilot, Healthcare Copilot, and so on.
We hope to achieve long-term returns in a balanced manner, rather than being driven by short-term Azure growth rates. I think this point has been misunderstood among investors—it’s quite interesting. After all, they hold Microsoft stock because of its broad business portfolio, not solely due to Azure’s growth curve.
Brad Gerstner: Speaking of Azure, it grew 39% this quarter, with annualized revenue reaching $93 billion—quite impressive. In comparison, Google Cloud grew by 32%, and Amazon by only about 20%. However, based on what you just said, because you allocate compute capacity to first-party (1P) projects and research initiatives, Azure could potentially have grown by 41% or 42% if you had more capacity available at the time, right?
Satya Nadella: That is exactly where we balance internally—finding equilibrium between long-term shareholder interests, customer service quality, and risk diversification (avoiding concentration of compute power with just OpenAI).
After all, our current situation is not demand-constrained but supply-constrained. Therefore, we must strategically “shape” demand to ensure it optimally matches our compute capacity supply in the long run.
Brad Gerstner: You mentioned $400 billion in remaining performance obligations (RPO), a staggering figure. Last night you noted that this represents your current booked business volume. As sales continue, this number will undoubtedly increase tomorrow. You also mentioned that to fulfill these backlog orders, you must expand capacity on a massive scale. I want to ask: How diversified are these backlog orders? How confident are you that this $400 billion will truly convert into revenue in the coming years?
Satya Nadella: Yes, regarding the $400 billion in remaining performance obligations, the average duration is actually quite short—approximately two years. This is one of the reasons we invested in large-scale capacity; we are very certain that we need to clear these backlog orders.
As for diversification, these orders are distributed between Microsoft’s own businesses (1P) and third-party customers. Honestly, our internal demand is extremely high; among third-party customers, we also see more companies building workloads that genuinely scale.
Therefore, we are very confident in this. One advantage of RPO is its predictability, so we feel secure about future capacity build-out.
Of course, this does not include the new demand we will soon see, such as that $250 billion in long-term orders, which will grow gradually according to plan in the future.
Brad Gerstner: In this compute infrastructure race, there are many new entrants, such as Oracle, CoreWeave, and Crusoe. Typically, this would compress profit margins, yet you have successfully expanded compute capacity rapidly while maintaining healthy operating profits for Azure.
My question is: How does Microsoft maintain its advantage in such a competitive environment? When competitors leverage their positions to suppress profits, how do you balance profitability with risk? Have you seen any moves by competitors that make you think, “This could lead to another bull-bear cycle”?
Satya Nadella: For us, the good news is that even though we compete daily with major players like Amazon and Google, we remain competitive.
At the end of the day, people treat compute and storage as commodities. But I remember someone once saying that at scale, nothing is a commodity: unless you achieve scale, you cannot be profitable, because competition drives everything else toward commoditization.
Therefore, we must maintain an efficient cost structure. Supply chain efficiency and software optimization must continuously stack up to ensure profit margins.
But scale is key—one of the aspects of the OpenAI partnership I particularly appreciate is that it provides us with large-scale workloads. When you host the largest-scale workloads on the cloud, you not only learn faster how to operate massive systems but also lower your cost structure, thereby making your pricing more competitive.
So, I am confident in maintaining our profit margins, which is a key advantage of Microsoft’s diversified business portfolio. As I have always said, the reason I was compelled to disclose Azure figures is that capital allocation is not treated in isolation for any single segment. Our capital expenditures on Xbox Cloud Gaming, Microsoft 365, or Azure are considered holistically.
From a Microsoft-wide perspective, we care about whether our blended average return matches the operating profit margin required by the company. After all, we are not a conglomerate with a single platform; rather, we amplify the overall returns of cloud and AI investments through the synergy of five or six different businesses.
Brad Gerstner: I really liked your statement: “At scale, nothing is commoditized.” You know, even on this podcast, my partner Bill Gurley and I spent considerable time discussing circular revenue, including Microsoft’s Azure credits and OpenAI’s revenue recognition.
Have you seen transactions similar to AMD’s—where they exchanged 10% equity for deals—or Nvidia’s? I don’t want to overemphasize these transactions, but I want to directly address the topics discussed daily by CNBC and Bloomberg: there are indeed many such cross-deals in the market. Considering Microsoft’s context, do you worry that these could impact the sustainability or stability of AI revenue?
Satya Nadella: First, our $13.5 billion investment in OpenAI went entirely to training costs and does not count as revenue. This is also why we hold an equity stake (about 27%, from that $13.5 billion investment).
Therefore, these funds do not flow into Azure revenue. In fact, Azure revenue consists purely of consumption-based charges from ChatGPT and other API usage, which we monitor closely.
Regarding other companies, to some extent, vendor financing has always existed. That is to say, this is not a new concept: when one company is building something while its customers are also building but need financing, unconventional forms may be adopted. These obviously require careful scrutiny from the investment community.
Interestingly, we had no need to do so. Our approach was to invest in OpenAI and acquire equity, or to support their launch with discounted compute pricing. Other companies may choose different methods. Circular revenue ultimately depends on demand: as long as there is demand for the final output, the model works, and it has been effective thus far.
Brad Gerstner: You just mentioned that more than half of your business consists of software applications. I’d like to discuss software and intelligent agents. Last year, you noted that most application software is essentially a “thin layer” over messy databases, which caused quite a stir.
Satya Nadella: Yes, my point was that in the era of intelligent agents, traditional business applications may gradually be replaced. They are essentially CRUD databases with embedded business logic, and that business logic will be superseded by agents.
On the Success of Microsoft 365 Copilot
Brad Gerstner: Today, the forward price-to-sales ratio for listed software companies is approximately 5.2x, below its historical average of 7x, even though the market is at historic highs. Many are concerned that SaaS subscriptions and profit margins may be impacted by AI.
So, what impact does AI currently have on the growth rates of your core software products (such as databases, Fabric, security, Office 365)? How do you ensure that software is not disrupted but rather enhanced through AI?
Satya Nadella: Yes, as I mentioned last time, SaaS application architectures are changing because the intelligent agent layer is replacing the old business logic layer. In the past, our SaaS applications had tightly coupled data layers, logic layers, and UIs. AI does not adhere to this coupling; it requires decoupling, and context engineering becomes crucial.
Take Office 365 as an example. I appreciate its low ARPU (Average Revenue Per User) combined with high usage rates. Outlook, Teams, SharePoint, Word, and Excel are used almost constantly by users, generating massive amounts of data input into Graph. The combination of low ARPU and high usage gives me confidence that we can fully leverage this data through the AI layer.
Interestingly, GitHub and Microsoft 365 have seen record-high data inputs due to AI. Generated code, PowerPoint presentations, Excel models, chat logs, and new documents are all entering Graph, forming vector embeddings that provide a semantic foundation for intelligent agents.
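The retrieval step Nadella alludes to, matching a query against embedded documents by similarity so an agent can ground itself in Graph data, can be sketched minimally. The hash-based `embed` below is a stand-in assumption for illustration only; a real system would use a learned embedding model and a vector index:

```python
import math
from collections import Counter

def embed(text, dim=256):
    """Toy embedding: hash each word into a fixed-size vector.
    A production system would use a learned embedding model instead."""
    vec = [0.0] * dim
    for word, count in Counter(text.lower().split()).items():
        vec[hash(word) % dim] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are already unit-length, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

# Index a few "documents" and retrieve the most similar one for a query.
docs = ["quarterly revenue model in excel",
        "team chat about product launch",
        "python code for data pipeline"]
index = [(d, embed(d)) for d in docs]

def search(query):
    q = embed(query)
    return max(index, key=lambda pair: cosine(q, pair[1]))[0]
```

The same pattern scales up: documents go in once as vectors, and each agent request becomes a nearest-neighbor lookup rather than a keyword match.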
Next-generation SaaS applications must be intelligent. High ARPU with low usage might pose problems, but we operate on low ARPU with high usage. By accelerating deployment through AI, products like M365 Copilot command higher prices but achieve faster deployment and greater utilization.
The situation at GitHub is also clear: what accumulated over the past 10–15 years saw major growth last year. Code is no longer just a tool; it has become a substitute for labor, which is a fundamentally different business model.
Brad Gerstner: In the past, cloud primarily ran pre-compiled software that didn’t require many GPUs, with most value concentrated in databases and application layers.
But in the future, interfaces will only have value if they are intelligent. Software must be able to think, act, and provide recommendations, which requires generating large volumes of tokens and processing constantly changing contexts. In this scenario, AI factories (hardware and models) might capture more value than software or agents. What is your view?
Satya Nadella: Two things determine the value of AI:
- Token Factory: Hardware and system software are optimized to run at maximum utilization. The role of hyperscalers is to operate this token factory efficiently while managing heterogeneous hardware.
- Agent Factory: Modern SaaS drives business outcomes. It knows how to use tokens most effectively to create value. GitHub Copilot is an example: in auto-mode, it selects the best model based on prompts to complete tasks. Intelligent SaaS applications optimize token usage through feedback loops and data cycles to achieve optimal business results.
Overall, software has real marginal costs—a reality that existed in the cloud era but is now more pronounced. Business models need to adjust, optimizing Agent Factories and Token Factories separately.
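The auto-mode selection described above can be sketched as a simple router that trades cost against capability per request. The model names, prices, and length/keyword heuristic are illustrative assumptions, not GitHub Copilot’s actual logic:

```python
# Hypothetical two-tier model catalog; prices are made-up illustrations.
MODELS = {
    "small": {"cost_per_1k_tokens": 0.0002},  # cheap and fast
    "large": {"cost_per_1k_tokens": 0.0060},  # expensive, most capable
}

def route(prompt: str) -> str:
    """Send short, simple prompts to the cheap model; escalate prompts
    that look like multi-step work to the capable one."""
    hard_markers = ("refactor", "design", "prove", "debug")
    if len(prompt.split()) > 40 or any(m in prompt.lower() for m in hard_markers):
        return "large"
    return "small"
```

The feedback loop Nadella mentions would then tune these thresholds from observed outcomes, so tokens are spent where they actually move the business result.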
Brad Gerstner: Microsoft has an often-overlooked search business that is very profitable because of the huge volume of searches, with each search costing only a fraction of a cent. In contrast, chat interactions are more expensive. Do you think chat can eventually reach the profitability levels of search?
Satya Nadella: The profitability model for search is magical: indexing is a fixed cost that can be amortized efficiently. Chat, however, requires more GPU cycles per interaction, resulting in a different cost structure. Therefore, early chat models adopted freemium or subscription models. We are still exploring advertising or agent-based business models.
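The asymmetry between the two cost structures can be made concrete with a back-of-envelope sketch; every dollar figure below is a hypothetical illustration, not Microsoft’s actual unit economics:

```python
# Search amortizes a fixed indexing cost over huge query volume,
# while chat pays for GPU time on every token it generates.
# All numbers are hypothetical.

def search_cost_per_query(index_fixed_cost, queries, serve_cost=0.0002):
    # The fixed indexing cost shrinks per query as volume grows.
    return index_fixed_cost / queries + serve_cost

def chat_cost_per_query(tokens_generated, gpu_cost_per_token=1e-5):
    # Each interaction pays for its own GPU cycles.
    return tokens_generated * gpu_cost_per_token

# A $1M index spread over 1B queries adds only $0.001 per query,
# while a 1,000-token chat answer costs about $0.01 every single time.
amortized = search_cost_per_query(1_000_000, 1_000_000_000)
per_chat = chat_cost_per_query(1_000)
```

This is why freemium and subscriptions came first for chat: the marginal cost never amortizes away, so each interaction has to be paid for somehow.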
Meanwhile, I personally still use search for specific navigation tasks, while commercial search is gradually shifting toward the Copilot model. In the future, there will be a redistribution process similar to the restructuring seen in SaaS during its early days.
Brad Gerstner: This is a multi-trillion-dollar market. When the search business model shifts toward personal assistants, the potential value may far exceed that of traditional search. But this means we are no longer just amortizing fixed indexing costs.
Satya Nadella: Consumers have limited discretionary time; if you spend it on one thing, you cannot spend it on another. The profitability model for consumer-side agents remains unclear. On the enterprise side, however, it is different: it is not a winner-take-all market, making it more suitable for agent interactions. In other words, agents replace traditional seat-based billing; profitability is clearer in enterprises, whereas it is ambiguous in consumer markets.
On AI and Productivity
Brad Gerstner: Recently, we saw Amazon lay off staff on a large scale, while headcount growth across the “Magnificent Seven” has been limited over the past three years.
Microsoft’s headcount remained almost unchanged at around 225,000 in 2024–2025. Many believe this is post-pandemic efficiency optimization. But does AI also play a role? Will AI be a net job creator? In the long term, will it improve Microsoft’s productivity?
Satya Nadella: I firmly believe that the productivity curve will rise due to AI tools. Task-level work will be completed more efficiently with AI. Internally at Microsoft, we are ensuring every employee is equipped with M365 and GitHub Copilot to boost efficiency.
At the same time, we are learning a new way of working: collaborating with intelligent agents, much like how early Office tools changed workflows.
Planning and execution now start with AI: research, thinking, sharing, and generating new work products and workflows. Organizations that master this capability will achieve the greatest productivity gains, whether within Microsoft or across industries and the real world.
Brad Gerstner: So will Microsoft benefit from this? Let’s assume that at current growth rates—in five years—your revenue would be roughly double today’s. Satya, if revenue grows at this pace, how many additional employees would you hire?
Satya Nadella: The best part is the examples I see daily from our employees. For instance, our head of network operations oversees the fiber network for Fairwater, the 2-gigawatt data center we just built. The deployment of AI has made tasks such as fiber laying and operations extremely demanding. In fact, we need to coordinate with about 400 fiber operators globally; whenever issues arise, complex DevOps processes must be handled.
She said that even with budget approval, she could not hire enough people to complete these tasks. So she did the next best thing: she built her own intelligent agents to automate the DevOps process. This is an example of a team leveraging AI tools to significantly boost productivity.
So, we will certainly add employees, but the leverage provided by new hires is far higher than before.
You can view this as a structural adjustment—people need to relearn how to work. It’s not just about “what” to do, but “how” to do it. The learning and “learning-to-learn” process will last about a year, after which new employees can achieve maximum leverage effects.
Brad Gerstner: I feel we are on the verge of a significant productivity leap. When speaking with you or Michael Dell, I sense that most companies have not even begun to restructure their workflows to extract maximum leverage from intelligent agents.
But in the next two to three years, this will yield substantial benefits. I am also optimistic that it will create net jobs, although enterprise employee growth may lag behind revenue growth—that is the manifestation of improved productivity. Accumulating these efficiencies creates incremental value derived from productivity gains, which can be invested in creating things that did not exist before.
Satya Nadella: Absolutely correct. This applies even to software development.
Every organization has a large backlog of IT tasks; these intelligent agents will help us manage this backlog and realize the vision of “evergreen software.”
At the same time, the level of abstraction for knowledge work will change, workflows will adjust accordingly, and this will meet the changing demands of industrial products.