OpenAI’s Major Deal with Amazon Web Services
OpenAI has established a significant seven-year agreement valued at $38 billion with Amazon Web Services (AWS) to enhance its cloud computing capabilities, which are essential for powering advanced AI tools such as ChatGPT and Sora.
The agreement was revealed on Monday, and it gives OpenAI access to a vast number of Nvidia chips that are stored in Amazon’s global data centers, with the full capacity anticipated to be operational by the end of 2026.
AWS has stated that this partnership will enable OpenAI to expand quickly, leveraging the advantages of the cloud in terms of “price, performance, scale, and security.”
Sam Altman, the CEO of OpenAI, commented on the partnership, saying, “Scaling frontier AI requires reliable computing at scale. Our partnership with AWS strengthens the broader computing ecosystem that will power this next era and bring advanced AI to everyone.”
This $38 billion arrangement is a notable shift for OpenAI, marking its first infrastructure deal with Amazon and ending its long exclusive reliance on Microsoft’s Azure cloud.
This announcement follows closely on the heels of OpenAI restructuring its ownership to gain more autonomy in fundraising and operations, which included removing Microsoft’s first refusal rights for cloud services.
Matt Garman, the CEO of AWS, remarked that the deal demonstrates Amazon’s capability to manage “enormous AI workloads” from pioneer model developers like OpenAI.
In response to the announcement, Amazon’s stock price jumped about 5% on Monday, reaching an all-time high.
OpenAI will use Amazon’s UltraServer clusters—racks of Nvidia GB200 and GB300 processors—to train and run models, handle ChatGPT requests, and develop “agentic AI” systems that enable software to perform tasks independently.
The collaboration also gives OpenAI access to millions of CPUs for specialized workloads, helping it keep pace with rapidly growing user demand as AI adoption accelerates.
Amazon expects that all planned production capacity will be operational by the end of next year, with expansions continuing into 2027 and beyond.
The partnership reflects Amazon’s effort to regain ground against rivals Microsoft and Google, whose cloud divisions are posting substantial revenue gains on the back of AI demand.
While AWS remains the world’s leading cloud provider, analysts note it has struggled to attract high-profile AI clients.
To close that gap, Amazon has been investing heavily. Its cloud revenue grew 20% in the most recent quarter—the fastest pace since 2022—and the company has opened an $11 billion data center campus in Indiana dedicated to training models for OpenAI rival Anthropic.
Amazon has committed $8 billion to Anthropic, which trains on Amazon’s custom Trainium chips but has also signed a recent deal to use up to 1 million Google TPU chips.
Until recently, Amazon was excluded from OpenAI’s ecosystem due to an exclusive cloud agreement with Microsoft, which prevented OpenAI from sourcing capacity from other providers.
However, that agreement was revised last month as part of OpenAI’s major reorganization, giving Altman greater flexibility in pursuing industry-wide deals.
In total, OpenAI has now committed approximately $600 billion in spending across cloud providers including Amazon, Microsoft, Oracle, and Google, a striking figure given that the company’s annual revenue is roughly $13 billion.
The goal of this deal is to address what Altman refers to as a “critical computing shortage,” which is hindering model training and product launches.