Amazon and OpenAI have announced a multi-year strategic partnership aimed at accelerating artificial intelligence innovation for enterprises, start-ups and consumers globally, with Amazon committing to invest $50bn in the AI company.
The investment will begin with an initial $15bn injection, with a further $35bn to follow subject to certain conditions, the ChatGPT maker said in a blog post on Saturday. The deal deepens collaboration between the two companies as competition in the global AI market intensifies.
Under the agreement, OpenAI and Amazon Web Services will jointly develop a Stateful Runtime Environment powered by OpenAI’s models and made available through Amazon Bedrock.
The new environment is designed to enable AI systems to retain context, access compute resources and memory, and operate across software tools and data sources, allowing developers to manage long-running projects and workflows more efficiently.
The companies said stateful developer environments represent the next phase of frontier AI deployment, allowing models to work continuously across enterprise systems rather than responding only to isolated prompts.
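The distinction the companies draw, between models that respond only to isolated prompts and models that work continuously with retained context, can be illustrated with a minimal sketch. The class and function names below are hypothetical and purely illustrative; they are not part of any announced OpenAI or AWS API.

```python
class StatefulSession:
    """Illustrative only: a session that carries context across calls,
    unlike a stateless setup where each prompt starts from scratch."""

    def __init__(self):
        self.history = []  # retained context: prior turns, tool results, etc.

    def run(self, prompt, model):
        # The model sees the full accumulated context, not just this prompt.
        self.history.append({"role": "user", "content": prompt})
        reply = model(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply


def stateless_call(prompt, model):
    # Each call is isolated: the model sees only the single prompt.
    return model([{"role": "user", "content": prompt}])


# A stub "model" that just reports how much context it received.
def echo_model(messages):
    return f"saw {len(messages)} message(s)"


session = StatefulSession()
session.run("Start a migration plan.", echo_model)
print(session.run("Continue where we left off.", echo_model))  # saw 3 message(s)
print(stateless_call("Continue where we left off.", echo_model))  # saw 1 message(s)
```

In the stateful case the second call arrives with the whole prior exchange attached, which is what allows long-running projects and workflows to resume rather than restart.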
“The platform will integrate with Amazon Bedrock AgentCore and AWS infrastructure services so that AI applications and agents can operate alongside existing enterprise workloads. The Stateful Runtime Environment is expected to launch in the coming months,” the blog post read.
As part of the partnership, AWS will also become the exclusive third-party cloud distribution provider for OpenAI Frontier, the company's advanced enterprise platform. Frontier is designed to help organisations build, deploy and manage teams of AI agents that operate across business systems with shared context and enterprise-grade security.
OpenAI Frontier enables companies to integrate AI into operational workflows without managing underlying infrastructure, a capability both firms say is increasingly important as enterprises shift from AI experimentation to full-scale deployment.
The alliance also significantly expands the companies’ existing cloud infrastructure agreement. OpenAI and AWS will increase their prior $38bn multi-year arrangement by an additional $100bn over eight years, with OpenAI committing to consume roughly two gigawatts of AWS Trainium computing capacity.
The expanded infrastructure deal will support demand for Stateful Runtime, Frontier and other advanced AI workloads while lowering the cost and improving the efficiency of large-scale AI deployment, the companies said.
The agreement includes the use of AWS’s Trainium3 and next-generation Trainium4 chips, with Trainium4 expected to begin delivery in 2027. The new chips are projected to offer higher compute performance, expanded memory bandwidth and increased high-bandwidth memory capacity to support increasingly sophisticated AI systems.
OpenAI said the long-term capacity arrangement would allow it to scale advanced AI services globally while enabling enterprises to consume AI capabilities on demand through AWS without managing complex infrastructure.
