Taras Mishchenko, Chief Editor of Mezha.Media. Taras has over 15 years of experience in IT journalism and writes about new technologies and gadgets.
Under the terms of the agreement, OpenAI will gain access to hundreds of thousands of NVIDIA graphics chips, with the ability to scale to tens of millions of processors over the next seven years. OpenAI will begin using AWS capacity immediately; full deployment of the infrastructure is planned for late 2026, with further expansion through 2027 and beyond.
“Scaling advanced AI models requires massive, reliable computing resources,” said OpenAI co-founder and CEO Sam Altman. “Our partnership with AWS strengthens the global computing ecosystem that will enable the next era of AI and make it accessible to all.”
The new infrastructure will be built on NVIDIA GB200 and GB300 chips, clustered via Amazon EC2 UltraServers. This should provide low latency, high performance, and the flexibility to handle a variety of workloads, from training new models to serving requests in ChatGPT.
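For readers curious what this GPU capacity looks like from the customer side, below is a minimal sketch, assuming Python with boto3 installed and AWS credentials configured, that lists the GPU-accelerated EC2 instance types available in one region. It does not assume the actual instance-family names AWS uses for GB200/GB300 UltraServers; it is an illustration only, not part of the agreement described in the article.

```python
# Minimal sketch: enumerate GPU-accelerated EC2 instance types in a region.
# Assumes boto3 is installed and AWS credentials are configured locally.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# DescribeInstanceTypes is paginated; walk all pages.
paginator = ec2.get_paginator("describe_instance_types")
gpu_types = []
for page in paginator.paginate():
    for itype in page["InstanceTypes"]:
        # Only GPU-accelerated instance types carry a GpuInfo block.
        if "GpuInfo" in itype:
            gpu = itype["GpuInfo"]["Gpus"][0]
            gpu_types.append((itype["InstanceType"], gpu["Count"], gpu["Name"]))

for name, count, gpu_name in sorted(gpu_types):
    print(f"{name}: {count}x {gpu_name}")
```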
The new partnership follows OpenAI’s recent restructuring and a new agreement with Microsoft, which grants Microsoft the rights to use OpenAI’s technology until OpenAI reaches artificial general intelligence (AGI). Despite this, OpenAI continues to pursue a multi-cloud strategy, working with multiple providers to ensure the stability and scalability of its AI services.
Source: www.pravda.com.ua
