TECH NEWS | AMD, IBM, Zyphra forge new path for AI infrastructure
The announcements underscore AMD’s bid to position itself as a leading enabler of generative AI infrastructure—ranging from data center clusters to open-standard rack designs—at a time when enterprises and AI developers are racing to scale foundation models.

Advanced Micro Devices (AMD) has unveiled a sweeping strategy to power the next era of artificial intelligence, combining new hardware innovation with deep industry collaboration. The company introduced its new Helios rack-scale platform and announced a multiyear partnership with IBM and open-source AI firm Zyphra to deploy one of the world’s largest open supercomputing clusters for AI model training.
AMD’s “Helios” ushers in rack-scale AI era
At the Open Compute Global Summit in San Jose, AMD showcased Helios, a next-generation rack-scale platform built on the Open Compute Project (OCP) Open Rack Standard. The new architecture aims to simplify the deployment of high-density AI training and inference systems by integrating compute, memory, and networking within a unified, open design.
Helios is powered by AMD Instinct MI400 Series GPUs—the company's next-generation AI accelerators designed for generative model workloads—and uses a high-bandwidth, low-latency interconnect optimized for large-scale cluster configurations. According to AMD, the platform enables enterprises to deploy AI infrastructure faster, more efficiently, and at a lower total cost of ownership.
Meta was the first company to implement Helios in production-scale clusters to support its Llama family of large language models (LLMs). The collaboration between AMD and Meta was presented as a milestone for the OCP ecosystem, validating the open hardware model for AI-scale computing.
“Helios reflects the spirit of open collaboration that defines the OCP community,” said Forrest Norrod, executive vice president and general manager for AMD’s Data Center Solutions Group. “It allows partners like Meta to scale their AI infrastructure on open standards that can evolve with the pace of innovation.”
The rack-scale design supports both AMD EPYC processors and Instinct accelerators, unified under the company’s ROCm open software stack, allowing developers to optimize workloads across compute and GPU nodes seamlessly.
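To illustrate what that unified software stack means in practice (this sketch is not from the announcement): PyTorch builds for ROCm expose Instinct accelerators through the same device API developers already use on other GPUs, so existing training code can run largely unchanged.

```python
# Illustrative sketch only: querying the available accelerator from a
# PyTorch ROCm build, where AMD GPUs appear through the standard
# torch.cuda interface (backed by HIP rather than CUDA).
import torch

def describe_device() -> str:
    # On ROCm builds, torch.version.hip is set and torch.cuda maps to HIP,
    # so the same call paths work on Instinct GPUs and NVIDIA hardware alike.
    if torch.cuda.is_available():
        return f"accelerator: {torch.cuda.get_device_name(0)}"
    return "no accelerator found; falling back to CPU"

print(describe_device())
```

On a machine without a GPU the function simply reports the CPU fallback; the point is that no AMD-specific API is required at the application layer.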
IBM, AMD, Zyphra collaborate on frontier AI cluster
Complementing the Helios debut, AMD also announced a strategic collaboration with IBM to deliver advanced AI infrastructure for Zyphra, an open-source AI research company based in San Francisco. Under a multiyear agreement, IBM will provide Zyphra with a large cluster of AMD Instinct MI300X GPUs hosted on IBM Cloud, supported by AMD Pensando Pollara 400 AI NICs and AMD Pensando Ortano DPUs.
The collaboration establishes one of the largest generative AI training clusters powered by AMD hardware on a commercial cloud platform. Zyphra plans to use the system to train multimodal foundation models spanning language, vision, and audio, forming the backbone of Maia, its “superagent” designed for enterprise productivity applications.
“This collaboration marks the first time AMD’s full-stack training platform—spanning compute through networking—has been successfully integrated and scaled on IBM Cloud,” said Krithik Puthalath, CEO and chairman of Zyphra. “We’re honored to lead the way in developing frontier models with AMD silicon on IBM Cloud.”
Scaling open-source superintelligence
Zyphra, which recently completed a Series A funding round at a $1 billion valuation, describes itself as an open-science company focused on advancing neural network architectures, long-term memory, and continual learning. By leveraging IBM’s secure hybrid cloud and AMD’s GPU ecosystem, Zyphra aims to build transparent and scalable AI systems accessible to researchers and developers worldwide.
“Scaling AI workloads faster and more efficiently is a key differentiator for every enterprise and startup alike,” said Alan Peacock, general manager of IBM Cloud. “Together with AMD, we’re delivering infrastructure that can accelerate model training while providing the reliability and security our clients expect.”
For Philip Guido, AMD’s executive vice president and chief commercial officer, the partnership represents a convergence of innovation and performance at global scale. “By combining IBM’s cloud expertise with AMD’s leadership in high-performance computing and AI acceleration, we’re enabling organizations to build smarter businesses and unlock AI solutions that deliver real-world outcomes,” Guido said.
Building the backbone for generative AI
The AMD–IBM–Zyphra collaboration is part of a broader trend among technology leaders moving toward open, modular, and hybrid infrastructure for AI. AMD and IBM previously partnered to deploy AMD Instinct accelerators as-a-service on IBM Cloud, integrating them into enterprise-grade environments for AI and high-performance computing (HPC) applications.
Both companies are now expanding this roadmap to include quantum-centric supercomputing architectures, which combine classical and quantum computing with advanced AI acceleration—a vision that could redefine the limits of scientific computing in the coming decade.
Analysts see AMD’s approach as a deliberate challenge to proprietary AI ecosystems dominated by a few hyperscalers. By focusing on open standards, flexible cloud deployment, and multi-partner collaboration, AMD is positioning its architecture as the foundation for a more diverse and competitive AI infrastructure landscape.
Toward a more open AI future
From Meta’s Helios deployments to Zyphra’s frontier model training, AMD’s AI portfolio now spans hyperscale data centers, open hardware standards, and cloud-based research initiatives. This alignment between hardware innovation and open collaboration underscores the company’s broader goal: democratizing access to the computing power required to train and deploy generative AI at scale.
As enterprises race to build the next generation of intelligent applications, AMD’s ecosystem—bolstered by partners like IBM and Meta—may become one of the key engines driving the global AI infrastructure revolution.
