Revolutionary Compute Express Link® (CXL®) technology unlocks memory agility and scalability for next-gen AI, cloud and HPC workloads.
SANTA CLARA, Calif., April 29, 2025 /PRNewswire-PRWeb/ -- XConn Technologies (XConn), the innovation leader in next-generation interconnect technology for high-performance computing and AI applications, today announced a groundbreaking demonstration of dynamic memory allocation using Compute Express Link® (CXL®) switch technology at CXL DevCon 2025, taking place April 29-30 at the Santa Clara Marriott. The demonstration highlights a major advance in memory flexibility, showing how CXL switching enables seamless, on-demand memory pooling and expansion across heterogeneous systems.
The milestone, achieved in collaboration with AMD, unlocks a new level of efficiency for cloud, artificial intelligence (AI), and high-performance computing (HPC) workloads. By dynamically allocating memory via the XConn Apollo™ CXL switch, data centers can eliminate over-provisioning, enhance performance, and significantly reduce total cost of ownership (TCO).
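To illustrate the over-provisioning problem (hypothetical figures, not drawn from the demonstration): a rack of 16 servers each fitted with 1 TB of DRAM to cover occasional peaks carries 16 TB of installed memory even if only a few hosts peak at any one time. With a switched CXL pool, a smaller shared memory tier can be assigned to whichever host needs it and reclaimed once demand passes, so installed capacity tracks aggregate demand rather than the sum of per-server peaks.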
"Memory agility is the next frontier in computing, and this demonstration is a pivotal step toward delivering scalable, software-defined infrastructure for the most demanding AI and HPC environments," said JP Jiang, Senior Vice President of Product Marketing and Management at XConn. "With CXL-enabled dynamic memory allocation, we're showing how memory can be pooled and distributed in real time, unlocking efficiencies that static architectures simply can't deliver."
At the core of the demo is XConn's Apollo CXL switch, the industry's first to support both CXL 2.0 and PCIe 5.0 on a single chip. The switch enables terabyte-scale memory expansion with near-native latency and coherent memory access across CPUs, GPUs, and accelerators, including 5th Gen AMD EPYC™ processors.
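On Linux hosts, CXL-attached memory is typically exposed to software as an additional, often CPU-less, NUMA node, so applications can consume it transparently or target it explicitly. The following minimal sketch is illustrative only and is not XConn or AMD sample code; it assumes a Linux system with libnuma and assumes the pooled CXL memory has already been onlined as NUMA node 1, from which it allocates a buffer and uses it like ordinary DRAM.

    /* Illustrative sketch only (not XConn or AMD sample code).
     * Assumes Linux with libnuma (build with: gcc demo.c -lnuma) and that
     * the CXL memory pool has been onlined as NUMA node 1 on this host. */
    #include <numa.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "NUMA support is not available on this system\n");
            return 1;
        }

        const int cxl_node = 1;         /* assumed node id of the CXL-backed memory */
        const size_t size = 1UL << 30;  /* 1 GiB */

        /* Bind the allocation to the CXL node; because the memory is
         * byte-addressable and cache-coherent, it is used like local DRAM. */
        void *buf = numa_alloc_onnode(size, cxl_node);
        if (buf == NULL) {
            fprintf(stderr, "allocation from node %d failed\n", cxl_node);
            return 1;
        }

        memset(buf, 0, size);           /* touch the pages so they are actually backed */
        printf("1 GiB allocated from NUMA node %d\n", cxl_node);

        numa_free(buf, size);
        return 0;
    }

The same placement could also be made by the operating system without application changes, for example through NUMA memory-tiering policies.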
"CXL switching offers tremendous potential for the next generation of datacenter computing, especially in use cases like distributed shared memory, which can greatly enhance efficiency and reduce costs for data-intensive applications," said Raghu Nambiar, corporate vice president, Data Center Ecosystems and Solutions, AMD. "By combining AMD EPYC processors with the XConn Apollo CXL switch, XConn are helping to deliver on highly-flexible and adaptive memory infrastructure for next-gen data centers."
Key benefits of XConn's CXL-based dynamic memory allocation on display during the event include:
- On-Demand Memory Pooling: Share and scale memory across systems to avoid over-provisioning.
- Low-Latency Performance: Coherent access ensures memory behaves like local DRAM.
- Terabyte-Scale Expansion: Ideal for AI inference KV caching, in-memory databases, and virtualization workloads.
CXL DevCon 2025 attendees can experience the live demo at the XConn booth and learn more about the breakthrough in the technical session, "Showcasing a CXL 2.0 Memory Pooling/Sharing System," presented by XConn's Jiang on April 29 at 2:40 p.m.
Production samples of the XConn Apollo XC50256 are available now. To request a customer sample and/or an Apollo reference board, contact XConn at xconn-tech.com.
About XConn Technologies
XConn Technologies Holdings, Inc. (XConn) is the innovation leader in next-generation interconnect technology for high-performance computing and AI applications. The company is the industry's first to deliver a hybrid switch supporting both CXL 2.0 and PCIe 5.0 on a single chip. Privately funded, XConn is setting the benchmark for data center interconnect with scalability, flexibility and performance. For more information, visit xconn-tech.com.
AMD, the AMD Arrow logo, EPYC, and combinations thereof are trademarks of Advanced Micro Devices, Inc.
Media Contact
Erin Jones, XConn Technologies, 1 704.664.2170, [email protected], xconn-tech.com
SOURCE XConn Technologies
