New Orleans, LA (PRWEB) November 20, 2014
Penguin Computing, provider of high performance computing, enterprise data center and cloud solutions, today announced the first application-optimized accelerated processing unit (APU) clusters, making seamless GPU and CPU memory sharing on clusters a reality, based on the Heterogeneous System Architecture (HSA) from Advanced Micro Devices (NYSE: AMD). The shared memory capability relies on very lightweight context switches, allowing execution to move instantaneously between the GPU and the CPU, whichever runs the code best at a given moment.
Today’s applications require power efficiency and high performance achieved through highly parallel computing. However, the separate memory spaces of traditional GPUs and CPUs create a bottleneck to efficient cluster functionality. This traditional architecture results in inefficient GPU/CPU communication, which makes scaling a challenge.
“We are making these machines immediately available for evaluation as a tremendous tool for software development,” said Phil Pokorny, Chief Technology Officer, Penguin Computing. “HSA is a reality and our technology is already in the hands of major U.S. labs. Penguin Computing’s extensive experience in APU cluster development and implementation has been instrumental in this progress, in addition to close collaboration with AMD.”
Named Jäätikkö, Finnish for glacier, the cluster is currently being demonstrated at AMD’s SC14 booth #839. It combines 10 AMD APU compute nodes, plus a head node based on Penguin’s Altus 2A30 development platform, with high-performance Ethernet using Penguin’s Arctica open Ethernet switches.
“Initial feedback from early adopters reinforces our belief that this collaboration with Penguin Computing is an important step forward for the industry,” said Karl Freund, corporate vice president, Product Management and Marketing, Server Business Unit, AMD. “The potential of modern heterogeneous architectures is exciting, and collaborations such as these can result in significant steps forward in performance for a broad range of software applications.”
The oil and gas industry is an example of a customer segment that could benefit significantly from this capability: its workloads rely on GPU-parallel codes and single-precision arithmetic, and the APU delivers almost a teraflop of single-precision floating-point performance.
The combined Penguin Computing and AMD solution is a very cost-effective way to get started, whether with a single node or a full cluster.
Visit http://www.penguincomputing.com/products/rackmount-servers/altus/altus-2a30 for more information about Jäätikkö.
- Keep up with Penguin Computing news by visiting the company’s website
- Follow us on Twitter, Facebook, LinkedIn
About Penguin Computing
Penguin Computing is one of the largest private suppliers of enterprise and high performance computing solutions in North America, and builds and operates the leading specialized public HPC cloud service, Penguin Computing on Demand (POD). Penguin Computing pioneers the design, engineering, integration and delivery of solutions based on open architectures, comprising non-proprietary components from a variety of vendors. Penguin Computing is also one of only five authorized Open Compute Project (OCP) solution providers, leveraging this Facebook-led initiative to bring the most efficient open data center solutions to a broader market, and has announced the Tundra product line, which applies the benefits of OCP to high performance computing.
Penguin Computing, Scyld ClusterWare, Scyld Insight, Scyld HCA™, Relion, Altus, Penguin Computing on Demand, POD, Tundra and Arctica are trademarks or registered trademarks of Penguin Computing, Inc.
Penguin Computing has more than 18,000 systems installed with over 2,500 customers in 40 countries across eight major vertical markets.