Slurm Used on the Fastest of the TOP500 Supercomputers


Slurm Workload Manager is used on 5 of the 15 fastest supercomputers in the world

Slurm is now one of the most widely used workload managers in the TOP500.

At SC12 this week, following the release of the TOP500 List, a ranking of the world's fastest computers, Slurm Workload Manager remained the most widely used workload manager among the fastest of the fast: five of the top 15 supercomputers (33%) use Slurm.

Slurm, an open source workload manager designed for the most demanding HPC environments, originated at Lawrence Livermore National Laboratory (LLNL) ten years ago and has evolved with the contributions of more than 100 developers. Slurm remains an important workload manager at LLNL, providing scheduling and other functionality for the Sequoia supercomputer, currently #2 in the TOP500 and ranked fastest on the previous TOP500 List.
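For readers unfamiliar with how Slurm is used day to day, jobs are typically submitted as short batch scripts whose `#SBATCH` directives request resources from the scheduler. The sketch below is illustrative only; the application name and resource sizes are hypothetical, and partition and account settings vary by site:

```shell
#!/bin/bash
# Minimal Slurm batch script (illustrative; resource values are assumptions).
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --nodes=2                 # number of nodes to allocate
#SBATCH --ntasks-per-node=16      # tasks (e.g. MPI ranks) per node
#SBATCH --time=00:10:00           # wall-clock limit (HH:MM:SS)
#SBATCH --output=example-%j.out   # %j expands to the job ID

# Launch the application across the allocated nodes.
srun ./my_app
```

The script would be submitted with `sbatch script.sh`, and the queue inspected with `squeue`; Slurm handles node allocation, launch, and accounting.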

The other Slurm-managed systems among the 15 fastest supercomputers are Stampede at TACC; Tianhe-1A in China; Curie at the CEA in France; and Helios at Japan's International Fusion Energy Research Centre. Beyond the top 15, SchedMD, the organization overseeing the Slurm code base, estimates that as many as 30% of the supercomputers on the TOP500 list use the open source workload manager.

“We built Slurm to efficiently schedule resources for the world’s biggest systems, and through simulation we have proven its scalability to an order of magnitude beyond today’s largest systems,” said Moe Jette, CTO of SchedMD. “It’s now one of the most widely used workload managers in the TOP500. As we move toward exascale computing requirements, Slurm is the workload manager best positioned to schedule jobs at that scale.”

Outside of the large supercomputer centers, Slurm is gathering momentum. HPC computer manufacturers Bull and Cray frequently provide Slurm as part of their solutions, and Bright Computing now offers Slurm as the default workload manager in Bright Cluster Manager. In addition, a number of Slurm users and technology providers have joined forces to support the growth of the Slurm community, including SchedMD, NVIDIA, Lawrence Livermore National Laboratory, Intel, Greenplum/EMC, CSCS, CEA, Bull and Bright Computing. The group sponsored a booth at SC12 this week in Salt Lake City and is kicking off other initiatives to increase awareness and engagement.
Slurm is freely available for download, along with current documentation and information, from SchedMD at http://www.schedmd.com.
Other Slurm information sources:
Twitter: @SchedMD, @SlurmWLM
Facebook: http://www.facebook.com/schedmd
LinkedIn: http://www.linkedin.com/groups/Slurm-4501392
Slurm blog: http://slurm.net
For more information:
Paul Owen
Owen Media
4111 E. Madison St.
Seattle, WA 98112
+1 206 200 6936
Paulo (at) owenmedia (dot) com
http://www.slurm.net
