Home

Welcome

The UMBC High Performance Computing Facility (HPCF) is the community-based, interdisciplinary core facility for scientific computing and research on parallel algorithms at UMBC. Started in 2008 by more than 20 researchers from ten academic departments and research centers across all academic colleges at UMBC, it is supported by faculty contributions, federal grants, and the UMBC administration. Since HPCF’s inception, over 400 users, including undergraduate and graduate students, have benefited from its computing clusters. These users have generated over 400 publications, including 150 papers in peer-reviewed journals (among them Nature, Science, and other top-tier journals in their fields), 50 refereed conference papers, and 50 theses. The facility is open to UMBC researchers at no charge; researchers can also contribute funding for long-term priority access. System administration is provided by the UMBC Division of Information Technology, and users have access to consulting support from dedicated full-time graduate assistants. The purchase of the two current clusters, taki and ada, was supported by several NSF grants from the MRI program; see the About tab for details.

HPCF currently consists of two machines, taki and ada, each comprising several types of nodes. This structure is reflected in the tabs at the top of this page:

  • The taki CPU cluster consists of 18 compute nodes, each with two 24-core Intel Cascade Lake CPUs and 196 GB of memory, and 50 compute nodes, each with two 18-core Intel Skylake CPUs and 384 GB of memory. This cluster also includes 2 nodes in a develop partition.
  • The taki GPU cluster contains one node with four NVIDIA Tesla V100 GPUs connected by NVLink.
  • The ada GPU cluster consists of 13 nodes, each with two 24-core Intel Cascade Lake CPUs and 384 GB of memory. Four of these nodes have eight NVIDIA RTX 2080 Ti GPUs, seven have eight RTX 6000 GPUs, and two have eight RTX 8000 GPUs and an additional 384 GB of memory each. This brings the total number of GPUs to 104.

The nodes are connected to each other by an EDR (Enhanced Data Rate) InfiniBand interconnect. All nodes of both machines are connected to the same central storage of more than 750 TB.
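
A typical use of these many-core nodes and the InfiniBand interconnect is distributed-memory parallel computing with MPI. The following minimal sketch in C illustrates this style of program; it assumes only a standard MPI installation, and the compile and launch commands mentioned in the comments (such as mpicc and the cluster's batch scheduler) are illustrative assumptions rather than taki- or ada-specific instructions.

    /* Minimal MPI "hello world" sketch: each process reports its rank and
     * the compute node it runs on.  Illustrative only; the compiler wrapper
     * (e.g. mpicc) and the job launcher are assumptions about the local setup,
     * not documented HPCF commands. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, name_len;
        char node_name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                       /* start the MPI runtime      */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);         /* this process's rank        */
        MPI_Comm_size(MPI_COMM_WORLD, &size);         /* total number of processes  */
        MPI_Get_processor_name(node_name, &name_len); /* node hosting this process  */

        printf("Hello from rank %d of %d on node %s\n", rank, size, node_name);

        MPI_Finalize();                               /* shut down the MPI runtime  */
        return 0;
    }

When compiled with an MPI wrapper and launched across several compute nodes through the cluster's batch system, each process prints the name of the node it was placed on, making the node layout described above directly visible.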

This webpage provides information about the facility, its systems, research projects, publications, resources for users, and contact information.