UMBC High Performance Computing Facility
Please note that this page is under construction. We are documenting the 240-node cluster maya that will be available after Summer 2014. Currently, the 84-node cluster tara continues to operate independently until it becomes part of maya at the end of Summer 2014. Please see the 2013 Resources pages under the Resources tab for information about tara.

HPCF News:

Update on Cluster Extension
December 13, 2013

The new cluster equipment has begun to arrive from Dell. HPCF personnel will be working with DoIT to update the documentation on this web page while the equipment is set up and tested.

Cluster Extension Ordered
October 30, 2013

HPCF has selected Dell as the vendor for its next-generation computing equipment, which will be used to extend the cluster tara. The new equipment includes 19 CPU/GPU compute nodes, 19 CPU/Phi nodes, and 34 CPU-only nodes...

News Archive

Welcome:

The UMBC High Performance Computing Facility (HPCF) is the community-based, interdisciplinary core facility for scientific computing and research on parallel algorithms at UMBC. Started in 2008 by more than 20 researchers from ten academic departments and research centers across all three colleges, it is supported by faculty contributions, federal grants, and the UMBC administration. The facility is open to UMBC researchers at no charge. Researchers can contribute funding for long-term priority access. System administration is provided by the UMBC Division of Information Technology, and users have access to consulting support provided by dedicated full-time graduate assistants. See www.umbc.edu/hpcf for more information on HPCF and the projects using its resources.

Released in Summer 2014, the current machine in HPCF is the 240-node distributed-memory cluster maya. The newest part of the cluster consists of the 72 nodes with two eight-core 2.6 GHz Intel E5-2650v2 Ivy Bridge CPUs and 64 GB of memory, which include 19 hybrid nodes with two state-of-the-art NVIDIA K20 GPUs (graphics processing units) designed for scientific computing and 19 hybrid nodes with two cutting-edge 60-core Intel Phi 5110P accelerators. These new nodes, together with the 84 nodes with two quad-core 2.6 GHz Intel Nehalem X5550 CPUs and 24 GB of memory, are connected by a high-speed quad-data-rate (QDR) InfiniBand network for research on parallel algorithms. The remaining 84 nodes with two quad-core 2.8 GHz Intel Nehalem X5560 CPUs and 24 GB of memory are designed for fast number crunching and are connected by a dual-data-rate (DDR) InfiniBand network. All nodes are connected via InfiniBand to central storage of more than 750 TB.
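On a distributed-memory cluster such as maya, processes on different nodes communicate by message passing, typically with MPI over the InfiniBand network. As an illustration only, the following minimal C program has each MPI process report its rank and the node it is running on; the compiler wrapper and launch commands mentioned in the comments (mpicc, mpirun) are assumptions typical of Linux clusters, since the specific modules and job scheduler on maya are not described on this page.

    /* hello_mpi.c -- minimal MPI sketch: each process reports its
     * rank and the node it runs on. Assumed typical workflow:
     *   compile: mpicc hello_mpi.c -o hello_mpi
     *   launch:  mpirun -np 16 ./hello_mpi
     * (exact wrappers, modules, and scheduler on maya may differ) */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, name_len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id   */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */
        MPI_Get_processor_name(name, &name_len);

        printf("Hello from rank %d of %d on node %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }

In production use, such a program would be submitted through the cluster's batch system rather than run interactively; see the resources for users for the site-specific details.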

This webpage provides information about the facility, its systems, research projects, resources for users, and contact information.