UMBC High Performance Computing Facility
Please note that this page is under construction. We are documenting the 240-node cluster maya that will be available after Summer 2014. Currently, the 84-node cluster tara still operates independently until it becomes part of maya at the end of Summer 2014. Please see the 2013 Resources Pages under the Resources tab for information on tara.
Supporting Materials for Research

Long Description of Facility

Here is a long description of HPCF in two paragraphs. The plain text version:
The UMBC High Performance Computing Facility (HPCF)
is the community-based, interdisciplinary core facility for
scientific computing and research on parallel algorithms at UMBC.
Started in 2008 by more than 20 researchers from
ten academic departments and research centers
across all three colleges, it is supported by
faculty contributions, federal grants, and the UMBC administration.
The facility is open to UMBC researchers at no charge.
Researchers can contribute funding for long-term priority access.
System administration is provided by the
UMBC Division of Information Technology,
and users have access to consulting support provided by
dedicated full-time graduate assistants.
See www.umbc.edu/hpcf
for more information on HPCF and the projects using its resources.

Released in Summer 2014,
the current machine in HPCF is the 240-node distributed-memory cluster maya.
The newest components of the cluster are the 72 nodes
with two eight-core 2.6 GHz Intel E5-2650v2 Ivy Bridge CPUs and 64 GB memory;
these include 19 hybrid nodes
with two state-of-the-art NVIDIA K20 GPUs (graphics processing units)
designed for scientific computing and
19 hybrid nodes with two cutting-edge 60-core Intel Xeon Phi 5110P accelerators.
These new nodes are connected along with the 84 nodes
with two quad-core 2.6 GHz Intel Nehalem X5550 CPUs and 24 GB memory 
by a high-speed quad-data rate (QDR) InfiniBand network
for research on parallel algorithms.
The remaining 84 nodes
with two quad-core 2.8 GHz Intel Nehalem X5560 CPUs and 24 GB memory
are designed for the fastest number crunching and are connected by
a dual-data rate (DDR) InfiniBand network.
All nodes are connected via InfiniBand to
central storage of more than 750 TB.


Download: facility_long.txt
And here is the LaTeX version:
The UMBC High Performance Computing Facility (HPCF)
is the community-based, interdisciplinary core facility for
scientific computing and research on parallel algorithms at UMBC.
Started in 2008 by more than 20~researchers from
ten academic departments and research centers
across all three colleges, it is supported by
faculty contributions, federal grants, and the UMBC administration.
The facility is open to UMBC researchers at no charge.
Researchers can contribute funding for long-term priority access.
System administration is provided by the
UMBC Division of Information Technology,
and users have access to consulting support provided by
dedicated full-time graduate assistants.
See \url{www.umbc.edu/hpcf}
for more information on HPCF and the projects using its resources.

Released in Summer 2014,
the current machine in HPCF is the 240-node distributed-memory cluster maya.
The newest components of the cluster are the 72~nodes % in maya (2013)
with two eight-core 2.6~GHz Intel E5-2650v2 Ivy Bridge CPUs and 64~GB memory;
these include 19~hybrid nodes
with two state-of-the-art NVIDIA K20 GPUs (graphics processing units)
designed for scientific computing and
19~hybrid nodes with two cutting-edge 60-core Intel Xeon Phi 5110P accelerators.
These new nodes are connected along with the 84~nodes % in maya (2009)
with two quad-core 2.6~GHz Intel Nehalem X5550 CPUs and 24~GB memory 
by a high-speed quad-data rate (QDR) InfiniBand network
for research on parallel algorithms.
The remaining 84~nodes % in maya (2010)
with two quad-core 2.8~GHz Intel Nehalem X5560 CPUs and 24~GB memory
are designed for the fastest number crunching and are connected by
a dual-data rate (DDR) InfiniBand network.
All nodes are connected via InfiniBand to
central storage of more than 750~TB.


Download: facility_long.tex
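
As an illustration, here is a minimal sketch of pulling the downloaded description into the facilities section of a proposal or paper. The document class, package choice, and section title are assumptions for this example, not part of the HPCF materials; note that the \url command used in facility_long.tex requires the url or hyperref package.

\documentclass{article}
\usepackage{url}  % provides \url, which facility_long.tex uses

\begin{document}

\section*{Facilities: Computing Resources}
% Read in the downloaded description verbatim, so later updates to the
% file are picked up without copying text by hand.
\input{facility_long}

\end{document}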

Short Description of Facility

Here is a short description of HPCF in one paragraph. The plain text version:
The UMBC High Performance Computing Facility (HPCF)
is the community-based, interdisciplinary core facility for
scientific computing and research on parallel algorithms at UMBC.
The current machine in HPCF is the 240-node distributed-memory cluster maya.
The newest components of the cluster are the 72 nodes
with two eight-core 2.6 GHz Intel E5-2650v2 Ivy Bridge CPUs and 64 GB memory;
these include 19 hybrid nodes
with two state-of-the-art NVIDIA K20 GPUs (graphics processing units)
designed for scientific computing and
19 hybrid nodes with two cutting-edge 60-core Intel Xeon Phi 5110P accelerators.
These new nodes are connected along with the 84 nodes
with two quad-core 2.6 GHz Intel Nehalem X5550 CPUs and 24 GB memory 
by a high-speed quad-data rate (QDR) InfiniBand network
for research on parallel algorithms.
The remaining 84 nodes
with two quad-core 2.8 GHz Intel Nehalem X5560 CPUs and 24 GB memory
are designed for the fastest number crunching and are connected by
a dual-data rate (DDR) InfiniBand network.
All nodes are connected via InfiniBand to
central storage of more than 750 TB.


Download: facility_short.txt
And here is the LaTeX version:
The UMBC High Performance Computing Facility (HPCF)
is the community-based, interdisciplinary core facility for
scientific computing and research on parallel algorithms at UMBC.
The current machine in HPCF is the 240-node distributed-memory cluster maya.
The newest components of the cluster are the 72~nodes
with two eight-core 2.6~GHz Intel E5-2650v2 Ivy Bridge CPUs and 64~GB memory;
these include 19~hybrid nodes
with two state-of-the-art NVIDIA K20 GPUs (graphics processing units)
designed for scientific computing and
19~hybrid nodes with two cutting-edge 60-core Intel Xeon Phi 5110P accelerators.
These new nodes are connected along with the 84~nodes
with two quad-core 2.6~GHz Intel Nehalem X5550 CPUs and 24~GB memory 
by a high-speed quad-data rate (QDR) InfiniBand network
for research on parallel algorithms.
The remaining 84~nodes
with two quad-core 2.8~GHz Intel Nehalem X5560 CPUs and 24~GB memory
are designed for the fastest number crunching and are connected by
a dual-data rate (DDR) InfiniBand network.
All nodes are connected via InfiniBand to
central storage of more than 750~TB.


Download: facility_short.tex

Acknowledgments for Papers

Please include the following acknowledgment in any paper or report that uses HPCF resources. Here is the plain text version:
The hardware used in the computational studies is part of the
UMBC High Performance Computing Facility (HPCF).
The facility is supported by the U.S. National Science Foundation
through the MRI program (grant nos. CNS-0821258 and CNS-1228778)
and the SCREMS program (grant no. DMS-0821311),
with additional substantial support from the
University of Maryland, Baltimore County (UMBC).
See www.umbc.edu/hpcf
for more information on HPCF and the projects using its resources.


Download: acknowledgments.txt
And here is the LaTeX version:
The hardware used in the computational studies is part of the
UMBC High Performance Computing Facility (HPCF).
The facility is supported by the U.S. National Science Foundation
through the MRI program (grant nos.~CNS-0821258 and CNS-1228778)
and the SCREMS program (grant no.~DMS-0821311),
with additional substantial support from the
University of Maryland, Baltimore County (UMBC).
See \verb"www.umbc.edu/hpcf"
for more information on HPCF and the projects using its resources.


Download: acknowledgments.tex
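
As with the facility descriptions, here is a minimal sketch of using the downloaded file in a paper. The unnumbered section heading is an assumption for this example; because the snippet uses \verb, it should stay in ordinary body text rather than inside a footnote or another command's argument.

\section*{Acknowledgments}
% Insert the standard HPCF acknowledgment from the downloaded file.
\input{acknowledgments}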