HPCF News:

tara Places 99th in November 2012 Graph 500 List
November 13, 2012
The HPCF cluster tara was ranked 99th on the Graph 500 benchmark, which measures memory access performance, in a study carried out by HPC REU students in Summer 2012. More ...

MRI Grant Awarded by NSF
September 04, 2012
The MRI proposal submitted in January with 30 researchers from 10 departments and research centers across the campus was successful! More ...
Welcome:
The UMBC High Performance Computing Facility (HPCF) is
the community-based, interdisciplinary core facility for scientific
computing and research on parallel algorithms. Started in 2008 by more
than 20 researchers from more than ten departments and research centers
from all three colleges, it is supported by faculty contributions,
federal grants, and the UMBC administration. The facility is open to
UMBC researchers at no charge. Researchers can purchase nodes for
long-term priority access. System administration is provided by the UMBC
Division of Information Technology, and users have access to consulting
support provided by a dedicated full-time GRA. Installed in Fall 2009,
the current machine is the 86-node distributed-memory cluster tara, with
two quad-core Intel Nehalem processors and 24 GB of memory per node, an
InfiniBand interconnect, and 160 TB of central storage.
This webpage provides information about the facility, its systems, research projects, resources for users, and contact information.