HPCF News:
HPCF Cluster tara #15 on Green Graph 500 List
April 02, 2013
The cluster tara has placed #15 on the (unofficial) Green Graph 500 List.
tara Places 98th on November 2012 Graph 500 List
November 13, 2012
The HPCF cluster tara was ranked 98th on the Graph 500 benchmark, which measures memory access performance, in a study carried out by HPC REU students in Summer 2012.
Welcome:
The UMBC High Performance Computing Facility (HPCF) is
the community-based, interdisciplinary core facility for scientific
computing and research on parallel algorithms. Started in 2008 by more
than 20 researchers from more than ten departments and research centers
from all three colleges, it is supported by faculty contributions,
federal grants, and the UMBC administration. The facility is open to
UMBC researchers at no charge. Researchers can purchase nodes for
long-term priority access. System administration is provided by the UMBC
Division of Information Technology, and users have access to consulting
support provided by a dedicated full-time graduate research assistant. Installed in Fall 2009,
the current machine is tara, an 86-node distributed-memory cluster with
two quad-core Intel Nehalem processors and 24 GB of memory per node, an
InfiniBand interconnect, and 160 TB of central storage.
This webpage provides information about the facility, its systems, research projects, resources for users, and contact information.