UMBC High Performance Computing Facility
Please note that this page is under construction. We are documenting the
240-node cluster maya that will be available in Spring 2014. Currently,
the cluster tara is still available. Please see the 2013 Resources Pages
under the Resources tab.
HPCF News Archive
Update on Cluster Extension
December 13, 2013
The new cluster equipment has begun to arrive from Dell. HPCF personnel will be
working with DoIT to update the documentation on this web page while the
equipment is set up and tested. During this time, much of the web site will
be "under construction" (especially under
Resources for Tara Users
), so please bear with us during the transition.
Cluster Extension Ordered
October 30, 2013
HPCF has selected Dell
as the vendor for its
next-generation computing equipment, which will be used to extend the cluster
tara. The new equipment includes 19 CPU/GPU compute nodes, 19 CPU/Phi nodes,
and 34 CPU-only nodes, as well as high performance networking equipment.
Each compute node carries two Intel Ivy Bridge processors with eight cores
apiece, for a total of 16 cores per node. The GPU and Phi coprocessors present
exciting new opportunities for high performance computing. Each CPU/GPU node
carries two NVIDIA Tesla K20 GPUs, while each CPU/Phi node carries two
Phi 5110P coprocessors. The web page will be updated in the coming months
as the equipment makes its way to UMBC and becomes operational.
June 16, 2013
HPCF RA Andrew Raim presented a workshop to the Department of Mathematics and
Statistics on an R package called pbdR for high performance computing.
The handout and example codes are available as
Technical Report HPCF-2013-2
Abstract for Workshop
pbdR ("Programming with Big Data in R") is a recent package for high
performance computing in R offered by the Remote Data Analysis and
Visualization Center (RDAV) at the National Institute for Computational
Sciences (NICS). pbdR, like its predecessor Rmpi, presents the appealing
possibility of high level Message Passing Interface (MPI) programming in
the open source math/statistics environment R. pbdR also supports higher
level functionality for distributed dense matrix operations and scalable
linear algebra. We have recently installed pbdR on the cluster tara at the
High Performance Computing Facility. In this informal
tutorial, we will demonstrate running pbdR programs through the scheduler
on tara and some of the basic capabilities of the software.
HPCF Cluster tara #15 on Green Graph 500 List
April 02, 2013
The cluster tara has placed #15 on the (unofficial)
Green Graph 500 List.
This list ranks systems on the Graph 500 list by the efficiency
of their energy consumption.
In Summer 2012, a team of undergraduate students in the REU Site
placed tara on the Graph 500 list, from which the ranking in
the Green Graph 500 list is derived.
tara Places 98th in November 2012 Graph 500 List
November 13, 2012
A team of undergraduate students participating in the Summer 2012
HPC REU Site took on the challenge of implementing and running the
Graph 500 benchmark
to assess the performance of the cluster tara.
This benchmark measures performance in accessing memory,
as opposed to CPU performance
which is emphasized in more traditional benchmarks for HPC.
The submission was accepted,
and tara ranked 98th on the November 2012 list. For details, see
technical report HPCF-2012-11.
The ranking was announced at the
Supercomputing 2012 conference
in Salt Lake City.
Pictured are (left to right) the client David Mountain,
mentor Dr. Matthias Gobbert, and students
Jordan Angel, Nathan Wardrip, and Amy Flores,
holding the official certificate for the ranking.
Not able to join them at the conference were
client Richard Murphy and student Justine Heritage.
We acknowledge funding support for the student travel
from the National Security Agency and the National Science Foundation.
MRI Grant Awarded by NSF
September 04, 2012
The MRI proposal submitted in January with
30 researchers from 10 departments and research centers across the campus
has been funded. The National Science Foundation awarded the grant on
September 04, 2012 with $300,000 from the foundation.
With mandatory institutional cost-sharing of $128,571 from UMBC,
this contributes a total of $428,571 towards sustaining HPCF as
a core facility into the future.
Andrew Raim presents at JSM 2012
July 31, 2012
HPCF RA Andrew Raim gave a talk at the
2012 Joint Statistical Meetings
in San Diego, CA. The talk "An Approximate Fisher Scoring
Algorithm for Finite Mixtures of Multinomials"
was based on a technical report.
REU Site on High Performance Computing Awarded
June 20, 2012
We received the formal award of the three-year renewal of the
REU Site: Interdisciplinary Program in High Performance Computing
on June 20, 2012.
This Research Experiences for Undergraduates summer program
is closely associated with HPCF and uses its cluster as an integral
part of the program.
The grant for summer programs in 2012, 2013, and 2014
extends the program to 12 funded participants
and is jointly funded by the
National Science Foundation (NSF) and the National Security Agency (NSA).
Sai Popuri presents at UseR! 2012
June 14, 2012
Sai Popuri gave a talk at the
UseR! 2012 conference in Nashville, TN. The talk "Implementation of the binomial
method of option pricing using parallel computing in R"
was based on a project
at HPCF with
Dr. Nagaraj Neerchal.
REU Site on High Performance Computing Rebudgeted
February 06, 2012
The REU Site: Interdisciplinary Program in High Performance Computing
that is closely associated with HPCF and uses its cluster
received a request from the National Science Foundation (NSF) to
rebudget at the level of supporting 12 participants each year in
2012, 2013, and 2014.
This includes additional support from the National Security Agency (NSA)
that permits 50% more participants to be supported than in 2010 and 2011.
MRI 2012 Proposal with 30 Investigators
January 26, 2012
The HPCF user community, with cost-sharing support from
the Vice Presidents for IT and for Research,
submitted a new proposal to the National Science Foundation's MRI program.
The proposal requests support for one iDataPlex rack with
42 state-of-the-art hybrid compute nodes,
each with two eight-core Intel Nehalem CPUs
and two NVIDIA GPUs, an extension of the InfiniBand interconnect,
and additional central storage.
This proposal is supported by
30 researchers from 10 departments and research centers across the campus.
Dr. Nagaraj Neerchal Appointed Entrepreneurship Fellow
September 01, 2011
Dr. Nagaraj Neerchal, Director of the Center for Interdisciplinary Research and
Consulting (CIRC), was selected as an Entrepreneurship Fellow for 2012 by the
Alex. Brown Center for Entrepreneurship, along with Dr. Amy Froide (History) and
Dr. George Karabatis (IS). Entrepreneurship Fellows raise the profile of
entrepreneurship on campus, for which Dr. Neerchal is a prime example
through his engagement in CIRC.
HPCF RA for 2011-12
August 07, 2011
A Ph.D. student in Statistics
continues as HPCF RA for 2011-12. HPCF user support is reached by
e-mail to email@example.com, which for convenience
is the same address used to report system problems.
You need to be in the UMBC domain to send mail to this address.
User meeting on scheduling policy
June 15, 2011
The user meeting on the new scheduling policy on June 15
was attended by 16 members of the user community.
First, Dr. Gobbert explained the philosophy behind the new design:
any user can submit as many jobs as desired;
when another user submits a job, it should have a higher priority
than the already queued jobs, assuming equal priority in other respects.
This and other principles are embodied in SLURM's fair-share feature
that we are using now.
Then, HPCF RA Andrew Raim detailed the syntax of the SLURM submission script
needed to take advantage of the fair-share feature.
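As a rough illustration of what such a submission script looks like, a minimal sbatch file might read as follows. Note that the partition and QOS names below are hypothetical placeholders, not tara's actual configuration; check `sinfo` and your cluster's documentation for the real values.

```shell
#!/bin/bash
# Minimal SLURM job script sketch. The partition and QOS names are
# placeholders -- substitute the ones configured on your cluster.
#SBATCH --job-name=fairshare_demo
#SBATCH --partition=batch        # hypothetical partition name
#SBATCH --qos=normal             # hypothetical QOS name
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --output=slurm.out
#SBATCH --error=slurm.err

# Launch the program on the allocated tasks
srun ./my_parallel_program
```

Under fair-share scheduling, the priority of a queued job is then adjusted automatically according to the submitting user's recent usage relative to other users.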
Changes to scheduling and usage policy
June 14, 2011
Major updates to the scheduling policy on tara were implemented on Tuesday,
June 14. These include adoption of fair-share priority, QOSes, and a change
to the available partitions. All users should check the following pages:
Information for the previous "2010" policies is located here:
ProbStatDay 2011 Conference
April 22-23, 2011
The Department of Mathematics and Statistics hosted the
5th Annual Probability and Statistics Day at UMBC.
One of the conference attractions was the student poster presentation, where
HPCF RA Andrew Raim collected third prize for his poster
"The Approximate Fisher Information Matrix for Multinomial Mixture Models".
This work with advisor
Dr. Nagaraj Neerchal
featured simulations that were run in parallel using the
SNOW package for R.
MRI 2011 Proposal Submitted
January 27, 2011
A new MRI proposal was submitted to extend the 2009 MRI proposal.
The new proposal features strong faculty participation, with
a total of 41 researchers from 15 departments and research centers across
the campus. These totals include departments new to the effort such as
Information Systems and the Institute of Fluorescence.
COMSOL Conference 2010 Boston
October 6-9, 2010
HPCF RA David Trott and Dr. Matthias Gobbert attended the COMSOL Conference
2010 in Boston, where they presented their conference paper
"Conducting finite element convergence studies
using COMSOL 4.0". This is an example of the collaboration between CIRC and HPCF.
HPCF User Meeting and Training
November 19, 2010
HPCF users met for a presentation by Dr. Gobbert on the features of tara.
Complete details of the performance results presented can be found in
tech. reports HPCF-2010-2 and HPCF-2010-4.
ENGR 104 was filled to capacity with 21 attendees.
The photos show users with tara during the tour of the
computer room and during the hands-on training session with HPCF RAs
Andrew Raim and David Trott.
Two HPCF RAs for 2010-11
August 19, 2010
Two graduate students, in the Statistics and Applied Mathematics programs
respectively, have been appointed as HPCF RAs for 2010-11. The HPCF RAs are
available to provide user support and help with research on tara. Please
send e-mail to firstname.lastname@example.org to make initial contact with them.
New cluster tara released
April 25, 2010
The new cluster tara was released for public use.
Early users have already been using it for the semester.
With the transfer and final synchronization of all data
to the new central storage completed during the scheduled downtime
on Tuesday, April 20, 2010, the cluster was ready to release.
The first user meeting is scheduled for Friday, April 30, 2010.
We thank the DoIT staff for their tireless work in the setup of the cluster
and its storage and cooling.
Dr. Matthias Gobbert Wins University System Award for Mentoring
April 16, 2010
Dr. Matthias Gobbert is the recipient of the 2010
University System of Maryland Board of Regents Faculty Award for Excellence in
Mentoring, in recognition of his success in
using the Center for Interdisciplinary Research and Consulting,
the UMBC High Performance Computing Facility, and other initiatives
in leading many graduate students to their first publications.
Dr. Gobbert was honored at the
UMBC Faculty and Staff Awards Ceremony on April 07, and the award was
formally given as part of the Board of Regents meeting on April 16. The photo
shows (from left to right) Board of Regents Chair Clifford M. Kendall,
UMBC Provost Elliot Hirshman, Dr. Gobbert,
and University System Chancellor William Brit Kirwan at the Regents meeting.
Research Highlight by the Sparling Group
January 05, 2010
Research on hurricane modeling by Lynn Sparling (Physics)
and her group is summarized in a research highlight.
HPCF posts job ad
November 12, 2009
HPCF is starting the search for a post-doctoral research associate
to extend the facility's user support significantly
through collaborative research in a consulting approach.
Please help identify suitable candidates! The complete job ad is available online.
HPCF releases new webpage
November 11, 2009
HPCF releases its new webpage design at www.umbc.edu/hpcf.
The new, more flexible design was developed in
collaboration with our partners
and features more attractive visuals and a current look tied in
with UMBC's colors.
This webpage is designed to document the new cluster tara,
which is in testing now and will be released to the public soon.
Information about the existing system hpc can be found on its own pages.