•
The MRI2 cluster, which meets the application needs of Caltech’s Theoretical AstroPhysics Including Relativity (TAPIR) group, has expanded to include an additional 40 compute nodes. The configuration, integrated by Hewlett-Packard and CACR’s operations team, now consists of 2016 Intel X5650 compute cores in 168 dual Westmere hex-core nodes, equipped with ~4 TB of memory (2 GB/core). The cluster is connected by a 2:1 fat-tree QDR InfiniBand network, with high-speed access to 80 TB (usable) of high-performance Panasas storage and 48 TB of archival storage. For more information about the system, see the CACR Facilities & Operations page.
The MRI2 cluster, called “Zwicky”, provides compute and storage resources for research codes investigating core-collapse supernovae, gamma-ray bursts, black holes, and neutron stars.
•

Rendering of a rapidly spinning, gravitational-wave-emitting newborn neutron star. Simulation: Ott et al. 2007. Rendering: Ralf Kaehler, ZIB/AEI/KIPAC, 2007.
This month, CACR installed and configured a new cluster in the Powell-Booth Laboratory for Computational Science. The system is specifically configured to meet the application needs of Caltech’s Theoretical AstroPhysics Including Relativity (TAPIR) group in the Physics, Mathematics, and Astronomy Division.
The MRI2 cluster is funded by an NSF MRI-R2 award with matching funds from the Sherman Fairchild Foundation. The configuration, integrated by Hewlett-Packard and CACR’s operations team, consists of 1536 Intel X5650 compute cores in 128 dual Westmere hex-core nodes equipped with a total of ~3 TB of memory, connected via QDR InfiniBand (IB). It includes 100 TB of high-performance, high-reliability disk space, accessed via IB through a Panasas rack.
The research project using the new cluster, Simulating eXtreme Spacetimes: Facilitating LIGO and Enabling Multi-Messenger Astronomy, is led by Professor Christian Ott. The co-investigators on the MRI award are Dr. Mark Scheel of TAPIR and CACR’s director, Dr. Mark Stalzer. The research will explore the dynamics of spacetime curvature, matter, and radiation at high energies and densities. Central project aspects are the simulation of black hole binary coalescence, neutron star-black hole inspiral and merger, and the collapse of massive stars leading to core-collapse supernovae or gamma-ray bursts. Key results will be the prediction of gravitational waveforms from these phenomena to enable LIGO gravitational wave searches and to facilitate the extraction of (astro-)physics from observed signals.
The MRI2 cluster is named Zwicky, in honor of Caltech Astrophysics Professor Fritz Zwicky (1898-1974), who discovered supernovae and who was the first to explain how supernovae can be powered by the collapse of a massive star into a neutron star. Zwicky also discovered the first evidence for dark matter in our universe, proposed to use supernovae as standard candles to measure distances in the universe, and suggested that galaxy clusters could act as gravitational lenses.
•
Cellcenter and SHC users now have access to the Intel Cluster Toolkit 3.2, including the compiler suite v11.1. We have a two-seat license. “use intel” will load this package into your environment. More information about the Intel Cluster Toolkit is available from Intel.
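As a quick way to confirm the toolkit works once loaded, a minimal MPI “hello world” such as the sketch below can be compiled and run. The wrapper name (mpiicc) and launch command shown in the comments are assumptions about the local Intel MPI installation, not site documentation; check the cluster guides for the exact invocation.

    /* hello_mpi.c - minimal smoke test for a freshly loaded MPI toolchain.
     * Hypothetical build and launch (wrapper and launcher names depend on
     * the local installation):
     *   mpiicc -o hello_mpi hello_mpi.c
     *   mpirun -np 4 ./hello_mpi
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, namelen;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                      /* start MPI */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);        /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);        /* total number of ranks */
        MPI_Get_processor_name(name, &namelen);      /* host running this rank */

        printf("rank %d of %d on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }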
PathScale compilers are being retired on Feb 25, 2010. Users are encouraged to rebuild their codes using the Intel, PGI, or GNU compilers (a quick compiler check is sketched after the package lists below). Specifically, these PathScale-based packages will be retired on Feb 25:
- pathscale (alias for pathscale 3.2)
- pathscale_[3.1,3.2]
- openmpi_1.3.3_pathscale
- openmpi_1.3.3_pathscale_3.2
- openmpi_1.3.4_pathscale
- openmpi_1.4_pathscale
- openmpi_pathscale (alias for openmpi_1.3.3_pathscale_3.2)
Additional packages that will be retired on Feb 25, superseded by newer, compatible releases that include bug fixes:
- openmpi_1.3.2 (alias for openmpi_1.3.2_gcc)
- openmpi_1.3.2_gcc
- openmpi_1.3.4_gcc
- openmpi_1.3.4_intel
- openmpi_1.3.4_pgi
- pgi_8.0
- tecplot-2006r2
- tecplot-2009r1
- totalview_8.6
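For users rebuilding codes, the small C check below is one way to confirm which compiler family actually built a binary. It relies on the standard vendor-defined preprocessor macros (__INTEL_COMPILER, __PGI, __PATHCC__, __GNUC__); exact version macros may vary between releases, so treat this as a sketch rather than an exhaustive test.

    /* which_compiler.c - reports which compiler family built this binary,
     * as a sanity check that a rebuilt code picked up Intel, PGI, or GNU
     * rather than the retiring PathScale toolchain. The checks are ordered
     * so that compilers which also define __GNUC__ for compatibility
     * (Intel, PathScale) are identified first.
     */
    #include <stdio.h>

    int main(void)
    {
    #if defined(__INTEL_COMPILER)
        printf("Built with the Intel compiler (version macro %d)\n", __INTEL_COMPILER);
    #elif defined(__PGI)
        printf("Built with the PGI compiler\n");
    #elif defined(__PATHCC__)
        printf("Built with the PathScale compiler - please rebuild\n");
    #elif defined(__GNUC__)
        printf("Built with GCC %d.%d\n", __GNUC__, __GNUC_MINOR__);
    #else
        printf("Built with an unrecognized compiler\n");
    #endif
        return 0;
    }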
•
CACR’s support staff will be observing Institute holidays and special release days from Dec 25 to Jan 3. During the break, we will check e-mail periodically, primarily watching for support issues of a critical nature. On Jan 4, we return to full staffing and regular support operations. Wishing you all a peaceful holiday season!
•
Important Information for SHC Users
As of Sept 8, 2009, SHC has been transitioned to the new software stack (RHEL + OpenIB). There are currently 115 core4 nodes and 65 core8 nodes in production. For more information, please visit the SHC Getting Started / System Guide.