•

Rendering of a rapidly spinning, gravitational-wave-emitting newborn neutron star. Simulation: Ott et al. 2007. Rendering: Ralf Kaehler, ZIB/AEI/KIPAC, 2007.
This month CACR has installed and configured a new cluster in the Powell-Booth Laboratory for Computational Science. The system is specifically configured to meet the application needs of Caltech’s Theoretical AstroPhysics Including Relativity (TAPIR) group in the Physics, Mathematics, and Astronomy Division.
The MRI2 cluster is funded by an NSF MRI-R2 award with matching funds from the Sherman Fairchild Foundation. The configuration, integrated by Hewlett-Packard and CACR’s operations team, consists of 1536 Intel X5650 compute cores in 128 dual-socket, hex-core Westmere nodes equipped with a total of ~3 TB of memory, connected via QDR InfiniBand (IB). It includes 100 TB of high-performance, high-reliability disk space, accessed over IB through a Panasas rack.
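As a quick sanity check on these figures, here is a minimal back-of-the-envelope sketch; the even split of memory across nodes is an assumption made for illustration, not a published detail of the configuration.

```python
# Back-of-the-envelope check of the quoted Zwicky configuration.
nodes = 128               # dual-socket Westmere nodes
sockets_per_node = 2
cores_per_socket = 6      # the Intel X5650 is a hex-core part

total_cores = nodes * sockets_per_node * cores_per_socket
print(total_cores)        # 1536, matching the quoted core count

total_memory_tb = 3.0     # "~3 TB" of aggregate memory
# Assuming memory is spread evenly across nodes (an assumption):
memory_per_node_gb = total_memory_tb * 1024 / nodes
print(f"~{memory_per_node_gb:.0f} GB per node")   # ~24 GB
```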
The research project using the new cluster, Simulating eXtreme Spacetimes: Facilitating LIGO and Enabling Multi-Messenger Astronomy, is led by Professor Christian Ott. The co-investigators on the MRI award are Dr. Mark Scheel of TAPIR and CACR’s director, Dr. Mark Stalzer. The research will explore the dynamics of spacetime curvature, matter, and radiation at high energies and densities. Central aspects of the project are the simulation of black hole binary coalescence, neutron star-black hole inspiral and merger, and the collapse of massive stars leading to core-collapse supernovae or gamma-ray bursts. Key results will be predicted gravitational waveforms from these phenomena, to enable LIGO gravitational wave searches and to facilitate the extraction of (astro-)physics from observed signals.
The MRI2 cluster is named Zwicky, in honor of Caltech Astrophysics Professor Fritz Zwicky (1898-1974), who discovered supernovae and who was the first to explain how supernovae can be powered by the collapse of a massive star into a neutron star. Zwicky also discovered the first evidence for dark matter in our universe, proposed to use supernovae as standard candles to measure distances in the universe, and suggested that galaxy clusters could act as gravitational lenses.
•

One of the first proton-proton collisions seen in the CMS detector, displayed using the collaboration's software tool "Fireworks".
First beam circulated in the world’s most powerful particle accelerator, the Large Hadron Collider (LHC) at CERN, on 20 November 2009: a clockwise-circulating beam was established at ten o’clock that evening, followed by a beam circulating in the other direction a few hours later. When the proton beams are made to collide at the centre of each of the four LHC experiments, the electronic data captured from the detectors will flow at rates ranging from a few hundred MBytes/sec to over one GByte/sec.
Global transport and analysis of this imminent stream of physics data is one of the major computing and networking challenges facing particle physics experiments. Leading-edge explorations such as these require advances in all system components, from detectors to remote data analysis. CACR research staff have been involved in demonstrating technologies that reliably deliver over 100 Gb/s sustained from worldwide sources to a single analysis point. CACR also hosts a major "Tier2" computing center, dedicated to receiving LHC datasets over twin 10 Gbps networks from CERN and running applications that analyze the events they contain.
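To put these rates in perspective, here is a rough, illustrative calculation; the 100 TB dataset size is a hypothetical figure chosen for the example, and real transfers would typically run below full link capacity.

```python
# Rough arithmetic for the rates quoted above.
# Network rates are quoted in bits/s; detector output in bytes/s.

def gbps_to_gbytes_per_s(gbps: float) -> float:
    """Convert gigabits per second to gigabytes per second."""
    return gbps / 8.0

detector_rate_gb_s = 1.0            # "over one GByte/sec" per experiment
seconds_per_day = 86_400
daily_volume_tb = detector_rate_gb_s * seconds_per_day / 1000
print(f"~{daily_volume_tb:.0f} TB/day at 1 GB/s")            # ~86 TB/day

tier2_links_gbps = 2 * 10           # twin 10 Gbps links to the Tier2
tier2_gb_s = gbps_to_gbytes_per_s(tier2_links_gbps)          # 2.5 GB/s
dataset_tb = 100                    # hypothetical dataset size
hours = dataset_tb * 1000 / tier2_gb_s / 3600
print(f"~{hours:.0f} h to move {dataset_tb} TB at {tier2_links_gbps} Gbps")
```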
•
Building on seven years of record-breaking developments, an international team of physicists, computer scientists, and network engineers led by the California Institute of Technology (Caltech), with partners from Michigan, Florida, Tennessee, Fermilab, Brookhaven, CERN, Brazil, Pakistan, Korea, and Estonia, set new records for sustained data transfer among storage systems during the SuperComputing 2008 (SC08) conference recently held in Austin, Texas.
Caltech’s exhibit at SC08, staged by CACR and the High Energy Physics (HEP) group, demonstrated new applications and systems for globally distributed data analysis for the Large Hadron Collider (LHC) at CERN. Also featured were Caltech’s global monitoring system MonALISA and its collaboration system EVO (Enabling Virtual Organizations), near-real-time simulations of earthquakes in the Southern California region, experiences in time-domain astronomy with VOEventNet and Google Sky, and recent results in multiphysics multiscale modeling from the PSAAP project.
A highlight of the exhibit was the HEP team’s record-breaking demonstration of storage-to-storage data transfers over wide area networks from a single rack of servers on the exhibit floor. The team’s demonstration of “High Speed LHC Data Gathering, Distribution and Analysis Using Next Generation Networks” achieved a bidirectional peak throughput of 114 gigabits per second (Gbps) and a sustained data flow of more than 110 Gbps among clusters of servers on the show floor and at Caltech, Michigan, CERN (Geneva), Fermilab (Batavia), Brazil (Rio de Janeiro, São Paulo), Korea (Daegu), Estonia, and locations on the US LHCNet network in Chicago, New York, Geneva, and Amsterdam.
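For a sense of scale, a short sketch converts the sustained rate into data volume; this is simple unit arithmetic and assumes the full 110 Gbps aggregate is useful payload.

```python
# What a sustained 110 Gbps flow amounts to in volume terms.
sustained_gbps = 110
gbytes_per_s = sustained_gbps / 8             # 13.75 GB/s aggregate
tb_per_hour = gbytes_per_s * 3600 / 1000
print(f"{gbytes_per_s:.2f} GB/s, ~{tb_per_hour:.1f} TB per hour")  # ~49.5 TB/h
```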
The image shows a sample of the results obtained at the Caltech booth, as monitored by MonALISA, flowing in and out of the servers at the booth. The feature in the middle of the graph resulted from briefly losing the local session at SC08 that was driving some of the flows.
Read more in the Caltech Press Release
•

The first beam in the Large Hadron Collider at CERN was successfully steered around the full 27 kilometers of the world’s most powerful particle accelerator at 10:28 on September 10. This historic event marks a key moment in the transition from over two decades of preparation to a new era of scientific discovery. (Read the full story on the Caltech website.)
The LHC experiments are using a globally distributed, “tiered” grid of computational resources to process the proton collision data. This multi-tier design was proposed and prototyped by Caltech in the late 1990s and is now universally adopted. The prototype cluster in the CACR machine room has been continually expanded and enhanced since then, and it is now one of the major Tier2 centres in the world, with over 1M SPECInt2k of compute power, several hundred terabytes of storage space, and multiple 10 Gigabit network connections to other LHC sites in the US and to CERN. First data from the initial LHC tests have already arrived for processing and analysis on the Caltech Tier2.
Local press coverage, featuring commentary and an interview with CACR Principal Computational Scientist Julian Bunn:
For further information on the Large Hadron Collider, see the CERN website.
•
The image shows a MonALISA plot of the aggregated network traffic to the Caltech booth during and after the Bandwidth Challenge. (The initial blue region at the left of the graph is the BWC entry.)
An international team of physicists, computer scientists, and network engineers led by the California Institute of Technology, CERN, and the University of Michigan, with partners at the University of Florida and Vanderbilt University, as well as participants from Brazil and Korea, joined forces to set new records for sustained data transfer between storage systems during the SC06 Bandwidth Challenge.
The high-energy physics team’s demonstration of “High Speed Data Gathering, Distribution and Analysis for Physics Discoveries at the Large Hadron Collider” achieved a peak throughput of 17.77 gigabits per second (Gbps) between clusters of servers on the show floor and at Caltech. Following the rules set for the SC06 Bandwidth Challenge, the team used a single 10-Gbps link, provided by National LambdaRail, that carried data in both directions. Sustained throughput throughout the night before the Bandwidth Challenge exceeded 16 Gbps (two gigabytes per second) using just 10 pairs of small servers sending data at 9 Gbps from Tampa to Caltech, and eight pairs of servers sending 7 Gbps in the reverse direction. (Read more in the Caltech Press Release.)
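A small sketch makes the arithmetic behind this result explicit: the link is full duplex, so each direction has its own 10 Gbps of capacity, and the two opposing aggregate flows sum to the quoted 16 Gbps.

```python
# Sanity check: a full-duplex 10 Gbps link provides up to 10 Gbps
# in EACH direction, so 9 Gbps one way plus 7 Gbps the other fits.
link_capacity_gbps = 10.0
to_caltech_gbps = 9.0    # aggregate of 10 server pairs, Tampa -> Caltech
to_tampa_gbps = 7.0      # aggregate of 8 server pairs, reverse direction

assert to_caltech_gbps <= link_capacity_gbps
assert to_tampa_gbps <= link_capacity_gbps

total_gbps = to_caltech_gbps + to_tampa_gbps
print(f"{total_gbps:.0f} Gbps bidirectional = {total_gbps / 8:.0f} GB/s")
```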