Eight of sixteen new Sandy Bridge nodes in a new rack.
CACR is pleased to announce the addition of another 256 compute cores to the zwicky cluster, used exclusively by Caltech’s Theoretical Astrophysics (TAPIR) group for simulation of black holes and extreme spacetimes.
During the week of January 6, 2014, 16 new Hewlett-Packard SL250 nodes, each with two 8-core Intel E5-2670 processors, were integrated into the existing cluster.
Zwicky’s compute configuration now totals 2,564 cores – 187 dual-processor Intel X5650 6-core nodes plus 20 dual-processor Intel E5-2670 8-core nodes, all connected with QDR or FDR InfiniBand.
We would like to thank the TAPIR group for continuing to fund, expand, and enhance zwicky resources.
Please see this page for an overview of CACR-managed SXS compute and storage resources.
CACR is pleased to announce that an award of $500,000 from the National Science Foundation Campus Cyberinfrastructure – Network Infrastructure and Engineering (CC-NIE) Program has been made for the Caltech High-performance OPtical Integrated Network (CHOPIN) Project. The funds will be used to purchase networking equipment that brings the most advanced network technologies to the Caltech campus, in particular to support the network needs of experiments, simulations, and observations that generate large amounts of data.
The CHOPIN project plan consists of several major elements. The first is the deployment of SDN (Software-Defined Networking) capable switches with direct connectivity to the California OpenFlow Testbed Network (COTN) for research and development activities in networking, as well as a connection to the Internet2 Advanced Network Services, meant explicitly to support high-throughput data applications and research using large data volumes. This connection will also enable direct access to the nation-wide GENI infrastructure. The CHOPIN network will also connect directly to the DYNES cyber instrument and extend its use within Caltech.
The project plan also includes the deployment of a new 100 Gbps uplink between Caltech and CENIC in support of current and future data-centric research and high-throughput applications. The new 100G capability will also pave the way for integration with routed high-performance services such as the CENIC HPR-L3 and Internet2 Advanced Layer 3 Services. The uplink is complemented by 100G and 40G links from a set of OpenFlow-capable switches to each of several major research groups.
The upgrades in capacity and capability will ensure that the Caltech campus infrastructure remains up to date for the next few years, through direct access to the new high-capacity national backbones of Internet2, the Energy Sciences Network, and possibly National LambdaRail.
The CHOPIN project infrastructure will provide fertile ground for Caltech faculty and their students, serving as a research tool for both computer and computational scientists. It will benefit the entire Caltech campus, and especially groups engaged in Big Data, high-throughput research.
CACR is pleased to announce that an award of $233,000 from the National Science Foundation Astronomy & Astrophysics Research Grants Program has been made to the Catalina Real-Time Transient Survey-II (CRTS-II). The project, entitled “Open Exploration of the Time Domain with the Catalina Real-Time Transient Survey”, will analyze data from the upgraded telescopes of the Catalina Sky Survey (CSS, http://www.lpl.arizona.edu/css/), which scour the sky for potentially life-threatening asteroids. CRTS-II will leverage this increased data stream to discover and study objects and phenomena such as supernovae and massive accreting black holes (AGN).
Transient discoveries from the Catalina Real-Time Transient Survey.
The CRTS-II project follows on from the NSF-funded CRTS project, which began in 2007 and has discovered more than 7,000 highly variable and transient sources, including significant scientific discoveries of super-luminous supernovae. CRTS-II will continue to provide a steady open stream of astronomical events, available to the entire community in real time, while further expanding the discovery space for time-domain astrophysics. The project will more than triple current sensitivity to transient objects and phenomena changing on time scales from tens of minutes to decades.
Representing CACR’s expertise in time-domain astronomy, project co-PI and research scientist Andrew Drake manages the data analysis, mining for astrophysical transients such as unusual types of supernovae, rapid transients, and outbursts of beamed active galactic nuclei (blazars). This work also employs a search strategy aimed at enabling the serendipitous discovery of new types of objects and phenomena. Senior Computational Scientist Matthew Graham is characterizing the archival collection of 500 million sources using new statistical measures to study the broad phenomenology of behavior in the time domain.
“The project will more than triple current sensitivity to transient objects and phenomena changing on time scales from tens of minutes to decades,” says co-PI Professor George Djorgovski. More information about the project can be found on the CRTS website.
Monday August 19, 2013
2PM – 100 Powell-Booth
“Simulations, Databases and Data-Intensive Computing”
Department of Physics and Astronomy
Institute for Data Intensive Engineering and Science
Johns Hopkins University
We are researching an architecture for a data-intensive computer, a system capable of performing computational tasks that require O(N log N) arithmetic operations, where N is the size of the data set and is very large.
We have developed the MPI-DB software library, which provides scientific computing processes with an easy-to-use abstraction layer to read, write, and perform general computations on large arrays stored in a database.
MPI-DB is a client-server framework being developed as a prototype operating system for the data-intensive computer, which consists of a computational front end, a fast network, and a database back end. MPI-DB is being used by the Johns Hopkins turbulence research group to automatically create databases containing simulation results and to expose the stored results for subsequent analysis by researchers.
In this talk we will describe MPI-DB and discuss the challenges in the design of an architecture for a data-intensive computer.
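To make the idea of such an abstraction layer concrete, here is a minimal, self-contained sketch of the pattern the abstract describes: scientific code writing and reading numeric arrays through a database back end. This toy uses SQLite and the Python standard library only; all names here (`ArrayStore`, `write_array`, `read_array`) are illustrative assumptions, not MPI-DB's actual API, and the real system is a client-server MPI framework rather than an in-process library.

```python
# Toy sketch of a database-backed array store, loosely in the spirit
# of the abstraction layer described above. Hypothetical names; not
# the MPI-DB API.
import sqlite3
import struct

class ArrayStore:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS arrays (name TEXT PRIMARY KEY, data BLOB)"
        )

    def write_array(self, name, values):
        # Pack the array as a flat sequence of 8-byte doubles and upsert it.
        blob = struct.pack(f"{len(values)}d", *values)
        self.db.execute(
            "INSERT OR REPLACE INTO arrays (name, data) VALUES (?, ?)",
            (name, blob),
        )
        self.db.commit()

    def read_array(self, name):
        # Fetch the blob and unpack it back into a list of doubles.
        row = self.db.execute(
            "SELECT data FROM arrays WHERE name = ?", (name,)
        ).fetchone()
        n = len(row[0]) // 8  # 8 bytes per double
        return list(struct.unpack(f"{n}d", row[0]))

store = ArrayStore()
store.write_array("velocity_x", [0.5, 1.5, 2.5])
print(store.read_array("velocity_x"))
```

The point of the sketch is the interface shape: simulation code names an array and hands it to the store, and analysis code later retrieves it by name without knowing how the back end lays out the bytes.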
As part of her master’s thesis, computer science senior Judy Mou worked alongside K. Mani Chandy, Simon Ramo Professor and professor of computer science, and Julian Bunn, CACR principal computational scientist, to develop an Android phone and tablet application that could be used to keep members of the Caltech community—and, ultimately, similar communities—informed about crisis situations, such as local earthquakes, fires, and pollution hazards. Her application, called a situational awareness application, combines this hazard information with dynamically updated, individualized content, such as traffic on the user’s commute, campus events, or news feeds that the user has subscribed to.
Read more at http://www.caltech.edu/content/senior-spotlight-mou