•

[Image: Map covering around 29.4 by 24 degrees on the sky, indicating the huge scale of the newly discovered structure. Credit: R. G. Clowes / UCLan]
An international team of astronomers, led by academics from the University of Central Lancashire and including CACR Senior Computational Scientist Matthew Graham, has found the largest known structure in the universe. The large quasar group (LQG) is so large that it would take a vehicle travelling at the speed of light some 4 billion years to cross it. The team published their results in the journal Monthly Notices of the Royal Astronomical Society…
•
Tuesday, November 27, 2012
Powell Booth Room 100
1:30PM
“General-purpose GPU programming with CUDA: an introductory tutorial”
Dr. Hailiang Zhang, Caltech Center for Advanced Computing Research (CACR)
Abstract:
In recent years, many fields of large-scale scientific computation
have benefited greatly from massively parallel programming. This talk
presents a brief introduction and tutorial on the state-of-the-art
general-purpose GPU programming platform, CUDA. The GPU device
architecture, memory hierarchy, and the general CUDA programming model
will be introduced. CUDA implementations of some vector and matrix
operations will be demonstrated as examples, and some CUDA applications
in molecular and biophysical modeling will be presented. The standard
CUDA toolkit libraries and some third-party APIs will also be introduced.
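(The tutorial's own code is not reproduced in this announcement. As a flavor of the programming model it covers, here is a standard, minimal vector-addition example in CUDA, illustrative rather than the speaker's material: each GPU thread computes one element of the output, and the host manages device memory and the kernel launch.)

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each GPU thread computes one element of c = a + b.
__global__ void vecAdd(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard the tail block
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host (CPU) arrays.
    float* ha = (float*)malloc(bytes);
    float* hb = (float*)malloc(bytes);
    float* hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    // Device (GPU global memory) arrays and host-to-device copies.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vecAdd<<<blocks, threadsPerBlock>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);  // implicitly synchronizes
    printf("c[123] = %.1f (expected %.1f)\n", hc[123], 3.0f * 123);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

The <<<blocks, threads>>> launch configuration and the explicit host/device copies correspond to the programming-model and memory-hierarchy topics the abstract mentions.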
•
Thursday, October 18, 2012
Annenberg 105
4PM
“Software Challenges for Extreme Scale Systems”
Vivek Sarkar, E.D. Butcher Chair in Engineering, Professor of Computer Science, Rice University
ABSTRACT:
It is widely recognized that computer systems anticipated in the
2020 timeframe will be qualitatively different from current
and past computer systems. Specifically, they will be built from
homogeneous and heterogeneous many-core processors with hundreds of
cores per chip; their performance will be driven by parallelism
(million-way parallelism just for a departmental server) and
constrained by energy and data movement; and they will be subject to
frequent faults and failures. Unlike previous generations of hardware
evolution, these Extreme Scale systems will have a profound impact on
future software. The software challenges are further compounded by
the need to support new workloads and application domains
that have traditionally not had to worry about large-scale
parallelism.
The challenges across the entire software stack for Extreme Scale
systems are driven by programmability and performance requirements,
and they impose new demands on programming models, languages, compilers, and runtime systems.
We focus on the critical role played by the runtime
system in enabling programmability in the upper layers of the software
stack that interface with the programmer, and in enabling performance
in the lower layers of the software stack that interface with the
hardware. Examples of key runtime primitives will be drawn from early
experiences in the Habanero Multicore Software Research project
(http://habanero.rice.edu), which targets a wide range of homogeneous
and heterogeneous many-core processors. On the programmability front,
the runtime primitives are shown to support important semantic guarantees
for different classes of programs. On the performance front, we show how general structures
for task creation, synchronization, and termination can be implemented in a scalable manner
on many-core processor testbeds that are representative of the challenges
we will face in future Extreme Scale processors.
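(The abstract does not spell out Habanero's primitives, which in that project take the form of async/finish constructs. As a rough sketch of the task creation, synchronization, and termination pattern such a runtime must make scalable, here is a fork-join parallel sum in plain C++, with std::async standing in for a work-stealing task pool; the function name and sequential cutoff are illustrative.)

#include <cstdio>
#include <future>
#include <numeric>
#include <vector>

// Fork-join ("async/finish"-style) parallel sum. Each call spawns a
// child task for the left half (task creation), sums the right half
// itself, then blocks on the child's future (synchronization and
// termination). std::async is only a stand-in for the scalable
// work-stealing runtimes the talk discusses.
long long psum(const long long* data, size_t lo, size_t hi)
{
    if (hi - lo < 100000)  // sequential cutoff: stop creating tasks
        return std::accumulate(data + lo, data + hi, 0LL);
    size_t mid = lo + (hi - lo) / 2;
    auto child = std::async(std::launch::async, psum, data, lo, mid);
    long long right = psum(data, mid, hi);
    return child.get() + right;
}

int main()
{
    std::vector<long long> v(1 << 22, 1);  // 4M ones
    printf("sum = %lld\n", psum(v.data(), 0, v.size()));
    return 0;
}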
BIO:
Vivek Sarkar conducts research in multiple aspects of parallel
software, including programming languages, program analysis, compiler
optimizations, and runtimes for parallel and high-performance computer
systems. He currently leads the Habanero Multicore Software Research
project at Rice University, and serves as Associate Director of the
NSF Expeditions project on the Center for Domain-Specific Computing.
Prior to joining Rice in July 2007, Vivek was Senior Manager of
Programming Technologies at IBM Research. His responsibilities at IBM
included leading IBM’s research efforts in programming models, tools,
and productivity in the PERCS project during 2002–2007 as part of the
DARPA High Productivity Computing Systems program. His past projects
include the X10 programming language, the Jikes Research Virtual
Machine for the Java language, the MIT RAW multicore project, the ASTI
optimizer used in IBM’s XL Fortran product compilers, the PTRAN
automatic parallelization system, and profile-directed partitioning
and scheduling of Sisal programs. Vivek holds a B.Tech. degree from
the Indian Institute of Technology, Kanpur, an M.S. degree from the
University of Wisconsin-Madison, and a Ph.D. from Stanford University.
He became a member of the IBM Academy of Technology in 1995, was named
the E.D. Butcher Chair in Engineering at Rice University in 2007, and
was inducted as an ACM Fellow in 2008. Vivek has been serving as a
member of the US Department of Energy’s Advanced Scientific Computing
Advisory Committee (ASCAC) since 2009.
•
HOWARD & JAN ORINGER SEMINAR
**A Special IST Lunch Bunch Event**
Tuesday, May 29, 2012
12:00 – 1:00pm
105 Annenberg
*Lunch will be provided*
Community Participation in Disaster Management Through Information Technology
K. Mani Chandy
Simon Ramo Professor, Caltech
This talk describes ongoing work by GPS (Geological and Planetary Sciences), Civil Engineering, CACR (the Center for Advanced Computing Research), and Computer Science at Caltech on community sensor networks for disaster management. The research group includes PhD students, undergraduates, research staff, and faculty. The talk looks at questions such as: Can ordinary people, such as school children, deploy sensors to detect shaking from earthquakes? Can a system based on installation and deployment of sensors by self-selected members of the community provide early warning (of a few seconds) of impending shaking? Are the sensors in phones useful for disaster management? What sorts of sensors can be used to detect hazardous radiation? Can “personal hazard stations” in homes and offices be used to sense and respond to disasters such as fires? The talk describes research challenges including the design of sensors, methods of exploiting cloud computing systems, and algorithms for rapid detection of geospatial events. A new initiative by the Computing Community Consortium on disaster management will also be described briefly.
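(The detection algorithms themselves are not given in the abstract. As a purely illustrative baseline for rapid detection of geospatial events, not the group's method, the sketch below buckets sensor picks into coarse space-time cells and declares an event once enough distinct sensors agree; all names, cell sizes, and thresholds are hypothetical.)

#include <cstdio>
#include <map>
#include <set>
#include <tuple>
#include <vector>

// One shaking report ("pick") from a community sensor.
struct Pick { double lat, lon, t; int sensorId; };

// Illustrative baseline: quantize picks into 0.1-degree cells and
// 10-second windows, and declare an event for a cell once k distinct
// sensors report within one window. Requiring agreement across
// independent sensors suppresses single-sensor noise (a dropped phone
// should not trigger an earthquake alert).
using Cell = std::tuple<int, int, long>;

std::vector<Cell> detect(const std::vector<Pick>& picks, size_t k)
{
    std::map<Cell, std::set<int>> reporters;
    std::vector<Cell> events;
    for (const Pick& p : picks) {
        Cell c{ (int)(p.lat * 10), (int)(p.lon * 10), (long)(p.t / 10) };
        reporters[c].insert(p.sensorId);
        if (reporters[c].size() == k)  // threshold just crossed: fire once
            events.push_back(c);
    }
    return events;
}

int main()
{
    std::vector<Pick> picks = {
        {34.14, -118.12, 100.0, 1}, {34.14, -118.12, 102.0, 2},
        {34.14, -118.12, 104.0, 3}, {40.00, -100.00, 103.0, 4},
    };
    printf("events detected: %zu\n", detect(picks, 3).size());
    return 0;
}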
•
Tuesday, May 8, 2012
12:00 – 1:00pm
105 Annenberg
*Lunch will be provided*
SPEAKER:
Matthew Graham
Computational Scientist
Center for Advanced Computing Research, Caltech
TITLE:
Characterizing the Time Domain
ABSTRACT:
The new generation of synoptic sky surveys promises unprecedented amounts of data and information, and automated processing and analysis is a necessity. Light curves, however, can show tremendous variation in their temporal coverage, sampling rates, errors, missing values, etc., which makes comparisons between them difficult and training classifiers even harder. A common approach to tackling this is to characterize a set of light curves via a set of common features and then use this alternate homogeneous representation as the basis for further analysis or training. Many different types of features are used in the literature to capture the information contained in a light curve: moments, flux and shape ratios, variability indices, periodicity measures, and model representations. In this talk, we will review characterization features, with particular attention to the problem of determining accurate and reliable periods for astrophysical objects.
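(No feature definitions appear in this abstract. As an illustrative sketch only, the code below computes a few of the simplest features named above, low-order moments and an amplitude, from a light curve's magnitudes; the struct and function names are hypothetical, and period-finding features would additionally use the observation times.)

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// A few simple characterization features of the kind the abstract
// lists (moments and an amplitude), computed from magnitudes alone.
// Real feature sets also include variability indices and periodicity
// measures, which use the observation times as well.
struct Features { double mean, stddev, skewness, amplitude; };

Features characterize(const std::vector<double>& mag)
{
    const double n = (double)mag.size();
    double mean = 0.0;
    for (double m : mag) mean += m;
    mean /= n;

    double m2 = 0.0, m3 = 0.0;  // second and third central moments
    for (double m : mag) {
        double d = m - mean;
        m2 += d * d;
        m3 += d * d * d;
    }
    m2 /= n; m3 /= n;
    double sd = std::sqrt(m2);
    double skew = (sd > 0.0) ? m3 / (sd * sd * sd) : 0.0;

    auto mm = std::minmax_element(mag.begin(), mag.end());
    return { mean, sd, skew, *mm.second - *mm.first };
}

int main()
{
    std::vector<double> mag = { 14.2, 14.5, 14.1, 14.8, 14.3, 14.6 };
    Features f = characterize(mag);
    printf("mean=%.3f stddev=%.3f skew=%.3f amplitude=%.3f\n",
           f.mean, f.stddev, f.skewness, f.amplitude);
    return 0;
}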