<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Caltech Center for Advanced Computing Research &#187; high-performance computing</title>
	<atom:link href="http://www.cacr.caltech.edu/main/?feed=rss2&#038;tag=high-performance-computing" rel="self" type="application/rss+xml" />
	<link>http://www.cacr.caltech.edu/main</link>
	<description>...at the forefront of computational science and engineering</description>
	<lastBuildDate>Fri, 03 May 2013 17:16:51 +0000</lastBuildDate>
	<generator>http://wordpress.org/?v=2.8.4</generator>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
			<item>
		<title>CACR at SC09 in Portland</title>
		<link>http://www.cacr.caltech.edu/main/?p=725</link>
		<comments>http://www.cacr.caltech.edu/main/?p=725#comments</comments>
		<pubDate>Mon, 09 Nov 2009 17:40:59 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[CACR]]></category>
		<category><![CDATA[conferences]]></category>
		<category><![CDATA[high-performance computing]]></category>
		<category><![CDATA[high-speed data transfer]]></category>
		<category><![CDATA[SC]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=725</guid>
	<description><![CDATA[At the 2009 Supercomputing (SC) Conference being held in Portland, Oregon, November 14-20, CACR will be highlighting our research in computational biology, computing and networking for high-energy physics, data analysis for neutron scattering experiments, hypervelocity impact simulations, and time-domain astronomy.  The SC Conference is the premier international conference for high performance computing (HPC), networking, [...]]]></description>
			<content:encoded><![CDATA[<div id="attachment_726" class="wp-caption alignleft" style="width: 227px"><img class="size-full wp-image-726" title="SC09Logo4cShadow" src="http://www.cacr.caltech.edu/main/wp-content/uploads/2009/11/SC09Logo4cShadow.jpg" alt="SC09Logo4cShadow" width="217" height="184" /><p class="wp-caption-text">Visit us at Booth 2135!</p></div>
<p>At the <a href="http://sc09.supercomputing.org">2009 Supercomputing (SC) Conference</a> being held in Portland, Oregon, November 14-20, CACR will be highlighting our research in computational biology, computing and networking for high-energy physics, data analysis for neutron scattering experiments, hypervelocity impact simulations, and time-domain astronomy.  The SC Conference is the premier international conference for high performance computing (HPC), networking, storage and analysis.</p>
<p>Among the demonstrations at the CACR exhibit will be the Caltech entry in SC&#8217;s Bandwidth Challenge. The Bandwidth Challenge is an annual competition for leading-edge network applications developed by teams of researchers from around the globe.  The Caltech entry for this year&#8217;s challenge is entitled <a href="http://supercomputing.caltech.edu/">Moving towards Terabit/sec Scientific Dataset Transfers: the LHC Challenge</a>. This entry will demonstrate Storage to Storage physics dataset transfers of up to 100 Gbps sustained in one direction, and well above 100 Gbps in total bidirectionally, using a total of fifteen 10Gbps drops at the Caltech Booth.</p>
<p>Caltech&#8217;s <a href="http://www.psaap.caltech.edu">PSAAP center</a> will be represented in the NNSA exhibit as one of five centers of excellence focusing on predictive science. A talk entitled &#8220;UQ Pipeline Ballistic Impact Simulations &#8211; Methods and Experiences&#8221; will be given by Sharon Brunett in the NNSA exhibit (Booth 735) on Tuesday, November 17 at 5:15 PM.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=725</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>CACR Participation in Energy-Efficient HPC Working Group</title>
		<link>http://www.cacr.caltech.edu/main/?p=427</link>
		<comments>http://www.cacr.caltech.edu/main/?p=427#comments</comments>
		<pubDate>Tue, 20 Jan 2009 22:15:12 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[CACR]]></category>
		<category><![CDATA[energy-efficient]]></category>
		<category><![CDATA[high-performance computing]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=427</guid>
		<description><![CDATA[Chip Chapman, CACR Facilities Manager, has joined the Energy Efficient High Performance Computing Working Group. This group, founded at SC08 in Austin, TX, also includes participation from several national labs and other major HPC manufacturers and users.
The activities of this committee include:

Market pull strategies (collectively influencing vendors)
HPC/SC energy performance metrics and benchmarking
Computer center (infrastructure) energy [...]]]></description>
			<content:encoded><![CDATA[<p>Chip Chapman, CACR Facilities Manager, has joined the Energy Efficient High Performance Computing Working Group. This group, founded at SC08 in Austin, TX, also includes participation from several national labs and other major HPC manufacturers and users.</p>
<p>The activities of this committee include:</p>
<ul>
<li>Market pull strategies (collectively influencing vendors)</li>
<li>HPC/SC energy performance metrics and benchmarking</li>
<li>Computer center (infrastructure) energy performance metrics and benchmarking</li>
<li>Best practices, case studies, and lessons learned in design and operation of supercomputer centers</li>
<li>Energy efficient design guidelines and specifications for supercomputer centers</li>
<li>Improving software for energy efficiency</li>
<li>Integrating energy efficiency into SC09’s technical program and HPCC (energy challenge) &#8211; subject to organizer&#8217;s approval</li>
</ul>
<p>CACR&#8217;s goals in participating in this initiative are to keep our infrastructure as efficient as possible and to help Caltech&#8217;s Facility department make informed choices when preparing upgrades or modifications to the existing and future HPC computer rooms on campus.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=427</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>NSF Award: Development of a Research Infrastructure for the Multithreaded Computing Community Using the Cray XMT Platform</title>
		<link>http://www.cacr.caltech.edu/main/?p=310</link>
		<comments>http://www.cacr.caltech.edu/main/?p=310#comments</comments>
		<pubDate>Wed, 10 Oct 2007 20:29:09 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[architecture]]></category>
		<category><![CDATA[CACR]]></category>
		<category><![CDATA[high-performance computing]]></category>
		<category><![CDATA[research]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=310</guid>
		<description><![CDATA[An award of $994,408 from the National Science Foundation was made to the project entitled &#8220;Development of a Research Infrastructure for the Multithreaded Computing Community Using the Cray XMT Platform.&#8221; The subcontract for Caltech/CACR (PI Ed Upchurch) will fund the porting of significant science applications to an XMT system. CACR will assess the XMT&#8217;s performance [...]]]></description>
			<content:encoded><![CDATA[<p><a href="http://www.cacr.caltech.edu/main/wp-content/uploads/2009/01/cray_xmt.jpg"><img class="alignleft size-medium wp-image-311" title="cray_xmt" src="http://www.cacr.caltech.edu/main/wp-content/uploads/2009/01/cray_xmt.jpg" alt="" width="284" height="252" /></a>An award of $994,408 from the National Science Foundation was made to the project entitled &#8220;Development of a Research Infrastructure for the Multithreaded Computing Community Using the Cray XMT Platform.&#8221; The subcontract for Caltech/CACR (PI Ed Upchurch) will fund the porting of significant science applications to an XMT system. CACR will assess the XMT&#8217;s performance and compare it with the performance on other parallel architectures at CACR.</p>
<p>With the advent of MPI and Linux clusters, message-passing architectures are today the dominant approach for parallel computing systems, and the high-end computing community has developed a strong infrastructure to support this. With the trend towards multicore processors, however, the situation is changing. The major processor developers all envision placing tens to hundreds of cores on a single die, each running multiple threads. To take advantage of this, the CS community will need to focus on how to develop efficient multithreaded programs in shared memory. The goal of the project is to bring together, as a community, a diverse group of researchers with extensive experience in shared-memory multithreading, and to jointly develop the shared infrastructure needed to broaden its impact for developing software to run on the next generation of computer hardware.</p>
<p>The first objective of the program is to acquire computer hardware as a shared community resource capable of efficiently running, in experimental and production modes, complex programs with thousands of threads in shared memory. <a href="http://www.cray.com/products/xmt/">The Cray XMT system</a>, scheduled for delivery in the first half of 2008, is an ideal platform for this.</p>
<p>The second objective of the program is assembling a software infrastructure for developing and measuring the performance of programs running on this hardware. This will include algorithms, data sets, libraries, languages, tools, and simulators to evaluate architectural enhancements for future hardware.</p>
<p>The third objective of the project is building stronger ties between the people themselves, creating ways for researchers at the partner institutions to collaborate and communicate their findings to the broader community.</p>
<p>The academic partners on the team are the <a href="http://www.nd.edu/">University of Notre Dame</a>, <a href="http://www.gatech.edu/">Georgia Institute of Technology</a>, <a href="http://www.berkeley.edu/">University of California, Berkeley</a>, <a href="http://www.ucsb.edu/">University of California, Santa Barbara</a>, <a href="http://www.udel.edu/">University of Delaware</a>, and the <a href="http://www.caltech.edu/">California Institute of Technology</a>. The team will also collaborate with <a href="http://www.sandia.gov/">Sandia National Laboratories</a>, which has agreed to host the Cray XMT system and will provide supplementary funding.</p>
<p>For further information on the Caltech subcontract, contact <a href="mailto:etu%27at%27cacr.caltech.edu">Ed Upchurch</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=310</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>First Phase of TeraGrid Goes into Production</title>
		<link>http://www.cacr.caltech.edu/main/?p=376</link>
		<comments>http://www.cacr.caltech.edu/main/?p=376#comments</comments>
		<pubDate>Thu, 01 Jan 2004 21:44:30 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[CACR]]></category>
		<category><![CDATA[hardware]]></category>
		<category><![CDATA[high-performance computing]]></category>
		<category><![CDATA[teragrid]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=376</guid>
		<description><![CDATA[The first computing systems of the National Science Foundation&#8217;s TeraGrid project are in production mode, making 4.5 teraflops of distributed computing power available to scientists across the country who are conducting research in a wide range of disciplines, from astrophysics to environmental science.
The TeraGrid is a multi-year effort to build and deploy the world&#8217;s [...]]]></description>
			<content:encoded><![CDATA[<p>The first computing systems of the National Science Foundation&#8217;s TeraGrid project are in production mode, making 4.5 teraflops of distributed computing power available to scientists across the country who are conducting research in a wide range of disciplines, from astrophysics to environmental science.</p>
<p>The TeraGrid is a multi-year effort to build and deploy the world&#8217;s largest, most comprehensive distributed infrastructure for open scientific research. The TeraGrid also offers storage, visualization, database, and data collection capabilities. Hardware at multiple sites across the country is networked through a 40-gigabit per second backplane &#8212; the fastest research network on the planet.</p>
<p>The systems currently in production represent the first of two deployments, with the completed TeraGrid scheduled to provide over 20 teraflops of capability. The phase two hardware, which will add more than 11 teraflops of capacity, was installed in December 2003 and is scheduled to be available to the research community this spring.</p>
<p>&#8220;We are pleased to see scientific research being conducted on the initial production TeraGrid system,&#8221; said Peter Freeman, head of NSF&#8217;s Computer and Information Sciences and Engineering directorate. &#8220;Leading-edge supercomputing capabilities are essential to the emerging cyberinfrastructure, and the TeraGrid represents NSF&#8217;s commitment to providing high-end, innovative resources.&#8221;</p>
<p>The TeraGrid sites are: Argonne National Laboratory; the Center for Advanced Computing Research (CACR) at the California Institute of Technology; Indiana University; the National Center for Supercomputing Applications (NCSA) at the University of Illinois, Urbana-Champaign; Oak Ridge National Laboratory; the Pittsburgh Supercomputing Center (PSC); Purdue University; the San Diego Supercomputer Center (SDSC) at the University of California, San Diego; and the Texas Advanced Computing Center at The University of Texas at Austin.</p>
<p>&#8220;This is an exciting milestone for scientific computing &#8212; the TeraGrid is a new concept and there has never been a distributed computing system of its size and scope,&#8221; said NCSA interim director Rob Pennington, the TeraGrid site lead for NCSA. &#8220;In addition to its immediate value in enabling new science, the TeraGrid project is a tool for the development of a national cyberinfrastructure, and the cooperative relationships forged through this effort provide a framework for future innovation and collaboration.&#8221;</p>
<p>&#8220;The TeraGrid partners have worked extremely hard during the two-year construction phase of this project and are delighted that this initial phase of what will be an unprecedented level of computing and data resources is now online for the nation&#8217;s researchers to use,&#8221; said Fran Berman, SDSC director and co-principal investigator of the TeraGrid project. &#8220;The TeraGrid is one of the foundations of cyberinfrastructure that will provide even more computing resources later this year.&#8221;</p>
<p>The computing systems that entered production this month consist of more than 800 Itanium-family IBM processors running Linux. NCSA maintains a 2.7-teraflop cluster, which was installed in spring 2003, and SDSC has a 1.3-teraflop cluster. The 6-teraflop, 3,000-processor HP AlphaServerSC Terascale Computing System (TCS) at PSC is also a component of the TeraGrid infrastructure.</p>
<p>&#8220;The launch of the National Science Foundation&#8217;s TeraGrid project provides scientists and researchers across the nation with access to unprecedented computational power,&#8221; said David Turek, vice president of Deep Computing with IBM. &#8220;Working with the NSF, IBM is committed to the continued development of breakthrough Grid technologies that benefit our scientific/technical and commercial customers.&#8221;</p>
<p>Allocations for use of the TeraGrid were awarded by the NSF&#8217;s Partnerships for Advanced Computational Infrastructure (PACI) last October. Among the first wave of researchers to use the TeraGrid are scientists studying the evolution of the universe and the cleanup of contaminated groundwater, simulating seismic events, and analyzing biomolecular dynamics.</p>
<p>Among the allocations awarded was one for Caltech physicist Harvey Newman. Newman leads a team of investigators who are developing codes for CERN&#8217;s Compact Muon Solenoid (CMS) collaboration. The CMS experiment will begin operation at the Large Hadron Collider (LHC) in 2007. The Caltech team&#8217;s planned use of the TeraGrid will be a valuable and possibly critical factor in the success of several planned &#8220;Data Challenges&#8221; for CMS. These Challenges are designed to test the readiness of the global Grid-enabled computing system being developed for the experiment, in collaboration with partner projects such as PPDG, GriPhyN, iVDGL, DataTAG, LCG, and others. The TeraGrid will further a program of developing optimized search strategies for the Higgs particles, thought to be responsible for mass in the Universe, for supersymmetry, and for investigating new physics processes beyond the Standard Model of particle physics.</p>
<p>To learn more about the TeraGrid, go to <a href="http://www.teragrid.org/">www.teragrid.org</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=376</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>CACR &amp; JPL team members of CASCADE project</title>
		<link>http://www.cacr.caltech.edu/main/?p=386</link>
		<comments>http://www.cacr.caltech.edu/main/?p=386#comments</comments>
		<pubDate>Wed, 09 Jul 2003 21:48:50 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[CACR]]></category>
		<category><![CDATA[hardware]]></category>
		<category><![CDATA[high-performance computing]]></category>
		<category><![CDATA[research]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=386</guid>
		<description><![CDATA[Cray Inc. Signs $49.9 Million Agreement for Second Phase of DARPA Petaflops Computing Systems Program.
Cray Inc. today announced that it, together with New Technology Endeavors, Inc., has signed an agreement with the Defense Advanced Research Projects Agency (DARPA) to participate in the second phase of DARPA&#8217;s High Productivity Computing Systems Program. The program will provide [...]]]></description>
			<content:encoded><![CDATA[<p align="left">Cray Inc. Signs $49.9 Million Agreement for Second Phase of DARPA Petaflops Computing Systems Program.</p>
<p>Cray Inc. today announced that it, together with New Technology Endeavors, Inc., has signed an agreement with the Defense Advanced Research Projects Agency (DARPA) to participate in the second phase of DARPA&#8217;s High Productivity Computing Systems Program. The program will provide Cray and its university research partners with $49.9 million in additional funding over the next three years to support an advanced research program aimed at developing a commercially available system capable of sustained performance in excess of one petaflops (a million billion calculations per second).</p>
<p>Ed Upchurch, as Caltech/JPL&#8217;s Principal Investigator, leads the team, and Thomas Sterling is Chief Technologist. Other research partners include Stanford University and the University of Notre Dame, and the Principal Investigator of the project is Cray&#8217;s chief scientist, Burton Smith. &#8220;The DARPA HPCS program in general and the Cray Cascade Petaflops computer project in particular is an important opportunity for Caltech and JPL to directly, significantly, and substantively influence the direction of future supercomputer systems architecture and software,&#8221; Sterling says. &#8220;Through this program, innovative concepts developed by scientists at Caltech and JPL, in collaboration with colleagues at the University of Notre Dame, in the field of advanced PIM architecture will be developed and transferred to real world end users from academic research through this industrial partnership. The result may be little less than the next revolution in supercomputing.&#8221;</p>
<p>DARPA formed the High Productivity Computing Systems Program to foster development of the next generation of high productivity computing systems for both the national security and industrial user communities.  Program goals are for these systems to be more broadly applicable, much easier to program and more resistant to failure than currently available high performance computing systems.</p>
<p>Five computer-makers, including Cray, were selected for the first phase concept study that was initiated in mid-2002, and all five firms submitted proposals for the second phase. Cray, along with IBM and Sun Microsystems, were selected to continue to the second phase, where further definition and validation of the proposed systems will occur. In mid-2006, DARPA plans to select up to two vendors for the final phase, a full-scale development phase with initial prototype deliveries scheduled for 2010.</p>
<p>More information about the Cascade project can be found at <a href="http://www.cray.com/cascade/">http://www.cray.com/cascade/</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=386</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>TeraGrid Project Begins Accepting Computing Proposals</title>
		<link>http://www.cacr.caltech.edu/main/?p=388</link>
		<comments>http://www.cacr.caltech.edu/main/?p=388#comments</comments>
		<pubDate>Wed, 11 Jun 2003 21:49:39 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[CACR]]></category>
		<category><![CDATA[high-performance computing]]></category>
		<category><![CDATA[research]]></category>
		<category><![CDATA[teragrid]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=388</guid>
		<description><![CDATA[Researchers across the U.S. will be able to submit proposals for use of the first computing systems of the National Science Foundation&#8217;s TeraGrid project beginning June 15.
Proposals requesting 200,000 or more CPU hours will be reviewed in September through the NSF&#8217;s Partnerships for Advanced Computational Infrastructure (PACI) peer-review allocation process. The first computers in [...]]]></description>
			<content:encoded><![CDATA[<p>Researchers across the U.S. will be able to submit proposals for use of the first computing systems of the National Science Foundation&#8217;s TeraGrid project beginning June 15.</p>
<p>Proposals requesting 200,000 or more CPU hours will be reviewed in September through the NSF&#8217;s <a href="http://www.paci.org/">Partnerships for Advanced Computational Infrastructure (PACI)</a> peer-review allocation process. The first computers in the TeraGrid distributed computing system &#8211; about four teraflops total &#8211; will be available for use in December.</p>
<p>The TeraGrid project is a multi-year effort to build and deploy the world&#8217;s fastest, most comprehensive distributed computing infrastructure for open scientific research.</p>
<p>The &#8220;Phase I&#8221; TeraGrid machines designated to enter production by the start of the new year consist of more than 800 Itanium-family processors running Linux. In addition to offering four teraflops of computing power, these systems will provide more than a quarter petabyte of storage, visualization, database and data collection capabilities. Scientists at research institutions nationwide will use the systems to conduct research in a wide range of scientific and engineering disciplines, from environmental science to microbiology to astrophysics.</p>
<p>The new systems are located at four of the five TeraGrid sites: the <a href="http://www.ncsa.uiuc.edu/">National Center for Supercomputing Applications (NCSA)</a> at the University of Illinois, Urbana-Champaign; the <a href="http://www.sdsc.edu/">San Diego Supercomputer Center</a>; the <a href="../../">Center for Advanced Computing Research (CACR)</a> at the California Institute of Technology; and <a href="http://www.anl.gov/">Argonne National Laboratory</a>. In addition, the 3,000-processor HP AlphaServerSC Terascale Computing System at the <a href="http://www.psc.edu/">Pittsburgh Supercomputing Center (PSC)</a> will be partially allocated as part of the TeraGrid infrastructure during this allocation process. Researchers will be able to use TeraGrid computers and resources at multiple sites as a virtual machine through the high-speed TeraGrid network.</p>
<p>The TeraGrid partners are also part of PACI, an NSF project to build an advanced computational infrastructure for science and engineering.</p>
<p>&#8220;NSF is pleased to support the TeraGrid as one of the first components to become available to the nation&#8217;s researchers as part of the emerging cyberinfrastructure that integrates computing, information and communication resources,&#8221; said Richard Hilderbrandt, program director for PACI. &#8220;The scientific community has made clear that cyberinfrastructure is going to provide many opportunities to revolutionize the conduct of science and engineering.&#8221; Both PACI and TeraGrid are NSF-funded initiatives.</p>
<p>Because the TeraGrid is unique among supercomputing systems, the TeraGrid management team foresees new and unique usage scenarios. As a result, the peer review process will look for new and different usages. Researchers with applications that can take advantage of this unique collection of resources will be given preference in the allocation process.</p>
<p>The TeraGrid will allow researchers to launch thousands of independent jobs using data from a single data source, or to use a number of resources &#8211; including massive amounts of storage, remote visualization systems and online data collections &#8211; to complete large, tightly coupled simulations. Other uses could include analyzing huge datasets using a Web-based portal to access specific TeraGrid resources, and on-demand computing &#8211; for example, using large computational resources to respond in real time to natural or man-made disasters.</p>
<p>When completed, the TeraGrid will include 20 teraflops of distributed computing power as well as facilities capable of managing and storing nearly 1 petabyte of data, high-resolution visualization environments, and toolkits for grid computing. All the TeraGrid components will be tightly integrated and connected through the new 40-gigabit-per-second TeraGrid dedicated network, the world&#8217;s fastest research network.</p>
<p>For more information, see <a href="http://www.teragrid.org/">http://www.teragrid.org/</a>. For details on submitting proposals, see <a href="http://www.teragrid.org/userinfo/index.html">http://www.teragrid.org/userinfo/index.html</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=388</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>