<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Caltech Center for Advanced Computing Research &#187; hardware</title>
	<atom:link href="http://www.cacr.caltech.edu/main/?feed=rss2&#038;tag=hardware" rel="self" type="application/rss+xml" />
	<link>http://www.cacr.caltech.edu/main</link>
	<description>...at the forefront of computational science and engineering</description>
	<lastBuildDate>Fri, 03 May 2013 17:16:51 +0000</lastBuildDate>
	<generator>http://wordpress.org/?v=2.8.4</generator>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
			<item>
		<title>New Cluster for Theoretical AstroPhysics Installed</title>
		<link>http://www.cacr.caltech.edu/main/?p=893</link>
		<comments>http://www.cacr.caltech.edu/main/?p=893#comments</comments>
		<pubDate>Wed, 11 Aug 2010 22:53:27 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[Computing Resources]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[astronomy]]></category>
		<category><![CDATA[CACR]]></category>
		<category><![CDATA[clusters]]></category>
		<category><![CDATA[hardware]]></category>
		<category><![CDATA[physics]]></category>
		<category><![CDATA[sxs]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=893</guid>
		<description><![CDATA[This month CACR has installed and configured a new cluster in the Powell-Booth Laboratory for Computational Science. This system is specifically configured to meet the application needs of Caltech&#8217;s Theoretical AstroPhysics Including Relativity (TAPIR) group in the Physics, Mathematics, and Astronomy Division.
The MRI2 cluster is funded by an NSF MRI-R2 award with matching funds from [...]]]></description>
			<content:encoded><![CDATA[<div id="attachment_892" class="wp-caption alignleft" style="width: 310px"><a href="http://www.cacr.caltech.edu/main/wp-content/uploads/2010/08/sxs_newbornstar.png"><img class="size-medium wp-image-892" title="sxs_newbornstar" src="http://www.cacr.caltech.edu/main/wp-content/uploads/2010/08/sxs_newbornstar-300x225.png" alt="Rendering of a rapidly spinning, gravitational-wave emitting newborn neutron star" width="300" height="225" /></a><p class="wp-caption-text">Rendering of a rapidly spinning, gravitational-wave emitting newborn neutron star. Simulation: Ott et al. 2007. Rendering: Ralf Kaehler, ZIB/AEI/KIPAC, 2007.</p></div>
<p>This month CACR has installed and configured a new cluster in the Powell-Booth Laboratory for Computational Science. This system is specifically configured to meet the application needs of Caltech&#8217;s Theoretical AstroPhysics Including Relativity (TAPIR) group in the Physics, Mathematics, and Astronomy Division.</p>
<p>The MRI2 cluster is funded by an NSF MRI-R2 award with matching funds from the Sherman Fairchild Foundation. The configuration, integrated by Hewlett-Packard and CACR’s operations team, consists of 1536 Intel X5650 compute cores in 128 dual Westmere hex-core nodes equipped with a total of ~3 TB of memory, connected via QDR InfiniBand (IB). It includes 100 TB of high-performance, high-reliability disk space accessed via IB through a Panasas rack.</p>
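<p>As a quick sanity check of these figures (a minimal sketch; the cores-per-node and per-node memory values are derived from the stated totals rather than quoted from HP or CACR):</p>
<pre><code># Python sketch: sanity-check the Zwicky (MRI2) configuration quoted above.
nodes = 128
cores_per_node = 2 * 6            # dual-socket, hex-core Westmere (assumed 12 cores/node)
assert nodes * cores_per_node == 1536   # matches the 1536 X5650 cores stated

memory_tb = 3                     # "~3 TB" aggregate memory
print(memory_tb * 1024 / nodes)   # => 24.0, i.e. roughly 24 GB per node
</code></pre>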
<p>The research project using the new cluster, <em>Simulating eXtreme Spacetimes: Facilitating LIGO and Enabling Multi-Messenger Astronomy</em>, is led by Professor Christian Ott. The co-investigators on the MRI award are Dr. Mark Scheel of TAPIR and CACR&#8217;s director, Dr. Mark Stalzer. The research will explore the dynamics of spacetime curvature, matter, and radiation at high energies and densities. Central project aspects are the simulation of black hole binary coalescence, neutron star&#8211;black hole inspiral and merger, and the collapse of massive stars leading to core-collapse supernovae or gamma-ray bursts. Key results will be the prediction of gravitational waveforms from these phenomena to enable LIGO gravitational wave searches and to facilitate the extraction of (astro-)physics from observed signals.</p>
<p>The MRI2 cluster is named Zwicky, in honor of Caltech Astrophysics Professor Fritz Zwicky (1898-1974), who discovered supernovae and who was the first to explain how supernovae can be powered by the collapse of a massive star into a neutron star. Zwicky also discovered the first evidence for dark matter in our universe, proposed to use supernovae as standard candles to measure distances in the universe, and suggested that galaxy clusters could act as gravitational lenses.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=893</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>CACR system support over the holidays</title>
		<link>http://www.cacr.caltech.edu/main/?p=778</link>
		<comments>http://www.cacr.caltech.edu/main/?p=778#comments</comments>
		<pubDate>Thu, 24 Dec 2009 23:33:42 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[CACR]]></category>
		<category><![CDATA[clusters]]></category>
		<category><![CDATA[hardware]]></category>
		<category><![CDATA[resources]]></category>
		<category><![CDATA[support]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=778</guid>
		<description><![CDATA[CACR&#8217;s support staff will be observing Institute holidays and special release days, Dec 25 to Jan 3. During the break, we&#8217;ll periodically check e-mail but will primarily be watching for support issues of a critical nature. On Jan 4, we will be back to full-staff, regular support operations. Wishing you all a peaceful holiday season!
]]></description>
			<content:encoded><![CDATA[<p>CACR&#8217;s support staff will be observing Institute holidays and special release days, Dec 25 to Jan 3. During the break, we&#8217;ll periodically check e-mail but will primarily be watching for support issues of a critical nature. On Jan 4, we will be back to full-staff, regular support operations. Wishing you all a peaceful holiday season!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=778</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>SHC Software Stack Upgrade &#8211; Update</title>
		<link>http://www.cacr.caltech.edu/main/?p=677</link>
		<comments>http://www.cacr.caltech.edu/main/?p=677#comments</comments>
		<pubDate>Wed, 09 Sep 2009 17:37:58 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[clusters]]></category>
		<category><![CDATA[hardware]]></category>
		<category><![CDATA[PM]]></category>
		<category><![CDATA[SHC]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=677</guid>
		<description><![CDATA[Important Information for SHC Users
As of Sept 8, 2009, SHC has been transitioned to the new software stack (RHEL+OpenIB). There are currently 115 core4 nodes and 65 core8 nodes in production. For more information, please visit the SHC Getting Started / System Guide.
]]></description>
			<content:encoded><![CDATA[<p><strong>Important Information for SHC Users</strong></p>
<p>As of Sept 8, 2009, SHC has been transitioned to the new software stack (RHEL+OpenIB). There are currently 115 core4 nodes and 65 core8 nodes in production. For more information, please visit the <a href="http://www.cacr.caltech.edu/main/?page_id=108">SHC Getting Started / System Guide</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=677</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>SHC Software Stack Upgrade</title>
		<link>http://www.cacr.caltech.edu/main/?p=653</link>
		<comments>http://www.cacr.caltech.edu/main/?p=653#comments</comments>
		<pubDate>Thu, 03 Sep 2009 16:58:54 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[clusters]]></category>
		<category><![CDATA[hardware]]></category>
		<category><![CDATA[PM]]></category>
		<category><![CDATA[SHC]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=653</guid>
		<description><![CDATA[Important Information for SHC Users
Over the next couple of days, more backend nodes from shc-a will be transitioned to shc-[c,new]&#8217;s cluster of backend nodes, running the new software stack. By Sept 4, there will be just 24 shc-a backend nodes; all the rest of the compute nodes will be running the new software stack, seen [...]]]></description>
			<content:encoded><![CDATA[<p><strong>Important Information for SHC Users</strong></p>
<p>Over the next couple of days, more backend nodes from shc-a will be transitioned to shc-[c,new]&#8217;s cluster of backend nodes, running the new software stack. By Sept 4, there will be just 24 shc-a backend nodes; all the rest of the compute nodes will be running the new software stack, as seen from shc-[new,c].</p>
<ul>
<li>Please port your codes to the new software environment if you&#8217;ve not already done so!</li>
<li>Please report any porting problems you&#8217;re having; we&#8217;ll help asap.</li>
<li>Details on how to rebuild your code for the new SHC environment can be found <a href="http://www.cacr.caltech.edu/main/?page_id=440">here</a>.</li>
<li>Your MPI based code <em>must</em> be rebuilt for the new and improved shc software stack.</li>
</ul>
<p>Preventive Maintenance on Sept 8 from 0800 to 1400 will encompass testing the complete transition of SHC compute and head node resources to the upgraded software stack environment. The fully upgraded production SHC cluster configuration will be two head nodes (shc-[a,b]) and 1180 Opteron compute node cores (163 dual cpu/dual core + 66 dual cpu/quad core).</p>
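<p>For reference, the 1180-core total follows directly from the node mix above (a quick sketch, assuming two CPUs per node as stated):</p>
<pre><code># Python sketch: derive the 1180-core figure from the quoted node mix.
dual_dual = 163 * 2 * 2   # dual cpu/dual core nodes: 652 cores
dual_quad = 66 * 2 * 4    # dual cpu/quad core nodes: 528 cores
print(dual_dual + dual_quad)   # => 1180 Opteron compute cores, as stated above
</code></pre>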
<p>Questions or concerns about the upgrade? Just <a href="http://www.cacr.caltech.edu/main/?page_id=76">let us know</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=653</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>SHC Cluster Expansion</title>
		<link>http://www.cacr.caltech.edu/main/?p=437</link>
		<comments>http://www.cacr.caltech.edu/main/?p=437#comments</comments>
		<pubDate>Thu, 12 Feb 2009 21:30:47 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[clusters]]></category>
		<category><![CDATA[hardware]]></category>
		<category><![CDATA[SHC]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=437</guid>
		<description><![CDATA[CACR&#8217;s 163-node Shared Heterogeneous Cluster (SHC) has recently expanded by an additional 20 nodes. Each of these new nodes contains 16 GB of memory and has two quad-core, 2.5 GHz AMD Opteron processors (model 2380). As with the existing SHC nodes, each of the new nodes is connected via InfiniBand to CACR&#8217;s InfiniBand switch.
The [...]]]></description>
			<content:encoded><![CDATA[<p><a href="http://www.cacr.caltech.edu/main/wp-content/uploads/2009/02/shc022009.jpg"><img class="alignleft size-medium wp-image-438" style="margin: 4px;" title="shc022009" src="http://www.cacr.caltech.edu/main/wp-content/uploads/2009/02/shc022009-244x300.jpg" alt="" width="244" height="300" /></a>CACR&#8217;s 163-node Shared Heterogeneous Cluster (SHC) has recently expanded by an additional 20 nodes. Each of these new nodes contains 16 GB of memory and has two quad-core, 2.5 GHz AMD Opteron processors (model 2380). As with the existing SHC nodes, each of the new nodes is connected via InfiniBand to CACR&#8217;s InfiniBand switch.</p>
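<p>In round numbers, the expansion adds the following aggregate capacity (a short sketch based only on the figures above):</p>
<pre><code># Python sketch: aggregate capacity added by the 20 new SHC nodes.
new_nodes = 20
cores_added = new_nodes * 2 * 4    # two quad-core Opteron 2380s per node: 160 cores
memory_added_gb = new_nodes * 16   # 16 GB per node: 320 GB
print(cores_added, memory_added_gb)
</code></pre>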
<p>The SHC provides computing capabilities specifically configured to meet the needs of applications from Caltech’s PSAAP, Turbulent Mixing, Applied and Computational Mathematics, and Numerical Relativity communities. For more information about the SHC, including information for test users of the new nodes, see <a href="http://www.cacr.caltech.edu/main/?page_id=101">this page</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=437</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>CACR&#8217;s Shared Heterogeneous Cluster (SHC) Now Online</title>
		<link>http://www.cacr.caltech.edu/main/?p=348</link>
		<comments>http://www.cacr.caltech.edu/main/?p=348#comments</comments>
		<pubDate>Fri, 24 Mar 2006 21:15:59 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[CACR]]></category>
		<category><![CDATA[clusters]]></category>
		<category><![CDATA[hardware]]></category>
		<category><![CDATA[SHC]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=348</guid>
		<description><![CDATA[The nature of financial support for high-end computing resources has evolved given the widespread adoption of Beowulf clusters. Research groups that need computing often obtain funds for clusters as part of their grants. CACR participates in some of these efforts, and supports significant dedicated resources for high-energy physics, astronomy, geophysics, physics-based simulation, and others. Unfortunately, [...]]]></description>
			<content:encoded><![CDATA[<p><a href="http://www.cacr.caltech.edu/main/wp-content/uploads/2009/01/shc.jpg"><img class="alignleft size-medium wp-image-349" title="shc" src="http://www.cacr.caltech.edu/main/wp-content/uploads/2009/01/shc-224x300.jpg" alt="" width="224" height="300" /></a>The nature of financial support for high-end computing resources has evolved given the widespread adoption of Beowulf clusters. Research groups that need computing often obtain funds for clusters as part of their grants. CACR participates in some of these efforts, and supports significant dedicated resources for high-energy physics, astronomy, geophysics, physics-based simulation, and others. Unfortunately, the balkanization of computation by this model has created inefficiencies: the clusters do not take advantage of economies of scale and can be underutilized and poorly administered.</p>
<p>CACR has developed a shared cluster model, and Professors Paul Dimotakis, Dan Meiron, and Kip Thorne have agreed to be pioneer partners in this effort. CACR has purchased a machine optimized for parallel numerical codes that can sustain over 1 trillion floating point operations per second. It consists of 352 2.2 GHz AMD Opteron cores and 700+ gigabytes of memory, all interconnected by an InfiniBand networking fabric that can move 160+ gigabytes/s between the compute nodes. The cluster is administered by CACR with funds from the partner groups, and each group has an allocation of time on the machine proportionate to its contribution. By sharing, the groups get better pricing from vendors, professional systems administration by experienced CACR staff, and the ability to use a much larger machine than each group could afford separately. Some of the partners are also supporting efforts at CACR in visualization and code tuning.</p>
<p>The shared cluster model is extremely scalable, and CACR is interested in expanding the machine to increase simulation capability and add support for data intensive science. Please contact CACR&#8217;s Executive Director, Mark Stalzer (stalzer at caltech.edu) for more information.</p>
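<p>The sustained-teraflop claim is consistent with the hardware&#8217;s theoretical peak (a rough sketch; the flops-per-cycle rate is an assumed typical value for Opterons of that generation, not stated in this article):</p>
<pre><code># Python sketch: theoretical peak of the original 352-core SHC.
cores = 352
clock_ghz = 2.2
flops_per_cycle = 2    # assumed: typical double-precision rate for that Opteron generation
print(cores * clock_ghz * flops_per_cycle / 1000)   # => ~1.55 Tflops peak,
                                                    # so sustaining >1 Tflop is plausible
</code></pre>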
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=348</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>First Phase of TeraGrid Goes into Production</title>
		<link>http://www.cacr.caltech.edu/main/?p=376</link>
		<comments>http://www.cacr.caltech.edu/main/?p=376#comments</comments>
		<pubDate>Thu, 01 Jan 2004 21:44:30 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[CACR]]></category>
		<category><![CDATA[hardware]]></category>
		<category><![CDATA[high-performance computing]]></category>
		<category><![CDATA[teragrid]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=376</guid>
		<description><![CDATA[The first computing systems of the National Science Foundation&#8217;s TeraGrid project are in production mode, making 4.5 teraflops of distributed computing power available to scientists across the country who are conducting research in a wide range of disciplines, from astrophysics to environmental science.
The TeraGrid is a multi-year effort to build and deploy the world&#8217;s [...]]]></description>
			<content:encoded><![CDATA[<p>The first computing systems of the National Science Foundation&#8217;s TeraGrid project are in production mode, making 4.5 teraflops of distributed computing power available to scientists across the country who are conducting research in a wide range of disciplines, from astrophysics to environmental science.</p>
<p>The TeraGrid is a multi-year effort to build and deploy the world&#8217;s largest, most comprehensive distributed infrastructure for open scientific research. The TeraGrid also offers storage, visualization, database, and data collection capabilities. Hardware at multiple sites across the country is networked through a 40-gigabit per second backplane &#8212; the fastest research network on the planet.</p>
<p>The systems currently in production represent the first of two deployments, with the completed TeraGrid scheduled to provide over 20 teraflops of capability. The phase two hardware, which will add more than 11 teraflops of capacity, was installed in December 2003 and is scheduled to be available to the research community this spring.</p>
<p>&#8220;We are pleased to see scientific research being conducted on the initial production TeraGrid system,&#8221; said Peter Freeman, head of NSF&#8217;s Computer and Information Sciences and Engineering directorate. &#8220;Leading-edge supercomputing capabilities are essential to the emerging cyberinfrastructure, and the TeraGrid represents NSF&#8217;s commitment to providing high-end, innovative resources.&#8221;</p>
<p>The TeraGrid sites are: Argonne National Laboratory; the Center for Advanced Computing Research (CACR) at the California Institute of Technology; Indiana University; the National Center for Supercomputing Applications (NCSA) at the University of Illinois, Urbana-Champaign; Oak Ridge National Laboratory; the Pittsburgh Supercomputing Center (PSC); Purdue University; the San Diego Supercomputer Center (SDSC) at the University of California, San Diego; and the Texas Advanced Computing Center at The University of Texas at Austin.</p>
<p>&#8220;This is an exciting milestone for scientific computing &#8212; the TeraGrid is a new concept and there has never been a distributed computing system of its size and scope,&#8221; said NCSA interim director Rob Pennington, the TeraGrid site lead for NCSA. &#8220;In addition to its immediate value in enabling new science, the TeraGrid project is a tool for the development of a national cyberinfrastructure, and the cooperative relationships forged through this effort provide a framework for future innovation and collaboration.&#8221;</p>
<p>&#8220;The TeraGrid partners have worked extremely hard during the two-year construction phase of this project and are delighted that this initial phase of what will be an unprecedented level of computing and data resources is now online for the nation&#8217;s researchers to use,&#8221; said Fran Berman, SDSC director and co-principal investigator of the TeraGrid project. &#8220;The TeraGrid is one of the foundations of cyberinfrastructure that will provide even more computing resources later this year.&#8221;</p>
<p>The computing systems that entered production this month consist of more than 800 Itanium-family IBM processors running Linux. NCSA maintains a 2.7-teraflop cluster, which was installed in spring 2003, and SDSC has a 1.3-teraflop cluster. The 6-teraflop, 3,000-processor HP AlphaServer SC Terascale Computing System (TCS) at PSC is also a component of the TeraGrid infrastructure.</p>
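<p>One plausible way the capability figures quoted in this article add up (an inference from the text, not an official breakdown):</p>
<pre><code># Python sketch: reconciling the TeraGrid teraflops figures quoted above.
phase_one = 4.5   # distributed computing power in production now
phase_two = 11    # "more than 11 teraflops" installed in December 2003
psc_tcs = 6       # PSC's AlphaServer SC system, also a TeraGrid component
print(phase_one + phase_two + psc_tcs)   # => 21.5, consistent with "over 20 teraflops"
</code></pre>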
<p>&#8220;The launch of the National Science Foundation&#8217;s TeraGrid project provides scientists and researchers across the nation with access to unprecedented computational power,&#8221; said David Turek, vice president of Deep Computing with IBM. &#8220;Working with the NSF, IBM is committed to the continued development of breakthrough Grid technologies that benefit our scientific/technical and commercial customers.&#8221;</p>
<p>Allocations for use of the TeraGrid were awarded by the NSF&#8217;s Partnerships for Advanced Computational Infrastructure (PACI) last October. Among the first wave of researchers to use the TeraGrid are scientists studying the evolution of the universe and the cleanup of contaminated groundwater, simulating seismic events, and analyzing biomolecular dynamics.</p>
<p>Among the allocations awarded was one for Caltech physicist Harvey Newman. Newman leads a team of investigators who are developing codes for CERN&#8217;s Compact Muon Solenoid (CMS) collaboration. The CMS experiment will begin operation at the Large Hadron Collider (LHC) in 2007. The Caltech team&#8217;s planned use of the TeraGrid will be a valuable and possibly critical factor in the success of several planned &#8220;Data Challenges&#8221; for CMS. These Challenges are designed to test the readiness of the global Grid-enabled computing system being developed for the experiment, in collaboration with partner projects such as PPDG, GriPhyN, iVDGL, DataTAG, LCG, and others. The TeraGrid will further a program of developing optimized search strategies for the Higgs particles, thought to be responsible for mass in the universe, for supersymmetry, and for investigating new physics processes beyond the Standard Model of particle physics.</p>
<p>To learn more about the TeraGrid, go to <a href="http://www.teragrid.org/">www.teragrid.org</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=376</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>CACR &amp; JPL team members of CASCADE project</title>
		<link>http://www.cacr.caltech.edu/main/?p=386</link>
		<comments>http://www.cacr.caltech.edu/main/?p=386#comments</comments>
		<pubDate>Wed, 09 Jul 2003 21:48:50 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[CACR]]></category>
		<category><![CDATA[hardware]]></category>
		<category><![CDATA[high-performance computing]]></category>
		<category><![CDATA[research]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=386</guid>
		<description><![CDATA[Cray Inc. Signs $49.9 Million Agreement for Second Phase of DARPA Petaflops Computing Systems Program.
Cray Inc. today announced that it, together with New Technology Endeavors, Inc., has signed an agreement with the Defense Advanced Research Projects Agency (DARPA) to participate in the second phase of DARPA&#8217;s High Productivity Computing Systems Program. The program will provide [...]]]></description>
			<content:encoded><![CDATA[<p align="left">Cray Inc. Signs $49.9 Million Agreement for Second Phase of DARPA Petaflops Computing Systems Program.</p>
<p>Cray Inc. today announced that it, together with New Technology Endeavors, Inc., has signed an agreement with the Defense Advanced Research Projects Agency (DARPA) to participate in the second phase of DARPA&#8217;s High Productivity Computing Systems Program. The program will provide Cray and its university research partners with $49.9 million in additional funding over the next three years to support an advanced research program aimed at developing a commercially available system capable of sustained performance in excess of one petaflops (a million billion calculations per second).</p>
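<p>For scale, a petaflops expressed in more familiar units (a unit conversion only):</p>
<pre><code># Python sketch: one petaflops in elementary units.
petaflops = 10**15          # a million billion calculations per second
print(petaflops / 10**12)   # => 1000.0 teraflops, roughly 50x the completed TeraGrid
</code></pre>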
<p>Ed Upchurch, as Caltech/JPL&#8217;s Principal Investigator, leads the team, and Thomas Sterling is Chief Technologist. Other research partners include Stanford University and the University of Notre Dame, and the Principal Investigator of the project is Cray&#8217;s chief scientist, Burton Smith. &#8220;The DARPA HPCS program in general and the Cray Cascade Petaflops computer project in particular is an important opportunity for Caltech and JPL to directly, significantly, and substantively influence the direction of future supercomputer systems architecture and software,&#8221; Sterling says. &#8220;Through this program, innovative concepts developed by scientists at Caltech and JPL, in collaboration with colleagues at the University of Notre Dame, in the field of advanced PIM architecture will be developed and transferred from academic research to real-world end users through this industrial partnership. The result may be little less than the next revolution in supercomputing.&#8221;</p>
<p>DARPA formed the High Productivity Computing Systems Program to foster development of the next generation of high productivity computing systems for both the national security and industrial user communities. Program goals are for these systems to be more broadly applicable, much easier to program, and more resistant to failure than currently available high performance computing systems.</p>
<p>Five computer-makers, including Cray, were selected for the first phase concept study that was initiated in mid-2002, and all five firms submitted proposals for the second phase. Cray, along with IBM and Sun Microsystems, was selected to continue to the second phase, where further definition and validation of the proposed systems will occur. In mid-2006, DARPA plans to select up to two vendors for the final phase, a full-scale development phase with initial prototype deliveries scheduled for 2010.</p>
<p>More information about the Cascade project can be found at <a href="http://www.cray.com/cascade/">http://www.cray.com/cascade/</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=386</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>