<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Caltech Center for Advanced Computing Research &#187; SHC</title>
	<atom:link href="http://www.cacr.caltech.edu/main/?feed=rss2&#038;tag=shc" rel="self" type="application/rss+xml" />
	<link>http://www.cacr.caltech.edu/main</link>
	<description>...at the forefront of computational science and engineering</description>
	<lastBuildDate>Fri, 03 May 2013 17:16:51 +0000</lastBuildDate>
	<generator>http://wordpress.org/?v=2.8.4</generator>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
			<item>
		<title>Cellcenter &amp; SHC users: Compiler Changes</title>
		<link>http://www.cacr.caltech.edu/main/?p=792</link>
		<comments>http://www.cacr.caltech.edu/main/?p=792#comments</comments>
		<pubDate>Thu, 28 Jan 2010 00:24:40 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[Computing Resources]]></category>
		<category><![CDATA[cellcenter]]></category>
		<category><![CDATA[clusters]]></category>
		<category><![CDATA[SHC]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=792</guid>
		<description><![CDATA[Cellcenter and SHC users now have access to the Intel Cluster Toolkit 3.2, including the compiler suite v11.1. We have a two-seat license. &#8220;use intel&#8221; will load this package into your environment. More information about the Intel Cluster Toolkit
Pathscale compilers are being retired Feb 25, 2010. Users are encouraged to rebuild their codes using Intel, [...]]]></description>
			<content:encoded><![CDATA[<p><a href="http://www.cacr.caltech.edu/main/?page_id=183">Cellcenter</a> and <a href="http://www.cacr.caltech.edu/main/?page_id=101">SHC</a> users now have access to the Intel Cluster Toolkit 3.2, including the compiler suite v11.1. We have a two-seat license. &#8220;use intel&#8221; will load this package into your environment. <a href="http://software.intel.com/en-us/intel-cluster-toolkit/">More information about the Intel Cluster Toolkit</a></p>
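<p>A minimal usage sketch (the <code>use intel</code> command is as quoted above; the verification steps are illustrative and their output will vary):</p>
<pre><code># load the Intel Cluster Toolkit, including the v11.1 compilers,
# into the current shell environment
use intel

# verify that the Intel C compiler is now on the PATH
which icc
icc --version
</code></pre>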
<p>Pathscale compilers are being retired Feb 25, 2010. Users are encouraged to rebuild their codes using the Intel, PGI, or GNU compilers. Specifically, these Pathscale-based packages will retire Feb 25 (a minimal migration sketch follows the list):</p>
<ul>
<li>pathscale (alias for pathscale 3.2)</li>
<li>pathscale_[3.1,3.2]</li>
<li>openmpi_1.3.3_pathscale</li>
<li>openmpi_1.3.3_pathscale_3.2</li>
<li>openmpi_1.3.4_pathscale</li>
<li>openmpi_1.4_pathscale</li>
<li>openmpi_pathscale (alias for openmpi_1.3.3_pathscale_3.2)</li>
</ul>
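<p>A minimal migration sketch for a Pathscale-based MPI user; the replacement package name below is hypothetical, so check the cluster&#8217;s current package list for the exact name:</p>
<pre><code># switch from the retiring Pathscale-based OpenMPI to an Intel-based build
use openmpi_1.4_intel   # hypothetical package name

# rebuild from scratch so the binary links against the new compiler and MPI
make clean
make
</code></pre>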
<p>Additional packages that will also retire on Feb 25, superseded by bug-fix updates and newer compatible releases:</p>
<ul>
<li>openmpi_1.3.2 (alias for openmpi_1.3.2_gcc)</li>
<li>openmpi_1.3.2_gcc</li>
<li>openmpi_1.3.4_gcc</li>
<li>openmpi_1.3.4_intel</li>
<li>openmpi_1.3.4_pgi</li>
<li>pgi_8.0</li>
<li>tecplot-2006r2</li>
<li>tecplot-2009r1</li>
<li>totalview_8.6</li>
</ul>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=792</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>SHC Software Stack Upgrade &#8211; Update</title>
		<link>http://www.cacr.caltech.edu/main/?p=677</link>
		<comments>http://www.cacr.caltech.edu/main/?p=677#comments</comments>
		<pubDate>Wed, 09 Sep 2009 17:37:58 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[clusters]]></category>
		<category><![CDATA[hardware]]></category>
		<category><![CDATA[PM]]></category>
		<category><![CDATA[SHC]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=677</guid>
		<description><![CDATA[Important Information for SHC Users
As of Sept 8, 2009, SHC has been transitioned to the new software stack (RHEL+OpenIB). There are currently 115 core4 nodes and 65 core8 nodes in production. For more information, please visit the SHC Getting Started / System Guide.
]]></description>
			<content:encoded><![CDATA[<p><strong>Important Information for SHC Users</strong></p>
<p>As of Sept 8, 2009, SHC has been transitioned to the new software stack (RHEL+OpenIB). There are currently 115 core4 nodes and 65 core8 nodes in production. For more information, please visit the <a href="http://www.cacr.caltech.edu/main/?page_id=108">SHC Getting Started / System Guide</a>.</p>
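<p>(Assuming core4 and core8 denote 4-core and 8-core nodes, this amounts to 115 &#215; 4 + 65 &#215; 8 = 980 compute cores currently in production.)</p>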
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=677</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>SHC Software Stack Upgrade</title>
		<link>http://www.cacr.caltech.edu/main/?p=653</link>
		<comments>http://www.cacr.caltech.edu/main/?p=653#comments</comments>
		<pubDate>Thu, 03 Sep 2009 16:58:54 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[clusters]]></category>
		<category><![CDATA[hardware]]></category>
		<category><![CDATA[PM]]></category>
		<category><![CDATA[SHC]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=653</guid>
		<description><![CDATA[Important Information for SHC Users
Over the next couple of days, more backend nodes from shc-a will be transitioned to shc-[c,new]&#8217;s cluster of backend nodes, running the new software stack. By Sept 4, there will be just 24 shc-a backend nodes; all the rest of the compute nodes will be running the new software stack, seen [...]]]></description>
			<content:encoded><![CDATA[<p><strong>Important Information for SHC Users</strong></p>
<p>Over the next couple of days, more backend nodes from shc-a will be transitioned to shc-[c,new]&#8217;s cluster of backend nodes, running the new software stack. By Sept 4, there will be just 24 shc-a backend nodes; all the rest of the compute nodes will be running the new software stack, seen from shc-[new,c].</p>
<ul>
<li>Please port your codes to the new software environment if you&#8217;ve not already done so!</li>
<li>Please report any porting problems you&#8217;re having; we&#8217;ll help ASAP.</li>
<li>Details on how to rebuild your code for the new SHC environment can be found <a href="http://www.cacr.caltech.edu/main/?page_id=440">here</a>.</li>
<li>Your MPI-based code <em>must</em> be rebuilt for the new and improved SHC software stack; a minimal rebuild sketch follows this list.</li>
</ul>
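<p>A minimal rebuild sketch, assuming a SoftEnv-style <code>use</code> command; the package name and source file below are hypothetical, so follow the linked rebuild guide for the exact environment setup:</p>
<pre><code># on a head node running the new stack, load an MPI built for that stack
use openmpi_1.3.3_gcc   # hypothetical package name; see the rebuild guide

# recompile so the binary links against the new MPI and InfiniBand libraries
mpicc -O2 -o mycode mycode.c   # replace mycode.c with your own source
</code></pre>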
<p>Preventive Maintenance on Sept 8 from 0800 to 1400 will encompass testing the complete transition of SHC compute and head node resources to the upgraded software stack environment. The fully upgraded production SHC cluster configuration will be two head nodes (shc-[a,b]) and 1180 Opteron compute cores (163 dual-CPU/dual-core nodes + 66 dual-CPU/quad-core nodes).</p>
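<p>(The core count is self-consistent: 163 &#215; 2 &#215; 2 + 66 &#215; 2 &#215; 4 = 652 + 528 = 1180 cores.)</p>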
<p>Questions or concerns about the upgrade? Just <a href="http://www.cacr.caltech.edu/main/?page_id=76">let us know</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=653</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>SHC back in production mode</title>
		<link>http://www.cacr.caltech.edu/main/?p=558</link>
		<comments>http://www.cacr.caltech.edu/main/?p=558#comments</comments>
		<pubDate>Wed, 01 Jul 2009 20:16:07 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[PM]]></category>
		<category><![CDATA[SHC]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=558</guid>
		<description><![CDATA[Thanks to SHC users for your patience during Preventive Maintenance on the InfiniBand chassis and associated infrastructure. Any questions, comments, or concerns should be addressed to:

]]></description>
			<content:encoded><![CDATA[<p>Thanks to SHC users for your patience during Preventive Maintenance on the InfiniBand chassis and associated infrastructure. Any questions, comments, or concerns should be addressed to:</p>
<p><a href="../wp-content/uploads/2009/01/shc-email.gif"><img class="alignnone size-medium wp-image-252" title="shc-email" src="../wp-content/uploads/2009/01/shc-email.gif" alt="" width="218" height="20" /></a></p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=558</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>SHC Preventive Maintenance Notice</title>
		<link>http://www.cacr.caltech.edu/main/?p=550</link>
		<comments>http://www.cacr.caltech.edu/main/?p=550#comments</comments>
		<pubDate>Fri, 12 Jun 2009 17:56:44 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[Computing Resources]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[PM]]></category>
		<category><![CDATA[SHC]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=550</guid>
		<description><![CDATA[CACR&#8217;s Shared Heterogeneous Cluster (SHC) will undergo extended Preventive Maintenance in the near future to address issues with the InfiniBand switch &#8211; the fast interconnect fabric for the cluster.
Downtime is scheduled for Monday, June 29, 8 AM through Tuesday, June 30, at noon to deploy a new switch chassis. After this major hardware upgrade/repair, the cluster [...]]]></description>
			<content:encoded><![CDATA[<p>CACR&#8217;s Shared Heterogeneous Cluster (SHC) will undergo extended Preventive Maintenance in the near future to address issues with the InfiniBand switch &#8211; the fast interconnect fabric for the cluster.</p>
<p>Downtime is scheduled for Monday, June 29, 8 AM through Tuesday, June 30, at noon to deploy a new switch chassis. After this major hardware upgrade/repair, the cluster will be much easier to service and will operate with optimal communications.</p>
<p>SHC user questions or concerns are welcome, please contact:</p>
<p><a href="http://www.cacr.caltech.edu/main/wp-content/uploads/2009/01/shc-email.gif"><img class="alignnone size-medium wp-image-252" title="shc-email" src="http://www.cacr.caltech.edu/main/wp-content/uploads/2009/01/shc-email.gif" alt="" width="218" height="20" /></a></p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=550</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>SHC Cluster Expansion</title>
		<link>http://www.cacr.caltech.edu/main/?p=437</link>
		<comments>http://www.cacr.caltech.edu/main/?p=437#comments</comments>
		<pubDate>Thu, 12 Feb 2009 21:30:47 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[clusters]]></category>
		<category><![CDATA[hardware]]></category>
		<category><![CDATA[SHC]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=437</guid>
		<description><![CDATA[CACR&#8217;s 163-node Shared Heterogeneous Cluster (SHC) has recently expanded by an additional 20 nodes. Each of these new nodes contains 16 GB of memory and has two quad-core, 2.5 GHz AMD Opteron processors (model 2380). As with the existing SHC nodes, each of the new nodes is connected via InfiniBand to CACR&#8217;s InfiniBand switch.
The [...]]]></description>
			<content:encoded><![CDATA[<p><a href="http://www.cacr.caltech.edu/main/wp-content/uploads/2009/02/shc022009.jpg"><img class="alignleft size-medium wp-image-438" style="margin: 4px;" title="shc022009" src="http://www.cacr.caltech.edu/main/wp-content/uploads/2009/02/shc022009-244x300.jpg" alt="" width="244" height="300" /></a>CACR&#8217;s 163 node Shared Heterogeneous Cluster (SHC) has recently expanded by an additional 20 nodes. Each of these new nodes contains 16 GB of memory and have two quad-core, 2.5 GHz AMD Opteron Processors (model 2380). As with the existing SHC nodes, each of the new nodes is connected via Infiniband to CACR&#8217;s Infiniband Switch.</p>
<p>The SHC provides computing capabilities specifically configured to meet the needs of applications from Caltech’s PSAAP, Turbulent Mixing, Applied and Computational Mathematics, and Numerical Relativity communities. For more information about the SHC, including information for test users of the new nodes, see <a href="http://www.cacr.caltech.edu/main/?page_id=101">this page</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=437</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Final Report of the TeraVoxel Project</title>
		<link>http://www.cacr.caltech.edu/main/?p=342</link>
		<comments>http://www.cacr.caltech.edu/main/?p=342#comments</comments>
		<pubDate>Wed, 03 May 2006 21:10:48 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[CACR]]></category>
		<category><![CDATA[research]]></category>
		<category><![CDATA[SHC]]></category>
		<category><![CDATA[Teravoxel]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=342</guid>
		<description><![CDATA[Image caption: Visualization of downstream turbulence captured by the KFS camera using laser light scattered from Rhodamine 6G particles injected into the water. Color has been assigned to denote local concentration.
The NSF-sponsored Teravoxel project was motivated by investigations of flow turbulence, and designed to handle both laboratory and simulation data. As part of the project, a [...]]]></description>
			<content:encoded><![CDATA[<p><a href="http://www.cacr.caltech.edu/main/wp-content/uploads/2009/01/teravoxel_image.jpg"><img class="aligncenter size-medium wp-image-343" title="teravoxel_image" src="http://www.cacr.caltech.edu/main/wp-content/uploads/2009/01/teravoxel_image-243x300.jpg" alt="" width="243" height="300" /></a><em><span>Image caption: Visualization of downstream turbulence captured by the KFS camera using laser light scattered from Rhodamine 6G particles injected into the water. Color has been assigned to denote local concentration.</span></em></p>
<p>The NSF-sponsored Teravoxel project was motivated by investigations of flow turbulence, and designed to handle both laboratory and simulation data. As part of the project, a novel laser-illuminated KFS digital imaging system was developed. The 1024&#215;1024 pixel camera has low noise and high dynamic range. It operates at up to 1000 frames per second and generates over 2 GB per second. The data is stored on two local Datawulfs for subsequent transport to CACR for processing. A significant component of the TeraVoxel system is CACR&#8217;s <a href="http://www.cacr.caltech.edu/resources/resource.cfm?ID=9">Shared Heterogeneous Cluster</a> (SHC). The SHC is used to perform theoretical predictions via simulation, and to process the very large data sets produced by the camera hardware.</p>
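<p>(The quoted data rate is consistent with roughly two bytes per pixel: 1024 &#215; 1024 pixels &#215; 1000 frames/s &#215; 2 B &#8776; 2.1 GB/s. The per-pixel depth is inferred from the stated figures rather than taken from a published specification.)</p>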
<p>A six-node volume rendering farm was constructed, with each node equipped with a 1 GB TeraRecon VolumePro 1000 rendering card. Custom software was developed to run interactively with a GTK/GL interface. The farm can geometrically correct and render 1000 frames of a 1024<sup>3</sup> volume in under 30 minutes.</p>
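<p>(That throughput works out to at most 1800 s / 1000 frames = 1.8 s per geometrically corrected and rendered 1024<sup>3</sup> frame across the six nodes.)</p>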
<p>The TeraVoxel project is a collaboration between Caltech&#8217;s <a href="http://galcit.caltech.edu/">Graduate Aeronautical Laboratories (GALCIT)</a>, the <a href="http://csdrm.caltech.edu/">NNSA ASC Center</a>, and <a href="http://www.cacr.caltech.edu/">CACR</a>. The final <a href="http://resolver.caltech.edu/CaltechCACR:2006.101">Teravoxel report</a> can be found in the CACR archive of the Caltech Library System&#8217;s Digital Collection.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=342</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>CACR&#8217;s Shared Heterogeneous Cluster (SHC) Now Online</title>
		<link>http://www.cacr.caltech.edu/main/?p=348</link>
		<comments>http://www.cacr.caltech.edu/main/?p=348#comments</comments>
		<pubDate>Fri, 24 Mar 2006 21:15:59 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[CACR]]></category>
		<category><![CDATA[clusters]]></category>
		<category><![CDATA[hardware]]></category>
		<category><![CDATA[SHC]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=348</guid>
		<description><![CDATA[The nature of financial support for high-end computing resources has evolved given the widespread adoption of Beowulf clusters. Research groups that need computing often obtain funds for clusters as part of their grants. CACR participates in some of these efforts, and supports significant dedicated resources for high-energy physics, astronomy, geophysics, physics-based simulation, and others. Unfortunately, [...]]]></description>
			<content:encoded><![CDATA[<p><a href="http://www.cacr.caltech.edu/main/wp-content/uploads/2009/01/shc.jpg"><img class="alignleft size-medium wp-image-349" title="shc" src="http://www.cacr.caltech.edu/main/wp-content/uploads/2009/01/shc-224x300.jpg" alt="" width="224" height="300" /></a>The nature of financial support for high-end computing resources has evolved given the widespread adoption of Beowulf clusters. Research groups that need computing often obtain funds for clusters as part of their grants. CACR participates in some of these efforts, and supports significant dedicated resources for high-energy physics, astronomy, geophysics, physics-based simulation, and others. Unfortunately, the balkanization of computation under this model has created inefficiencies: the clusters do not take advantage of economies of scale, can be underutilized, and are often poorly administered.</p>
<p>CACR has developed a shared cluster model, and Professors Paul Dimotakis, Dan Meiron, and Kip Thorne have agreed to be pioneer partners in this effort. CACR has purchased a machine optimized for parallel numerical codes that can sustain over 1 trillion floating-point operations per second. It consists of 352 2.2 GHz AMD Opteron cores and 700+ GB of memory, all interconnected by an InfiniBand networking fabric that can move 160+ GB/s between the compute nodes. The cluster is administered by CACR with funds from the partner groups, and each group has an allocation of time on the machine proportionate to its contribution.</p>
<p>By sharing, the groups get better pricing from vendors, professional systems administration by experienced CACR staff, and the ability to use a much larger machine than each group could afford separately. Some of the partners are also supporting efforts at CACR in visualization and code tuning. The shared cluster model is extremely scalable, and CACR is interested in expanding the machine to increase simulation capability and add support for data-intensive science. Please contact CACR&#8217;s Executive Director, Mark Stalzer (stalzer at caltech.edu), for more information.</p>
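<p>(The sustained figure is plausible against peak: assuming two double-precision floating-point operations per core per cycle, typical of Opterons of that generation, peak performance is 352 &#215; 2.2 GHz &#215; 2 &#8776; 1.55 teraflops, comfortably above the 1-teraflop sustained claim.)</p>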
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=348</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>