<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Caltech Center for Advanced Computing Research &#187; LHC</title>
	<atom:link href="http://www.cacr.caltech.edu/main/?feed=rss2&#038;tag=lhc" rel="self" type="application/rss+xml" />
	<link>http://www.cacr.caltech.edu/main</link>
	<description>...at the forefront of computational science and engineering</description>
	<lastBuildDate>Fri, 03 May 2013 17:16:51 +0000</lastBuildDate>
	<generator>http://wordpress.org/?v=2.8.4</generator>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
			<item>
		<title>First Beam &#8211; LHC back online</title>
		<link>http://www.cacr.caltech.edu/main/?p=756</link>
		<comments>http://www.cacr.caltech.edu/main/?p=756#comments</comments>
		<pubDate>Tue, 24 Nov 2009 20:25:03 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[high-speed data transfer]]></category>
		<category><![CDATA[LHC]]></category>
		<category><![CDATA[physics]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=756</guid>
		<description><![CDATA[First beam circulated in the world’s most powerful particle accelerator, the Large Hadron Collider (LHC), at CERN on 20 November 2009 – a clockwise circulating beam was established at ten o&#8217;clock that evening, followed by a circulating beam in the other direction a few hours later. When the proton beams are made to collide at the [...]]]></description>
			<content:encoded><![CDATA[<div id="attachment_757" class="wp-caption alignleft" style="width: 310px"><img class="size-medium wp-image-757" title="CollisionEvent" src="http://www.cacr.caltech.edu/main/wp-content/uploads/2009/11/CollisionEvent-300x215.png" alt="One of the first proton-proton collisions seen in the CMS Detector, displayed using the collaboration's software tool &quot;Fireworks&quot;" width="300" height="215" /><p class="wp-caption-text">One of the first proton-proton collisions seen in the CMS Detector, displayed using the collaboration&#39;s software tool &quot;Fireworks&quot;</p></div>
<p>First beam circulated in the world’s most powerful particle accelerator, the Large Hadron Collider (LHC), at CERN on 20 November 2009 – a clockwise circulating beam was established at ten o&#8217;clock that evening, followed by a circulating beam in the other direction a few hours later. When the proton beams are made to collide at the centers of each of the four LHC experiments, the electronic data captured from the detectors will flow at rates ranging from a few hundred MBytes/sec to over one GByte/sec.</p>
<p>Global transport and analysis of this imminent stream of physics data is one of the major computing and networking challenges facing particle physics experiments. Leading edge explorations such as these require advances in all system components, from detectors to remote data analysis. CACR research staff have been involved with demonstrating technologies that reliably deliver over 100 Gb/s sustained from worldwide sources to a single analysis point. CACR also hosts a major &#8220;Tier2&#8221; computing center, which is dedicated to receiving LHC datasets over twin 10Gbps networks from CERN, and running applications that analyze the events they contain.</p>
<ul>
<li>More information on LHC First Physics: <a href="http://press.web.cern.ch/press/lhc-first-physics/">http://press.web.cern.ch/press/lhc-first-physics/</a></li>
<li>More information on High Energy Physics projects at CACR: <a href="http://www.cacr.caltech.edu/main/?page_id=133">http://www.cacr.caltech.edu/main/?page_id=133</a></li>
<li>More CACR news items tagged &#8220;LHC&#8221;: <a href="http://www.cacr.caltech.edu/main/?tag=lhc">http://www.cacr.caltech.edu/main/?tag=lhc</a></li>
</ul>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=756</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>High Energy Physics Team Sets New Data-Transfer World Records</title>
		<link>http://www.cacr.caltech.edu/main/?p=207</link>
		<comments>http://www.cacr.caltech.edu/main/?p=207#comments</comments>
		<pubDate>Mon, 08 Dec 2008 23:32:48 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[hep]]></category>
		<category><![CDATA[high-speed data transfer]]></category>
		<category><![CDATA[LHC]]></category>
		<category><![CDATA[physics]]></category>
		<category><![CDATA[SC]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=207</guid>
		<description><![CDATA[Building on seven years of record-breaking developments, an international team of physicists, computer scientists, and network engineers led by the California Institute of Technology (Caltech)&#8211;with partners from Michigan, Florida, Tennessee, Fermilab, Brookhaven, CERN, Brazil, Pakistan, Korea, and Estonia&#8211;set new records for sustained data transfer among storage systems during the SuperComputing 2008 (SC08) conference recently held [...]]]></description>
			<content:encoded><![CDATA[<p>Building on seven years of record-breaking developments, an international team of physicists, computer scientists, and network engineers led by the California Institute of Technology (Caltech)&#8211;with partners from Michigan, Florida, Tennessee, Fermilab, Brookhaven, CERN, Brazil, Pakistan, Korea, and Estonia&#8211;set new records for sustained data transfer among storage systems during the SuperComputing 2008 (SC08) conference recently held in Austin, Texas.</p>
<p><a href="http://www.cacr.caltech.edu/main/wp-content/uploads/2008/12/bwc2008.png"><img class="alignleft size-medium wp-image-208" title="bwc2008" src="http://www.cacr.caltech.edu/main/wp-content/uploads/2008/12/bwc2008-300x209.png" alt="" width="300" height="209" /></a>Caltech&#8217;s exhibit at SC08, by CACR and the High Energy Physics (HEP) group, demonstrated new applications and systems for globally distributed data analysis for the Large Hadron Collider (LHC) at CERN, along with Caltech&#8217;s global monitoring system <a href="http://monalisa.caltech.edu">MonALISA</a> and its collaboration system <a href="http://evo.caltech.edu">EVO</a> (Enabling Virtual Organizations), together with near real-time simulations of <a href="http://shakemovie.caltech.edu">earthquakes in the Southern California region</a>, experiences in time-domain astronomy with <a href="http://voeventnet.caltech.edu">VOEventNet</a> and Google Sky, and recent results in multiphysics multiscale modeling with the <a href="http://psaap.caltech.edu">PSAAP</a> project.</p>
<p>A highlight of the exhibit was the HEP team&#8217;s record-breaking demonstration of storage-to-storage data transfers over wide area networks from a single rack of servers on the exhibit floor. The team&#8217;s demonstration of &#8220;High Speed LHC Data Gathering, Distribution and Analysis Using Next Generation Networks&#8221; achieved a bidirectional peak throughput of 114 gigabits per second (Gbps) and a sustained data flow of more than 110 Gbps among clusters of servers on the show floor and at Caltech, Michigan, CERN (Geneva), Fermilab (Batavia), Brazil (Rio de Janeiro, Sao Paulo), Korea (Daegu), Estonia, and locations in the US LHCNet network in Chicago, New York, Geneva, and Amsterdam.</p>
<p>The image shows a sample of the results obtained at the Caltech booth, as monitored by MonALISA: data flowing in and out of the servers at the booth. The feature in the middle of the graph is the result of briefly losing the local session at SC08 driving some of the flows.</p>
<p>Read more in the <a href="http://mr.caltech.edu/media/Press_Releases/PR13216.html">Caltech Press Release</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=207</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>First beam in the LHC &#8211; Accelerating Science</title>
		<link>http://www.cacr.caltech.edu/main/?p=14</link>
		<comments>http://www.cacr.caltech.edu/main/?p=14#comments</comments>
		<pubDate>Wed, 24 Sep 2008 18:13:23 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[LHC]]></category>
		<category><![CDATA[physics]]></category>
		<category><![CDATA[research]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=14</guid>
		<description><![CDATA[

The first beam in the Large Hadron Collider at CERN was successfully steered around the full 27 kilometers of the world&#8217;s most powerful particle accelerator at 10h28 on September 10. This historic event marks a key moment in the transition from over two decades of preparation to a new era of scientific discovery. (Read the full [...]]]></description>
			<content:encoded><![CDATA[<p><a href="http://www.cacr.caltech.edu/main/wp-content/uploads/2009/01/lhc.jpg"><img class="size-medium wp-image-286 alignleft" title="lhc" src="http://www.cacr.caltech.edu/main/wp-content/uploads/2009/01/lhc.jpg" alt="" width="200" height="200" /></a></p>
<p>The first beam in the Large Hadron Collider at CERN was successfully steered around the full 27 kilometers of the world&#8217;s most powerful particle accelerator at 10h28 on September 10. This historic event marks a key moment in the transition from over two decades of preparation to a new era of scientific discovery. (Read the full story on the <a href="http://today.caltech.edu/today/story-display.tcl?story_id=31242">Caltech website</a>)</p>
<p>The LHC experiments are using a globally distributed &#8220;Tiered&#8221; grid of computational resources to process the proton collision data. This multi-Tier design was proposed and prototyped by Caltech in the late 90s, and is now universally adopted. The prototype cluster in the CACR machine room has been continually expanded and enhanced since then, and it is now one of the major Tier2 centers in the world, with over 1M SPECInt2k of compute power, several hundred Terabytes of storage space, and multiple 10 Gigabit network connections to other LHC sites in the US, and to CERN. First data from the initial LHC tests has already arrived for processing and analysis on the Caltech Tier2.</p>
<p><strong>Local Press, featuring commentary and an interview with CACR Principal Computational Scientist Julian Bunn:</strong></p>
<ul>
<li>Audio: <a href="http://www.publicradio.org/tools/media/player/kpcc/news/shows/airtalk/2008/09/20080910_airtalk1?start=00:15:01&amp;end=00:26:01">AirTalk</a> (KPCC)</li>
<li>Video: <a href="http://abclocal.go.com/kabc/story?section=news/world_news&amp;id=6381728">ABC News</a> (KABC)</li>
</ul>
<p><strong>For further information on the Large Hadron Collider, see the <a href="http://public.web.cern.ch/public/en/LHC/LHC-en.html">CERN website</a>.</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=14</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="http://www.publicradio.org/tools/media/player/kpcc/news/shows/airtalk/2008/09/20080910_airtalk1?start=00:15:01&amp;end=00:26:01" length="0" type="audio/x-pn-realaudio" />
		</item>
		<item>
		<title>High-Speed Data Transfer System Garners Outreach Award</title>
		<link>http://www.cacr.caltech.edu/main/?p=5</link>
		<comments>http://www.cacr.caltech.edu/main/?p=5#comments</comments>
		<pubDate>Sat, 08 Mar 2008 17:56:27 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[CENIC]]></category>
		<category><![CDATA[high-speed data transfer]]></category>
		<category><![CDATA[LHC]]></category>
		<category><![CDATA[UltraLight]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=5</guid>
		<description><![CDATA[The Corporation for Education Network Initiatives in California (CENIC) has rewarded researchers at the California Institute of Technology for better connecting physicists worldwide. Lead project scientist Harvey Newman, professor of physics at Caltech, Julian Bunn of the Caltech Center for Advanced Computing Research, and their international team of researchers will receive a trophy for Innovations [...]]]></description>
			<content:encoded><![CDATA[<p>The Corporation for Education Network Initiatives in California (CENIC) has rewarded researchers at the California Institute of Technology for better connecting physicists worldwide. Lead project scientist Harvey Newman, professor of physics at Caltech, Julian Bunn of the Caltech Center for Advanced Computing Research, and their international team of researchers will receive a trophy for Innovations in Networking at a ceremony in Oakland, California, on March 11.</p>
<p>Based on exciting recent developments, the Caltech award is for the project called <a href="http://www.ultralight.org/">UltraLight</a>, Bunn says. UltraLight was developed in 2004 in large part to support the decades of research that will emerge from the Large Hadron Collider (LHC) at CERN in Geneva, Switzerland. The project provides advanced global systems and networks, and this summer will start transferring data as the LHC becomes operational.</p>
<p>UltraLight exhibited its capabilities in a showroom demonstration for CENIC during a supercomputing conference in November 2007, sustaining disk-to-disk data transfers of up to 88 gigabits per second (Gbps) between Caltech and Reno, Nevada, for more than a day. But data flows from the LHC experiments will mark the first time that UltraLight struts its stuff for scientists hungry for data.</p>
<p>The CENIC Innovations in Networking awards are split into four categories, and this year for the first time CENIC declared a tie in Experimental/Developmental Applications between UltraLight and another contender, CineGrid, which facilitates the exchange of digital media over a network. Bunn will accept the trophy and present the group&#8217;s project at the CENIC 2008: Lightpath to the Stars conference in Oakland on Tuesday, March 11.</p>
<p><strong>&gt; Read more at the <a href="http://mr.caltech.edu/media/Press_Releases/PR13113.html">Caltech Press Release</a>.</strong></p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=5</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Physicists Set New Record for Network Data Transfer at CACR/Caltech/CERN joint exhibit at SC06</title>
		<link>http://www.cacr.caltech.edu/main/?p=321</link>
		<comments>http://www.cacr.caltech.edu/main/?p=321#comments</comments>
		<pubDate>Wed, 13 Dec 2006 20:50:49 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[hep]]></category>
		<category><![CDATA[LHC]]></category>
		<category><![CDATA[physics]]></category>
		<category><![CDATA[SC]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=321</guid>
		<description><![CDATA[Click here for a full-sized version of the thumbnail above. Image shows a MonALISA plot of the aggregated network traffic to the Caltech booth, during and after the Bandwidth Challenge. (The initial blue region at the left of the graph is the BWC entry.)
An international team of physicists, computer scientists, and network engineers led by [...]]]></description>
			<content:encoded><![CDATA[<p><span><a href="http://www.cacr.caltech.edu/main/wp-content/uploads/2009/01/sc2006_rezults_thumb.gif"><img class="size-medium wp-image-323 aligncenter" title="sc2006_rezults_thumb" src="http://www.cacr.caltech.edu/main/wp-content/uploads/2009/01/sc2006_rezults_thumb-300x183.gif" alt="" width="300" height="183" /></a><a href="http://www.cacr.caltech.edu/main/wp-content/uploads/2009/01/sc2006_rezults_2.gif">C</a><a href="http://www.cacr.caltech.edu/main/wp-content/uploads/2009/01/sc2006_rezults_2.gif">lick here</a> for a full-sized version of the thumbnail above. Image shows a MonALISA plot of the aggregated network traffic to the Caltech booth, during and after the Bandwidth Challenge. (The initial blue region at the left of the graph is the BWC entry.)</span></p>
<p style="text-align: justify;">An international team of physicists, computer scientists, and network engineers led by the California Institute of Technology, CERN, and the University of Michigan with partners at the University of Florida and Vanderbilt, as well as participants from Brazil and Korea, joined forces to set new records for sustained data transfer between storage systems during the SC06 Bandwidth Challenge.</p>
<p style="text-align: justify;">The high-energy physics team&#8217;s demonstration of &#8220;High Speed Data Gathering, Distribution and Analysis for Physics Discoveries at the Large Hadron Collider&#8221; achieved a peak throughput of 17.77 gigabits per second (Gbps) between clusters of servers on the show floor and at Caltech. Following the rules set for the SC06 Bandwidth Challenge, the team used a single 10-Gbps link provided by <a href="http://www.nlr.net/">National Lambda Rail</a> that carried data in both directions. Sustained throughput throughout the night prior to the bandwidth challenge exceeded 16 Gbps (or two gigabytes per second) using just 10 pairs of small servers sending data at 9 Gbps to Caltech from Tampa, and eight pairs of servers sending 7 Gbps of data in the reverse direction. (<a href="http://pr.caltech.edu/media/Press_Releases/PR12933.html">Read more in the Caltech Press Release</a>)</p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=321</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>SC2004: World Network Speed Record Quadrupled</title>
		<link>http://www.cacr.caltech.edu/main/?p=362</link>
		<comments>http://www.cacr.caltech.edu/main/?p=362#comments</comments>
		<pubDate>Wed, 24 Nov 2004 21:27:57 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[hep]]></category>
		<category><![CDATA[high-speed data transfer]]></category>
		<category><![CDATA[LHC]]></category>
		<category><![CDATA[SC]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=362</guid>
		<description><![CDATA[Caltech, SLAC, Fermilab, CERN, Florida and Partners in the UK, Brazil and Korea Set 101 Gigabit Per Second Mark During the SuperComputing 2004 Bandwidth Challenge.
PITTSBURGH, Pa. &#8212; For the second consecutive year, the High Energy Physics team of physicists, computer scientists and network engineers led by the California Institute of Technology and their partners at [...]]]></description>
			<content:encoded><![CDATA[<p>Caltech, SLAC, Fermilab, CERN, Florida and Partners in the UK, Brazil and Korea Set 101 Gigabit Per Second Mark During the SuperComputing 2004 Bandwidth Challenge.</p>
<p>PITTSBURGH, Pa. &#8212; For the second consecutive year, the High Energy Physics team of physicists, computer scientists and network engineers led by the California Institute of Technology and their partners at the Stanford Linear Accelerator Center (SLAC), Fermilab, CERN and the University of Florida, as well as international participants from the UK (University of Manchester, UCL and UKLight), Brazil (Rio de Janeiro State University, UERJ, and the State Universities of Sao Paulo, USP and UNESP) and Korea (Kyungpook National University, KISTI) joined forces at the Supercomputing 2004 (SC04) Bandwidth Challenge to capture the Sustained Bandwidth Award. Their demonstration of High Speed TeraByte Transfers for Physics achieved a throughput of 101 gigabits per second (Gbps) to and from the show floor, which exceeds the previous year&#8217;s mark of 23.2 Gbps, set by the same team, by a factor of more than four. The record data transfer speed is equivalent to downloading three full DVD movies per second, or transmitting all of the content of the Library of Congress in 15 minutes. It also has been estimated to be approximately 5% of the total rate of production of new content on Earth during the test. <span id="more-362"></span></p>
<p>The new mark, according to Bandwidth Challenge (BWC) sponsor Dr. Wesley Kaplow, V.P. of Engineering and Operations for Qwest Government Services, exceeded the sum of all the throughput marks submitted in the present and previous years by other BWC entrants. The extraordinary bandwidth achieved was made possible in part through the use of the FAST TCP protocol developed by Professor Steven Low and his Caltech Netlab team. It was achieved through the use of seven 10 Gbps links to Cisco 7600 and 6500 series switch routers provided by Cisco Systems at the Caltech Center for Advanced Computing Research (CACR) booth, and three 10 Gbps links to the SLAC/Fermilab booth. The external network connections included four dedicated wavelengths of National LambdaRail, between the SC2004 show floor in Pittsburgh and Los Angeles (two waves), Chicago, and Jacksonville, as well as three 10 Gbps connections across the SCinet network infrastructure at SC2004 with Qwest-provided wavelengths to the Internet2 Abilene Network (two 10 Gbps links), the TeraGrid (three 10 Gbps links) and ESnet. 10 Gigabit Ethernet (10 GbE) interfaces provided by S2io were used on servers running FAST at the Caltech/CACR booth, and interfaces from Chelsio equipped with transport offload engines (TOE) running standard TCP were used at the SLAC/FNAL booth. During the test, the network links over both the Abilene and National LambdaRail networks were shown to operate successfully up to 99 percent of full capacity.</p>
<p>The Bandwidth Challenge allowed the scientists and engineers involved to preview the globally distributed Grid system that is now being developed in the US and Europe in preparation for the next generation of high energy physics experiments at CERN&#8217;s Large Hadron Collider (LHC), scheduled to begin operation in 2007. Physicists at the LHC will search for the Higgs particles thought to be responsible for mass in the universe, supersymmetry, and other fundamentally new phenomena bearing on the nature of matter and spacetime, in an energy range made accessible by the LHC for the first time.</p>
<p>The largest physics collaborations at the LHC, CMS and ATLAS, each encompass more than 2000 physicists and engineers from 160 universities and laboratories spread around the globe. In order to fully exploit the potential for scientific discoveries, many Petabytes of data will have to be processed, distributed and analyzed. The key to discovery is the analysis phase, where individual physicists and small groups repeatedly access, and sometimes extract and transport Terabyte-scale data samples on demand, in order to optimally select the rare signals of new physics from potentially overwhelming backgrounds from already-understood particle interactions. This data will be drawn from major facilities at CERN in Switzerland, at Fermilab and the Brookhaven lab in the U.S., and at other laboratories and computing centers around the world, where the accumulated stored data will amount to many tens of Petabytes in the early years of LHC operation, rising to the Exabyte range within the coming decade.</p>
<p>Future optical networks, incorporating multiple 10 Gbps links, are the foundation of the Grid system that will drive the scientific discoveries. A hybrid network, integrating both traditional switching and routing of packets and dynamically constructed optical paths to support the largest data flows, is a central part of the near-term future vision that the scientific community has adopted to meet the challenges of data intensive science in many fields. By demonstrating that many 10 Gbps wavelengths can be used efficiently over continental and transoceanic distances (often in both directions simultaneously), the high energy physics team showed that this vision of a worldwide dynamic Grid supporting many Terabyte and larger data transactions is practical.</p>
<p>While the SC2004 100+ Gbps demonstration required a major effort by the teams involved and their sponsors, in partnership with major research and education network organizations in the U.S., Europe, Latin America and Asia Pacific, it is expected that networking on this scale in support of the largest science projects (such as the LHC), will be commonplace within the next three to five years.</p>
<p>The network has been deployed through exceptional support by Cisco Systems, Hewlett Packard, Newisys, S2io, Chelsio, Sun Microsystems and Boston Ltd., as well as the staffs of National LambdaRail, Qwest, the Internet2 Abilene Network, CENIC, ESnet, TeraGrid, AMPATH, RNP and the GIGA project, together with ANSP/FAPESP in Brazil, KAIST in Korea, UKERNA in the UK, and the Starlight international peering point in Chicago. The international connections included the LHCNet OC-192 link between Chicago and CERN at Geneva, the CHEPREO OC-48 link between Abilene (Atlanta), FIU (Miami) and Sao Paulo, as well as an OC-12 link between Rio de Janeiro, Madrid, Geant, and Abilene (New York). The APII-TransPAC links to Korea also were used with good occupancy. The throughputs to and from Latin America and Korea represented a significant step up in scale that the team members hope will be the beginning of a trend towards the widespread use of 10 Gbps-scale network links on DWDM optical networks interlinking different world regions in support of science, by the time the LHC begins operation in 2007. The demonstration, and the developments leading up to it, were made possible through the strong support of the U.S. Department of Energy and the National Science Foundation, in cooperation with the agencies of the international partners.</p>
<p>As part of the demonstration, a distributed analysis of simulated LHC physics data was done using the Grid-enabled Analysis Environment (GAE) developed at Caltech for the LHC and many other major particle physics experiments, as part of the Particle Physics Data Grid (PPDG), GriPhyN/iVDGL and Open Science Grid projects. This involved the transfer of data to CERN, Florida, Fermilab, Caltech, UC San Diego, and Brazil for processing by clusters of computers, and finally aggregating the results back to the show floor to create a dynamic visual display of quantities of interest to the physicists. In another part of the demonstration, file servers at the SLAC/FNAL booth, in London and Manchester also were used for disk to disk transfers from Pittsburgh to the UK. This gave physicists valuable experience in the use of the large distributed datasets and computational resources connected by fast networks, on the scale required at the start of the LHC physics program.</p>
<p>The team used the MonALISA (MONitoring Agents using a Large Integrated Services Architecture) system developed at Caltech to monitor and display the real-time data for all the network links used in the demonstration, as illustrated in the figure. MonALISA (<a href="http://monalisa.caltech.edu/">http://monalisa.caltech.edu</a>) is a highly scalable set of autonomous, self-describing, agent-based subsystems which are able to collaborate and cooperate in performing a wide range of monitoring tasks for networks and Grid systems, as well as for the scientific applications themselves. Detailed results for the network traffic on all the links used are available at <a href="http://boson.cacr.caltech.edu:8888/">http://boson.cacr.caltech.edu:8888/</a>.</p>
<p>The team hopes this new demonstration will encourage scientists and engineers in many sectors of society to develop and plan to deploy a new generation of revolutionary Internet applications. Multi-gigabit/s end-to-end network performance will lead to new models for how research and business is performed. Scientists will be empowered to form &#8220;virtual organizations&#8221; on a planetary scale, sharing in a flexible way their collective computing and data resources. In particular, this is vital for projects on the frontiers of science and engineering, in data intensive fields such as particle physics, astronomy, bioinformatics, global climate modeling, geosciences, fusion, and neutron science.</p>
<p>Harvey Newman, Professor of Physics at Caltech and head of the team, said, &#8220;This is a breakthrough for the development of global networks and Grids, as well as inter-regional cooperation in science projects at the high energy frontier. We demonstrated that multiple links of various bandwidths, up to the 10 Gbps range, can be used effectively over long distances. This is a common theme that will drive many fields of data intensive science, where the network needs are foreseen to rise from tens of Gbps to the Terabit/sec range within the next 5-10 years. In a broader sense, this demonstration paves the way for more flexible, efficient sharing of data and collaborative work by scientists in many countries, which could be a key factor enabling the next round of physics discoveries at the high energy frontier. There are also profound implications for how we could integrate information sharing and on-demand audiovisual collaboration in our daily lives, with a scale and quality previously unimaginable.&#8221;</p>
<p>Les Cottrell, assistant director of SLAC&#8217;s computer services, said: &#8220;The smooth interworking of 10GE interfaces from multiple vendors, the ability to successfully fill 10 Gbit/s paths both on local area networks (LANs) and cross-country and inter-continentally, the ability to transmit greater than 10 Gbit/s from a single host, and the ability of TCP Offload Engines (TOE) to reduce CPU utilization all illustrate the emerging maturity of the 10 Gigabit/second Ethernet market. The current limitations are not in the network but rather in the servers at the ends of the links, and their buses.&#8221;</p>
<p>Further information about the demonstration may be found at:<br />
<a href="http://ultralight.caltech.edu/sc2004">http://ultralight.caltech.edu/sc2004</a> and <a href="http://www-iepm.slac.stanford.edu/monitoring/bulk/sc2004/hiperf.html">http://www-iepm.slac.stanford.edu/monitoring/bulk/sc2004/hiperf.html</a></p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=362</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>New Internet Land Speed Record</title>
		<link>http://www.cacr.caltech.edu/main/?p=369</link>
		<comments>http://www.cacr.caltech.edu/main/?p=369#comments</comments>
		<pubDate>Wed, 01 Sep 2004 21:39:43 +0000</pubDate>
		<dc:creator>cacrweb</dc:creator>
				<category><![CDATA[News]]></category>
		<category><![CDATA[CACR]]></category>
		<category><![CDATA[high-speed data transfer]]></category>
		<category><![CDATA[LHC]]></category>

		<guid isPermaLink="false">http://www.cacr.caltech.edu/main/?p=369</guid>
		<description><![CDATA[Caltech issued the following Press Release describing a new Internet2 Land Speed record. This record was set by a collaboration that includes CACR and employs 10 Gbit networking gear located in CACR&#8217;s computational facility.
Scientists at the California Institute of Technology (Caltech) and the European Organization for Nuclear Research (CERN), along with colleagues at AMD, Cisco, [...]]]></description>
			<content:encoded><![CDATA[<p>Caltech issued the following Press Release describing a new Internet2 Land Speed record. This record was set by a collaboration that includes CACR and employs 10 Gbit networking gear located in CACR&#8217;s computational facility.</p>
<p>Scientists at the California Institute of Technology (Caltech) and the European Organization for Nuclear Research (CERN), along with colleagues at AMD, Cisco, Microsoft Research, Newisys, and S2io have set a new Internet2 land-speed record. The team transferred 859 gigabytes of data in less than 17 minutes at a rate of 6.63 gigabits per second between the CERN facility in Geneva, Switzerland, and Caltech in Pasadena, California, a distance of more than 15,766 kilometers. The speed is equivalent to transferring a full-length DVD movie in just four seconds. <span id="more-369"></span></p>
<p>The technology used in setting this record included S2io&#8217;s Xframe 10 GbE server adapter, Cisco 7600 Series Routers, Newisys 4300 servers utilizing AMD Opteron processors, Itanium servers, and the 64-bit version of Windows Server 2003.</p>
<p>The performance is also remarkable because it is the first record to break the 100 petabit-meter-per-second mark. The Internet2 Land Speed Record is scored as the product of transfer rate and distance; one petabit is 1,000,000,000,000,000 bits.</p>
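<p>As a rough check (a sketch added here, not part of the original release), the quoted rate and distance do place this run past the 100 petabit-meter-per-second mark:</p>

```python
# Back-of-the-envelope check of the record figure, using only
# the numbers quoted in the release above.
rate_bits_per_s = 6.63e9        # 6.63 gigabits per second
distance_m = 15_766 * 1_000     # 15,766 km expressed in meters
petabit = 1e15                  # one petabit = 10^15 bits

# The Land Speed Record metric is throughput multiplied by distance.
record = rate_bits_per_s * distance_m / petabit
print(f"{record:.1f} petabit-meters per second")  # roughly 104.5
```

At about 104.5 petabit-meters per second, the run clears the 100 mark with a few percent to spare.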
<p>This latest record by Caltech and CERN is a further step in an ongoing research-and-development program to create high-speed global networks as the foundation of next-generation data-intensive grids.</p>
<p>Multi-gigabit-per-second IPv4 and IPv6 end-to-end network performance will lead to new research and business models. People will be able to form &#8220;virtual organizations&#8221; of planetary scale, sharing in a flexible way their collective computing and data resources. In particular, this is vital for projects on the frontiers of science and engineering, projects such as particle physics, astronomy, bioinformatics, global climate modeling, and seismology.</p>
<p>Harvey Newman, professor of physics at Caltech, said, &#8220;This is a major milestone towards our dynamic vision of globally distributed analysis in data-intensive, next-generation high-energy physics (HEP) experiments. Terabyte-scale data transfers on demand, by hundreds of small groups and thousands of scientists and students spread around the world, is a basic element of this vision; one that our recent records show is realistic.&#8221; Olivier Martin, head of external networking at CERN and manager of the DataTAG project, said, &#8220;As of 2007, when the Large Hadron Collider, currently being built at CERN, is switched on, this huge facility will produce some 15 petabytes of data a year, which will be stored and analyzed on a global grid of computer centers. This new record is a major step on the way to providing the sort of networking solutions that can deal with this much data.&#8221;</p>
<p>The team used the optical networking capabilities of LHCnet, DataTAG, and StarLight, and gratefully acknowledges support from the DataTAG project sponsored by the European Commission (EU Grant IST-2001-32459), the DOE Office of Science, High Energy and Nuclear Physics Division (DOE Grants DE-FG03-92-ER40701 and DE-FC02-01ER25459), and the National Science Foundation (Grants ANI-9730202, ANI-0230967, and PHY-0122557).</p>
<p><em>About Caltech: </em></p>
<p>With an outstanding faculty, including three Nobel laureates, and such off-campus facilities as Palomar Observatory and the W. M. Keck Observatory, the California Institute of Technology is one of the world&#8217;s major research centers. The Institute also conducts instruction in science and engineering for a student body of approximately 900 undergraduates and 1,000 graduate students who maintain a high level of scholarship and intellectual achievement. Caltech&#8217;s 124-acre campus is situated in Pasadena, California, a city of 135,000 at the foot of the San Gabriel Mountains, about 10 miles northeast of the Los Angeles Civic Center. Caltech is an independent, privately supported university. More information is available at <a href="http://www.caltech.edu/">http://www.caltech.edu</a>.</p>
<p><em>About CERN: </em></p>
<p>CERN, the European Organization for Nuclear Research, has its headquarters in Geneva, Switzerland. At present, its member states are Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, the Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland, and the United Kingdom. Israel, Japan, the Russian Federation, the United States of America, Turkey, the European Commission, and UNESCO have observer status. For more information, see <a href="http://www.cern.ch/">http://www.cern.ch</a>.</p>
<p><em>About the European Union DataTAG project: </em></p>
<p>DataTAG is a project co-funded by the European Union, the U.S. Department of Energy, and the National Science Foundation. It is led by CERN together with four other partners. The project brings together the following leading European research agencies: Italy&#8217;s Istituto Nazionale di Fisica Nucleare (INFN), France&#8217;s Institut National de Recherche en Informatique et en Automatique (INRIA), the UK&#8217;s Particle Physics and Astronomy Research Council (PPARC), and Holland&#8217;s University of Amsterdam (UvA). The DataTAG project is very closely associated with the European Union DataGrid project, the largest grid project in Europe, also led by CERN. For more information, see <a href="http://www.datatag.org/">http://www.datatag.org</a>.</p>
<p><em>Contact: Robert Tindol (626) 395-3631 <a href="mailto:tindol@caltech.edu">tindol@caltech.edu </a></em></p>
]]></content:encoded>
			<wfw:commentRss>http://www.cacr.caltech.edu/main/?feed=rss2&amp;p=369</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>