Monday, October 27, 2014

ESnet: The 100-gigabit shadow internet that only the US government has access to  ~ gee, I wonder how far ahead of us the deep, deep, DEEP black world is???  Ya know, the one we the people... paid for!

The Doors (The Crystal Ship)


this video is not new but ah, I like it ....
Accelerating Internet

One day, as I surfed the web on my laptop and lamented how long it takes a YouTube video to load, I found myself wondering if employees of the US government — DoD researchers, DoE scientists, CIA spies — are also beholden to the same congestion and shoddy peering that affects everyone else on the internet. Surely, as hundreds of scientists at Fermilab near Chicago wait for petabytes of raw data to arrive from the Large Hadron Collider in Europe, they don’t suffer interminable connection drops and inexplicable lag. And, as it turns out, they don’t: the US government and its national laboratories all have exclusive access to ESnet — a shadow internet that can sustain 100-gigabit-per-second transfers between any of the major Department of Energy labs. And today, the DoE announced that the 100-gigabit ESnet will be extended across the Atlantic to our Old World comrades, who occasionally manage to dazzle us with their scientific endeavors.
ESnet, or the Energy Sciences Network to give its full name, has existed in some form or another since 1986. Throughout the history of telecommunications, networking, and the internet, it hasn’t been unusual for non-profits and government agencies to set up their own networks for their own specific needs — and indeed, the internet itself started as the ARPAnet, a defense-and-research-oriented packet-switched network that offered much higher transfer speeds and more utility than the existing circuit-switched telephone networks. ARPAnet eventually became open-access, gaining equal measures of awesomeness and terribleness — which in turn triggered the creation of various high-speed specialized networks that sought to bypass the internet, such as Internet2 (US research and education), JANET (British), GEANT (European), and ESnet.
Read: Why Netflix streaming is getting slower, and probably won’t get better any time soon

ESnet network map
As you can see in the network map above, ESnet spans the US, providing a network of 100-gigabit links between many of the country’s major cities and all of the Department of Energy’s national laboratories (Ames, Argonne, Berkeley, Oak Ridge, Fermi, Brookhaven, etc.). There are also a handful of peering connections to commercial networks (i.e. the internet) and to other research/education networks around the world.
ESnet’s links to Europe are particularly important, as the world’s largest science experiment — CERN’s Large Hadron Collider in Switzerland — produces tens of petabytes (tens of thousands of terabytes) of data every year, and the supercomputers at Brookhaven and Fermi labs in the US are used to process that data. This morning, ESnet said it is deploying four separate links from Boston, New York, and Washington DC to London, Amsterdam, and Geneva, with a total capacity of 340 gigabits per second. The four links will take different paths across the Atlantic, a savvy move that increases redundancy (submarine cables get damaged fairly regularly).
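For a rough sense of scale, here is a quick back-of-the-envelope calculation in Python. The 50 PB/year figure is an illustrative assumption rather than a number from ESnet or CERN, but it shows why hundreds of gigabits of transatlantic capacity isn't overkill: even the yearly average works out to double-digit gigabits per second, and peak demand is far spikier than the average.

# Back-of-the-envelope: average rate needed to move a year's worth of LHC-scale
# data across the Atlantic. The 50 PB/year figure is an illustrative assumption.
PETABYTE_BITS = 8 * 10**15        # 1 PB = 10^15 bytes = 8 * 10^15 bits
SECONDS_PER_YEAR = 365 * 24 * 3600

data_per_year_pb = 50             # assumed annual data volume, in petabytes
avg_gbps = data_per_year_pb * PETABYTE_BITS / SECONDS_PER_YEAR / 10**9

print(f"Average sustained rate: {avg_gbps:.1f} Gbps")   # roughly 12.7 Gbps
# Bursts, retransfers, and multiple experiments sharing the links push peak
# demand well above this average, hence 340 Gbps of total capacity.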
A map of the world’s submarine cables
Read: The secret world of submarine cables
While 100-gigabit fiber-optic links are fairly old hat by this point (commercial 100 GbE switches have been around since 2010), ESnet is unusual in that its users can actually obtain end-to-end transfer speeds close to the theoretical maximum. It’s one thing to push hundreds of gigabits or even terabits of data per second over a single stretch of optical fiber, but a much, much more difficult proposition to create a stable 100-gigabit connection across the breadth of the US, traversing thousands of miles and a dozen routers. Back in November last year, ESnet managed a solid disk-to-disk transfer speed of 91 gigabits per second from Denver to Maryland. That’s about 11 gigabytes per second — or 11 movies, if you prefer — copied from one massive high-speed disk cluster to another, over a distance of around 1,700 miles (2,700 km). As far as we’re aware, this is still the fastest long-distance connection ever created.
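Part of what makes sustained coast-to-coast throughput so hard is the bandwidth-delay product: at these speeds and distances a sender has to keep hundreds of megabytes of unacknowledged data in flight, and a single dropped packet can stall an ordinary TCP connection. Here is a minimal sketch in Python, assuming a 40 ms round-trip time for a roughly 1,700-mile path (the RTT is my assumption, not a figure reported by ESnet):

# Bandwidth-delay product: how much data must be "in flight" to keep a fast,
# long-distance link full. The 40 ms round-trip time is an assumed value.
link_gbps = 91            # the reported disk-to-disk transfer rate
rtt_seconds = 0.040       # assumed round-trip time, Denver to Maryland

throughput_gbytes = link_gbps / 8                      # ~11.4 GB/s
bdp_bytes = (link_gbps * 10**9 / 8) * rtt_seconds      # bytes in flight

print(f"Throughput: {throughput_gbytes:.1f} GB/s")
print(f"Bandwidth-delay product: {bdp_bytes / 2**20:.0f} MiB")   # ~434 MiB
# Ordinary TCP stacks with small buffers can't keep that much data in flight,
# which is why these transfers rely on carefully tuned hosts and clean paths.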
There’s no word on whether the entirety of ESnet is now enjoying 100-gigabit connections from one end of the country to the other, but it’s probably a work in progress for the DoE. Remember, having the physical fiber-optic links and routers is just one part of the equation — you also need a storage solution on each end of the connection that’s capable of 100Gbps I/O, which isn’t cheap and probably isn’t even necessary for most national labs, unless they’re working on something big like the LHC.
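To put a rough number on the storage side of that equation, 100Gbps works out to about 12.5 gigabytes per second of sustained disk I/O. The per-drive throughput figures below are assumptions for illustration, not specs from any DoE lab, but they show why such a cluster isn't cheap:

# Rough sizing of a disk cluster that can feed a 100Gbps link.
# Per-drive throughput figures are illustrative assumptions.
target_gbps = 100
target_gb_per_s = target_gbps / 8          # 12.5 GB/s of sustained I/O

hdd_mb_per_s = 150    # assumed sequential throughput of one spinning disk
ssd_mb_per_s = 500    # assumed sequential throughput of one SATA SSD

print(f"Spinning disks needed: {target_gb_per_s * 1000 / hdd_mb_per_s:.0f}")   # ~83
print(f"SATA SSDs needed:      {target_gb_per_s * 1000 / ssd_mb_per_s:.0f}")   # ~25
# Real deployments need extra headroom for RAID parity, filesystem overhead,
# and workloads that aren't perfectly sequential.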
Big experiments, like the LHC’s CMS detector, create petabytes of data per year that need to be sent thousands of miles from Geneva to supercomputers in the US for analysis

Moving forward, I’m sure ESnet’s 100 gigabits per second won’t be bleeding edge for much longer. Most of the world’s large research and education networks — such as the UK’s JANET and Europe’s GEANT — have had 100-gigabit backbones for a few years now. The IEEE is currently working on the next high-speed networking standard — somewhere between 400Gbps and 1,000Gbps (1Tbps) — which should be ready by 2017.
Delivering 100-gigabit speeds over the last mile of plain ol’ copper wires is a slightly more difficult proposition.
Finally, while you might be impressed by the speed of ESnet and the other networks that make up the shadow internet, you’re probably wondering when your internet — the internet — will see anything approaching these kinds of speeds. As of 2014, most of the internet is still made up of 1, 10, and 40Gbps links. So far, despite protestations from the likes of Verizon and other US ISPs, there’s still plenty of headroom in the data centers and peering exchange points that make up the core backbone links of the internet. With the amount of bandwidth available across a single pair of optic fibers, and the relative simplicity of upgrading a few core routers, it won’t be hard to upgrade the internet backbone to 100 gigabits, and then later to 400Gbps and beyond.
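To see where that headroom comes from, consider dense wavelength-division multiplexing (DWDM): a single fiber pair carries many independent wavelengths, each capable of its own 100-gigabit signal. The channel count below is a typical, assumed value rather than a figure from any particular carrier:

# Rough capacity of one long-haul fiber pair using DWDM.
# Channel count and per-channel rate are assumed, typical values.
channels = 88             # assumed number of DWDM wavelengths in the C-band
gbps_per_channel = 100    # one 100-gigabit signal per wavelength

total_tbps = channels * gbps_per_channel / 1000
print(f"Capacity of one fiber pair: {total_tbps:.1f} Tbps")   # 8.8 Tbps
# Upgrading a backbone route is largely a matter of lighting more wavelengths
# and swapping router line cards, not laying new fiber.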
The real difficulty of bringing high-speed internet access to the consumer — to your home, your office, your smartphone — is the last mile. It’s one thing to connect two filing-cabinet-sized routers with a 100-mile stretch of fiber, but a completely different problem — on a completely different scale — to somehow connect billions of consumers to that same network. It can theoretically be done by running fiber all the way into your home, as Google is slowly doing with its Fiber project — and perhaps, eventually, with millimeter-wave wireless networks — but we’re still a good few years away from the specific, commercialized technologies that will allow us to cost-effectively bring gigabit-and-faster connections to the clamoring masses.
