Monday, February 17, 2014

The Matrix Is Real & Here – US Ignite Is The Biggest Threat To Human Freedom & Will Burn Down America

by Pete Santilli, Investigative Journalist & Host of The Pete Santilli Show
For almost 2 years, my co-host Susannah Cole has been patiently insistent on drawing my attention to U.S. Ignite.  Almost every issue we cover on The Pete Santilli Show is impacted or influenced by U.S. Ignite, but when I began looking into the program, I couldn't wrap my mind around the enormity of the initiative.
At first glance, I understood how U.S. Ignite was tied directly and indirectly to Obamacare in many different respects.  Early in my research, I focused my energy on the detrimental elements of ObamaCare, and quite frankly I considered ObamaCare the top-level parent of the potential hazards of U.S. Ignite.
As I now understand it, Obamacare is merely a “tentacle of the U.S. Ignite octopus”.
When I received news recently regarding the merger between Comcast and Time Warner, my curiosity was piqued once again, and U.S. Ignite immediately came to mind.  From my perspective, I saw U.S. Ignite as the central hub connecting ObamaCare, the Verizon/AT&T telecommunications industry, and the media industrial complex, and that became especially clear when I realized there is no possible way to stop the monopolistic merger of Comcast and Time Warner.  Needless to say, the Comcast-Time Warner merger stopped me in my tracks when I discovered that Barack Obama is golfing buddies with Comcast's CEO.  The deal is done, and it immediately became obvious to me that it is one of the final steps to implementing U.S. Ignite.
On February 14, 2014, I started digging into my stored library of U.S. Ignite resources I hadn't looked at over the past several months.  That isn't to say that Susannah Cole hadn't mentioned U.S. Ignite on many occasions when the topics warranted it.  In almost every instance, we referenced U.S. Ignite when discussing Obamacare, frequently linking it to the "cradle-to-grave" monitoring aspects of Obamacare.  We naively thought the feared death panel was U.S. Ignite's telecomm-tether system, which transmits vital statistics to the ObamaCare database so that, when a senior citizen reaches obsolescence, Obamacare can — in essence — pull the plug on an unproductive 65-year-old "useless feeder".
Finally, after almost 2 years of Susannah Cole's insistence that I fully understand the implications of U.S. Ignite, the light came on.  I asked myself, "Who the heck started this 'U.S. Ignite' thing?" and I began digging.  I quickly began peeling back layer after layer, and I will state this for the record: as of this writing, what I discovered is so shocking, so startling, and so eye-opening that I realized almost everything I have done up to this moment of discovery has prepared me to understand the full implications of U.S. Ignite.  Since a young age, I have acquired extensive experience in computer technology, manufacturing and engineering, telecommunications, military/weaponry, surveillance, and media/marketing, and within the past several years as an investigative journalist, in environmental issues, mind control technology, neuro-linguistic programming, Obamacare, education and energy.  My life experiences have helped me understand what I discovered when I stumbled into the "Genesis", or origination, of the U.S. Ignite initiative.
What I discovered will shock the world, but it's not just the controlling elements of the U.S. Ignite initiative which are startling.  The program is here, in full motion, in its final stages of implementation, irreversible, and has been in development since 1987.  It's 'The Matrix' of the future we've all been fearing and waiting for, but that's not the most shocking revelation. Since 1987-1990, the U.S. Government has been advancing the initiative unopposed and without any consideration whatsoever for its unconstitutionality.
Please allow me to be as succinct as I possibly can in describing U.S. Ignite: it's a new, secondary high-speed internet designed to replace the original internet we know today. The present-day internet is the old, obsolete, slow system we inherited from the military industrial complex in the late '80s.  The U.S. Government knew it was a beast, but in order to accomplish their goal of implementing an all-encompassing technological system of enslavement, they had to come up with a way to pay for the research and development.  The costs back then were considered enormous for what they intended to do, and their intentions are now very transparent as they reach the final stages.
Look at the Central Intelligence Agency and their methods of mind control developed over the past 50+ years.  Take a look at the NSA, their computing power of surveillance, and their total disregard for human rights.  The military industrial complex has now fully metastasized into the stage-4 equivalent of a tumor attached to the entire world, and the prognosis is terminal. The new techno-tronic system of enslavement we live in can be described as the military-media-medical-industrial complex, better known as The Matrix.  Look into the history of U.S. Ignite and you will quickly discover that the U.S. government — which built and will govern The Matrix — gave the initiative a fascist marketing buzzword, but the best way to accurately describe it is to call it every hyphenated industrial complex in existence.
The Genesis of U.S. Ignite
Our GMN investigative journalists and researchers have assembled a very valuable resource for following the timeline or chronology of events which led up to the present day U.S. Ignite initiative.  We call it the U.S. Ignite Link Library, and we will continue to provide links and data on this page as we continue our research.  The information is overwhelming, but we believe it’s important to at least have a centralized resource that every individual can reference.  The bottom line of our discovery is that there is no central source of material or critical opposition to U.S. Ignite.  It’s stunning to discover that The Matrix has been funded, designed and implemented unchecked, unopposed, unconstitutionally, and until now, undetected.
When you dig, dig, dig into the history of the initiative, you ultimately end up in one place in time and space.  Around 1987, the military and the government had already decided to introduce the "internet" to the public for educational and commercial applications. They knew it would spark an information technology revolution, but at the same time, they knew that their top-secret work in areas of mind control, human enslavement, and global military/political dominance would require a much faster system than the old, obsolete hunk-of-junk they were releasing to the public domain.
Please read the CNRI & GigaBit Testbed Initiative report, which is excerpted below.  (Even if it's "boring" reading, we suggest you visit it and at least know where to refer back to when you discover less boring information down the path of your learning.)  This report is at the top of the U.S. Ignite Link Library, and I consider it to be ground zero, or the Genesis of our path toward destruction.
This report summarizes the results of the Gigabit Testbed Initiative, a project involving several dozen participants that ran from 1990 to 1995. The report attempts to put these results into perspective by providing the background, motivation, and current trends impacting the overall work. Detailed descriptions of context and results can be found in the final reports from each of the five testbeds involved in the Initiative [2-6].
The Initiative had two main goals, both of which were premised on the use of network testbeds: (1) to explore technologies and architectures for gigabit networking, and (2) to explore the utility of gigabit networks to the end user. In both cases the focus was on providing a data rate on the order of 1 Gbps to the end-points of a network, i.e., the points of user equipment attachment, and on maximizing the fraction of this rate available to a user application.
A key objective of the Initiative was to carry out this research in a wide-area real-world context. While the technology for user-level high-speed networking capability could be directly achieved by researchers in a laboratory setting circa 1990, extending this context to metropolitan or wide-area network distances at gigabit per second rates was virtually impossible, due both to the absence of wide-area transmission and switching equipment for end-user gigabit rates and to the lack of market motivation to procure and install such equipment by local and long-distance carriers.
To solve this “chicken-and-egg” problem, a collaborative effort involving both industry and the research communities was established by CNRI with funding from government and industry. NSF and DARPA jointly provided research funding for the participating universities and national laboratories, while carriers and commercial research laboratories provided transmission and switching facilities and results from their internally-funded research. Five distinct testbed collaborations were created. These were called Aurora, Blanca, Casa, Nectar, and Vistanet. (A sixth gigabit testbed called MAGIC [7] was funded by DARPA about 18 months later, but was managed as a separate project and is not further described in this report.)
Computer networking dates from the late 1960s, when affordable minicomputer technology enabled the implementation of wide-area packet switching networks. The Arpanet, begun in 1969 as a research project by DARPA, provided a focal point within the U.S. for packet network technology development. In the 1970s, parallel development by DARPA of radio and satellite-based packet networks and TCP/IP internetworking technology resulted in the establishment of the Internet. The subsequent introduction and widespread use of ethernet, token ring and other LAN technologies in the 1980s, coupled with the expansion of the Internet by NSF to a broader user base, led to increasing growth and a transition of the Internet to a self-supporting operational status in the 1990s.
The Gigabit Testbed Initiative was a major effort by approximately forty organizations representing universities, telecommunication carriers, industry and national laboratories, and computer companies to create a set of very high-speed network testbeds and to explore their application to scientific research. This effort, funded by the National Science Foundation (NSF) and the Defense Advanced Research Projects Agency (DARPA), was coordinated and led by the Corporation for National Research Initiatives (CNRI) working closely with each of the many participating organizations and with the U.S. Government. The U.S. Government was also a participating organization insofar as testbeds were established within several Government laboratories to explore the concepts and technologies emerging from the Initiative.
Let me translate the information you just read into what I always refer to as "meat-and-potatoes" language that everyone can understand.  The military built the internet in the '60s, while at the same time they developed some high-tech stuff in the laboratory to lock down the entire planet of useless eaters.  The system became old and obsolete, but building a faster system (gigabit speed) would be too expensive.  There weren't enough slave workers in the USA, nor were there enough cocaine or heroin users to generate tax or C.I.A. drug dollars to support the system they needed.  So, they pawned off the old system to the public in the 1990s.  The dot-com boom was born, we were told it was invented by Al Gore, and everyone said WOW.
Little did the entire population know that we'd all become farm animals on the old internet, providing the energy and innovation to produce the new internet which DARPA and the "NSF" (DARPA's sister or spouse, or both if you believe it originated in Kentucky) had set out to build since the late '80s.
The U.S. government essentially decided to use $20 million to spark "innovation" among corporations that were clamoring over market share and wealth in the dot-com boom.  The government needed to build out The Matrix, which would totally enslave the entire human race by controlling energy/environment, education, communication, surveillance, the human mind, healthcare, manufacturing, media, and homeland security.  What they had to do was compartmentalize each element of their desired total control within its own industry, and initiate the process of innovation and voluntary acceptance by the public.  The US Government set out to create entire economies of scale that would innovate, engineer, market and profit, and ultimately become vital contributors to The Matrix...without the government spending a dime.  The Corporation for National Research Initiatives (CNRI) "worked closely with each of the many participating organizations and with the U.S. Government."
Simply translated: THE MATRIX WAS BUILT BY FASCISM WITH COVERT INTENTIONS.  We farm animals have been clicking away, carrying phones, inputting every thought, and designing new, efficient means of eliminating the jobs of everyone around us.  Better, bigger, faster, smarter...like a bunch of little energetic mice on a conveyor belt.
Who Started This “Initiative”?
Who started this whole thing?  Of the many culprits, the most blame can be directed at the NSF (National Science Foundation).  The NSF is to The Matrix as DARPA is to robots: consider the NSF the computer specialists and DARPA the mechanical engineers.  In my mind, the NSF is even more dangerous than DARPA, and while "the powers that be" have had us focused on and fearful of dark DARPA projects, the NSF has been coordinating the buildout of The Matrix.
The National Science Foundation (NSF) is an independent federal agency created by Congress in 1950 “to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense…” With an annual budget of $7.2 billion (FY 2014), we are the funding source for approximately 21 percent of all federally supported basic research conducted by America’s colleges and universities. In many fields such as mathematics, computer science and the social sciences, NSF is the major source of federal backing.
In its 2011-2016 Strategic Plan, the NSF restates the mission set forth in the NSF Act: "to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense; and for other purposes."
How Did The Old Internet Fuel The Build-Out of The Matrix?
Obviously, the entire world has benefitted from almost every aspect of the internet revolution.  There's no disputing the fact that our lives have improved in many respects.  None of us can really complain about its revolutionary benefits and what it has produced.
What none of us have realized is (a) how intrusive it would become, (b) that our privacy would be obliterated, and (c) that we as humans would become the "product" that corporations profit from.
The innovation and engineering in telecommunications, media, healthcare, etc. has been explosive, and entire economies have been built on improvements and advances in each industry.
Here's the hidden secret: none of us knew what the government's ultimate goal was.  The government knew from the start that the old system was obsolete, especially for what they wanted to use it for.  None of us knew that their idea was to energize corporations to start, build and grow, to "create jobs" that would ultimately eliminate jobs, to use the profits for research & development at the speed of the internet, and to do all of this on the back of the human population.  The advancement in technology and understanding of the human mind would be poured into a centralized system that crept forward with the help of the partnership between government and business.
The parallel "INTERNET 2", a high-speed gigabit infrastructure, would be built as the old internet evolved with the energy of human input. We "farm animals" have been clicking away, telling the world our thoughts, our frappe lattes and our meals, and ultimately making INTERNET 2 much smarter than it ever could have been without our free slave labor.  Heck, Google, Facebook and Twitter even figured out how to read our minds and put advertisements in our faces as we worked for them for free.  We click on the ad, and the advertiser pays them.  The old, obsolete, slow internet has been making them money hand over fist.  All they need now is to do it faster, and to force all of us to make them smarter.
If a human knew he was being used as a mouse on a conveyor belt, providing energy and profit to an evil entity, do you think he would have worked so hard?
If a corporation knew it was being used as a cog in the wheel, turning faster and faster toward enslavement of the human population, do you think it would have been so willing to become a "partner" in any government venture or initiative?
Well, the NSF and the CIA knew the answers to those questions, so they had to maintain the world's best-kept secret since the late '80s.  All of us have been 'virtually' volunteering to build our own internet concentration camp, and we never really have to leave the comforts of our own homes or businesses.
Let’s Fast Forward to June 2012
Barack "Barry Soetoro" Obama (depending on which part of the internet you've been corralled into and brainwashed by) released an Executive Order, and the press release was titled:

We Can’t Wait: President Obama Signs Executive Order to Make Broadband Construction Faster and Cheaper

The first paragraph reads as follows:
Tomorrow, the President will sign an Executive Order to make broadband construction along Federal roadways and properties up to 90 percent cheaper and more efficient. Currently, the procedures for approving broadband infrastructure projects on properties controlled or managed by the Federal Government—including large tracts of land, roadways, and more than 10,000 buildings across the Nation—vary depending on which agency manages the property. The new Executive Order will ensure that agencies charged with managing Federal properties and roads take specific steps to adopt a uniform approach for allowing broadband carriers to build networks on and through those assets and speed the delivery of connectivity to communities, businesses, and schools.
Now, if you hadn't researched or read any of the above history, you would assume from the title and the first paragraph that Obama's neat idea was fresh, hip, new and so absolutely critical that it couldn't wait for Congress, the Supreme Court, or the public, and that he HAD to sign an Executive Order.
Obama was so hurried and excited that he HAD to convene a meeting at the White House. The very next day, the US Ignite Initiative launch took place at the White House (see the video of the entire launch), and Obama released a "Broadband Fact Sheet 06-13-2012" which made it sound like the NSF was just jumping on board — as if they had received an email the day before:
National Science Foundation (NSF) Leverages Investments in Virtual Laboratory to support US Ignite
As the lead Federal agency for US Ignite, NSF will expand its initial 4-year, ~$40 million investment in the Global Environment for Networking Innovations (GENI) project
Since we HAVE read this article and the links attached, we now know that there is evidence the NSF has been working toward this launch since 1987.  The so-called "President" not only could have waited to let everyone know what he was up to; his White House has known since way back in the 1960s what this "exciting new project" was really all about.
It was the perfect scenario, and given the Patriot Act / 9-11 / DHS method of taking over every little town in the USA, we now see how the government operates to get the job done: they buy secrecy and compliance with big bags of cash.  The NSF has spent countless sums of money with corporations that take it willingly.  No hesitation from the public either — heck, everyone has a new iPhone or ObamaPhone (depending on which side of town you live on, and which color you support in the White House).
Unopposed, Unchecked, Unconstitutional
One of the biggest parts of this story is how this was done with trillions of dollars and billions of mice, over several decades...under the radar.  Every branch of government, every corporation and every citizen allowed a select few lunatics in the CIA and NSF to build a multi-trillion-dollar system of bondage and enslavement, volunteering to surrender everything — life, liberty, and the pursuit of happiness — all under the guise of "creating jobs" and "being #1" in the United States of America, home of the brave, land of the free.
Not one politician has come forth to disclose what this system is, and they know better.  Even Ron Paul understands the power of the internet.  Not a peep from him, and he’s been around since this thing was started.
Not one member of the mainstream media has called it out (of course not; you might as well hang a perpetual price tag and receipt on them — they have always been bought and paid for).
Not one member of the so-called "Alternative Media" has called this out, and as a member of the Independent Media, I feel I have an obligation to let my fellow men and women know about it.
Why Worry About US Ignite & GigaBit Fast Internet 2?
These are facts: Google monitors and stores every single search and website you have visited.  Google, Twitter and Facebook all provide their data to a company called Palantir (the name means 'seeing stone'), and Palantir's function is to do predictive analysis based on our human input and interaction with computers.  Based on what we are typing, saying and searching, not only do marketers and companies know what to sell us, the government knows how to manipulate our opinions and thoughts.  At the speed of fiber-optic light, every single spoken word in your home, place of business, car, on the phone, in private, etc. is sent into The Matrix.  If you haven't seen the movie The Matrix, go watch it and remember how the Agents always know where to find Neo.
Almost every high-speed gadget and application developed by these companies is AWESOME! and in high demand.  The innovators are geniuses.  They all have great intentions, and they are very successfully contributing to making our lives "better".  Little do they know, their advancements are being used for the evil purposes US Ignite was initiated for.
How do we know?  Because if their intentions were good, they wouldn't have kept everything I am describing in this article a secret for so many years.  If the NSF's, DARPA's and the CIA's intentions were good, then why would they keep all of this information from the public for decades?
Please allow me to answer:  Because once they’ve turned the switch on and finally tell the public, each of us will be under their absolute control forever.  No freedom of speech.  No freedom of movement.  No freedom of thought.  No freedom of religion.  No right to defend ourselves.  Nothing God ever gave us will be allowed. We will worship our new God…The Matrix….just ask the Matrix if what I am saying is true and you will see that I am 100% right.  Do you have any Energy, Education, Telecomm, Surveillance, Mind Control, Obamacare, Manufacturing, Media, DHS, Environmental concerns?  No need to call upon your God, US Ignite has everything under control (pun intended).
Conclusion
This concludes what is undoubtedly the very first opposing article written against US Ignite, but certainly not my last.  This is the beginning of a journey, and I ask you to join me.  We need every single member of the main stream media, alternative media, independent media, new media, and citizen media to learn as much as they can, and share their findings.
Can any of this be stopped?  Absolutely not. The only thing we can do is remember how they defended themselves in the movie The Matrix.  Only an EMP will save us.  The "squiddies" are coming, and there is absolutely no way to stop them other than to temporarily render them electronically inoperable.  Even an EMP over the USA won't stop them for long.  Although we'd be sent back to the 1800s, they likely have a way to get us back up and running in a few weeks.
Should our goal be to stop them? I don't believe so.  As I said earlier: if a human knew he was being used as a mouse on a conveyor belt, providing energy and profit to an evil entity, do you think he would have worked so hard to get us here?  If a corporation knew it was being used as a cog in the wheel, turning faster and faster toward enslavement of the human population, do you think it would have been so willing to become a "partner" in any government venture or initiative?
My goal is to at least treat humans as they deserve to be treated, and to do everything we can to preserve our inalienable rights — and we obviously don’t need a piece of paper to back it up, especially since the one we have is virtually useless.
Let’s just tell everyone what they’re up to and piss them all off.  It seems that the only power they have is secrecy, so let’s expose them & watch them die from the agony of being found out.  For me, my dream is to die a free man, and be known for pissing them all off.

Only An EMP Can Save the USA. US Ignite Will Literally Enslave America

THE UKRAINE, THE USA, GERMANY, AND RUSSIA

Many people have asked, both privately and in more direct comments, what I think about the current situation in the Ukraine.
Frankly, I scarcely know where to begin, and admittedly, to fully cover this story in the historical depth with which I think it needs to be covered would take not just one blog, nor even several; it would take an entire historical-geopolitical seminar. 
That said, however, what is going on in the Ukraine may be summarized in a simple thesis: the Ukraine is the needed final and essential piece in Washington’s game of the encirclement and emasculation of Russia, and as such, it has become the playground for covert operations and quiet backing of what can only be qualified as Neo-Fascist political groups. You’ll notice how the troubles in the Ukraine escalated rather dramatically after that nation’s recent shift back toward closer cooperation with Moscow. The “opposition” in the Ukraine – for those with an eye on history – recalls the Vlasov Army and German-sponsored “Ukrainian nationalist” – i.e. Fascist – groups in World War Two.
So… enter the two problems – the two flies – in Washington’s ointment: Germany, and Russia, and from all signs, these two powers fully intend to be the final arbiters of what happens in the Ukraine.
In short, the Ukraine has now become the battlefield for three competing geopolitical views of eastern Europe, the USA’s, Germany’s, and Russia’s. If this sounds like a bit of uncomfortable deja vu, that’s because it is.
But first, the USA’s role. Here, once again, our friends at The Daily Bell, are getting it right… at least, partially:
Washington Orchestrated Protests Are Destabilizing Ukraine
Mr. Roberts is entirely correct: the EU (read: Germany) has correctly seen that further orchestration of covert opposition in the Ukraine in service of Washington's Russian encirclement and emasculation policy is shortsighted, though for a different reason than Mr. Roberts advances: the real reason is that an emasculated Russia also means an emasculated Europe and Germany.
For some time now I've been noticing and drawing attention to the fact that Germany's international diplomacy has increasingly followed a traditional pattern of Mitteleuropa Realpolitik, of playing off East against West, while very subtly, and very deliberately, Germany has been positioning itself for a long-term realignment with the East, i.e., with Russia. I have pointed out in this respect Germany's quiet acquisition and deployment of the most advanced military technologies, from advanced diesel-electric fuel-cell submarines (some of which have been sold to Israel with cruise-missile launching capability) to its major role in the development and manufacture of France's latest submarine-launched thermonuclear ICBM (http://www.globalresearch.ca/europe-s-five-undeclared-nuclear-weapons-states/17550). That's simply another way of stating that Germany is a de facto thermonuclear power. If you've wondered why post-reunification Germany has become such a "player" on the international stage, this is the unstated reason.
But there is a deeper story to this long-term strategic eastward turn, and for those familiar with the history of the post-war Nazi International, it will be a familiar one, for this move was advocated as far back as the beginnings of the Bundesrepublik during the government of Chancellor Konrad Adenauer. It was Adenauer who reminded the nervous foreign ministers of France and the UK that any treaty provision prohibiting German nuclear and thermonuclear weapons development would be subject to the standard of international law, rebus sic stantibus, "as long as the situation stands." In other words, he was making it clear that Germany reserved the sovereign right to develop such weapons if the situation warranted. What we have seen since is the acquisition of such technologies via the "Rapallo method". In 1922 Germany and Soviet Russia – the two pariahs of post-World War One Europe – signed the Rapallo Treaty. At the insistence of the German General Staff chief, General Hans von Seeckt, a secret protocol was added to the treaty: Germany would develop and test, in Russia, the weapons systems prohibited to her by the Versailles Treaty.  The same strategy was undertaken by Bonn after World War Two, to great, though largely unknown, effect.
The eastward turn was laid out during World War Two, and pursued and advocated afterward, by the Nazi International, though again, it is a largely unknown story. The steps were simple, if not breathtakingly familiar: create a European customs union, follow it by a currency and political union. Germany would be the dominant power in any such scheme. Once this was done, then slowly pry Europe away from Washington’s grip by the “Eastward turn” and the creation of a Europe-Germany-Russian entente that would be an obvious economic and geopolitical powerhouse, and the nightmare of every Anglo-American geopolitician going back to Halford Mackinder in the British Empire.
Now, in corroboration of this, consider the following article (submitted by Mr. K.M.):
Steinmeier visits Russia
One cannot get more direct than the website of the German Foreign Ministry itself, and the implications of this paragraph are clear:
"Steinmeier also reported that the two Foreign Ministers had spent a long time discussing the situation in Ukraine. They were of one mind, he said, on the need for Russia and the EU to talk about long-term prospects in Europe in order to prevent any future crisis like the one gripping Ukraine. He said that the crucial thing here was the "reciprocal pledge that each side would ensure greater transparency with regard to its own policy". Above all, Steinmeier went on, Ukraine must not become a "geopolitical chess match":
“‘What goes on in Ukraine must not be about securing geopolitical spheres of influence. We have to enable the people of Ukraine to choose freely which path they want to take in future.”
This announcement clearly has Washington, not Moscow, as its intended target, and is a clear indicator that neither Russia nor Germany will support American unilateralism in the Ukraine. And it occurs within the time frame of the leaking of US State Department official Victoria Nuland’s less-than-lady-like comments about the European Union… a comment that was doubtless plucked from the electronic aether either by the Russian intelligence services or Germany’s BND, and probably released after the two countries quietly consulted on the merits of doing so…
Ukraine leader to Sochi as Kremlin warns action against “coup”
Ukraine Leak reveals Anglosphere Directed History
Make no mistake, folks, this one is one to watch, for whatever Germany and Russia are able to do in cooperation in the Ukraine, they will also be able to do regarding policy for the rest of Eastern Europe... so in my opinion the bottom line is this: we are watching the very beginning of a new long-range phenomenon in geopolitics unfold. It won't happen overnight. There will be fits and starts, but Germany's calls for the repatriation of its gold and its sudden volte-face on the Ukraine are not, in the wider and more general historical context I have outlined above, sudden at all. They were calculated over six decades ago...

New Scientific Study Proves: Concealed Carry lowers crime rates

Submitted by: Susannah Cole, The Pete Santilli Show & The Guerilla Media Network
The Pete Santilli Show broadcasts live on The Guerilla Media Network.  Please join us on the Guerilla Media Network broadcasting your favorite talk shows, political art and news 24/7.  
January 21, 2014 - A scientific study formally reveals what gun owners already know: concealed carry reduces crime rates.
Anti-gun-rights advocates falsely assume, and then parrot, that restrictive weapons laws make states safer places to live, but recent research shows the exact opposite is true.

A study by Mark Gius of Quinnipiac University, published in Applied Economics Letters, shows that states with more restrictive concealed carry weapons (CCW) laws actually see an increase in gun-related crime.
Over the period of the study, the average murder rate was 3.44; data available in the full article indicate that states with more restrictive CCW laws had a gun-related murder rate 10% higher than the average. In addition to this finding, the federal assault weapons ban appeared to make an even bigger difference, with murder rates 19.3% higher when the ban was in effect.
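To put those percentages in concrete terms, here is a quick back-of-the-envelope calculation (a Python sketch) using only the figures quoted above; the implied absolute rates are illustrative arithmetic, not values reported by Gius.

# Illustrative arithmetic based only on the figures quoted above. The 3.44
# average and the 10% / 19.3% differentials come from the article; the implied
# absolute rates below are back-of-the-envelope numbers, not reported values.
average_rate = 3.44            # average murder rate over the study period
restrictive_ccw_uplift = 0.10  # reportedly 10% higher under restrictive CCW laws
awb_uplift = 0.193             # reportedly 19.3% higher while the federal ban was in effect

implied_restrictive = average_rate * (1 + restrictive_ccw_uplift)
implied_awb = average_rate * (1 + awb_uplift)

print(f"Implied rate, restrictive-CCW states: {implied_restrictive:.2f}")   # ~3.78
print(f"Implied rate, assault weapons ban in effect: {implied_awb:.2f}")    # ~4.10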
There are four broad types of CCW laws: unrestricted, which means an individual requires no permit to carry a concealed handgun; shall issue, in which a permit is required but authorities must issue one to all qualified applicants who request one; may issue, in which authorities can deny a request for a permit; and finally no issue, in those states that do not allow private citizens to carry a concealed weapon.

Although there have been many studies on gun control, there has been limited research into assault weapons bans and CCW laws. Among those that do exist, the exact results are mixed; however, Lott and Mustard (1997) found that states with less restrictive laws saw a 7.65% drop in murders.
This new study examines data from 1980 to 2009, one of the longest time periods covered in research of this kind. It also looks solely at gun crime, rather than violent crime in general, as is the case in similar research. State-level data on gun-related murder are taken from the Supplementary Homicide Reports of the United States Department of Justice, and the information on CCW laws was obtained from a variety of United States bodies.
In conclusion it would appear that limiting people’s ability to carry concealed weapons may in fact cause murder rates to rise. Gius does admit that more research is warranted in this area.

The purpose of the present study is to determine the effects of state-level assault weapons bans and concealed weapons laws on state-level murder rates. Using data for the period 1980 to 2009 and controlling for state and year fixed effects, the results of the present study suggest that states with restrictions on the carrying of concealed weapons had higher gun-related murder rates than other states. It was also found that assault weapons bans did not significantly affect murder rates at the state level. These results suggest that restrictive concealed weapons laws may cause an increase in gun-related murders at the state level. The results of this study are consistent with some prior research in this area, most notably Lott and Mustard (1997).
Mark Gius. An examination of the effects of concealed weapons laws and assault weapons bans on state-level murder rates. Applied Economics Letters, 2014; 21 (4): 265 DOI: 10.1080/13504851.2013.854294
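For readers curious what the study's "state and year fixed effects" design looks like in practice, here is a minimal sketch of a two-way fixed-effects panel regression in Python. It is not Gius's code or data: the file name and column names (state, year, murder_rate, restrictive_ccw, awb) are hypothetical placeholders, and clustering the standard errors by state is a common convention rather than a detail taken from the paper.

# Minimal sketch of a state-and-year fixed-effects regression of the kind
# described in the abstract. Hypothetical panel: one row per state per year,
# 1980-2009, with 0/1 indicators for restrictive CCW laws and an assault
# weapons ban. NOT the author's actual code, data, or exact specification.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("state_year_panel.csv")  # hypothetical data file

model = smf.ols(
    "murder_rate ~ restrictive_ccw + awb + C(state) + C(year)",  # two-way fixed effects via dummies
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["state"]})    # cluster by state (our assumption)

print(model.params[["restrictive_ccw", "awb"]])  # the law coefficients the abstract summarizes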

Concealed carry not just for men; Ohio sees more women training for, then obtaining, permits
Permits currently require 12 hours of instruction and are valid for five years. A so-called “stand-your-ground” bill that passed the Ohio House and is in the Senate would reduce the required hours from 12 to four and eliminate a requirement that a person attempt to flee a threatening situation rather than use force.
Roger Polk, 52, of Wadsworth Township, has one of the busiest training programs in the area.
He estimates that he and a few helpers have instructed more than 14,000 people since 2004.

A postal employee, Polk first taught firearms classes as a Marine. When the Ohio CCW law was approved, he decided to offer low-cost classes. While some instructors may charge $100 per person in small groups, his program is $46, though classes may be in groups of more than 100.
They come from across the state, sometimes on church buses, he said.
“Our motivation is very simple,” he said. “What we want to do is educate, train and arm as many law-abiding citizens as we can so that they can help protect our family when they are out and about.”
At first, the students were strong advocates for concealed carry, and most were men. Now, about half are women.
But, he said, “It is not for everyone.”

“Not all my family or your family, or whoever, wants to carry,” he said. “The more people we have who are educated, trained and armed in our community, the better chance we have — me and the coaches involved in the class — the better chance we have of those people protecting our family members.”
Polk said he has never had to draw his gun.

“That’s a good feeling,” he said. “I don’t want to change anybody’s mind. They don’t have to agree with CCW … But the people who want to be able to protect themselves and their family — I want to be able to make a decision whether I am going to live or die and I don’t want it all up to the bad guy…”
Pink guns on sale

Kris Gaugler, gun salesman at Ohio Supply & Tool in Wadsworth, also has noticed the changing demographics of gun ownership.
“I am seeing from 80-year-old women to fathers getting their 21-year-old daughters handguns,” he said.
Gun manufacturers are offering firearms that may be attractive to women, said Gaugler. His store carries a pink Mossberg semi-automatic .22 called a Plinkster.

Teresa Tharan, 51, a licensed practical nurse of Akron, said she was the victim of stalking, so she obtained a license in 2012 and purchased a weapon.
“I had never held a gun before,” she said. “I was kind of scared.”
She took a class at Commence Firearms in Cleveland, and almost all of the students were women, she said.
“I wanted to protect my home and my family and I don’t like how they are trying to take guns out of our hands,” she said.
She said that although she isn’t allowed to carry a weapon to her workplace, she feels safer overall.
“I feel like I am protected if something does happen,” she said.

Akron husband and wife Paul and Sharon Lorentzen are gun owners and both plan to take a CCW class in 2014.
Sharon Lorentzen, 62, said she was attacked more than four decades ago while driving across New Mexico.
“I got the crap beat out of me,” she said. “That never leaves you.”

Paul Lorentzen, 71, a retired architect, said he wants the freedom to carry while walking his dog at night in their Highland Square neighborhood.
“I think it could deter serious crime if they think twice that you may be armed,” he said.
Fear is a major factor, said Akron CCW instructor Rick Starr, 54, who also is a pastor.
He said he is "saving lives spiritually or saving lives physically" by teaching about 1,500 people over five years how to handle a firearm.
“It is called carrying concealed because you want to keep it secret and not advertise it to the public,” he said.
He believes there are two forces driving the interest.
“Unfortunately one of them is fear — fear of crime — and the other is fear that we will lose our privileges to carry concealed and that the government will step in and stop us.”

CNRI & GigaBit Testbed Initiative

Executive Summary

The Gigabit Testbed Initiative was a major effort by approximately forty organizations representing universities, telecommunication carriers, industry and national laboratories, and computer companies to create a set of very high-speed network testbeds and to explore their application to scientific research. This effort, funded by the National Science Foundation (NSF) and the Defense Advanced Research Projects Agency (DARPA), was coordinated and led by the Corporation for National Research Initiatives (CNRI) working closely with each of the many participating organizations and with the U.S. Government. The U.S. Government was also a participating organization insofar as testbeds were established within several Government laboratories to explore the concepts and technologies emerging from the Initiative.
Five Testbeds, named Aurora, Blanca, Casa, Nectar and Vistanet, were established and used over a period of several years to explore advanced networking issues, to investigate architectural alternatives for gigabit networks, and to carry out a wide range of experimental applications in areas such as weather modeling, chemical dynamics, radiation oncology, and geophysics data exploration. The five testbeds were geographically distributed across the United States as shown in the figure below.
[Figure: The Gigabit Testbeds, showing the geographic distribution of the five testbeds across the United States]
At the time the project started in 1990 there were significant barriers to achieving high performance networking, which was falling significantly behind advances in high performance computing. One of the major barriers was the absence of wide-area transmission facilities which could support gigabit research, and the lack of marketplace motivation for carriers to provide such facilities. The testbed initiative specifically targeted this problem through the creation of a multi-dimensional research project involving carriers, applications researchers, and network technologists. A second (and related) barrier was the lack of commercially available high speed network equipment operating at rates of 622 Mbps or higher. Fortunately, several companies were beginning to develop such equipment and the testbed initiative helped to accelerate its deployment.
A key decision in the effort, therefore, was to make use of experimental technologies that were appropriate for gigabit networking. The emphasis was placed on fundamental systems issues involved with the development of a technology base for gigabit networking rather than on test and evaluation of individual technologies. ATM, SONET and HIPPI were three of the technologies used in the program. As a result, the impetus for industry to get these technologies to market was greatly heightened. Many of the networks that subsequently emerged, such as the NSF-sponsored vBNS and the DOD-sponsored DREN, can be attributed to the success of the gigabit testbed program.
The U.S. Government funded this effort with a total of approximately $20M over a period of approximately five years, with these funds used by CNRI primarily to fund university research efforts. Major contributions of transmission facilities and equipment were donated at no cost to the project by the carriers and computer companies, who also directly funded participating researchers in some cases. The total value of industry contributions to the effort was estimated to be perhaps 10 or 20 times greater than the Government funding. The coordinating role of a lead organization, played by CNRI, was essential in helping to bridge the many gaps between the individual research projects, industry, government agencies and potential user communities. At the time this effort began, there did not appear to be a clearly visible path to make this kind of progress happen.
Initiative Impacts
In addition to the many technical contributions resulting from the testbeds, a number of non-technical results have had major impacts for both education and industry.
First and foremost was a new model for network research provided by the testbed initiative. The bringing together of network and application researchers, integration of the computer science and telecommunications communities, academia-industry-government research teams, and government-leveraged industry funding, all part of a single, orchestrated project spanning the country, provided a new level of research collaboration not previously seen in this field. The Initiative created a community of high performance networking researchers that crossed academic/industry/government boundaries.
The coupling of application and networking technology research from project inception was a major step forward for both new technology development and applications progress. Having applications researchers involved from the start of the project allowed networking researchers to obtain early feedback on their network designs from a user’s perspective, and allowed network performance to be evaluated using actual user traffic. Similarly, application researchers learned how network performance impacted their distributed application designs through early deployment of prototype software. Perhaps most significantly, researchers could directly investigate networked application concepts without first waiting for the new networks to become operational, opening them to new possibilities after decades of constrained bandwidth.
The collaboration of computer network researchers, who came primarily from the field of computer science, and the carrier telecommunications community provided another important dimension of integration. The development of computer communications networks and carrier-operated networks have historically proceeded along two separate paths with relatively little cross-fertilization. The testbeds allowed each community to work closely with the other, allowing each to better appreciate the other’s problems and solutions and leading to new concepts of integrated networking and computing.
From a research perspective, the testbed initiative created close collaborations among investigators from academia, government research laboratories, and industrial research laboratories. Participating universities included Arizona, UC Berkeley, Caltech, Carnegie-Mellon, Illinois, MIT, North Carolina, Pennsylvania and Wisconsin; national laboratories included Lawrence Berkeley Laboratory, Los Alamos National Laboratory (LANL), JPL, and the NSF-sponsored National Center for Supercomputing Applications, Pittsburgh Supercomputing Center, and San Diego Supercomputer Center; industry research laboratories included IBM Research, Bellcore, GTE Laboratories, AT&T Bell Laboratories, BellSouth Research, and MCNC. The collaborations also included facilities planners and engineers from the participating carriers, which included Bell Atlantic, BellSouth, AT&T, GTE, MCI, NYNEX, Pacific Bell and US West.
Another important dimension of the testbed model was its funding structure, in which government funding was used to leverage a much larger investment by industry. A major industry contribution was made by the carriers in the form of SONET and other transmission facilities within each testbed at gigabit or near-gigabit rates. The value of this contribution cannot be overestimated, since not only were such services otherwise non-existent at the time the project began, but they would have been unaffordable to the research community if they had existed under normal tariff conditions. By creating an opportunity for the carriers to learn about potential applications of high speed networks while at the same time benefiting from collaboration with the government-funded researchers in network technology experiments, the carriers were, in turn, willing to provide new high-speed wide-area experimental transmission facilities and equipment and to fund the participation of their researchers and engineers.
The Initiative resulted in significant technology transfer to the commercial sector. As a direct result of their participation in the project, two researchers at Carnegie-Mellon University founded a local-area ATM switch startup company, FORE Systems. This was the first such local ATM company formed, and provided a major stimulus for the emergence of high speed local area networking products. It also introduced to the marketplace the integration of advanced networking concepts with advanced computing architectures used within their switch.
Other technology transfers included software developed to distribute and control networked applications, the HIPPI measurement device (known as Hilda) developed by MCNC as part of the Vistanet effort, and the HIPPI-SONET wide-area gateway developed by LANL for the Casa testbed. In addition, new high speed networking products were developed by industry in direct response to the needs of the testbeds, for example HIPPI fiber optic extenders and high speed user-side SONET equipment. Major technology transfers also occurred through the migration of students who had worked in the testbeds to industry to implement their work in company products.
At the system level, the testbeds led directly to the formation of three statewide high speed initiatives undertaken by carriers participating in the testbeds. The North Carolina Information Highway (NCIH) was formed by BellSouth and GTE as a result of their Vistanet testbed involvement to provide an ATM/SONET network throughout the state. Similarly, the NYNET experimental network was formed in New York state by NYNEX as a result of their Aurora testbed involvement, and the California Research and Education Network (CalREN) was created by Pacific Bell following their Casa testbed participation.
The testbed initiative also led to the early use of gigabit networking technology by the defense and intelligence communities for experimental networks and global-scale systems, which have become the foundation for a new generation of operational systems. More recently, the U.S. Government has begun to take steps to help create a national level wide-area Gigabit networking capability for the research community.
The key technical areas addressed in the initiative are categorized for this report as transmission, switching, interworking, host I/O, network management, and applications and support tools. In each case, various approaches were analyzed and many were tested in detail. A condensed summary of the key investigations and findings is given at the end of the executive summary and elaborated on more fully in the report.
Future Directions
Among the barriers to the widespread deployment of very high-speed networks, those most often cited are the costs of the technology (particularly the cost of its deployment over large geographic areas), the regulated nature of the industry, and the lack of market forces for applications that could make use of it and sustain its advance. Moreover, most people find it difficult to invest their own time or resources in a new technology until it becomes sufficiently mature that they can try it out and visualize what they might do with it and when they might use it.
A recent National Research Council report [1] includes a summary of the major advances in the computing and communications fields from the beginning of time-sharing through scalable parallel computing, just prior to when the gigabit testbeds described in this report were producing their early results. Using that report’s model, the gigabit testbeds would be characterized as being in the early conceptual and experimental development and application phase. The first technologies were emerging and people were attempting to understand what could be done with them, long before there was an understanding of what it would take to engineer and deploy the technologies on a national scale to enable new applications not yet conceived.
The Gigabit Testbed Initiative produced a demonstration of what could be done in a variety of application areas, and it motivated people in the research community, industrial sector, and government to provide a foundation for follow-on activities. Within the Federal government, the testbed initiative was a stimulus for the following events:
· The HPCCIT report on Information and Communication Futures identified high performance networking as a Strategic Focus.
· The National Science and Technology Council, Committee on Computing and Communications held a two day workshop which produced a recommendation for major upgrades to networking among the HPC Centers to improve their effectiveness, and to establish a multi-gigabit national scale testbed for pursuing more advanced networking and applications work.
· The first generation of scalable networking technologies emerged based on scalable computing technologies.
· The DoD HPC Modernization program initiated a major upgrade in networking facilities for their HPC sites.
· The Advanced Technology Demonstration gigabit testbed in the Washington DC area was implemented.
· The defense and intelligence communities began to experiment with higher performance networks and applications.
· The NSF Metacenter and vBNS projects were initiated.
· The all-optical networking technology program began to produce results with the potential for 1000x increase in transmission capacity.
To initiate the next phase of gigabit research and build on the results of the testbeds, CNRI proposed that the Government continue to fund research on gigabit networks using an integrated experimental national gigabit testbed involving multiple carriers, with gigabit backbone links provided over secondary (i.e., backup) channels by the carriers at no cost and switches and access lines paid for by the Government and participating sites. However, costs for access lines proved to be excessive, and at the time the Government was also unable to justify the funding needed for a national gigabit network capability — instead, several efforts were undertaken by the Government to provide lower speed networks.
In the not-too-distant future, we expect the costs for accessing a national gigabit network on a continuing basis will be more affordable and the need for it will be more evident, particularly its potential for stimulating the exploration of new applications. The results of the gigabit testbed initiative have clearly had a major impact on breaking down the barriers to putting high performance networking on the same kind of growth curve as high performance computing, thus enabling a new generation of national and global-scale high performance systems which integrate networking and computing.
Investigations and Findings
Four distinct end-to-end network layer architectures were explored in the project. These were a result both of architecture component choices made by researchers after the work was underway and of the a priori testbed formation process. The architectures were (1) seamless WAN-LAN ATM and (2) seamless WAN-LAN PTM, both used in the Aurora testbed, (3) heterogeneous wide-area ATM/local-area networks, used in the Blanca, Nectar and Vistanet testbeds, and (4) wide-area HIPPI/SONET via local switching, used in the Casa testbed.
The following summaries present highlights of the technology and applications investigations. It should be noted that while some efforts are specific to their architectural contexts, in many cases, the results can be applied to other architectures including architectures not considered in the initiative.
Transmission
· OC-48 SONET links were installed in four testbeds over distances of up to 2000 km, accelerating vendor development and carrier deployment of high speed SONET equipment, establishing multiple-vendor SONET interconnects, enabling discovery and resolution of standards implementation compatibility problems, and providing experience with SONET error rates in an operational environment
· Testbed researchers developed a prototype OC-12c SONET cross-connect switch and investigated interoperation with carrier SONET equipment, and developed OC-3c, OC-12, and OC-12c SONET interfaces for hosts, gateways and switches; these activities provided important feedback to SONET chip developers
· Techniques for carrying variable-length packets directly over SONET were developed for use with HIPPI and other PTM technologies, with both layered and tightly coupled approaches explored
· An all-optical transmission system – the first carrier deployment of this technology – was installed and used to interconnect ATM switches over a 300 mile distance using optical amplifier repeaters
· HIPPI technology was used for many local host links and for metropolitan area links through the use of HIPPI extenders and optical fiber; other local link technologies included Glink and Orbit
· Several wide-area striping approaches were investigated as a means of deriving 622 Mbps and higher bandwidths from 155 Mbps ATM or SONET channels; configurations included end-to-end ATM over SONET, LAN-WAN HIPPI over ATM/SONET, and LAN-WAN HIPPI and other variable-length PDUs directly over SONET
· A detailed study of striping over general ATM networks concluded that cell-based striping should be used. This capability can be introduced at LAN-WAN connection points in conjunction with destination host cell re-ordering and an ATM-layer synchronization scheme
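To make the cell-based striping idea in the last item above concrete, the following minimal Python sketch round-robins fixed-size cells across several lower-rate channels and re-orders them at the destination by sequence number. The 48-byte payload, the four-channel configuration, and the shuffled delivery model are illustrative assumptions only; they stand in for, and do not reproduce, the ATM-layer synchronization scheme studied in the testbeds.

    import random

    CELL_PAYLOAD = 48      # bytes per cell payload (standard ATM payload size)
    NUM_CHANNELS = 4       # e.g., four 155 Mbps channels carrying one 622 Mbps stream

    def stripe(data):
        """Split a byte stream into cells and distribute them round-robin over the channels."""
        channels = [[] for _ in range(NUM_CHANNELS)]
        for seq, offset in enumerate(range(0, len(data), CELL_PAYLOAD)):
            cell = (seq, data[offset:offset + CELL_PAYLOAD])   # carry a sequence number with each cell
            channels[seq % NUM_CHANNELS].append(cell)
        return channels

    def deliver(channels):
        """Merge cells from independent channels; ordering across channels is not preserved."""
        arrivals = [cell for channel in channels for cell in channel]
        random.shuffle(arrivals)          # crude stand-in for unequal channel delays
        return arrivals

    def reassemble(arrivals):
        """Destination-side re-ordering by sequence number before delivery to the host."""
        return b"".join(payload for _, payload in sorted(arrivals))

    message = bytes(range(256)) * 20
    assert reassemble(deliver(stripe(message))) == message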
Switching
· Prototype high speed ATM switches were developed (or made available) by industry and deployed for experiments in several of the testbeds, supporting 622 Mbps end-to-end switched links using both 155 Mbps striping and single-port 622 Mbps operation
· The first telco central office broadband ATM switch was installed and used for testbed experiments, using OC-12c links to customer premises equipment and OC-48 trunking
· Wide-area variable-length PTM switching was developed and deployed in the testbeds using both IBM’s Planet technology and HIPPI switches in conjunction with collocated wide-area gateways
· Both ATM and PTM technologies were developed and deployed for both local and desk area networking (DAN) experiments, along with the use of commercial HIPPI and ATM switches, which became available as a result of testbed-related work
· A TDMA technique was developed and applied to tandem HIPPI switches to demonstrate packet-based quality-of-service operation in HIPPI circuit-oriented switching environments, and a study of preemptive switching of variable length packets indicated a ten-fold reduction in processing requirements was possible relative to processor-based cell switching
Interworking
· Three different designs were implemented to interwork HIPPI with wide-area ATM networks over both SONET and all-optical transmission infrastructures; explorations included the use of 4×155 Mbps striping and non-striped 622 Mbps access, local HIPPI termination and wide-area HIPPI bridging; resulting transfer rates ranged from 370 to 450 Mbps
· A HIPPI-SONET gateway was implemented which allowed transfer of full 800 Mbps HIPPI rates across striped 155 Mbps wide-area SONET links; capabilities included variable bandwidth allocation of up to 1.2 Gbps and optional use of forward error correction, with a transfer rate of 790 Mbps obtained for HIPPI traffic (prior to host protocol processing)
· Seamless ATM DAN-LAN-WAN interworking was explored through implementation of interface devices which provided physical layer interfacing between 500 Mbps DAN Glink transmission, LAN ATM switch ports, and a wide-area striped 155 Mbps ATM/SONET network.
Host I/O
· Several different testbed investigations demonstrated the feasibility of direct cell-based ATM host connections for workstation-class computers; this work established the basis for subsequent development of high speed ATM host interface chipsets by industry and provided an understanding of changes required to workstation I/O architectures for gigabit networking
· Variable-length PTM host interfacing was investigated for several different types of computers, including workstations and supercomputers; in addition to vendor-developed HIPPI interfaces, specially developed HIPPI and general PTM interfaces were used to explore the distribution of high speed functionality between internal host architectures and I/O interface devices
· TCP/IP investigations concluded that hardware checksumming and data-copying minimization were required by most testbed host architectures to realize transport rates of a few hundred Mbps or higher; full outboard protocol processing was explored for specialized host hardware architectures or as a workaround for existing software bottlenecks
· A 500 Mbps TCP/IP rate was achieved over a 1000-mile HIPPI/SONET link using Cray supercomputers, and a 516 Mbps rate measured for UDP/IP workstation-based transport over ATM/SONET. Based on other workstation measurements, it was concluded that, with a 4x processing power increase (relative to the circa 1993 DEC Alpha processor used), a 622 Mbps TCP/IP rate could be achieved using internal host protocol processing and a hardware checksum while leaving 75% of the host processor available for application processing
· Measurements comparing the XTP transport protocol with TCP/IP were made using optimized software implementations on a vector Cray computer; the results showed TCP/IP provided greater throughput when no errors were present, but that XTP performed better at high error rates due to its use of a selective acknowledgment mechanism
· Presentation layer data conversions required by applications distributed over different supercomputers were found to be a major processing bottleneck; by exploiting vector processing capabilities, revisions to existing floating point conversion software resulted in a fifty-fold increase in peak transfer rates
· Experiments with commercial large-scale parallel processing architectures showed processor interconnection performance to be a major impediment to gigabit I/O at the application level; an investigation of data distribution strategies led to use of a reshuffling algorithm to remap the distribution within the processor array for efficient I/O
· Work on distributed shared memory (DSM) for wide-area gigabit networks resulted in several latency-hiding strategies for dealing with large propagation delays, with relaxed cache synchronization resulting in significant performance improvements
Network Management
· In different quality-of-service investigations, a real-time end-to-end protocol suite was developed and successfully demonstrated using video streams over HIPPI and other networks, and a ‘broker’ approach was developed for end-to-end/network quality-of-service negotiations in conjunction with operating system scheduling for strict real-time constraints
· An evaluation of processing requirements for wide-area quality-of-service queuing in ATM switches, using a variation of the “weighted fair queuing” algorithm, found that a factor of 8 increase in processing speed was needed to achieve 622 Mbps port speeds relative to the i960/33MHz processor used for the experiments
· Congestion/flow control simulation modeling was carried out using testbed application traffic, with the results showing rapid ATM switch congestion variations and high cell loss rates; also, a speedup mechanism was developed for lost packet recovery in high delay-bandwidth product networks using TCP’s end-to-end packet window protocol
· An end-to-end time window approach using switch monitoring and feedback to provide high speed wide-area network congestion control was developed, and performance was consistent with simulation-based predictions
· A control and monitoring subsystem was developed for real-time traffic measurement and characterization using carrier-based 622 Mbps ATM equipment; the subsystem was used to capture medical application traffic statistics revealing that ATM cell traffic can be more bursty than expected, dictating larger amounts of internal switch buffering than initially thought necessary for satisfactory performance
· A data generation and capture device for 800 Mbps HIPPI link traffic measurement and characterization was developed and commercialized, and was used for network debugging and traffic analysis; more generally, many network equipment problems were revealed through the use of real application traffic during testbed debugging phases
Applications and Support Tools
· Investigations using quantum chemical dynamics modeling, global climate modeling, and chemical process optimization modeling applications identified pipelining techniques and quantified speedup gains and network bandwidth requirements for distributed heterogeneous metacomputing using MIMD MPP, SIMD MPP, and vector machine architectures
· Most of the applications that were tested realized significant speedups when run on multiple machines over a very high speed network; notably, a superlinear speedup of 3.3 was achieved using two dissimilar machines for a chemical dynamics application; other important benefits of distributed metacomputing such as large software program collaboration-at-a-distance were also demonstrated, and major advances were made in understanding how to partition application software
· Homogeneous distributed computing was investigated for large combinatorial problems through development of a software system which allows rapid prototyping and execution of custom solutions on a network of workstations, with experiments providing a quantification of how network bandwidth impacts problem solution time
· Several distributed applications involving human interaction in conjunction with large computational modeling were investigated; these included medical radiation therapy planning, exploration of large geophysical datasets, and remote visualization of severe thunderstorm modeling
· The radiation therapy planning experiments successfully demonstrated the value of integrating high performance networking and computing for real-world applications; other interactive investigations similarly resulted in new levels of visualization capability, provided new techniques for distributed application communications and control, and provided important knowledge about host-related problems which can prevent gigabit speed operation
· A number of software tools were developed to support distributed application programming and execution in heterogeneous environments; these included systems for dynamic load balancing and checkpointing, program parallelization, communications and runtime control, collaborative visualization, and near-realtime data acquisition for monitoring progress and for analyzing results.

1 Introduction

This report summarizes the results of the Gigabit Testbed Initiative, a project involving several dozen participants that ran from 1990 to 1995. The report attempts to put these results into perspective by providing the background, motivation, and current trends impacting the overall work. Detailed descriptions of context and results can be found in the final reports from each of the five testbeds involved in the Initiative [2-6].
The Initiative had two main goals, both of which were premised on the use of network testbeds: (1) to explore technologies and architectures for gigabit networking, and (2) to explore the utility of gigabit networks to the end user. In both cases the focus was on providing a data rate on the order of 1 Gbps to the end-points of a network, i.e., the points of user equipment attachment, and on maximizing the fraction of this rate available to a user application.
A key objective of the Initiative was to carry out this research in a wide-area real-world context. While the technology for user-level high-speed networking capability could be directly achieved by researchers in a laboratory setting circa 1990, extending this context to metropolitan or wide-area network distances at gigabit per second rates was virtually impossible, due both to the absence of wide-area transmission and switching equipment for end-user gigabit rates and to the lack of market motivation to procure and install such equipment by local and long-distance carriers.
To solve this “chicken-and-egg” problem, a collaborative effort involving both industry and the research communities was established by CNRI with funding from government and industry. NSF and ARPA jointly provided research funding for the participating universities and national laboratories, while carriers and commercial research laboratories provided transmission and switching facilities and results from their internally-funded research. Five distinct testbed collaborations were created. These were called Aurora, Blanca, Casa, Nectar, and Vistanet. (A sixth gigabit testbed called MAGIC [7] was funded by DARPA about 18 months later, but was managed as a separate project and is not further described in this report.)
Each testbed had a different set of research collaborators and a different overall research focus and objectives. At the same time, there were also common areas of research among the testbeds, allowing different solutions for a given problem to be explored.
The remainder of this report is organized as follows. Section 2, The Starting Point, briefly describes the technical context for the project which existed in the 1989-90 timeframe. Section 3, Structure and Goals, gives an overview of the Initiative structure, including the participants, topology and goals of each testbed. The main body of the report is contained in Section 4, Investigations and Findings, which brings together by technical topic the major work carried out in the five testbeds. Section 5, Conclusion, summarizes the impacts of the Initiative and how they might relate to the future of very high speed networking research. Appendix A lists reports and publications generated by the testbeds during the course of the project.
Readers are strongly encouraged to consult the testbed references and publications for more comprehensive and detailed discussions of testbed accomplishments. This report summarizes much of that work, but is by no means a complete cataloging of all efforts undertaken.

2 The Starting Point

2.1 A Brief History
Computer networking dates from the late 1960s, when affordable minicomputer technology enabled the implementation of wide-area packet switching networks. The Arpanet, begun in 1969 as a research project by DARPA, provided a focal point within the U.S. for packet network technology development. In the 1970s, parallel development by DARPA of radio and satellite-based packet networks and TCP/IP internetworking technology resulted in the establishment of the Internet. The subsequent introduction and widespread use of ethernet, token ring and other LAN technologies in the 1980s, coupled with the expansion of the Internet by NSF to a broader user base, led to increasing growth and a transition of the Internet to a self-supporting operational status in the 1990s.
Wide-area packet switching technology has from its inception made use of the telephone infrastructure for its terrestrial links, with the packet switches forming a network overlay on the underlying carrier transmission system. The links were initially 50 Kbps leased lines in the original Arpanet, progressing to 1.5 Mbps T1 lines in the NSFNET circa 1988 and 45 Mbps T3 lines by about 1992. Thus, at the time the gigabit testbed project began, Internet backbone speeds and large-user access lines were in the 50 Kbps to 1.5 Mbps range and local-area aggregate speeds were typically 10 Mbps or less. Individual peak user speeds ranged from about 1 Mbps for high-end workstations to 9.6 Kbps or less for PC modem connections.
The dominant application which emerged on the Arpanet once the network became usable was not what had been expected when the network was planned. Conceived as a vehicle for resource sharing among the host computers connected to the network, people-to-people communication in the form of email quickly came to dominate network use. The ability to have extended conversations without requiring both parties to be available at the same time, being able to send a single message to an arbitrarily large set of recipients, and automatically having a copy of every message stored in a computer for future reference proved to be powerful stimuli to the network’s use, and is an excellent example of the unforeseen consequences of making a new technology available for experimental exploration.
The computer resource sharing which did take hold was dominated by two applications, namely file transfer and remote login. Applications which distributed a problem’s computation among computers connected to the network were also attempted and in some cases demonstrated, but they did not become a significant part of the original Arpanet’s use. Packetized voice experiments were demonstrated over the Arpanet in the 1970s, but with limited applicability due to limited bandwidth and long store-and-forward transmission delays at the switches.
The connection of the NSF-sponsored supercomputer centers to the Internet in the late 1980s provided a new impetus for networked resource sharing and resulted in an increase of activity in this application area, but multi-computer explorations were severely limited by network speeds.
2.2 State of Very High-Speed Networking in 1989-90
Prior to the time the testbeds were being formed in 1990, very little hands-on research in gigabit networking was taking place. Work by carriers and equipment vendors focused primarily on higher transmission speeds rather than on networking. There was a good deal of interest in high-speed networking within the research community, consisting mostly of paper studies and simulations, along with laboratory work at the device level. Interest was stimulated in the telecommunications industry by ongoing work on the standardization of Broadband ISDN (B-ISDN), which was intended to eventually address user data rates from about 50 Mbps upwards to the gigabit/s region. Within the scientific community, interest in remote data visualization and multi-processor supercomputer-related activities was high.
A few high speed technologies had emerged by 1989, most notably HIPPI and Ultranet for local connections between computers and peripherals. HIPPI, developed at Los Alamos National Laboratory (LANL), was in the process of standardization at the time by an ANSI subcommittee and had been demonstrated with laboratory prototypes. Ultranet was based on proprietary protocols, and Ultranet products were in use at a small number of supercomputer centers and other installations. Both technologies provided point-to-point links between hosts at data rates of 800 Mbps to 1 Gbps.
In wide-area networking, SONET (Synchronous Optical Network) was being defined as the underlying transmission technology for the U.S. portion of B-ISDN by ANSI, and its European counterpart SDH (Synchronous Digital Hierarchy) was undergoing standardization by the CCITT. SONET and SDH were designed to provide wide-area carrier transport at speeds from approximately 50 Mbps to 10 Gbps and higher, along with the associated monitoring and control functions required for reliable carrier operation. While non-standard trunks were already in operation at speeds on the order of a gigabit/s, the introduction of SONET/SDH offered carriers the use of a scalable, all-digital standard with both flexible multiplexing and the prospect of ready interoperability among equipment developed by different vendors.
A number of high-speed switch designs were underway at the time, most focused on ATM cell switching. Examples of ATM switch efforts included the Sunshine switch design at Bellcore and the Knockout switch design at AT&T Bell Labs. Exploration of variable length packet switching at gigabit speeds was also taking place, most notably by the PARIS (later renamed Planet) switch effort at IBM. These efforts were focused on wide-area switching environments – investigation of ATM for local area networking had not yet begun.
Computing performance in 1990 was dominated by the vector supercomputer, with highly parallel supercomputers still in the development stage. The fastest supercomputer, the CRAY-YMP, achieved on the order of 1-2 gigaflops in 1990, while the only commercial parallel computer available was the Thinking Machines Corporation CM-2. Workstations had peak speeds in the 100 MIPS range, with PCs in about the 10 MIPS range. I/O interfaces for these machines consisted mainly of 10 Mbps ethernet and other LAN technologies with similar speeds, with some instances of 100 Mbps FDDI beginning to appear.
Optical researchers were making significant laboratory advances by 1990 in the development of optical devices to exploit the high bandwidth inherent in optical fibers, but this area was still in a very early stage with respect to practical networking components. Star couplers, multiplexors, and dynamic tuners were some of the key optical components being explored, along with several all-optical local area network designs.
The data networking research community had begun to focus on high-speed networking by the late 1980s, particularly on questions concerning protocol performance and flow/congestion control. New transport protocols such as XTP and various lightweight protocol approaches were being investigated through analysis, simulation, and prototyping, and a growing number of conference and journal papers were focusing on high-speed networking problems.
The regulatory environment which existed in 1990, at the time the Gigabit Testbed Initiative was formed, was quite different from that which is now evolving. A regulated local carrier environment existed consisting of the seven regional Bell operating companies (RBOCs) along with some non-Bell companies such as GTE, which provided tariffed local telephone services throughout the U.S. Long distance services were being provided by AT&T, MCI, and Sprint in competition with each other. Cable television companies had not yet begun to expand their services beyond simple residential television delivery, and direct broadcast satellite services had not yet been successfully established. And while some independent research and development activities had been established within some of the RBOCs, the seven regional carriers continued to fund Bellcore as their common R&D laboratory.
With the passage of the Telecommunications Act of 1996, a more competitive telecommunications industry now seems likely. Mergers and buy-outs among the RBOCs are taking place, cable companies have begun to offer Internet access, and provisions for Internet telephony have begun to be accommodated by Internet service providers.
2.3 Gigabit Networking Research Issues
When the initiative began in 1990, many questions concerning high-speed networking technology were being considered by the research community. At the same time, telephone carriers were struggling with the question of how big the market, if any, might be for carrier services which would provide a gigabit/s service to the end-user. Cost was a major concern here. Research issues existed in most, if not all, areas of networking, including host I/O, switching, flow/congestion and other aspects of network control, operating systems, and application software. Two major questions underlie most of these technical issues: (1) could host I/O and other hardware and software operate at the high speeds involved? and (2) would speed of light delays in WANs degrade application and protocol performance?
These issues can be grouped into three general sets, which are discussed separately below:
· network issues
· platform issues
· application issues
Network Issues
A basic issue was whether existing conceptual approaches developed for lower speed networking would operate satisfactorily at gigabit speeds. Implementation issues were also uppermost in mind; for example, would a radically different protocol design allow otherwise unachievable low-cost implementations? Most of the conceptual issues, however, were driven by the fact that speed-of-light propagation delay across networks is constant, while data transmission times are a function of the transmission speed.
At a data rate of 1 Gbps, it takes only one nanosecond to transmit one bit, resulting in a link transmission time of 10 microseconds for a 10 kilobit packet. In contrast, for the 50 Kbps link speeds in use when the Arpanet was first designed, the same 10 kilobit packet has a transmission time of 200 milliseconds. The speed-of-light propagation delay across a 1000-mile link for either case, on the other hand, is on the order of 10 milliseconds. The result is that, whereas in the Arpanet case propagation delay is more than an order of magnitude smaller than the transmission time, in the gigabit network the propagation time is more than three orders of magnitude larger than the transmission time!
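These figures are easy to reproduce. The short Python sketch below uses the same 10 kilobit packet and the roughly 10 ms propagation figure quoted above to print the transmission times and the propagation-to-transmission ratios; it is only a restatement of the arithmetic, not a measurement.

    PACKET_BITS = 10_000        # the 10 kilobit packet used in the example above
    PROPAGATION_S = 10e-3       # ~10 ms across a 1000-mile link, as quoted above

    for label, rate_bps in (("50 Kbps (Arpanet-era)", 50e3), ("1 Gbps", 1e9)):
        tx_s = PACKET_BITS / rate_bps          # time to clock the packet onto the link
        print(f"{label:>22}: transmission = {tx_s * 1e3:8.3f} ms, "
              f"propagation / transmission = {PROPAGATION_S / tx_s:g}")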
This difference has both positive and negative consequences. On the positive side, store-and-forward delays introduced by packet switches and routers along an end-to-end path are directly related to transmission time, causing them to become very small at gigabit speeds (barring unusual queuing situations). This removes a major problem inherent in the early Arpanet for packetized voice and other traffic having low delay requirements, since at gigabit speeds the resulting cumulative transmission delays effectively disappear relative to the propagation delay over wide-area distances.
On the negative side, the very small packet transmission time means that information sent to the originating node for feedback control purposes may no longer be useful, since the feedback is still subject to the same propagation delay across the network. Most networks in place in 1990, and particularly the Internet, relied on window-based end-to-end feedback mechanisms for flow/congestion control, for example that used by the TCP protocol. At 50 Kbps, a 200 millisecond packet transmission time meant that feedback from a destination node on a cross-country link could be returned to the sender before it had completed the transmission, causing further transmissions to be suppressed if necessary. At 1 Gbps, this type of short-term feedback control is clearly impossible for link distances of a few miles or more.
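Another way to see why window-based feedback breaks down is to compute the bandwidth-delay product, i.e. how much data a sender puts onto the network before any feedback can possibly return. A minimal sketch follows, assuming a roughly 60 ms cross-country round trip; the RTT and link rates are illustrative figures, not testbed measurements.

    RTT_S = 0.06    # assumed ~60 ms cross-country round-trip time (illustrative)

    def bytes_in_flight(rate_bps, rtt_s=RTT_S):
        """Bandwidth-delay product: data sent before the first feedback can arrive."""
        return rate_bps * rtt_s / 8

    for label, rate in (("50 Kbps", 50e3), ("1.5 Mbps", 1.5e6), ("1 Gbps", 1e9)):
        print(f"{label:>9}: {bytes_in_flight(rate):>12,.0f} bytes in flight per round trip")

At 50 Kbps the sender has barely one packet outstanding when feedback returns; at 1 Gbps several megabytes have already been committed to the network before any congestion signal can act.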
The impact of this feedback delay on performance is strongly related to the statistical properties of user traffic. If the peak and average bandwidth requirements of individual data streams are predictable over a time interval which is large relative to the network’s roundtrip propagation delay, then one might expect roundtrip feedback mechanisms to continue to work well. On the other hand, if the traffic associated with a user ‘session’, such as a file transfer, persists only for a duration comparable to or less than the roundtrip propagation time, then end-to-end feedback will be ineffective in controlling that stream relative to events occurring within the network while the stream is in progress. (And while we might look to the aggregation of large numbers of users to provide statistical predictability, the phenomenon of self-similar data traffic behavior has brought the prospect of aggregate data traffic predictability into question.)
Another control function impacted by the transmission/propagation time ratio is that of call setup in wide-area networks using virtual circuit (VC) mechanisms, for example in ATM networks. The propagation factor in this case can result in a significant delay before the first packet can be sent relative to what would otherwise be experienced. Moreover, for cases in which the elapsed time from the first to last packet sent is less than the VC setup time, inefficient resource utilization will typically result.
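The effect described above can be quantified with one line of arithmetic by comparing the virtual-circuit setup delay with the time the data itself occupies the link. In the sketch below, the 20 ms setup figure (roughly one wide-area round trip) and the transfer sizes are illustrative assumptions.

    VC_SETUP_S = 0.02     # assumed ~20 ms (about one wide-area round trip) to establish the VC
    RATE_BPS = 1e9

    for size_bits in (1e4, 1e7, 1e10):        # a single packet, a medium burst, a bulk transfer
        transfer_s = size_bits / RATE_BPS
        setup_fraction = VC_SETUP_S / (VC_SETUP_S + transfer_s)
        print(f"{size_bits:>14,.0f} bits: transfer {transfer_s * 1e3:9.3f} ms, "
              f"setup is {setup_fraction:7.2%} of the circuit holding time")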
The transmission/propagation time ratio also impacts local area technologies. The performance of random access networks such as ethernet is premised on this ratio being much greater than one, so that collisions occurring over the maximum physical extent of the network can be detected at all nodes in much less than one packet transmission time. A factor of 100 increase from the original ethernet design rate of 10 Mbps to 1 Gbps implies that the maximum physical extent must be correspondingly reduced or the minimum packet size correspondingly increased, or some combination of the two, in order to use the original ethernet design without change.
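The CSMA/CD constraint can be written down directly: the transmission time of the smallest frame must cover a worst-case collision round trip across the network. The sketch below uses a deliberately simplified model (propagation at roughly two-thirds the speed of light, no repeater or interface delays), so the numbers are indicative rather than the actual ethernet standard values.

    PROP_SPEED_M_PER_S = 2.0e8     # assumed signal speed in the medium, ~2/3 of c

    def min_frame_bits(rate_bps, extent_m):
        """Smallest frame whose transmission time covers a worst-case collision round trip."""
        round_trip_s = 2 * extent_m / PROP_SPEED_M_PER_S
        return rate_bps * round_trip_s

    for rate_bps, extent_m in ((10e6, 2500.0), (1e9, 2500.0), (1e9, 25.0)):
        print(f"{rate_bps / 1e6:6.0f} Mbps over {extent_m:7.1f} m -> "
              f"minimum frame about {min_frame_bits(rate_bps, extent_m):>8,.0f} bits")

Holding the minimum frame size fixed, a 100x increase in data rate forces a 100x reduction in network extent, or the corresponding increase in minimum frame size, which is exactly the trade-off noted above.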
More generally, as new competing technologies such as HIPPI or all-optical networks are introduced to deal explicitly with gigabit speeds, and with the prospect of still higher data rates in the future, issues of scalability and interoperability become increasingly important. Questions of whether ATM and SONET can scale independently of data rate or are in fact constrained by factors such as propagation delay, whether single-channel transmission at ever higher bit rates or striping over lower bit-rate multiple channels will prove more cost-effective, and how interoperability should best be achieved are important questions raised by the push to gigabit networking and beyond.
Along a somewhat different dimension, the proposed use of distributed shared memory (DSM) as a wide-area high speed communication paradigm instead of explicit message passing raised a number of issues. DSM attempts to make communication among a set of networked processors appear the same as if they were on a single machine using shared physical memory. A high bandwidth is required between the machines to allow successful DSM operation, and this had been achieved for local area networking environments. Issues concerning the application of DSM to a wide-area gigabit environment included how to hide speed-of-light latency so that processors do not have to stop and wait for remote memory updates and how far DSM could/should extend into the network; for example, should DSM be supported within network switches? Or, at the other extreme, should it exist only above the transport layer to provide a shared memory API for application programmers?
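A toy model shows why latency hiding matters so much for wide-area DSM. The sketch below compares fully blocking remote reads with reads issued ahead of use so that computation overlaps the round trips; the 60 ms RTT, the number of references, and the per-reference compute time are all illustrative assumptions, not testbed measurements.

    RTT_S = 0.06               # assumed wide-area round-trip time
    N_REFERENCES = 100         # remote memory references made by one processor
    COMPUTE_PER_REF_S = 0.001  # assumed local computation available per reference

    # Fully synchronous DSM: every remote reference stalls the processor for a full round trip.
    blocking_s = N_REFERENCES * (RTT_S + COMPUTE_PER_REF_S)

    # Latency-hiding DSM (prefetching / relaxed synchronization): references are issued ahead
    # of use, so computation overlaps the round trips and only one RTT remains exposed.
    overlapped_s = RTT_S + N_REFERENCES * COMPUTE_PER_REF_S

    print(f"blocking remote reads : {blocking_s:6.2f} s")
    print(f"overlapped            : {overlapped_s:6.2f} s "
          f"({blocking_s / overlapped_s:.0f}x faster in this toy model)")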
Platform Issues
A second set of issues concerns the ability of available computer and other technologies to support protocol processing, switching, and other networking functions at gigabit speeds. We use platform here very generally to mean the host computers, switching nodes internal to a network, routers or gateways which may be used for network interconnection, and specialized devices such as low level interfacing equipment.
For host computers the dominant question is the amount of resources required to carry out host-to-host and host-to-network protocol processing — in particular, could the computers available in 1990 support application I/O at gigabit rates, and if not at what future point might they be expected to?
Because of the dominance of TCP/IP in wide-area data networking by 1990, a question frequently asked was whether TCP implementations would scale to gigabit/s operation on workstation-class hosts. Some researchers claimed it would not scale and would have to be replaced by a new protocol explicitly designed for efficient high speed operation, in some cases using special hardware protocol engines. Others did not go to this extreme, but argued that outboard processing devices would be required to offload the protocol processing burden from the host, with the outboard processing taking place either on a special host I/O board or on an external device. Still others held that internal TCP processing at gigabit rates was not a problem if care was taken in its implementation, or that hardware trends would soon provide sufficient processing power.
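One concrete source of the per-byte cost behind this debate is the transport checksum, which requires touching every byte of the payload. The sketch below is a standard 16-bit one's-complement sum of the kind TCP uses; it is shown only to illustrate the per-word work that motivated hardware checksum assists and is not code from the testbeds.

    def ones_complement_checksum(data: bytes) -> int:
        """16-bit one's-complement sum over the data, the style of checksum TCP uses."""
        if len(data) % 2:
            data += b"\x00"                     # pad odd-length data with a zero byte
        total = 0
        for i in range(0, len(data), 2):        # every byte of the payload is read once
            total += (data[i] << 8) | data[i + 1]
        while total >> 16:                      # fold carries back into 16 bits
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    print(hex(ones_complement_checksum(b"example payload")))

At a gigabit per second this loop must consume 125 megabytes of data every second on top of all other protocol work, which is why the hardware checksumming noted in the host I/O findings above was attractive.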
For network switching nodes, a key question in 1990 was whether hardware switching was required or software-based packet switching could be scaled up to handle gigabit port rates and multi-gigabit aggregate throughputs. Another important question was how much control processing could reasonably be provided at each switch for flow/congestion control and quality-of-service algorithms that require per-packet or per-cell operations. Routers and gateways were subject to much the same questions as internal network switches.
Switching investigations were largely focused on detailed architectural choices for fixed-size ATM cell switching using a hardware paradigm, with the view that the fixed size allowed cost-effective and scalable hardware solutions. Issues concerned whether a sophisticated Batcher-Banyan design was necessary or relatively simple crossbar approaches could be used, how much cell buffering was needed to avoid excessive cell loss, whether the buffers should be at the input ports, output ports, intermediate points within the switch structure, or some combination of these choices, and whether input and output port controller designs should be simple or complex.
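The buffering question lends itself to simple simulation. The following toy model of an output-buffered cell switch with uniform random traffic counts lost cells as a function of per-port buffer size; the port count, offered load, and traffic model are arbitrary assumptions chosen only to illustrate the buffer/loss trade-off, not a model of any testbed switch.

    import random

    def cell_loss_rate(ports=16, buffer_cells=8, load=0.9, slots=50_000, seed=1):
        """Toy output-buffered cell switch: uniform random traffic, one FIFO per output port."""
        rng = random.Random(seed)
        queues = [0] * ports                    # cells queued at each output port
        arrived = dropped = 0
        for _ in range(slots):
            for _ in range(ports):              # at most one arrival per input port per slot
                if rng.random() < load:
                    arrived += 1
                    out = rng.randrange(ports)  # destination output port, chosen uniformly
                    if queues[out] < buffer_cells:
                        queues[out] += 1
                    else:
                        dropped += 1            # output buffer full: the cell is lost
            for p in range(ports):              # each output transmits one cell per slot
                if queues[p]:
                    queues[p] -= 1
        return dropped / arrived

    for b in (4, 8, 16, 32):
        print(f"buffer = {b:2d} cells/port -> loss rate ~ {cell_loss_rate(buffer_cells=b):.1e}")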
For variable-length PTM switching, issues concerned how to develop new software/hardware architectures to distribute per-port processing at gigabit rates while efficiently moving packets between ports, and how to implement network control functions within the new architectures. A key question was how much, if any, specialized hardware is necessary to move packets at these rates.
Other platform issues concerned the cost of achieving gigabit/s processing in specialized devices such as those needed for interworking different transmission technologies or for SONET crossconnect switching, and whether it was reasonable to accomplish these functions by processing data streams at the full desired end-to-end rate or alternatively to stripe the aggregate rate over multiple lower speed channels.
Software issues also existed within host platforms over and above transport and lower layer protocol processing. One set of issues concerned the operating system software used by each vendor, which like most platform hardware was designed primarily to support internal computation with little, if any, priority given to supporting efficient networking. In addition to questions concerning the environment provided by the operating system for general protocol transactions, an important issue concerned the introduction of multimedia services by external networks and whether sufficiently fast software response times could be achieved for passing real-time traffic between an application and the network interface.
Another host platform software issue concerned the presentation layer processing required to translate between data formats used by different platforms, for example different floating point formats — because the translation must in general be applied to each word of data being transferred, it had the potential for being a major bottleneck.
Highly parallel distributed memory computer architectures which were coming into use in 1990 presented still another set of software issues for gigabit I/O. These architectures consisted of hundreds or thousands of individual computing nodes, each with their own local memory, which communicated with each other and the external world through a hardware interconnection structure within the computer. This gave rise to a number of questions, for example whether TCP and other protocol processing should be done by each node or by a dedicated I/O node or both, how data should be gathered and disseminated between the machine I/O interfaces and each internal node, and how well the different hardware interconnect architectures being used could support gigabit I/O data rates.
Application Issues
The overriding application concern for host-to-host gigabit networking was what classes of applications could benefit from such high data rates and what kind of performance gains or new functionality could be realized.
Prior to the Initiative, many people claimed to have applications needing gigabit/s rates, but most could not substantiate those claims quantitatively. It was the competition for participation in the Initiative that led to ideas for applications that required on the order of a gigabit per second to the end user. Essentially all the applications which were selected had in common the need for supercomputer-class processing power, and these fell into two categories: ‘grand challenge’ applications in which the wall-clock time required to compute the desired results on a single 1990 supercomputer typically ranged from days to years, and interactive computations in which one or more users at remote locations desired to interact with a supercomputer modeling or other computation in order to visually explore a large data space.
The main issue for grand challenge applications was whether significant reductions in wall-clock solution time could be achieved by distributing the problem among multiple computers connected over a wide-area gigabit network. Here again, speed-of-light propagation delay loomed large — could remote processors exchange data over paths involving orders of magnitude larger delays than that experienced within a single multiprocessor computer and still maintain high processor utilization?
While circumventing latency appeared to be a major challenge, another approach offered the promise of major improvements for distributed computing in spite of this problem. This was the prospect of partitioning an application among heterogeneous computer architectures so that different parts of the problem were solved on a machine best matched to its solution. For example, computations such as matrix diagonalizations were typically fastest on vector architectures, while computations such as matrix additions or multiplications were fastest on highly parallel scalar architectures. Depending on the amount of computation time required for the different parts on a single computer architecture, a heterogeneous distribution offered the possibility of superlinear speedups. (One definition of superlinear speedup is “an increase by more than a factor of N in effective computation speed, using N machines over a network, over that speed which the fastest of the N machines could have achieved by itself.”)
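By this definition, the check is simple arithmetic. The sketch below compares a hypothetical two-machine heterogeneous run against the faster of the two machines running alone; the timings are invented purely for illustration and are not the testbed results reported elsewhere in this document.

    def speedup(standalone_times_s, distributed_time_s):
        """Speedup relative to the fastest single machine, per the definition quoted above."""
        return min(standalone_times_s) / distributed_time_s

    # Hypothetical, illustrative timings (hours): each machine solving the whole problem alone,
    # versus a partitioned run that places each phase on the better-suited architecture.
    standalone = {"vector machine": 120.0, "parallel machine": 160.0}
    distributed = 40.0

    s = speedup(standalone.values(), distributed)
    print(f"speedup = {s:.1f}x with N = 2 machines -> {'superlinear' if s > 2 else 'not superlinear'}")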
Thus issues for this application domain included how to partition application software so as to maximize the resulting speedup for a given set of computers, which types of computers should be used for a particular solution, what computation granularities should be used and what constraints are imposed by the application on the granularities, and how to manage the overall distributed problem execution. The last question required that new software tools be developed to assist programmers in the application distribution, provide run-time execution control, and allow monitoring of solution progress.
The second class of applications, interactive computations, can range from a single user interacting with a remote supercomputer to a large number of collaborators sharing interactive visualization and control of a computation, which is itself distributed over a set of computing resources as described above and which may include very large distributed datasets. An important issue for this application class is determining acceptable user response times, for example 100 milliseconds or perhaps one second elapsed time to receive a full screen display in response to a control input. This should in general provide more relaxed user communication delay constraints than the first application class, since these times are large enough to not be significantly impacted by propagation delay, and will also remain constant as future computation times decrease due to increased computing power.
Other issues for remote visualization include where to generate the rendering, what form the data interface should take between the data generation output and the renderer, how best to provide platform-independent interactive control, and how to integrate multiple heterogeneous display devices. For large datasets, an important issue is how to best distribute the datasets and associated computational resources, for example performing preprocessing on a computer in close proximity to the dataset and moving the results across the network versus moving the unprocessed data to remote computation points.
Each of the above issues was examined in a variety of networking and application contexts and is described more fully in the referenced testbed reports. The investigations and findings are summarized in Section 4.

3 Structure and Goals

3.1 Initiative Formation
The origins of the testbed initiative date back to 1987, when CNRI submitted a proposal to NSF and was subsequently awarded a grant to plan a research program on very high speed networks. The original proposal, which involved participants from industry and the university research community, was written by Robert Kahn of CNRI and David Farber of the University of Pennsylvania. Farber later became an active researcher on the follow-on effort, while CNRI ran the overall initiative. As part of this planning, CNRI issued a call for white papers in October 1988. This call, published in the Commerce Business Daily, requested submissions in the form of white papers from organizations with technological capabilities relevant to very high speed networking.
The selection of organizations to participate in the resulting testbed effort was carried out in accordance with normal government practices. A panel of fourteen members, drawn largely from the government, was assembled to review the white papers and to make recommendations for inclusion in the program. Those recommendations formed the basis for determining the government-funded participants. CNRI then worked with telecommunications carriers to obtain commitments for wide-area transmission facilities and with others in industry to develop a cohesive plan for structuring the overall program.
A subsequent proposal was submitted to NSF in mid-1989 for government funding of the non-industry research participants, with the wide-area transmission facilities and industrial research participation to be provided by industry at no cost to the government. A Cooperative Agreement, funded jointly by NSF and DARPA, was awarded to CNRI later that year to carry out testbed research on gigabit networks. The research efforts were underway by Spring 1990. Government funding over the resulting five-year duration of the project totaled approximately $20M; these funds were used primarily for university research efforts, and the total value of industry contributions over this period was estimated to be perhaps 10 or 20 times greater than the Government funding.
3.2 Initiative Management
The overall effort was managed by CNRI in conjunction with NSF and DARPA program officials. Within NSF, Darleen Fisher of the CISE directorate provided program management throughout the entire effort. A series of program managers, beginning with Ira Richer, were responsible for the effort at DARPA. Many others at both NSF and DARPA were also involved over the duration of the effort. In addition, each testbed had its own internal management structure consisting of at least one representative from each participating organization in that testbed; the particular form and style of internal management was left to each testbed’s discretion.
The coordinating role of a lead organization, played by CNRI, was essential in helping to bridge the many gaps between the individual research projects, industry, government agencies and potential user communities. At the time this effort began, there did not appear to be a clearly visible path to make this kind of progress happen.
To provide an independent critique of project goals and progress, an advisory group was formed by CNRI consisting of six internationally recognized experts in networking and computer applications. A different, yet similarly constituted, panel was formed by NSF to review progress during the second year of the project.
Administrative coordination of the testbeds was carried out in part through the formation of the Gigabit Testbed Coordinating Committee (“Gigatcc”), made up of one to two representatives from each participating testbed organization and CNRI/NSF/DARPA project management. The Gigatcc, chaired by Professor Farber, met approximately 3-4 times per year during the course of the initiative. In addition, each research organization provided CNRI with quarterly material summarizing progress, and each testbed submitted annual reports at the completion of each of the first three years of the initiative. Final reports for each testbed were prepared and are being submitted along with this document.
To encourage cross-fertilization of ideas and information sharing between the testbeds, CNRI held an annual three-day workshop attended by researchers and others from the five testbeds, plus invited attendees from government, industry, and the general networking research community. Attendance at these workshops typically ranged from 200 to 300 people, and the workshops served both as a vehicle for information exchange among project participants and as a stimulus for the transfer of testbed knowledge to industry. CNRI also assisted the U.S. Government in hosting a Gigabit Symposium in 1991, attended by over 600 individuals and chaired by Al Vezza of MIT.
A number of small inter-testbed workshops were also held during the course of the project to address specific testbed-related topics which could especially benefit from intensive group discussion. A total of seven such workshops were held, on topics including HIPPI/ATM/SONET interworking, gigabit TCP/IP implementation, gigabit applications and support tools, and operating system issues. In addition, an online database was established at CNRI early in the project to make information available via the Internet to project participants about new vendor products relevant to gigabit networking, and to maintain a list of publications and reports generated by testbed researchers.
3.3 The Testbeds
The five testbeds were geographically located around the U.S. as shown in Figure 3-1.
Figure 3-1. Testbed Locations
