Monday, October 27, 2014

The Next NSA Spying Shoe to Drop: “Pre-Crime” Artificial Intelligence ~ hey, besides the spying (control files on whoever) & the deep, deep, DEEP finance system? & seeing who has & does what with the ‘new’ technology coming down the pike (hint, hint... new “phisss~icks”)? maybe, just maybe, ALL the spy~in IS “they” r look~in fer somebody who kinda looks like US... & IS kinda “close” 2 US... but isn’t from round here... but is here & more r coming here (or R on “their” way here)... just a thought. & ALL this smoke is just cover 4 THAT? & maybe, just maybe, Y all the “spraying” ALL over the Planet... ya know, 2 make IT kinda un~bear~able (or reveals them?)... they know some~body IS coming... back... soon? hummmmmmmmmmmm

:o Oops


1984
NSA Building Big Brother “Pre-Crime” Artificial Intelligence Program
NSA spying whistleblower Edward Snowden’s statements have been verified.    Reporter Glenn Greenwald has promised numerous additional disclosures from Snowden.
What’s next?
We reported in 2008:
A new article by investigative reporter Christopher Ketcham reveals that a governmental unit operating in secret, and with no oversight whatsoever, is gathering massive amounts of data on every American and running artificial intelligence software to predict each American’s behavior, including “what the target will do, where the target will go, who it will turn to for help”.
The same governmental unit is responsible for suspending the Constitution and implementing martial law in the event that anything is deemed by the White House, in its sole discretion, to constitute a threat to the United States. (This is formally known as implementing “Continuity of Government” plans.)
As Ketcham’s article makes clear, these same folks and their predecessors have been busy dreaming up plans to imprison countless “trouble-making” Americans without trial in case of any real or imagined emergency.  [Background here.] What kind of Americans? Ketcham describes it this way:
“Dissidents and activists of various stripes, political and tax protestors, lawyers and professors, publishers and journalists, gun owners, illegal aliens, foreign nationals, and a great many other harmless, average people.”
Do we want the same small group of folks who have the power to suspend the Constitution, implement martial law, and imprison normal citizens to also be gathering information on all Americans and running AI programs to be able to predict where American citizens will go for help and what they will do in case of an emergency? Don’t we want the government to — um, I don’t know — help us in case of an emergency?
Bear in mind that the Pentagon is also running an AI program to see how people will react to propaganda and to government-inflicted terror. The program is called Sentient World Simulation:
“U.S. defense, intel and homeland security officials are constructing a parallel world, on a computer, which the agencies will use to test propaganda messages and military strategies. Called the Sentient World Simulation, the program uses AI routines based upon the psychological theories of Marty Seligman, among others. (Seligman introduced the theory of ‘learned helplessness’ in the 1960s, after shocking beagles until they cowered, urinating, on the bottom of their cages.)
Yank a country’s water supply. Stage a military coup. SWS will tell you what happens next.
The sim will feature an AR avatar for each person in the real world, based upon data collected about us from government records and the internet.”
The continuity-of-government folks’ AI program and the Pentagon’s AI program may or may not be linked, but both point to massive spying and the use of artificial intelligence to manipulate the American public, to concentrate power, to take away the liberties and freedoms of average Americans, and — worst of all — to induce chaos in order to achieve these ends.
PBS Nova reported in 2009:
The National Security Agency (NSA) is developing a tool that George Orwell’s Thought Police might have found useful: an artificial intelligence system designed to gain insight into what people are thinking.
With the entire Internet and thousands of databases for a brain, the device will be able to respond almost instantaneously to complex questions posed by intelligence analysts. As more and more data is collected—through phone calls, credit card receipts, social networks like Facebook and MySpace, GPS tracks, cell phone geolocation, Internet searches, Amazon book purchases, even E-Z Pass toll records—it may one day be possible to know not just where people are and what they are doing, but what and how they think.
The system is so potentially intrusive that at least one researcher has quit, citing concerns over the dangers in placing such a powerful weapon in the hands of a top-secret agency with little accountability.
Known as Aquaint, which stands for “Advanced QUestion Answering for INTelligence”, the program [which is run by the Intelligence Advanced Research Projects Activity (IARPA)] is based at the new M Square Research Park in College Park, Maryland. A mammoth two-million-square-foot, 128-acre complex, it is operated in collaboration with the University of Maryland. “Their budget is classified, but I understand it’s very well funded,” said Brian Darmody, the University of Maryland’s assistant vice president of research and economic development, referring to IARPA. “They’ll be in their own building here, and they’re going to grow. Their mission is expanding.”
***
In a 2004 pilot project, a mass of data was gathered from news stories taken from the New York Times, the AP news wire, and the English portion of the Chinese Xinhua news wire covering 1998 to 2000. Then, 13 U.S. military intelligence analysts searched the data and came up with a number of scenarios based on the material. Finally, using those scenarios, an NSA analyst developed 50 topics, and in each of those topics created a series of questions for Aquaint’s computerized brain to answer. “Will the Japanese use force to defend the Senkakus?” was one. “What types of disputes or conflict between the PLA [People's Liberation Army] and Hong Kong residents have been reported?” was another. And “Who were the participants in this spy ring, and how are they related to each other?” was a third. Since then, the NSA has attempted to build both on the complexity of the system—more essay-like answers rather than yes or no—and on attacking greater volumes of data.
“The technology behaves like a robot, understanding and answering complex questions,” said a former Aquaint researcher. “Think of 2001: A Space Odyssey and the most memorable character, HAL 9000, having a conversation with David. We are essentially building this system. We are building HAL.” A naturalized U.S. citizen who received her Ph.D. from Columbia, the researcher worked on the program for several years but eventually left due to moral concerns. “The system can answer the question, ‘What does X think about Y?’” she said. “Working for the government is great, but I don’t like looking into other people’s secrets.”
A supersmart search engine, capable of answering complex questions such as “What were the major issues in the last 10 presidential elections?” would be very useful for the public. But that same capability in the hands of an agency like the NSA—absolutely secret, often above the law, resistant to oversight, and with access to petabytes of private information about Americans—could be a privacy and civil liberties nightmare. “We must not forget that the ultimate goal is to transfer research results into operational use,” said Aquaint project leader John Prange, in charge of information exploitation for IARPA.
Once up and running, the database of old newspapers could quickly be expanded to include an inland sea of personal information scooped up by the agency’s warrantless data suction hoses. Unregulated, they could ask it to determine which Americans might likely pose a security risk—or have sympathies toward a particular cause, such as the antiwar movement, as was done during the 1960s and 1970s. The Aquaint robospy might then base its decision on the type of books a person purchased online, or chat room talk, or websites visited—or a similar combination of data. Such a system would have an enormous chilling effect on everyone’s everyday activities—what will the Aquaint computer think if I buy this book, or go to that website, or make this comment? Will I be suspected of being a terrorist or a spy or a subversive?
World Net Daily’s Aaron Klein reported earlier this month:
In February, the Sydney Morning Herald reported the Massachusetts-based multinational corporation, Raytheon – the world’s fifth largest defense contractor – had developed a “Google for Spies” operation.
Herald reporter Ryan Gallagher wrote that Raytheon had “secretly developed software capable of tracking people’s movements and predicting future behavior by mining data from social networking websites” like Facebook, Twitter, and Foursquare.
The software is called RIOT, or Rapid Information Overlay Technology.
Raytheon told the Herald it has not sold RIOT to any clients but admitted that, in 2010, it had shared the program’s software technology with the U.S. government as part of a “joint research and development effort … to help build a national security system capable of analyzing ‘trillions of entities’ from cyberspace.”
In April, RIOT was reportedly showcased at a U.S. government and industry national security conference for secretive, classified innovations, where it was listed under the category “big data – analytics, algorithms.”
Jay Stanley, senior policy analyst for the ACLU Speech, Privacy and Technology Project, argued …  that among the many problems with government large-scale analytics of social network information “is the prospect that government agencies will blunderingly use these techniques to tag, target and watchlist people coughed up by programs such as RIOT, or to target them for further invasions of privacy based on incorrect inferences.”
“The chilling effects of such activities,” he concluded, “while perhaps gradual, would be tremendous.”
Ginger McCall, attorney and director of the Electronic Privacy Information Center’s Open Government program, told NBC in February, “This sort of software allows the government to surveil everyone.
“It scoops up a bunch of information about totally innocent people. There seems to be no legitimate reason to get this, other than that they can.”
As for RIOT’s ability to help catch terrorists, McCall called it “a lot of white noise.”  [True ... Big data doesn't work to keep us safe.]
The London Guardian further obtained a four-minute video that shows how the RIOT software uses photographs on social networks. The images, sometimes containing latitude and longitude details, are “automatically embedded by smartphones within so-called ‘exif header data.’ RIOT pulls out this information, analyzing not only the photographs posted by individuals, but also the location where these images were taken,” the Guardian reported.
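The “exif header data” mentioned here stores GPS coordinates as degrees/minutes/seconds rational values plus a hemisphere reference; turning those into the decimal latitude/longitude used by mapping tools is simple arithmetic. A minimal sketch (the function name and sample coordinates are illustrative assumptions, not details from RIOT):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert an EXIF-style degrees/minutes/seconds triple plus a
    hemisphere reference ('N', 'S', 'E', 'W') to decimal degrees."""
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern and western hemispheres are negative in decimal notation.
    if ref in ('S', 'W'):
        decimal = -decimal
    return decimal

# Example: coordinates as they might appear in a photo's GPS EXIF tags.
lat = dms_to_decimal(40, 26, 46.8, 'N')   # latitude
lon = dms_to_decimal(79, 58, 56.4, 'W')   # longitude
print(round(lat, 6), round(lon, 6))
```

Combined with the photo's timestamp, two such numbers per image are enough to reconstruct a person's movements over time, which is exactly what the video demonstrates.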
Such sweeping data collection and analysis to predict future activity may further explain some of what the government is doing with the phone records of millions of Verizon customers. [Background here.]
***
“In the increasingly popular language of network theory, individuals are “nodes,” and relationships and interactions form the “links” binding them together; by mapping those connections, network scientists try to expose patterns that might not otherwise be apparent,” reported the Times.  [Background here.]
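The node-and-link idea the Times describes maps directly onto a graph data structure, and “mapping those connections” is ordinary graph traversal. A toy sketch (all names and relationships are invented for illustration), using breadth-first search to find the shortest chain of contacts between two people:

```python
from collections import deque

# Toy contact graph: each person (node) maps to the set of people they
# have interacted with (links). All names are invented for illustration.
contacts = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice", "dave"},
    "dave": {"bob", "carol", "erin"},
    "erin": {"dave"},
}

def connection_path(graph, start, goal):
    """Breadth-first search: shortest chain of contacts from start to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no connection found

print(connection_path(contacts, "alice", "erin"))
```

At this toy scale the exercise is harmless; the concern raised in the surrounding reporting is the same traversal run over links extracted from billions of real communications records.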
In February 2006, more than a year after Obama was sworn in as a U.S. senator, it was revealed the “supposedly defunct” Total Information Awareness data-mining and profiling program had been acquired by the NSA.
The Total Information Awareness program was first announced in 2002 as an early effort to mine large volumes of data for hidden connections.
Aaron Klein reported last week that Snowden might have worked at the NSA’s artificial intelligence unit at the University of Maryland:
Edward Snowden, the whistleblower behind the NSA surveillance revelations, told the London Guardian newspaper that he previously worked as a security guard for what the publication carefully described as “one of the agency’s covert facilities at the University of Maryland.”
***
Brian Ullmann, the university’s assistant vice president for marketing and communications, was asked for comment. He would not address the query, posed twice to his department by KleinOnline, about whether the NSA operates covert facilities in conjunction with the university.
Ullmann’s only comment was to affirm that Snowden was employed as a security guard at the university’s Center for the Advanced Study of Languages in 2005.

“Whoever Owns Space Owns the World”: Star Wars or Star Peace? ~ ALL that bullshit ’bout the dollar collapse??? whoEVER has control of the “high ground” controls $$$!!! huh?


spacewar
Two new military satellites, one American, the other Russian, were recently launched into orbit. There is nothing particularly newsworthy about this since different satellites are constantly being sent up into space, but still, the event is yet another indication that space is becoming more militarised. If we are to prevent space from turning into a new kind of warzone, it is essential that international agreements to ban space armaments are developed and signed as a matter of urgency.
Back in 1977, no one would ever have believed George Lucas’s Star Wars Trilogy could become a reality. But today, 35 years after the film was first released, there is apparently a greater possibility of a space battle happening outside the realm of Hollywood fantasy. Space has become a central part of the military and defence policies in many of the world’s biggest states.
In the future, a country at war will not try to occupy enemy territory directly. Instead it will concentrate on finding its adversary’s weak spots before delivering calculated blows. Ground troops and armoured vehicles will soon become a thing of the past, and strategic aviation is also set to take a back seat in the military campaigns of the future. Our understanding of ‘strategic armament’ has shifted from classic ‘nuclear defence triads’ towards non-nuclear armaments which rely on high-precision weapons systems and various means of deployment.
Wars of the future are expected to involve a lot of orbiters to ensure a country’s security: satellite reconnaissance, warning, forecasting and targeting systems – objects which themselves will need to be defended and armed.
The US is making huge investments into satellite technology. Back in 2009, US Secretary of Defense Robert Gates convinced Congress to designate a sum of $10.7 billion to developing this field. His successor in Barack Obama’s administration, Leon Panetta, clearly has no intention of lowering this sum.
Authoritative military analysts, such as General Vladimir Slipchenko (who recently passed away), predict that by 2020 the world’s leading countries will have between 70,000 and 90,000 precision weapons. We can only imagine the number of satellite systems these will require. And without satellites, the cruise missiles and smart bombs that can be programmed to wipe out something as small as a mosquito are no more than useless lumps of metal.
And so it is only a matter of time before orbital systems are developed that will be able to independently hit targets in space, in the atmosphere or on the Earth itself. But just because the technology exists (or soon will), it does not make it necessary to send military space stations into orbit, and this certainly should not mean that reconnaissance or meteorological satellites should have to be armed. In reality, the problems of satellite defence could be effectively dealt with from Earth.
“Whoever owns space also owns the world,” says the former Chief of Arms of the Russian Armed Forces, Colonel-General Anatoly Sitnov. But people in the military are the first to admit that Russia is lagging far behind the USA when it comes to space systems.
At the moment the sky is home to around 500 American orbiters, and just 100 Russian ones. According to Russian experts, the American satellite fleet is more than four times the size of Russia’s. What is more, not all of Russia’s orbiters are in good working condition. In mid-June the experimental spacecraft X-37B completed a successful autonomous landing after more than 15 months orbiting the Earth. X-37B’s Programme Manager Lt Col Tom McIntyre noted that following the retirement of the space shuttle fleet, the X-37B OTV programme would bring “a singular capability to space technology development.” The Americans do not hide the fact that this sort of technology could first and foremost be applied to create new armament opportunities.
In this respect Russia’s position is very different from that of the Americans. In May 2008 Commander of the Space Forces General Vladimir Popovkin (who is now in charge of Roscosmos) said: “We are categorically against placing or launching any sort of armaments into space, because space is one of the few areas where there are no borders. Introducing arms to space will upset the balance in the world.”
According to Popovkin space systems and complexes are technically very difficult and could easily fail. “As the Commander of Space Forces (in this case) I cannot guarantee that the object’s failure was not caused by the actions of a potential enemy”.
According to military experts, strategic nuclear stability, i.e. guarantees against a sudden nuclear missile strike, rely heavily on the efficacy of early warning satellites that detect missile launches, and also on the constant work of reconnaissance satellites. If one of these orbiters ceases to function, the security of the state that launched it may end up in jeopardy. This could in turn create an atmosphere of distrust and uncertainty, which could ultimately lead to a military catastrophe.
It would seem that Harrison Ford, who played Han Solo, one of the most important characters in the Star Wars films, was right when he said that the main secret of the film’s success was that it was “not about space, but about people; this is primarily a film about human relationships.” It is up to us humans to decide whether space shall remain as a peaceful realm or whether it will become another arena for military conflict.

XKeyscore: NSA tool collects 'nearly everything a user does on the internet'

• XKeyscore gives 'widest-reaching' collection of online data
• NSA analysts require no prior authorization for searches
• Sweeps up emails, social media activity and browsing history
NSA's XKeyscore program – read one of the presentations

XKeyscore map
One presentation claims the XKeyscore program covers 'nearly everything a typical user does on the internet'
A top secret National Security Agency program allows analysts to search with no prior authorization through vast databases containing emails, online chats and the browsing histories of millions of individuals, according to documents provided by whistleblower Edward Snowden.
The NSA boasts in training materials that the program, called XKeyscore, is its "widest-reaching" system for developing intelligence from the internet.
The latest revelations will add to the intense public and congressional debate around the extent of NSA surveillance programs. They come as senior intelligence officials testify to the Senate judiciary committee on Wednesday, releasing classified documents in response to the Guardian's earlier stories on bulk collection of phone records and Fisa surveillance court oversight.
The files shed light on one of Snowden's most controversial statements, made in his first video interview published by the Guardian on June 10.
"I, sitting at my desk," said Snowden, could "wiretap anyone, from you or your accountant, to a federal judge or even the president, if I had a personal email".
US officials vehemently denied this specific claim. Mike Rogers, the Republican chairman of the House intelligence committee, said of Snowden's assertion: "He's lying. It's impossible for him to do what he was saying he could do."
But training materials for XKeyscore detail how analysts can use it and other systems to mine enormous agency databases by filling in a simple on-screen form giving only a broad justification for the search. The request is not reviewed by a court or any NSA personnel before it is processed.
XKeyscore, the documents boast, is the NSA's "widest reaching" system developing intelligence from computer networks – what the agency calls Digital Network Intelligence (DNI). One presentation claims the program covers "nearly everything a typical user does on the internet", including the content of emails, websites visited and searches, as well as their metadata.
Analysts can also use XKeyscore and other NSA systems to obtain ongoing "real-time" interception of an individual's internet activity.
Under US law, the NSA is required to obtain an individualized Fisa warrant only if the target of their surveillance is a 'US person', though no such warrant is required for intercepting the communications of Americans with foreign targets. But XKeyscore provides the technological capability, if not the legal authority, to target even US persons for extensive electronic surveillance without a warrant provided that some identifying information, such as their email or IP address, is known to the analyst.
One training slide illustrates the digital activity constantly being collected by XKeyscore and the analyst's ability to query the databases at any time.
KS1
The purpose of XKeyscore is to allow analysts to search the metadata as well as the content of emails and other internet activity, such as browser history, even when there is no known email account (a "selector" in NSA parlance) associated with the individual being targeted.
Analysts can also search by name, telephone number, IP address, keywords, the language in which the internet activity was conducted or the type of browser used.
One document notes that this is because "strong selection [search by email address] itself gives us only a very limited capability" because "a large amount of time spent on the web is performing actions that are anonymous."
The NSA documents assert that by 2008, 300 terrorists had been captured using intelligence from XKeyscore.
Analysts are warned that searching the full database for content will yield too many results to sift through. Instead they are advised to use the metadata also stored in the databases to narrow down what to review.
A slide entitled "plug-ins" in a December 2012 document describes the various fields of information that can be searched. It includes "every email address seen in a session by both username and domain", "every phone number seen in a session (eg address book entries or signature block)" and user activity – "the webmail and chat activity to include username, buddylist, machine specific cookies etc".

Email monitoring

In a second Guardian interview in June, Snowden elaborated on his statement about being able to read any individual's email if he had their email address. He said the claim was based in part on the email search capabilities of XKeyscore, which Snowden says he was authorized to use while working as a Booz Allen contractor for the NSA.
One top-secret document describes how the program “searches within bodies of emails, webpages and documents”, including the “To, From, CC, BCC lines” and the “‘Contact Us’ pages on websites”.
To search for emails, an analyst using XKS enters the individual's email address into a simple online search form, along with the "justification" for the search and the time period for which the emails are sought.
KS2
KS3edit2
The analyst then selects which of those returned emails they want to read by opening them in NSA reading software.
The system is similar to the way in which NSA analysts generally can intercept the communications of anyone they select, including, as one NSA document put it, "communications that transit the United States and communications that terminate in the United States".
One document, a top secret 2010 guide describing the training received by NSA analysts for general surveillance under the Fisa Amendments Act of 2008, explains that analysts can begin surveillance on anyone by clicking a few simple pull-down menus designed to provide both legal and targeting justifications. Once options on the pull-down menus are selected, their target is marked for electronic surveillance and the analyst is able to review the content of their communications:
KS4

Chats, browsing history and other internet activity

Beyond emails, the XKeyscore system allows analysts to monitor a virtually unlimited array of other internet activities, including those within social media.
An NSA tool called DNI Presenter, used to read the content of stored emails, also enables an analyst using XKeyscore to read the content of Facebook chats or private messages.
KS55edit
An analyst can monitor such Facebook chats by entering the Facebook user name and a date range into a simple search screen.
KS6
Analysts can search for internet browsing activities using a wide range of information, including search terms entered by the user or the websites viewed.
KS7
As one slide indicates, the ability to search HTTP activity by keyword permits the analyst access to what the NSA calls "nearly everything a typical user does on the internet".
KS8
The XKeyscore program also allows an analyst to learn the IP addresses of every person who visits any website the analyst specifies.
KS9
The quantity of communications accessible through programs such as XKeyscore is staggeringly large. One NSA report from 2007 estimated that there were 850bn "call events" collected and stored in the NSA databases, and close to 150bn internet records. Each day, the document says, 1-2bn records were added.
William Binney, a former NSA mathematician, said last year that the agency had “assembled on the order of 20tn transactions about US citizens with other US citizens”, an estimate, he said, that “only was involving phone calls and emails”. A 2010 Washington Post article reported that “every day, collection systems at the [NSA] intercept and store 1.7bn emails, phone calls and other types of communications.”
The XKeyscore system is continuously collecting so much internet data that it can be stored only for short periods of time. Content remains on the system for only three to five days, while metadata is stored for 30 days. One document explains: "At some sites, the amount of data we receive per day (20+ terabytes) can only be stored for as little as 24 hours."
To solve this problem, the NSA has created a multi-tiered system that allows analysts to store "interesting" content in other databases, such as one named Pinwale which can store material for up to five years.
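The retention figures quoted above follow from a simple rolling-buffer calculation: how long data can be kept is storage capacity divided by daily intake. A back-of-the-envelope sketch (the capacity numbers are assumptions chosen to reproduce the “as little as 24 hours” and 30-day figures, not numbers from the documents):

```python
def retention_days(capacity_tb, intake_tb_per_day):
    """How many days a rolling buffer of capacity_tb terabytes can hold
    data arriving at intake_tb_per_day before the oldest must be dropped."""
    return capacity_tb / intake_tb_per_day

# A site with 20 TB of buffer receiving 20+ TB/day keeps roughly one day,
# matching the "as little as 24 hours" figure quoted above.
print(retention_days(20, 20))   # -> 1.0

# Metadata is far smaller than full content, which is how it can be held
# for 30 days: e.g. a 30 TB buffer at 1 TB/day of metadata.
print(retention_days(30, 1))    # -> 30.0
```

The same arithmetic explains the tiering: promoting only “interesting” content to a smaller, slower-filling database like Pinwale stretches its retention window to years.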
It is the databases of XKeyscore, one document shows, that now contain the greatest amount of communications data collected by the NSA.
KS10
In 2012, there were at least 41 billion total records collected and stored in XKeyscore for a single 30-day period.
KS11
Legal v technical restrictions
While the Fisa Amendments Act of 2008 requires an individualized warrant for the targeting of US persons, NSA analysts are permitted to intercept the communications of such individuals without a warrant if they are in contact with one of the NSA's foreign targets.
The ACLU's deputy legal director, Jameel Jaffer, told the Guardian last month that national security officials expressly said that a primary purpose of the new law was to enable them to collect large amounts of Americans' communications without individualized warrants.
"The government doesn't need to 'target' Americans in order to collect huge volumes of their communications," said Jaffer. "The government inevitably sweeps up the communications of many Americans" when targeting foreign nationals for surveillance.
An example is provided by one XKeyscore document showing an NSA target in Tehran communicating with people in Frankfurt, Amsterdam and New York.
KS12
In recent years, the NSA has attempted to segregate exclusively domestic US communications in separate databases. But even NSA documents acknowledge that such efforts are imperfect, as even purely domestic communications can travel on foreign systems, and NSA tools are sometimes unable to identify the national origins of communications.
Moreover, all communications between Americans and someone on foreign soil are included in the same databases as foreign-to-foreign communications, making them readily searchable without warrants.
Some searches conducted by NSA analysts are periodically reviewed by their supervisors within the NSA. "It's very rare to be questioned on our searches," Snowden told the Guardian in June, "and even when we are, it's usually along the lines of: 'let's bulk up the justification'."
In a letter this week to senator Ron Wyden, director of national intelligence James Clapper acknowledged that NSA analysts have exceeded even legal limits as interpreted by the NSA in domestic surveillance.
Acknowledging what he called "a number of compliance problems", Clapper attributed them to "human error" or "highly sophisticated technology issues" rather than "bad faith".
However, Wyden said on the Senate floor on Tuesday: "These violations are more serious than those stated by the intelligence community, and are troubling."
In a statement to the Guardian, the NSA said: "NSA's activities are focused and specifically deployed against – and only against – legitimate foreign intelligence targets in response to requirements that our leaders need for information necessary to protect our nation and its interests.
"XKeyscore is used as a part of NSA's lawful foreign signals intelligence collection system.
"Allegations of widespread, unchecked analyst access to NSA collection data are simply not true. Access to XKeyscore, as well as all of NSA's analytic tools, is limited to only those personnel who require access for their assigned tasks … In addition, there are multiple technical, manual and supervisory checks and balances within the system to prevent deliberate misuse from occurring."
"Every search by an NSA analyst is fully auditable, to ensure that they are proper and within the law.
"These types of programs allow us to collect the information that enables us to perform our missions successfully – to defend the nation and to protect US and allied troops abroad."

Secret Space War: America’s Former Nazi Scientists Dream of Ruling the World


spaceplane
In an exclusive interview with the Voice of Russia, Bruce Gagnon shares little known facts about the militarization of space by the United States, the development of first strike space drones and the foundation of the US military industrial complex by Nazi scientists bent on victory in World War III. If you thought missile defense and drones were bad, you haven’t heard anything yet.
Robles: According to your organization, the US Space Command has publicly stated they intend to control space in order to protect US interests and investments. Is space now US territory?
Gagnon: Well, indeed the United States likes to believe that it owns space, and particularly the Space Command, whose headquarters building in Colorado Springs bears, just above the doorway, a logo that reads “Master of Space”. So, I think it is quite evident that the Space Command does indeed view space as US territory that must be controlled, because they clearly understand that all warfare on the earth today is coordinated by space technology and that whoever controls space will control the planet below, in this case, I believe, on behalf of corporate globalization. And so the Space Command, in our thinking, has become the military arm of corporate globalization.
And so today the US is developing a whole host of technologies to allow it to fight war from space, through space and in space, controlling not only the Earth but also the pathway on and off the planet Earth, the pathway to other planetary bodies as resources are discovered on other planets: magnesium, cobalt, uranium, gold, water etc.
In a congressional study done back in the 1980s, Congress gave the Pentagon the mandate to develop the technologies to control the pathway on and off the planet Earth. So, the Space Command sees its role in a very, very robust kind of way.
Robles: Several questions just popped up after what you just said. First one: how do they intend to “control the pathway”, I mean there is not only one pathway off the planet, I mean, how are they going to do that?
Gagnon: Well, in this particular study, entitled “Military Space Forces: The Next 50 Years”, they talk about the Earth-Moon gravity well: that whoever controls the Earth-Moon gravity well, essentially with bases on the Moon and armed space stations between what they said were the L4 and L5 positions in space, would be able to control it.
And interestingly enough, we know that it was in fact the former Nazi scientists who were brought to the United States following World War II under a secret program called Operation Paperclip. These Nazi scientists, who ran Hitler’s V1 and V2 rocket programs, were the first to bring to the Congress of the United States this idea of having orbiting battle stations controlling the pathway on and off the planet as well as the Earth below.
So, today again there is the whole host of technologies that are being developed by the Space Command. They say at the Pentagon that we are not going to get all of these technologies to work, but through the investment and the research and development in these various technologies, things like “Rods from God”: orbiting battle stations with tungsten-steel rods they would be able to hit targets on the Earth below…
Robles: They call those “Rods from God”?
Gagnon: Yes, they call it “Rods from God”. The new military space plane that is being tested now by the Pentagon has shown its ability to stay in orbit for a whole year at a time: an unpiloted space drone, essentially. And then there are the ground stations the United States has established all over the planet, what they call downlink stations, which communicate with US military satellites. This whole network has been put into place to really give the US, as they say in one of their planning documents, “control and domination of space”.
Robles: More questions: The space drone that you just mentioned, it is actually…it’s operational right now?
Gagnon: It is called the X-37B; it has been tested over the past couple of years. The testing program has accelerated, and they’ve had three successful launches of it now. The most recent, I believe at the end of 2012, was the last of the missions, the third mission actually. But prior to that they had one of them spend a whole year in space.
The role of this X-37B, or the military space plane, is somewhat in dispute. Some people believe that it is for surveillance, to spy on various countries like Russia and China. Others believe that it is actually a first-strike weapons system whose job would be to fly down from orbit and drop an attack on a particular country.
In fact the Space Command annually war-games a first-strike attack on China set in the year 2016. And in one of the industry publications, Aviation Week and Space Technology, I read a report that the first weapon used in one of these computer war-game attacks on China was this military space plane. So, indeed they are war-gaming with it as a first-strike weapon.
Robles: Now, you mentioned Nazi scientists a minute ago. It is not a very widely known fact that after World War II, I believe, about 400,000 Nazis found refuge in the United States. Can you tell us a little more about the scientists who were developing these programs and working with the US government? Can you expand on that a little bit?
Gagnon: Under Operation Paperclip, 1,200 Nazis were brought into the United States, including former Nazi intelligence officers who were brought in to help create the CIA.
Wernher von Braun, the Nazi scientist that ran the V1 and V2 operations, was brought in. He became one of the leaders of NASA and he built the first successful rockets that were launched by the US military after the Kennedy administration wanted to respond to the Soviet Union’s launch of Sputnik.
Other Nazi scientists were brought in to create US Flight Medicine programs, the MKUltra LSD-drug experiments of the 1960s in the United States, where people were jumping out of windows and killing themselves because they were given drugs.
The people that were running these were the former Nazi scientists who had been doing similar tests on prisoners of war and Jews and other people in concentration camps inside of Germany.
So, the entire military industrial complex was seeded with these top Nazi operatives. And I’ve always maintained that when you do that: “Is there an ideological contamination that comes along with that?” My belief is: indeed there is.
Robles: That’s exactly the point I wanted to make myself.
Gagnon: Major-General Walter Dornberger was the Head of Hitler’s secret Space Development Program. He was brought to the United States to work for Bell Aerospace in New York State after the war.
He testified before the Congress in the 1950s. And I can quote him, he said to the Congress: “Gentlemen, I didn’t come to this country to lose the third world war, I lost two already.” And he again was one of the first to lay out this vision of control of space, giving the US full control of the planet Earth.
Bruce Gagnon is the coordinator for the Global Network Against Weapons and Nuclear Power in Space.

US Patent Application for Ebola Virus – Five Years Ago Today

Below is a screenshot depicting the US government’s application for the Ebola Bundibugyo (EboBun) virus.
PATENT – Human Ebola Virus Species and Compositions and Methods Thereof – US 20120251502 A1 – ABSTRACT: Compositions and methods including and related to the Ebola Bundibugyo virus (EboBun) are provided…Inventive methods are directed to detection and treatment of EboBun infection. [See also: Why does the CDC own a patent on Ebola 'invention?' 3 Aug 2014.]

ESnet: The 100-gigabit shadow internet that only the US government has access to  ~ gee i wonder how far ahead of US is the deep.deep,Deep,DEEP black world  ???  ya know the 1  we the people ...paid 4 !

The Doors (The Crystal Ship)


this video is not new but ah i like it ....
Accelerating Internet

One day, as I surfed the web on my laptop and lamented how long it takes a YouTube video to load, I found myself wondering if employees of the US government — DoD researchers, DoE scientists, CIA spies — are also beholden to the same congestion and shoddy peering that affects everyone else on the internet. Surely, as hundreds of scientists at Fermi Lab near Chicago wait for petabytes of raw data to arrive from the Large Hadron Collider in Europe, they don’t suffer interminable connection drops and inexplicable lag. And, as it turns out, they don’t: the US government and its national laboratories all have exclusive access to ESnet — a shadow internet that can sustain 100-gigabits-per-second transfers between any of the major Department of Energy labs. And today, the DoE announced that the 100-gigabit ESnet will be extended across the Atlantic to our Old World comrades, who occasionally manage to dazzle us with their scientific endeavors.
ESnet, or the Energy Sciences Network to give its full name, has existed in some form or another since 1986. Throughout the history of telecommunications, networking, and the internet, it isn’t unusual for non-profits and government agencies to set up their own networks for their own specific needs — and indeed, the internet itself started as the ARPAnet, a defense-and-research-oriented packet-switched network that offered much higher transfer speeds and more utility than the existing circuit-switched telephone networks. ARPAnet eventually became open-access, gaining equal measures of awesomeness and terribleness – which in turn triggered the creation of various high-speed specialized networks that sought to bypass the internet, such as Internet2 (US research and education), JANET (British), GEANT (European), and ESnet.
Read: Why Netflix streaming is getting slower, and probably won’t get better any time soon

ESnet network map
As you can see in the network map above, ESnet spans the US, providing a network of 100-gigabit links between many of the country’s major cities and all of the Department of Energy’s national laboratories (Ames, Argonne, Berkeley, Oak Ridge, Fermi, Brookhaven, etc.). There are also a handful of peering connections to commercial networks (i.e. the internet) and to other research/education networks around the world.
ESnet’s links to Europe are of significant importance, as the world’s largest science experiment — CERN’s Large Hadron Collider in Switzerland — produces tens of petabytes (tens of thousands of terabytes) of data every year, and the supercomputers at Brookhaven and Fermi labs in the US are used to process that data. This morning, ESnet said it is deploying four separate links from Boston, New York, and Washington DC to London, Amsterdam, and Geneva. The four links will have a total capacity of 340 gigabits per second, and they will take different paths across the Atlantic, which is a savvy move to increase redundancy (submarine cables get damaged fairly regularly).
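As a back-of-the-envelope check on those figures, the short Python sketch below estimates how long one year of LHC output would take to cross the combined 340 Gbps of transatlantic capacity. The 30 PB figure is an assumption picked from the article's "tens of petabytes" range, not a number from ESnet:

```python
# Back-of-the-envelope: time to ship a year of LHC data over ESnet's
# four new transatlantic links (340 Gbps combined, per the article).
# ASSUMPTION: 30 PB/year, chosen from the "tens of petabytes" range.

LINK_CAPACITY_GBPS = 340
ANNUAL_DATA_PB = 30

bits_total = ANNUAL_DATA_PB * 1e15 * 8           # petabytes -> bits
seconds = bits_total / (LINK_CAPACITY_GBPS * 1e9)
days = seconds / 86_400                          # seconds per day

print(f"{ANNUAL_DATA_PB} PB at {LINK_CAPACITY_GBPS} Gbps takes about {days:.1f} days")
```

Even at full utilization, a year's worth of collider data would monopolize all four links for over a week, which is one reason the redundancy and headroom matter.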
Internet submarine cable map
A map of the world’s submarine cables
Read: The secret world of submarine cables
While 100-gigabit fiber-optic links are fairly old hat by this point (commercial 100 GbE switches have been around since 2010), ESnet is fairly unique in that its users can actually obtain end-to-end transfer speeds that are close to the theoretical maximum. It’s one thing to push hundreds of gigabits or even terabits of data per second over a single stretch of optical fiber, but a much, much more difficult proposition to create a stable 100-gigabit connection across the breadth of the US, traversing thousands of miles and a dozen routers. Back in November last year, ESnet managed a solid disk-to-disk transfer speed of 91 gigabits per second from Denver to Maryland. That’s about 11 gigabytes per second — or 11 movies, if you prefer — copied from one massive high-speed disk cluster to another, over a distance of around 1,700 miles (2,700 km). As far as we’re aware, this is still the fastest long-distance connection ever created.
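The bits-to-bytes conversion behind that claim is worth making explicit, since it is a common factor-of-eight trip-up; a minimal check in Python:

```python
# Convert ESnet's 91 Gbps disk-to-disk record into gigabytes per second.
# Network rates are quoted in bits; storage sizes in bytes (8 bits each).

rate_gbps = 91                    # gigabits per second (from the article)
rate_gigabytes = rate_gbps / 8    # -> gigabytes per second

print(f"{rate_gbps} Gbps is {rate_gigabytes:.3f} GB/s")
# i.e. roughly the "11 gigabytes per second" the article quotes
```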
There’s no word on whether the entirety of ESnet is now enjoying 100-gigabit connections from one end of the country to the other, but it’s probably a work in progress for the DoE. Remember, having the physical fiber-optic links and routers is just one part of the equation — you also need a storage solution on each end of the connection that’s capable of 100Gbps I/O, which isn’t cheap and probably isn’t even necessary for most national labs, unless they’re working on something big like the LHC.
The LHC's CMS detector
Big experiments, like the LHC’s CMS detector, create petabytes of data per year that needs to be sent thousands of miles from Geneva to supercomputers in the US for analysis

Moving forward, I’m sure ESnet’s 100 gigabits per second won’t be bleeding edge for much longer. Most of the world’s large research and education networks — such as the UK’s JANET and Europe’s GEANT — have had 100-gigabit backbones for a few years now. The IEEE is currently working on the next high-speed network standard — somewhere between 400Gbps and 1,000Gbps (1Tbps) — which should be ready by 2017.
Another bundle of twisted pair copper wires
Delivering 100-gigabit speeds over the last mile of plain ol’ copper wires is a slightly more difficult proposition.
Finally, while you might be impressed by the speed of ESnet and the other networks that make up the shadow internet, you’re probably wondering when your internet — the internet — will see anything approaching these kinds of speeds. As of 2014, most of the internet is still made up of 1, 10, and 40Gbps links. So far, despite protestations from the likes of Verizon and other US ISPs, there’s still plenty of headroom in the data centers and peering exchange points that make up the core backbone links of the internet. With the amount of bandwidth available across a single pair of optic fibers, and the relative simplicity of upgrading a few core routers, it won’t be hard to upgrade the internet backbone to 100 gigabits, and then later to 400Gbps and beyond.
The real difficulty of bringing high-speed internet access to the consumer — to your home, your office, your smartphone — is the last mile. It’s one thing to connect two filing cabinet-sized routers with a 100-mile stretch of fiber, but a completely different problem — on a completely different scale — to somehow connect billions of consumers to that same network. It can theoretically be done by running fiber all the way into your home, as Google is slowly doing with its Fiber project — and perhaps, eventually, with millimeter-wave wireless networks — but we’re still a good few years away from the specific, commercialized technologies that will allow us to cost effectively bring gigabit-and-faster connections to the clamoring masses.

Brookings Institution: Our Cyborg Future – Law and Policy Implications




Introduction

In June 2014, the Supreme Court handed down its decision in Riley v. California, in which the justices unanimously ruled that police officers may not, without a warrant, search the data on a cell phone seized during an arrest. Writing for eight justices, Chief Justice John Roberts declared that “modern cell phones . . . are now such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy.”1
This may be the first time the Supreme Court has explicitly contemplated the cyborg in case law—admittedly as a kind of metaphor. But the idea that the law will have to accommodate the integration of technology into the human being has actually been kicking around for a while.
Speaking at the Brookings Institution in 2011 at an event on the future of the Constitution in the face of technological change, Columbia Law Professor Tim Wu mused that “we’re talking about something different than we realize.” Because our cell phones are not attached to us, not embedded in us, Wu argued, we are missing the magnitude of the questions we contemplate as we make law and policy regulating human interactions with these ubiquitous machines that mediate so much of our lives. We are, in fact, he argued, reaching “the very beginnings of [a] sort of understanding [of] cyborg law, that is to say the law of augmented humans.” As Wu explained,
[I]n all these science fiction stories, there’s always this thing that bolts into somebody’s head or you become half robot or you have a really strong arm that can throw boulders or something. But what is the difference between that and having a phone with you—sorry, a computer with you—all the time that is tracking where you are, which you’re using for storing all of your personal information, your memories, your friends, your communications, that knows where you are and does all kinds of powerful things and speaks different languages? I mean, with our phones we are actually technologically enhanced creatures, and those technological enhancements, which we have basically attached to our bodies, also make us vulnerable to more government supervision, privacy invasions, and so on and so forth.
And so what we’re doing now is taking the very first, very confusing steps in what is actually a law of cyborgs as opposed to human law, which is what we’ve been used to. And what we’re confused about is that this cyborg thing, you know, the part of us that’s not human, non-organic, has no rights. But we as humans have rights, but the divide is becoming very small. I mean, it’s on your body at all times.2
Humans have rights, under which they retain some measure of dominion over their bodies.3 Machines, meanwhile, remain slaves with uncertain masters. Our laws may, directly and indirectly, protect people’s right to use certain machines—freedom of the press, the right to keep and bear arms. But our laws do not recognize the rights of machines themselves.4 Nor do the laws recognize cyborgs—hybrids that add machine functionalities and capabilities to human bodies and consciousness.5
As the Riley case illustrates, our political vocabulary and public debates about data, privacy, and surveillance sometimes approach an understanding that we are—if not yet Terminators—at least a little more integrated with our machines than are the farmer wielding a plow, the soldier bearing a rifle, or the driver in a car. We recognize that our legal doctrines make a wide swath of transactional data available to government on thin legal showings—telephone call records, credit card transactions, banking records, and geolocation data, for example—and we worry that these doctrines make surveillance the price of existence in a data-driven society. We fret that the channels of communication between our machines might not be free, and that this might encumber human communications using those machines.
That said, as the Supreme Court did in Riley, we nearly always stop short of Wu’s arresting point. We don’t, after all, think of ourselves as cyborgs. The cyborg instead remains metaphor.
But should it? The question is a surprisingly important one, for reasons that are partly descriptive and partly normative. As a descriptive matter, sharp legal divisions between man and machine are turning into something of a contrivance. Look at people on the street, at the degree to which human-machine integrations have fundamentally altered the shape of our daily lives. Even beyond the pacemakers and the occasional robotic prosthetics, we increasingly wear our computers—whether Google Glass or Samsung Galaxy Gear. We strap on devices that record our steps and our heart rates. We take pictures by winking. Even relatively old-school humans are glued to their cell phones, using them not just as communications portals, but also for directions, to spend money, for informational feeds of varying sorts, and as recorders of data seen and heard and formerly—but no longer—memorized. Writes one commentator:
[E]ven as we rebel against the idea of robotic enhancement, we’re becoming cyborgs of a subtler sort: the advent of smartphones and wearable electronics has augmented our abilities in ways that would seem superhuman to humans of even a couple decades ago, all without us having to swap out a limb or neuron-bundle for their synthetic equivalents. Instead, we slip on our wristbands and smart-watches and augmented-reality headsets, tuck our increasingly powerful smartphones into our pockets, and off we go—the world’s knowledge a voice-command away, our body-metrics and daily activity displayable with a few button-taps.6


About the Authors

Benjamin Wittes
Benjamin Wittes is a senior fellow in Governance Studies at The Brookings Institution. He co-founded and is the editor-in-chief of the Lawfare blog, which is devoted to sober and serious discussion of “Hard National Security Choices.”
Jane Chong
Jane Chong is a 2014 graduate of Yale Law School, where she was an editor of the Yale Law Journal. She spent a summer researching national security issues at Brookings as a Ford Foundation Law School Fellow.

The Robots are Coming: The Project on Civilian Robotics
This paper is part of a series focused on the future of civilian robotics, which seeks to answer the varied legal questions around the integration of robotics into human life.
Read other papers in the series »


No, the phones are not encased in our tissue, but our reliance on them could hardly be more narcotic if they were. Watch your fellow passengers the next time you’re on a plane that lands. No sooner does it touch down than nearly everyone engages their phones, as though a part of themselves has been shut down during the flight. Look at people on a bus or on a subway car. What percentage of them is in some way using phones, either sending information or receiving some stimulus from an electronic device? Does it really matter that the chip is not implanted in our heads—yet? How much of your day do you spend engaged with some communications device? Is there an intelligible difference between tracking it and tracking you?
This brings us to the normative half of the inquiry. Should we recognize our increasing cyborgization as more than just a metaphor, given the legal and policy implications both of doing so and of failing to do so? Our law sees you and your cell phone as two separate entities, a person who is using a machine. But robust protections for one may be vitiated in the absence of concurrent protections for the other. Our law also sees the woman with a pacemaker and the veteran with a robotic prosthesis or orthosis as people using machines. Where certain machines are physically incorporated into or onto the body, or restore the body to its “normal” functionality rather than enhance it, we might assume they are more a part of the person than a cell phone. Yet current laws offer no such guarantees. The woman is afforded no rights with respect to the data produced by her pacemaker,7 and the quadriplegic veteran has few rights beyond restitution for property damage when an airline destroys his mobility assistance device and leaves him for months without replacement.8
As we will explain, the general observation that humans are becoming cyborgs is not new.9 But commentators have largely used the term “cyborg” to capture, as a descriptive matter, what they see as unprecedented merger between humans and machines, and to express concerns about the ways in which the body and brain are increasingly becoming sites of control and commodification. In contrast, with normative vigor, we push the usefulness of the concept, suggesting the ways in which conceptualizing our changing relationship to technology in terms of our cyborgization may facilitate the development of law and policy that sensitively accommodates that change.
The shift that comes of understanding ourselves as cyborgs is nowhere more apparent than in the surveillance realm, where discussion of the legal implications of our technology dependence is often couched in and restricted to privacy terms. Under this conventional construction, it is privacy that is key to our identities, and technology is the poisoned chalice that enables, on the one hand, our most basic functioning in a highly networked world, and on the other, the constant monitoring of our activities. For example, in a 2013 Christmas day broadcast, Edward Snowden borrowed a familiar trope to portend a dark fate for a society threatened by the popularization of technologies that George Orwell had never contemplated. “We have sensors in our pockets that track us everywhere we go. Think about what this means for the privacy of the average person,”10 he urged. With some poignancy, Snowden went on to pay special homage to all that privacy makes possible. Privacy matters, according to Snowden, because “privacy is what allows us to determine who we are and who we want to be.”11
There is, however, another way to think about all of this: what if we were to understand technology itself as increasingly part of our very being?
Indeed, do we care so much about whether and how the government accesses our data perhaps because the line between ourselves and the machines that generate the data is getting fuzzier? Perhaps the NSA disclosures have struck such a chord with so many people because on a visceral level we know what our law has not yet begun to recognize: that we are already juvenile cyborgs, and fast becoming adolescent cyborgs; we fear that as adult cyborgs, we will get from the state nothing more than the rights of the machine with respect to those areas of our lives that are bound up with the capabilities of the machine.
In this paper, we try to take Wu’s challenge seriously and think about how the law will respond as the divide between human and machine becomes ever-more unstable. We survey a variety of areas in which the law will have to respond as we become more cyborg-like. In particular, we consider how the law of surveillance will shift as we develop from humans who use machines into humans who partially are machines or, at least, who depend on machines pervasively for our most human-like activities.
We proceed in a number of steps. First, we try to usefully define cyborgs and examine the question of to what extent modern humans represent an early phase of cyborg development. Next we turn to a number of controversies—some of them social, some of them legal—that have arisen as the process of cyborgization has gotten under way. Lastly, we take an initial stab at identifying key facets of life among cyborgs, looking in particular at the surveillance context and the stress that cyborgization is likely to put on modern Fourth Amendment law’s so-called third-party doctrine—the idea that transactional data voluntarily given to third parties is not protected by the guarantee against unreasonable search and seizure.


Reuters/Yuri Gripas - Guests use their cell phones to take pictures of U.S. President Barack Obama at a reception to observe Lesbian, Gay, Bisexual and Transgender (LGBT) Pride Month at the White House in Washington June 15, 2012


Reuters/Herwig Prammer - A figure from the movie The Terminator is displayed inside the house where Austrian actor, former champion bodybuilder and former California governor Arnold Schwarzenegger was born, in the southern Austrian village of Thal, October 7, 2011.


What Is a Cyborg and Are We Already Cyborgs?

Human fascination with man-machine hybrids spans centuries and civilizations.12 From this rich history we extract two lineages of modern thought to help elucidate the theoretical underpinnings of the cyborg.
In 1960, Manfred Clynes coined the term “cyborg”13 for a paper he coauthored with Nathan Kline for a NASA conference on space exploration.14 As conceived by Clynes and Kline, the cyborg—a portmanteau of “cybernetics” and “organism”15—was not merely an amalgam of synthetic and organic parts. It represented, rather, a particular approach to the technical challenges of space travel—physically adapting man to survive a hostile environment, rather than modifying the environment alone.16
The proposal would prove influential. Soon after the publication of Clynes and Kline’s paper, NASA commissioned “The Cyborg Study.” Released in 1963, the study was designed to assess “the theoretical possibility of incorporating artificial organs, drugs, and/or hypothermia as integral parts of the life support systems in space craft design of the future, and of reducing metabolic demands and the attendant life support requirements.”17 This sort of cyborg can be understood as a commitment to a larger project. As a “self-regulating man-machine,” the cyborg was designed “to provide an organization system in which . . . robot-like problems are taken care of automatically and unconsciously, leaving man free to explore, to create, to think, and to feel.”18 Distinguishing man’s “robot-like” functions from the higher-order processes that rendered him uniquely human, Clynes and Kline presented the cyborg as the realization of a concrete transhumanist goal: man liberated from the strictly mechanical (“robot-like”) limitations of his organism and the conditions of his environment by means of mechanization.
Outside the realm of space exploration, use of the term “cyborg” has evolved to encompass an expansive mesh of the mythological, metaphorical and technical.19 According to Chris Hables Gray, who has written extensively on cyborgs and the politics of cyborgization, “cyborg” has become “as specific, as general, as powerful, and as useless a term as tool or machine.”20 Perhaps because of its plasticity, the term has become more popular among science-fiction writers and political theorists than among scientists, who prefer more exacting vocabularies—using terms like biotelemetry, teleoperators, bionics and the like.21
The idea that we are already cyborgs—indeed, that we have always been cyborgs—has been out there for some time. For example, in her seminal 1991 feminist manifesto, Donna Haraway deployed the term for purposes of building an “ironic political myth,” one that rejected the bright-line identity markers purporting to separate human from animal, animal from machine. She famously declared, “[W]e are all chimeras, theorized and fabricated hybrids of machine and organism; in short, we are cyborgs.”22
Periodically repackaged as a radical idea, the claim has not remained confined to the figurative or sociopolitical realms. Technologists, too, have proposed that humans have already made the transition to cyborgs. In 1998, Andy Clark and David Chalmers proposed that where “the human organism is linked with an external entity in a two-way interaction” the result is “a coupled system that can be seen as a cognitive system in its own right.”23 Clark expanded on these ideas in his 2003 book Natural-Born Cyborgs:
My body is an electronic virgin. I incorporate no silicon chips, no retinal or cochlear implants, no pacemaker. I don’t even wear glasses (though I do wear clothes), but I am slowly becoming more and more a cyborg. So are you. Pretty soon, and still without the need for wires, surgery, or bodily alterations, we shall all be kin to the Terminator, to Eve 8, to Cable . . . just fill in your favorite fictional cyborg. Perhaps we already are. For we shall be cyborgs not in the merely superficial sense of combining flesh and wires but in the more profound sense of being human-technology symbionts: thinking and reasoning systems whose minds and selves are spread across biological brain and nonbiological circuitry.24




The idea that humans are already cyborgs has met with resistance from those who note that “[p]ointing to something like cell-phone use and saying ‘we’re all cyborgs’ is not substantially different from pointing to cooking or writing and saying ‘we’re all cyborgs.’”25 But this is actually Clark’s point. As suggested by the title of his book, Clark does not regard the human “tendency toward cognitive hybridization” as a modern phenomenon. He sees the history of humanity as marked by a series of “mindware upgrades,” from the development of speech and counting, to the production of moveable typefaces and digital encodings.26 Although he recognizes the particular postmodern appeal of the cyborg, “a potent cultural icon of the late twentieth century,” Clark suggests that when whittled down, our futuristic conception of human-machine hybrids amounts to nothing more than “a disguised vision of (oddly) our own biological nature.”27
Not all cyborg theorists find the notion that we have always been cyborgs compelling. In the introduction to his influential 1995 collection The Cyborg Handbook, Chris Hables Gray addresses, and rejects, this conflation and essentialization of the primitive and the modern:
But haven’t people always been cyborgs? At least back to the bicycle, eyeglasses, and stone hammers? This is an argument many people make, including early cyborgologists like Manfred Clynes and J.E. Steele. The answer is, in a word, no . . . . Cyborgian elements of previous human-tool and human-machine relationships are only visible from our current point of view. In quantity, and quality, the relationship is new.28
Similarly, cyborg anthropologist29 Amber Case maintains that there is a meaningful distinction to be made between past technologies and those developed in recent decades, based on the ways in which and extent to which they shape and change how humans connect to one another.30
There are those who have suggested that we may not yet be cyborgs but that, given the exponential growth of computing power and technological development, we will soon be. According to Ray Kurzweil—the preeminent inventor and futurist, now a director of engineering at Google—we are at the “knee of the curve.” Kurzweil’s exploration of the coming obliteration of the distinction between human and machine recalls many of the bio-transcendent ideas and ambitions of Clynes and Kline and their intellectual predecessor, Norbert Wiener. Kurzweil states,
Our version 1.0 biological bodies are likewise frail and subject to a myriad of failure modes . . . . While human intelligence is sometimes capable of soaring in its creativity and expressiveness, much human thought is derivative, petty and circumscribed. The Singularity will allow us to transcend these limitations of our biological bodies and brains.31
A second history runs parallel to the Clynes-and-Kline narrative, one that begins around the same time but in a slightly different theoretical space. Wu, an apparent proponent of the “subtler cyborg” theory, is among those who attribute the beginnings of the “project of human augmentation” to J.C.R. Licklider.32 Licklider is sometimes referred to as the father of the Internet in part for his role in shaping the Pentagon’s funding priorities as head of the Information Processing Techniques Office, a division of the Advanced Research Projects Agency (ARPA).33 In 1960—the same year Clynes and Kline published “Cyborgs and Space”—Licklider published “Man-Computer Symbiosis,” in which he predicted the “close coupling” of man and the electronic computer. Together they would constitute a symbiotic system, in contradistinction to the “semiautomatic system” that was the mechanically extended man.34 Licklider wrote, “‘Mechanical extension’ has given way to replacement of men, to automation, and the men who remain are there more to help than to be helped.”35
Licklider’s ideas offer a fundamentally different way of thinking about the cyborg. In his view, the human does not remain a central part of the picture. Where Clynes, Kline and Clark arguably see fusion (between machine and man), a Lickliderite might be said to see substitution (machine for man). Under the Licklider view, no longer is the cyborg a project centered on unleashing man’s potentiality; the cyborg is man getting out of the way.
We don’t mean to settle these largely philosophical arguments about the nature of the cyborg’s technological trajectory here. For present purposes, let us begin with a working definition of the term “cyborg” and posit that a process of cyborgization of society is taking place. Steve Mann, inventor of the “wearable computer,” defines cyborg in terms of hybridization, as “a person whose physiological functioning is aided by or dependent upon a mechanical or electronic device.”36 The Oxford English Dictionary, meanwhile, defines cyborg more explicitly in terms of augmentation: as “[a] person whose physical tolerances or capabilities are extended beyond normal human limitations by a machine or other external agency that modifies the body's functioning; an integrated man–machine system.”37 Under either definition, different people fall in different places on the spectrum of pure human to consummate cyborg. But quite a number of us are inching closer to a subtle Arnold Schwarzenegger. And the increasing cyborgization of the populace—however we choose to define the phenomenon—raises important questions about access,38 discrimination,39 military action,40 privacy,41 bodily integrity,42 autonomy,43 property44 and citizenship.45 These are questions that, ultimately, the law will have to address.




Reuters/Sergio Perez - A man wears a pair of Google glasses as he stands at Madrid's Puerta del Sol square December 14, 2013.


Cyborgs Among Us: Controversies and Policy Implications

One way to think about the policy issues that cyborgs will force us to address is to examine the controversies already arising out of our incipient cyborgization.
Many of these controversies track familiar binaries, such as substitution for missing or defective human parts versus enhancement or extension of normal human capabilities. The significance of this particular distinction has long been recognized in the bioethics sphere. Dieter Sturma, director of the German Reference Centre for Ethics in the Life Sciences, has stated that we should be open to “technical systems [that] reduce the suffering of a patient,” while warning that enhancement technology could become a problem.46
For starters, human enhancement is often associated with cheating and unfair advantage. Everything from professional athletes’ use of performance-enhancing drugs to government investment in military superwarriors has been criticized on these grounds.47 Vaccines, on the other hand, go largely unchallenged, no doubt because, though they might be said to enhance the immune system, they function as prophylactic treatment against disease and are ideally administered to all.
But the distinction between substitution and enhancement is less stable than might be assumed.48 For it turns on how society chooses to define what constitutes health and what constitutes deficiency. That distinction is itself subject to variation depending on social context and individual goals, and subject to change with advances in science that allow for the manipulation of previously unalterable biological conditions.49 VSP, the country's largest optical health insurance provider, for example, recently signed on to offer Google Glass subsidized frames and prescription lenses.50 By giving wearable devices “a medical stamp of approval,”51 the move suggests the beginnings of a breakdown in the distinction between mediated vision and mediated reality.


Neil Harbisson, a cyborg activist and artist born with a form of extreme colorblindness that limits him to seeing in only black and white, is equipped with an “eyeborg,” a device implanted in his head that allows him to “hear” color.52 In 2012, police officers attacked Harbisson and broke the camera off his head because they believed, mistakenly, that he was filming them during a street demonstration.53
The functionality of the eyeborg and the injuries Harbisson sustained raise questions not only about the enhancement-substitution distinction but also about the distinction between embedded and external devices. Our intuition might be to separate embedded from external technologies on the grounds that the difference often tracks whether the technologies are “integral” to the functioning of the human body. This could seem reasonable depending on the particular technologies we decide to compare: for example, smartphones are external to the body and presently not allowed in many federal courtrooms; barring someone fitted with a pacemaker from entry, on the other hand, would seem untenable.
But the difference between the embedded and the external is not so easily reduced to the difference between the integral and the superficial. Consider devices designed to compensate for physical deficiency but in ways not readily perceived by our society as prosthetic in form and function.54 Linda MacDonald Glenn, a bioethicist and lawyer, cites the case of a disabled Vietnam veteran who, as an incomplete quadriplegic, is entirely dependent on a powered mobility assistance device (MAD) not only to move and travel but also to protect himself against hypotensive episodes.55 An airline damaged his MAD beyond repair in October 2009 and left him without a replacement for a year, causing him to be bedridden for eleven months and to suffer ulcers as a result.56 The airline offered minimal damages—$1,500 in compensation—on the grounds that it had harmed not its customer but only his device.57 Glenn argues that modern-day assistive devices are no longer “inanimate separate objects” but “interactive prosthetics”: these include implants, transplants, embedded devices, nanotechnology, neural prosthetics, wearables and bioengineering.58 As such, liability rules that cover damage to or interference with use of traditional property could need reassessment in the interactive-prosthetic context.
This is not to dismiss the difference between what is embedded and what is external. After all, whether or not a technology can be considered medically superficial in function, once we incorporate it into the body such that it is no longer easily removed, it is integral to the person in fact. A number of bars,59 strip clubs60 and casinos61 have banned the use of Google Glass based on privacy protection concerns, and movie theaters have banned it for reasons related to copyright protection.62 But such bans could pose problems when the equivalent of Google Glass is physically screwed into an individual’s head.




Take Steve Mann. Unlike Harbisson, Mann suffers no visual impairment. But Mann wears an EyeTap, a small camera attached to his skull that mediates what he sees—for instance, he can filter out annoying billboards—and streams what he sees onto the Internet. In 2012, Mann was physically assaulted by McDonald's employees in what the press described as the world’s first anti-cyborg hate crime; Mann was able to prove the attack happened by releasing images taken with the EyeTap.63 The episode naturally raised questions about Mann’s rights as a cyborg, but the fact that Mann’s eyepiece affords him the ability to record everything he sees also raises questions about the privacy rights of noncyborgs when faced with individuals embedded with technologies with potentially invasive capabilities. These technologies are potentially invasive not only for third parties but also for the “users” themselves. In 2004, tiny RFID chips were implanted in 160 of Mexico's top federal prosecutors and investigators to grant them automatic access to restricted areas. Ostensibly a security measure, the chips ensured certainty as to who accessed sensitive data and when,64 but they also raise questions about when an individual may be compelled to undergo modification—perhaps as a condition of sensitive employment, or perhaps for other, murkier reasons.
A 2009 U.S. National Science Foundation report on the “Ethics of Human Enhancement” suggests moral significance in the distinction between a tool incorporated as part of our bodies and a tool used externally to the body, although—for example—a neural implant that gives an individual access to the Internet may not seem different in kind from a laptop.65 Specifically, the report argues that “assimilating tools into our persons creates an intimate or enhanced connection with our tools that evolves our notion of personal identity.” The report does, however, note that the “always-on or 24/7 access characteristic” might work against attempts to distinguish a neural chip from a Google Glass-like wearable computer.66
The “always-on” aspect of certain intimate technologies, embedded or not, gives pause when one considers the ownership rights, data rights and security needs of cyborgs. Southwestern Law Professor Gowri Ramachandran has emphasized the potential need to specially regulate technologies that aid bodily function and mobility:
[I]t is completely unremarkable for property rights to exist in electronic gadgets. But we might be concerned if owners of patents on products such as pacemakers and robotic arms were permitted to enforce “end user license agreements” (“EULAs”) against patients. These EULAs could in effect restrict what patients can do with products that have become merged with their own bodies. And we should rightly, I argue, be similarly concerned with the effects of property rights in wheelchairs, cochlear implants, tools used in labor, and other such devices on the bodies of those who need or desire to use them.67
As it turns out, the state of the law with respect to pacemakers and other implanted medical devices provides a particularly vivid illustration of a cyborg gap. Most pacemakers and defibrillators are outfitted with wireless capabilities that communicate with home transmitters that then send the data to the patient’s physician.68 Experts have demonstrated the existence of enormous vulnerabilities in these software-controlled, Internet-connected medical devices, but the government has failed to adopt or enforce regulations to protect patients against hacking attempts.69 To date there have been no reports of such hacking—but then again, it would be extremely difficult to detect this type of foul play.70 The threat is sufficiently viable that former Vice President Dick Cheney’s doctor ordered the disabling of his heart implant’s wireless capability, apparently to prevent a hacking attempt, while Cheney was in office.71




And then there’s the more mundane question of the rights that should be afforded to people who engage in cyborgism that is seemingly simply recreational in nature. The explosive popularity of “wearable” technology points to the coming seamless integration of bodily and technological functions and suggests not a trend but an emerging way of life. In December 2013, Google introduced a new update to Google Glass—a feature that allows users to take a photo by simply winking.72 And this is only the beginning: one day a computer that interfaces with human perception may be able to overlay these insights on our own, imperceptibly enhancing our powers of analysis.73 This stuff will be great fun for lots of people who do not in any strict sense need it. It will also be inexpensive and readily available. And its use will be highly annoying—and intrusive—to other people, at least some of the time. The result will be disputes that the law will need to mediate.
Commentators who favor an expansive interpretation of who qualifies as a cyborg reject the idea that the physical body need undergo modification, in favor of focusing on the ways in which technology changes the brain. This approach introduces a third dimension to the otherwise contrived binaries between restorative and enhancing technologies and between embedded and external ones. For example, the invention of the printing press has been distinguished from the development of other tools in that it is “an invention that boosts our cognitive abilities by allowing us to off-load and communicate ideas out into the environment.”74 Of course, the smartphone takes off-loading to a new level. As Wu suggests, off-loading is a major feature of our relationship with and growing reliance upon modern tools that significantly enhance our abilities as cyborgs but reduce our capabilities as man qua man. Recently he wrote:
With our machines, we are augmented humans and prosthetic gods, though we’re remarkably blasé about the fact, like anything we’re used to. Take away our tools, the argument goes, and we’re likely stupider than our friend from the early twentieth century, who has a longer attention span, may read and write Latin, and does arithmetic faster.75
How exactly we will mediate between the rights of cyborgs and the rights of anti-cyborgs remains to be seen—but we are already seeing some basic principles emerge. For example, the proposition that individuals should have special rights with respect to the use of therapeutic or restorative technologies appears to be so accepted that it has prompted a kind of intuitive carve-out for those who otherwise oppose wearable and similar technologies. Such is the case with Stop the Cyborgs, an organization that emerged directly in response to the public adoption of “wearable” technologies such as Google Glass. On its website, the group promotes “Google Glass ban signs” for owners of restaurants, bars and cafes to download and encourages the creation of “surveillance-free” zones.76 Yet the site also expressly requests that those who choose to ban Google Glass and “similar devices” from their property also respect the rights of those who rely on assistive devices.77




Reuters/Marie Arago - Stevenson Joseph practices using a 3D-printed prosthetic hand at an orphanage near Port-au-Prince, Haiti, April 28, 2014


Principles for Juvenile Cyborgs: Surveillance and Beyond

We can expect our increasing cyborgization to have the most significant immediate effects on the law in the surveillance arena. And here cyborgism is a two-edged sword. For the cyborg both enables surveillance and is unusually subject to it.
We are, at this stage, at most juvenile cyborgs—more likely still infant cyborgs. We do not yet have a detailed sense of the scope, speed, or depth of our ongoing integration with machines. Will it remain, as it mostly is now, a sort of consumer dependence on objects and devices that make themselves useful, and eventually essential? Or will it evolve into something deeper—a physically more intimate connection between human and machine, and a dependence among more people for functions that we regard, or come to regard, as core human activity?
The cliché goes that an order-of-magnitude quantitative change is a qualitative change. Put differently, it is not merely that technology gets faster or more sophisticated; when the original speed or complexity increases tenfold, the change is one in kind, rather than simply in degree.
Today we may be baby cyborgs, our reliance on certain technologies increasing quantitatively, but at some point we will be looking at a qualitative change—a point at which we are truly no longer using those technologies but have sufficiently fused with them so as to reduce the government’s claims of tracking “them” and not “us” to an untenable legal fiction. Contrary to the layman’s assumption, that need not be the point at which we surgically implant chips into our wrists or introduce nanobots into our bloodstreams; it could be a simple question of the extent of our reliance or the frequency and pervasiveness of our use.
Until we know how far down the cyborg spectrum how many of us are going to travel, it is folly to imagine that we can fashion definitive policy for surveillance—or anything else—in a world of cyborgs. It is more plausible, however, to imagine that we might discern certain principles and considerations that should inform policy as we mature into adolescent cyborgs and ultimately into adult cyborgs. Indeed, such an examination is vital if we are to make deliberate choices about whether and to what extent the protections and liberties we enjoy as humans are properly afforded to—or forfeited by—cyborgs.




1. Data Generation
The first consideration that must factor into our discussions is that cyborgs inherently generate data. Human activity by default does not—at least, not beyond footprints and fingerprints and DNA traces. We can think and move without leaving meaningful traces; we can speak without recording. Digital activity, by contrast, creates transactional records. A cyborg’s activity is thus presumptively recorded, and that data may be stored or transmitted. To record or to transmit data is also to enable collection or interception of that data. Unless one specifically engineers the cyborg to resist such collection or interception, it will by default facilitate surveillance. And even if one does engineer the cyborg to resist surveillance, the data still gets created. In other words, a world of cyborgs is a world awash in data about individuals, data of enormous sensitivity and, as cyborgization progresses, of ever-increasing granularity.
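The point that digital activity generates records by default can be made concrete with a toy sketch. The class and field names below are entirely hypothetical—they are not drawn from any real device's logging format—but they illustrate the structural claim: every interaction with a networked device emits a timestamped transaction record as a side effect, whether or not anyone wants one.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical illustration only: field names are invented for this sketch.
@dataclass
class TransactionRecord:
    device_id: str
    action: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class WearableDevice:
    """A toy device: every interaction leaves a record as a side effect."""

    def __init__(self, device_id: str):
        self.device_id = device_id
        self.log = []  # accumulates TransactionRecord objects

    def interact(self, action: str) -> None:
        # The user asked only for the action; the record is created
        # regardless. Resisting surveillance would mean deliberately
        # engineering this step away.
        self.log.append(TransactionRecord(self.device_id, action))

device = WearableDevice("pacemaker-001")
device.interact("read_heart_rate")
device.interact("transmit_to_physician")
print(len(device.log))  # prints 2: every action left a trace
```

Note that the logging is not an added feature the user opts into; it is a structural byproduct of how the interaction is implemented—which is precisely the sense in which cyborg data is generated rather than volunteered.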
Thus the most immediate impact of cyborgization on the law of surveillance will likely be to put additional pressure on the so-called third-party doctrine, which underlies a great deal of government collection of transactional data and business records. Under the third-party doctrine, an individual does not have a reasonable expectation of privacy with respect to information he voluntarily discloses to a third party, like a bank or a telecommunications carrier, and the Fourth Amendment therefore does not regulate the acquisition of such transactional data from those third parties by government investigators. The Supreme Court declined to extend constitutional protections to bank records in United States v. Miller78 based on the theory that “the Fourth Amendment does not prohibit the obtaining of information revealed to a third party and conveyed by [the third party] to Government authorities, even if the information is revealed on the assumption that it will be used only for a limited purpose and the confidence placed in the third party will not be betrayed.”79 The third-party doctrine underlies a huge array of collection, everything from the basic building blocks of routine criminal investigations to the NSA’s bulk telephony metadata program.
The third-party doctrine has long been controversial, even among humans. It has attracted particular criticism as backward in an era in which third-party service providers hold increasing amounts of what was previously considered personal information. Commentators have urged everything from overruling the doctrine entirely80 to adapting the doctrine to extend constitutional protections to Internet searches.81
But the doctrine seems particularly ill-suited to cyborgs. A world of humans can, after all, indulge the fiction that we each have a meaningful choice about whether to engage the modern banking system or the telephone infrastructure. It can adopt the position—however unrealistic in practice—that we have the option of not using telephones if we prefer not to give our metadata to telephone companies, and that we can pay cash for everything if we do not like the idea of the FBI getting our credit card records without a warrant. But the cyborg does not meaningfully have choice. Digital machines produce data as an inherent feature of their existence. The more we come to see the machine as an extension of the person—first by the pervasiveness of its use, then by its physical integration with its user, and ultimately through cybernetic integration with the user—the less plausible will seem the notion that these are simply tools which we choose to use and whose data we thus voluntarily turn over to service providers. The more like cyborgs we become, the more that data will seem like the inevitable byproduct of our lives, and thus entitled to heightened legal protection.
For an example, let’s return to the pacemaker. The pacemaker does not just control a patient’s heartbeat. It also monitors it, along with blood temperature, breathing, and heart electrical activity. In some very technical sense, we might consider this extracted data to be information voluntarily given to a third party. One does not have to get a pacemaker, after all. Nor does one have to get it monitored. These are choices, albeit choices with unusually high stakes. Yet even to formulate the issue thus brings a smile to one’s lips. And it’s hard to believe that Smith v. Maryland82 or Miller would have come out the same way had the data in question been produced by a device necessary to keep someone alive, had it quite literally borne on the person’s most vital bodily functions, and had it been physically embedded within the person’s chest.




It is not merely that pacemaker users lack control over who accesses their data. Current laws impose no affirmative duty on manufacturers to allow pacemaker users access to their own data. The top five manufacturers do not allow patients to access the data produced by their pacemakers at all.83 This state of affairs has prompted one activist, Hugo Campos, to fight for the right to access the data collected by his own defibrillator, for years without success.84 This perversity is an outgrowth of our failure to conceive of the relationship between the patient and pacemaker as one of integration, rather than mere use. The same logic would see pacemaker data as covered by the third-party doctrine.
The cyborg, in other words, is uniquely vulnerable to surveillance. The explosion in wearable technology underscores the dated nature of current laws in addressing the privacy concerns of the technology user—specifically, about the secondary uses of inherently intimate consumer data.85 The type of information collected by wearables ranges from an individual’s physiological responses to environmental factors to data more commonly associated with computer use—geolocation and personal interests. To a lesser extent, this is true too of smartphones, which collect and contain, in real time, data that humans did not previously carry with them in traceable form. All of this is collectable—the issue that bothered the justices in Riley.
2. Data Collection
While cyborgs generate the kind of comprehensive data that subjects them to surveillance, cyborgs also collect data, making them a powerful instrument of surveillance.
Cyborg data collection can be benign; much of it consists these days of people posting a lot of selfies and pictures of their kids on Facebook, for example, or people recording their own experiences. But the result is also a world in which one has to interact with others on the assumption that they are, or that they may be, recording aspects of the engagement. This is what animates those—like the people who run the Stop the Cyborgs website—who fear the ubiquitous presence of small, low-visibility surveillance devices. Cyborgization innately transforms people into agents capable of collecting, retaining and processing large volumes of information.
The cyborg is, indeed, an instrument of highly distributed surveillance. Mann’s wearable computer is an outgrowth of a political stance: he has long advocated using technology to “invert the panopticon” and turn the tables on surveillance authorities. “Sousveillance,” a play on the term “surveillance”—which translates from the French as watching from above—reflects the idea of a populace watching the state from below.86 And wearable technologies with recording functions could indeed secure individuals in a number of ways, notably by deterring and documenting crime.
Sousveillance may secure the cyborg but it also imposes costs on cyborgs and noncyborgs alike. “Google Glass is possibly the most significant technological threat to ‘privacy in public’ I've seen,” Woodrow Hartzog, an affiliate scholar at the Center for Internet and Society at Stanford Law School, told Ars Technica last year. “In order to protect our privacy, we will need more than just effective laws and countermeasures. We will need to change our societal notions of what ‘privacy’ is.”87 But efforts to combat the perceived privacy threat posed by certain technologies raise their own set of ethical and legal issues. For instance, technologies have been developed to detect and blind cameras, as well as neutralize vision aids and other assistive technologies.88 The cyborg thus raises the problem of how society—and its law—will respond to large numbers of people recording their routine interactions with others.
In short, the further down the cyborg spectrum we go, the more we are both agents of and subject to surveillance.




3. Constructive Integration
When dealing with a world of cyborgs, to conduct surveillance against a machine is to conduct surveillance against a person. There will often be no distinction between the two. This does not necessarily mean one should be more hesitant about the surveillance of machines; we may well decide as a society to lighten up about the surveillance of people. The point is that we should not kid ourselves about what we are doing when we collect on the non-biological side of the cyborg. When we learn the GPS coordinates of a phone, there is a person attached to that phone. When we check how our Fitbit data compares to that of our friends, we are not comparing our wristbands to other wristbands but our physical performance to that of other people. When we examine a computer’s search history, we are looking at the trajectory of a person’s thought.
Imagine for a moment the logical technological terminus of the cellphone revolution. Instead of carrying a device, dialing it, and speaking into it, the individual would simply have the ability to talk—as though her interlocutor were present—with any individual in the world. She would merely identify mentally to whom she was talking, and the telecommunications chip in her head would connect with the telecommunications chip in the recipient’s head. She would not even be conscious of using technology. What would separate this system from magic is that there would be actual telecommunications going on. Two communications devices, each interfacing with a human brain, would connect before sending and receiving signals. That means that they would produce metadata about which systems they had connected to, and the signals they sent would be subject to interception. That, in turn, would mean that a list of everyone you had talked to was readily available, at least in some technological sense. We submit, however, that if this were our technical reality, the notion that users voluntarily chose to communicate using a device they knew entrusted metadata to a phone company and that they had no reasonable expectation of privacy in those data would be as silly as the idea that you have no expectation of privacy in the data produced by your heart.
In other words, the more essential the role our machines play in our lives, the more integral the data they produce are to our human existences, and the more inextricably intertwined the devices become with us—socially, physically, and biologically—the less plausible will seem the notion that the data they produce is material we voluntarily turn over to a third party like some file cabinet we give to a friend. A society of cyborgs—or a society that understands itself as on the cyborg spectrum—will have a whole different cultural engagement with the idea of electronic surveillance than will a society that understands itself as composed of humans using tools.
This shift may explain, at a subconscious level anyway, some of the fury over the past year about the NSA disclosures. People around the world infuriated by what they have learned about American intelligence collection practices are certainly not consciously thinking of themselves as cyborgs. But the point is that we no longer experience surveillance of the phone networks simply as surveillance of machines either. There was a time, not that long ago, when NSA coverage of large volumes of overseas calls—including those with one end in the U.S.—did not bother people all that much. It was no secret that NSA captured a huge amount of such material and that it incidentally captured U.S. person communications along the way, weeding out these communications using minimization procedures. But we did not make that many international calls and we did not Skype with people overseas very often. We did not send emails all over the world many times a day. We were not constantly engaging the network as though it were part of ourselves. Today, as juvenile cyborgs, we experience surveillance of that architecture very directly as surveillance of us. We can no longer disassociate ourselves from those machines. Our engagement with them is pervasive enough that systematic collection of data from those networks—even if accompanied by appropriate procedures and limiting rules—inevitably appears as collection on our innermost thoughts and private lives, closer and more oppressive than it did when the network and we were further apart.
As cyborgization progresses, we will therefore be faced with constant choices about whether to invest the machines with which we are integrating with some measure of the rights of humans or whether to divest humans of some rights they expected before they developed machine parts. The construction we have traditionally given this problem, that of the rights of humans in the use of machines, will break down as the line between human activity and machine activity continues to blur. The person who carries a smartphone we might still construe as using a machine. And perhaps we might even think that of the person who wears an electronic insulin pump. But an eyeborg or a pacemaker?




After losing her left arm in a motorcycle accident, Claudia Mitchell became the first person to receive a bionic arm. The device detects the movements of a chest muscle rewired to the stumps of nerves once connected to her former limb.[89] Would we really say she is using a machine? Or would we say she has machine parts? And if the latter, do we think about those machine parts as sharing in Mitchell’s rights as a human or do we think about her rights as a human as limited by our surveillance capabilities with respect to those machine parts?
The answers to these questions will not always be the same. We will not feel the same way about the privacy of what you see with your bionic eye—and your right not to incriminate yourself with the images it collects—as we will feel about your right to shield physiological data you collect on yourself recreationally or to incentivize your own fitness. To the extent you opt to film everything you see with Google Glass, you may be out of luck. Our choices will hinge on the depth of integration of human and technology, the function the technology is playing in our lives and the seriousness we attach to that function, and probably the perceived but ineffable inherency of that function to the irreducibly human.



Reuters/Jason Reed - Claudia Mitchell (R), the first woman to receive a bionic arm, uses her new prosthetic arm to "high five" with the first bionic arm recipient Jesse Sullivan at a news conference in Washington, DC September 14, 2006.


Endnotes

[1] 573 U. S. ____ (2014). Justice Alito wrote a separate opinion concurring in part and concurring in the judgment.
[2] Timothy Wu, Professor of Law, Colum. L. School, Brookings Inst. Judicial Issues Forum: Constitution 3.0: Freedom, Technological Change and the Law (Dec. 13, 2011), http://www.brookings.edu/~/media/events/2011/12/13%20constitution%20technology/20111213_constitution_technology.pdf.
[3] Some of the most contentious political issues of the day might be described as a struggle over the abrogation of individuals’ ability to exercise that dominion—for example, as with laws concerning abortion or euthanasia.
[4] We have already seen fit to take the leap of wondering when machines might be elevated to the status of humans—that is, whether machines one day endowed with artificial intelligence should be granted the rights and recognition that accompany personhood. See, e.g., Alex Knapp, Should Artificial Intelligences Be Granted Civil Rights?, Forbes (Apr. 4, 2011, 1:42 AM), http://www.forbes.com/sites/alexknapp/2011/04/04/should-artificial-intelligences-be-granted-civil-rights. Yet relatively little has been said about the status of humans mediated by machines.
[5] We use “machine” as shorthand but recognize that the “machine” components of a man-machine hybrid need not comprise inorganic material. In fields like neurobotics and nanobiotechnology, scientists increasingly derive from biomolecular materials the blueprint and the physical building blocks for developing next-level computational power. See Chris Hables Gray, Cyborg Citizen: Politics in the Posthuman Age 183 (2002).
Vasi Van Deventer sums it up this way: “If first-order cybernetics modeled the person after the machine, second-order cybernetics explores the machine in terms of the characteristics of living systems.” Vasi Van Deventer, Cyborg Theory and Learning, in Connected Minds, Emerging Cultures: Cybercultures in Online Learning 173 (Steve Wheeler, ed. 2009). One interesting corollary is that the organism need not be the beneficiary of the functions provided by the artificial components in question; the organism can itself be converted into a machinic means to an end. Researchers have already succeeded in inserting electrodes into rats and using their baroreflex systems to run complex biochemical calculations that digital computers cannot yet handle. Gray, supra, at 183.
[6] Nick Kolakowski, We're Already Cyborgs, Slashdot, Jan. 14, 2014, slashdot.org/topic/cloud/were-already-cyborgs.
[7] See note 88, infra.
[8] See note 55, infra.
[9] For example, the Rathenau Instituut in the Netherlands has dubbed the merging of humans and technology an “intimate technological revolution.” See generally Rinie van Est, Virgil Rerimassie, Ira van Keulen & Gaston Dorren, Intimate Technology—The Battle for our Body and Behavior, Rathenau Instituut (2014), available at http://www.rathenau.nl/uploads/tx_tferathenau/Intimate_Technology_-_the_battle_for_our_body_and_behaviourpdf_01.pdf.
[10] Costas Pitas, Snowden Warns of Loss of Privacy in Christmas Message, Reuters (Dec. 25, 2013, 6:42 PM), http://www.reuters.com/article/2013/12/25/us-usa-snowden-privacy-idUSBRE9BO09020131225.
[11] Id.
[12] See Gray, supra note 5, at 4 (tracing the idea of man-made sentient creatures to ancient Greek and Hindi folklore and sixteenth-century Japan, China and Europe); Hugh Herr, et al., Cyborg Technology—Biomimetic Orthotic and Prosthetic Technology, available at biomech.media.mit.edu/wp-content/uploads/sites/3/2013/07/Biologicall_Inspired_Intell.pdf (noting that the ancient Egyptians and early Romans used prostheses and simple walking aids).
[13] Manfred E. Clynes & Nathan S. Kline, Cyborgs and Space, Astronautics (Sept. 1960), reprinted in The Cyborg Handbook (Chris Hables Gray, ed. 1995), available at cyberneticzoo.com/wp-content/uploads/2012/01/cyborgs-Astronautics-sep1960.pdf.
[14] See Rolf Pfeifer & Josh Bongard, How the Body Shapes the Way We Think: A New View of Intelligence 265 (2007); Chris Hables Gray, Steven Mentor & Heidi J. Figueroa-Sarriera, Cyborgology: Constructing the Knowledge of Cybernetic Organisms, in The Cyborg Handbook, supra note 13, at 6.
[15] Stuart A. Umpleby, Science of Cybernetics and the Cybernetics of Science, Cybernetics and Systems 120 (2007), http://www.tandfonline.com/doi/pdf/10.1080/01969729008902227; see also Manfred E. Clynes, Cyborg II: Sentic Space Travel, in The Cyborg Handbook, supra note 13, at 35.
[16] Clynes’s “cyborg” concept builds off the work of Norbert Wiener, one of the greatest mathematicians of the twentieth century. Wiener is credited with pioneering the concept of “cybernetics” (though he did not coin the term) and giving coherence to a literature that up until that point had not acknowledged “the essential unity of the set of problems centering about communication, control, and statistical mechanics, whether in the machine or in living tissue.” See Norbert Wiener, Cybernetics: Or Control and Communication in the Animal and the Machine 19 (1948). Though its applications have proven diverse and wide-ranging, the thesis of his magnum opus was specific. Wiener posited:
[S]ociety can only be understood through a study of the messages and the communication facilities which belong to it; and . . . in the future development of these messages and communication facilities, messages between man and machines, between machines and man, and between machine and machine, are destined to play an ever-increasing part.
Id. Wiener’s idea that man and machine rely on “precisely parallel” principles for their physical function, Norbert Wiener, The Human Use of Human Beings: Cybernetics and Society 26 (1954), would prove foundational to Clynes and Kline’s idea of fusing man and machine as part of an effort to “adapt[] man’s body to any environment he may choose,” Clynes & Kline, supra note 13.
[17] Robert W. Driscoll, The Cyborg Study: Engineering Man for Space (1963), in The Cyborg Handbook, supra note 13, at 76, available at cyberneticzoo.com/wp-content/uploads/2012/01/cyborg-nasa-driscoll-1963.pdf.
[18] Clynes & Kline, supra note 13, at 27 (“For the exogenously extended organizational complex functioning as an integrated homeostatic system unconsciously, we propose the term ‘Cyborg’. The Cyborg deliberately incorporates exogenous components extending the self-regulatory control function of the organism in order to adapt it to new environments.”).
[19] See, e.g., David J. Hess, On Low-Tech Cyborgs, in The Cyborg Handbook, supra note 13, at 371 (“The cyborg is a symbol of news on the cultural landscape; it is a metaphor of the possibilities engendered by the events of biotechnology and artificial intelligence.”).
[20] Gray, supra note 5, at 19-20.
[21] See id. See also Vasi Van Deventer, Cyborg Theory and Learning, in Connected Minds, Emerging Cultures: Cybercultures in Online Learning 170 (Steve Wheeler, ed. 2009) (“The cyborg has become a symbol for adaptability, intelligent application of information, and elective physical augmentation, but . . . [i]n popular consciousness the cyborg remains a superhuman who can pass as an ordinary person in everyday life . . . .”).
[22] Donna Haraway, Simians, Cyborgs and Women: The Reinvention of Nature 150 (1991).
[23] Andy Clark & David J. Chalmers, The Extended Mind, 58 Analysis 10 (1998), available at consc.net/papers/extended.html.
[24] Andy Clark, Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence 3 (2003).
[25] Bruce Sterling, Eight Theses on Cyborgism, Wired (Feb. 4, 2011, 11:21 PM), http://www.wired.com/beyond_the_beyond/2011/02/eight-theses-on-cyborgism.
[26] Clark, supra note 24, at 4.
[27] Id. at 5. See also Are We Becoming Cyborgs?, N.Y. Times, Nov. 30, 2012, http://www.nytimes.com/2012/11/30/opinion/global/maria-popova-evgeny-morozov-susan-greenfield-are-we-becoming-cyborgs.html (“You know, anyone who wears glasses, in one sense or another, is a cyborg. And anyone who relies on technology in daily life to extend their human capacity is a cyborg as well. So I don’t think that there is anything to be feared from the very category of cyborg. We have always been cyborgs and always will be.”).
[28] Gray, Mentor & Figueroa-Sarriera, supra note 14, at 6.
[29] Cyborg anthropology, a subspecialty launched at the Annual Meetings of the American Anthropological Association (AAA) in 1993, focuses on how humans and nonhuman objects interact and the resulting cultural changes. What is Cyborg Anthropology?, Cyborg Anthropology, http://cyborganthropology.com/What_is_Cyborg_Anthropology%3F (last visited Jan. 26, 2014). See generally Gary Lee Downey, Joseph Dumit & Sarah Williams, Cyborg Anthropology, in The Cyborg Handbook, supra note 13 (offering a brief overview of the purpose and dangers of cyborg anthropology).
[30] Amber Case, We Are All Cyborgs Now, TED (Dec. 2010), http://www.ted.com/talks/amber_case_we_are_all_cyborgs_now.html.
[31] Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology 9 (2005).
[32] Tim Wu, If a Time Traveller Saw a Smartphone, New Yorker, Jan. 13, 2014, http://www.newyorker.com/online/blogs/elements/2014/01/if-a-time-traveller-saw-a-smartphone.html.
[33] M. Mitchell Waldrop, The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal (2001).
[34] J.C.R. Licklider, Man-Computer Symbiosis (1960), available at http://groups.csail.mit.edu/medg/people/psz/Licklider.html.
[35] Id.
[36] Steve Mann & Hal Niedzviecki, Cyborg: Digital Destiny and Human Possibility in the Age of the Wearable Computer (2001).
[37] Oxford English Dictionary (2014).
[38] Questions of distribution of and access to sophisticated medical technology have long been part of the bioethics debate. As the 2009 National Science Foundation report observes, “We have heard much about the ‘digital divide’, but one day there may well be a ‘nano divide’: the gap between those who can access and benefit from nanotechnology and those without.” Ethics of Human Enhancement: 25 Questions & Answers, U.S. National Science Foundation, Aug. 31, 2009, at 22, http://ethics.calpoly.edu/NSF_report.pdf [hereinafter NSF Report].
[39] Issues of access are closely linked to concerns about discrimination against those unable to afford or unwilling to undergo certain modifications. Antidiscrimination laws may be necessary to prevent cyborgs from being denied employment as a result of their modifications and unenhanced humans from being discriminated against for opposite reasons. Joseph Guyer, Cyborgs in the Workplace: Why We Will Need New Labor Laws, Future Culturalist, Apr. 12, 2013, http://futureculturalist.wordpress.com/2013/04/12/cyborgs-in-the-workplace-why-we-will-need-new-labor-laws; see also Andy Greenberg, Cyborg Discrimination? Scientist Says McDonald's Staff Tried To Pull Off His Google-Glass-Like Eyepiece, Then Threw Him Out, Forbes (July 17, 2012, 8:00 AM), http://www.forbes.com/sites/andygreenberg/2012/07/17/cyborg-discrimination-scientist-says-mcdonalds-staff-tried-to-pull-off-his-google-glass-like-eyepiece-then-threw-him-out.
[40] See John Armitage, Militarized Bodies: An Introduction, 9 Body & Soc'y, no. 4, at 2 (2003), available at bod.sagepub.com/content/9/4/1.short ("[T]he social allure of the humanoid cyborg warrior, of new levels of militarized machinic incorporation and even of human-machine weapon systems, shows no sign of abating.").
[41] Anti-Google Glass Site Wants to Fight a Future Full of Cyborgs, DVice (Mar. 18, 2013, 11:41 AM), http://www.dvice.com/2013-3-18/anti-google-glass-site-wants-fight-future-full-cyborgs.
[42] See Gowri Ramachandran, Against the Right to Bodily Integrity: Of Cyborgs and Human Rights, 87 Denver U.L. Rev. 1, 2 (2009) (arguing against “one-to-one mapping between the physical borders of the organic, integrated human body and the legal borders of the rights derived from it”).
[43] See Gray, supra note 5, at 33 (describing the work of distinguished professor Jose M. R. Delgado, who has implanted electrodes in the brains of animals to control them).
[44] See, e.g., Moore v. Regents of the University of California, 793 P.2d 479 (Cal. 1990) (holding that a patient whose spleen cells were used to develop a commercially profitable cell line had no property rights in his organs).
[45] See Gray, supra note 5, at 33.
[46] Are We Living Among Cyborgs?, Deutsche Welle, http://www.dw.de/are-we-living-among-cyborgs/a-17361266 (last visited Jan 18, 2014).
[47] Brad Allenby, Is Human Enhancement Cheating?, Slate (May 9, 2013, 7:30 AM), http://www.slate.com/articles/technology/superman/2013/05/human_enhancement_ethics_is_it_cheating.html.
[48] See also Ramez Naam, More Than Human: Embracing the Promise of Biological Enhancement 8, 151 (2005) (“Scientifically there's no clear line between healing and enhancing . . . . [T]he quest to heal often leads to the power to enhance.”).
[49] See Peter Conrad & Deborah Potter, Human Growth Hormone and the Temptations of Biomedical Enhancement, 26 Sociology of Health & Illness 184 (2004). For example, when synthetic versions of naturally occurring human growth hormone became available in the form of injections, the ethicist John Lantos suggested that shortness was becoming a “disease” in order for doctors and insurance companies to justify its treatment. Gray, supra note 5, at 174; see also Barry Werth, How Short Is Too Short?, N.Y. Times Magazine, June 16, 1991. Lantos’s point relates to the larger phenomenon of “medicalization,” the social process by which problems are defined and treated as medical problems subject to medical intervention. The rise of medicalization, described as “one of the most potent transformations of the last half of the twentieth century in the West,” see Peter Conrad, The Medicalization of Society: On the Transformation of Human Conditions into Treatable Disorders 5 (2007), roughly coincides with the emergence of cyborg discourse. See generally id. (examining the social construction of disease and the corresponding expansion of medical jurisdiction over conditions only newly identified as subject to treatment).
[50] Claire Cain Miller, Google Glass to Be Covered by Vision Care Insurer VSP, N.Y. Times, Jan. 28, 2014, http://www.nytimes.com/2014/01/28/technology/google-glass-to-be-covered-by-vision-care-insurer-vsp.html.
[51] Id.
[52] Harbisson is the founder of the Cyborg Foundation, an international organization that assists humans in becoming cyborgs, as well as a vocal proponent of cyborg rights. Annalee Newitz, The First Person in the World to Become a Government-Recognized Cyborg, io9 (Dec. 2, 2013, 2:58 PM), http://io9.com/the-first-person-in-the-world-to-become-a-government-re-1474975237; see also Cyborg Foundation, http://www.cyborgfoundation.com (last visited Jan. 28, 2014).
[53] The Man Who Hears Colour, BBC (Feb. 15, 2012, 10:37 ET), http://www.bbc.co.uk/news/magazine-16681630.
[54] The distinction between prosthetics and assistive devices, however tenuous, is written into statute. For example, the Federal Employees' Compensation Act (FECA) covers damage or destruction to prosthetic devices, categorizing it under “injury.” Eyeglasses and hearing aids, on the other hand, are replaced or otherwise compensated for only if the damage or destruction "is incident to a personal injury requiring medical services.” 5 U.S.C. §8101(5).
[55] Linda MacDonald Glenn, Case Study: Ethical and Legal Issues in Human Machine Mergers (Or the Cyborgs Cometh), Annals of Health Law 175, 176 (2012), available at http://lawecommons.luc.edu/cgi/viewcontent.cgi?article=1024&context=annals.
[56] Id. at 176-77.
[57] Id. at 177.
[58] Id.
[59] See, e.g., Todd Bishop, No Google Glasses Allowed, Declares Seattle Dive Bar, GeekWire (Mar. 8, 2013, 1:27 PM), http://www.geekwire.com/2013/google-glasses-allowed-declares-seattle-dive-bar.
[60] See, e.g., Evan Schwartz, Google Glass Strip Clubs: Forget It!, Vibe, Apr. 8, 2013, http://www.vibe.com/article/google-glass-strip-clubs-forget-it.
[61] Steven Rosenbaum, Can You Really 'Ban' Google Glass?, Forbes (June 6, 2013, 8:58 PM), http://www.forbes.com/sites/stevenrosenbaum/2013/06/09/can-you-really-ban-google-glass.
[62] See, e.g., Rosa Golijan, From Strip Clubs to Theaters, Google Glass Won't Be Welcome Everywhere, NBC, http://www.nbcnews.com/technology/strip-clubs-theaters-google-glass-wont-be-welcome-everywhere-1B9231620.
[63] Claire Evans, Panopticon In Reverse: Steve Mann Is Fighting for Your Cyborg Rights, Motherboard Vice, http://motherboard.vice.com/blog/panopticon-in-reverse-steve-mann-is-fighting-for-your-post-human-rights.
[64] Microchips Implanted in Mexican Officials, Associated Press (July 14, 2004, 9:21 ET), http://www.nbcnews.com/id/5439055/ns/technology_and_science-tech_and_gadgets/t/microchips-implanted-mexican-officials/#.UyJNS-ddVVM.
[65] NSF Report, supra note 38, at 9-10.
[66] Emmet Cole, The Cyborg Agenda: Extreme Users, Robotics Bus. Rev., Nov. 13, 2012, http://www.roboticsbusinessreview.com/article/the_cyborg_agenda_extreme_users.
[67] Ramachandran, supra note 42.
[68] European Soc'y of Cardiology, Remote Monitoring and Follow-up of Pacemakers and Implantable Cardioverter Defibrillators, 11 Europace 701, 701 (2009).
[69] What's To Stop Hackers From Infecting Medical Devices?, Forbes (April 20, 2012, 12:08 PM), http://www.forbes.com/sites/marcwebertobias/2012/04/20/whats-to-stop-hackers-from-infecting-medical-devices; see Tarun Wadhwa, Yes, You Can Hack A Pacemaker (And Other Medical Devices Too), Forbes (Dec. 6, 2012, 8:31 AM), http://www.forbes.com/sites/singularity/2012/12/06/yes-you-can-hack-a-pacemaker-and-other-medical-devices-too.
[70] Haran Burri & David Senouf, Remote Monitoring and Follow-up of Pacemakers and Implantable Cardioverter Defibrillators, 11 Europace 701, 701, 708 (2009).
[71] Andrea Peterson, Yes, Terrorists Could Have Hacked Dick Cheney’s Heart, Wash. Post (Oct. 21, 2013, 8:58 AM), http://www.washingtonpost.com/blogs/the-switch/wp/2013/10/21/yes-terrorists-could-have-hacked-dick-cheneys-heart.
[72] Google Glass Update Lets Users Wink and Take Photos, BBC, Dec. 18, 2013, http://www.bbc.com/news/technology-25426052.
[73] See Naam, supra note 48.
[74] Pfeifer & Bongard, supra note 14, at 265.
[75] Wu, supra note 32.
[76] Stop the Cyborgs, http://stopthecyborgs.org (last visited Mar. 12, 2014).
[77] Id.
[78] 425 U.S. 435 (1976).
[79] Id. at 443.
[80] Andrew J. DeFilippis, Note, Securing Informationships: Recognizing a Right to Privity in Fourth Amendment Jurisprudence, 115 Yale L.J. 1086, 1092 (2006) (arguing that the Supreme Court should overturn the third-party doctrine).
[81] Matthew D. Lawless, The Third Party Doctrine Redux: Internet Search Records and the Case for a “Crazy Quilt” of Fourth Amendment Protection, 2007 UCLA J. L. & Tech. 1, 3–4 (advocating “retooling” the third-party doctrine for Internet searches).
[82] 442 U.S. 735 (1979).
[83] See Cory Doctorow, Why Can't Pacemaker Users Read Their Own Medical Data?, BoingBoing (Sept. 28, 2012, 8:08 PM), http://boingboing.net/2012/09/28/why-cant-pacemaker-users-rea.html. 
[84] Health Care in the Digital Age: Who Owns the Data?, Wall St. J. (Nov. 28, 2012, 10:30 PM), http://live.wsj.com/video/health-care-in-the-digital-age-who-owns-the-data/28B6E0AD-8506-40B2-A659-20A9B696F524.html#!28B6E0AD-8506-40B2-A659-20A9B696F524 (“Who owns the rights to a patient's digital footprint, and who should control that information? Not just for medical implants, but also for smartphone apps, and over-the-counter monitors that track things like sleep patterns and hours of physical activity.”). Campo must pay out of pocket for biannual meetings with his doctor in order to receive short summaries of his data. Id. The law requires doctors to hand over traditional medical records to patients who request them within 30 days, but it is unclear whether data collected outside the medical facility is considered part of the medical record. Id. And there are currently no guidelines from the Department of Health and Human Services that address these questions. Id.
[85] Michael Millar, How Wearable Technology Could Transform Business, BBC (Aug. 5, 2013, 19:00 ET), http://www.bbc.com/news/business-23579404.
[86] See Steve Mann, Jason Nolan & Barry Wellman, Sousveillance: Inventing and Using Wearable Computing Devices for Data Collection in Surveillance Environments, Surveillance & Society 331 (2003), http://www.surveillance-and-society.org/articles1(3)/sousveillance.pdf.
[87] “Stop the Cyborgs” Launches Public Campaign Against Google Glass, Ars Technica (Mar. 22, 2013, 2:50 PM), http://arstechnica.com/tech-policy/2013/03/stop-the-cyborgs-launches-public-campaign-against-google-glass.
[88] Steve Mann, Veillance and Reciprocal Transparency: Surveillance versus Sousveillance, AR Glass, Lifelogging, and Wearable Computing, 2013 IEEE International Symposium on Technology and Society, at 7, available at http://wearcam.org/veillance/part1.pdf.
[89] David Brown, For 1st Woman With Bionic Arm, a New Life Is Within Reach, Wash. Post, Sept. 14, 2006, http://www.washingtonpost.com/wp-dyn/content/article/2006/09/13/AR2006091302271.html; see also 7 Real-Life Human Cyborgs, Mother Nature Network (Apr. 25, 2013, 10:51 AM), http://www.mnn.com/leaderboard/stories/7-real-life-human-cyborgs (discussing Nigel Ackland, who lost part of his arm in a work accident and received a robotic prosthetic that allows him to control the arm using muscle movements in his other forearm).