http://www.freedom.news/2015-12-08-once-again-a-terrorist-attack-on-american-soil-proves-the-massive-federal-surveillance-state-doesnt-improve-public-safety-and-national-security.html
Americans have surrendered 100 percent of their privacy for a
surveillance state that continues to fail them
December 8th, 2015, by Jon E. Dougherty
(Freedom.news) What’s that we hear from the “national security hawks” on Capitol Hill: Mass surveillance will keep us safe?
That Congress and the president must ignore the Fourth Amendment’s
privacy guarantee in this “age of terrorism” so they can improve public
safety and prevent terrorist attacks?
Yes, well, tell that to the families of the 14 dead Americans in San
Bernardino, Calif., as well as the others who were wounded when two
people with emerging terrorist ties waltzed into their holiday party and
shot the place up.
In fact, as top security experts will tell you, mass surveillance is not effective and actually puts the nation more at risk.
Consider: According to Zero Hedge, Bill Binney, the former chief of the National Security Agency's global intelligence-gathering operations, told Washington's Blog
that mass surveillance actually disrupts the government's ability to
locate and interrupt bad guys plotting bad things. The government failed
to identify the plots that led to the 9/11 attacks, the Boston Marathon
bombings, the shootings at Fort Hood, Texas, and, most recently, the
terrorist attack in San Bernardino, Calif., because it was overwhelmed with data and had too few analysts and resources to sift through it adequately and in a timely manner.
Binney said:
A good deal of the failure is, in my opinion, due to bulk data.
So, I am calling all these attacks a result of “Data bulk failure.” Too
much data and too many people for the 10-20 thousand analysts to follow.
Simple as that. Especially when they make word match pulls (like
Google) and get dumps of data selected from close to 4 billion people.
This is the same problem NSA had before 9/11. They had data that
could have prevented 9/11 but did not know they had it in their data
bases. This [was] back then, when the bulk collection was not going on. Now the
problem is orders of magnitude greater. Result, it’s harder to succeed.
Expect more of the same from our deluded government that thinks
more data improves possibilities of success. All this bulk data
collection and storage does give law enforcement a great capability to
retroactively analyze anyone they want. But, of course, that data cannot
be used in court since it was not acquired with a warrant.
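To put rough numbers on Binney's point: every figure below is an illustrative assumption of mine, not data from the article, but the arithmetic shows how quickly keyword-style "word match pulls" over billions of people can swamp a fixed pool of analysts.

# Back-of-the-envelope illustration of Binney's "data bulk failure" argument.
# Every number here is an assumption chosen for illustration, not a figure
# from the article.

messages_per_day = 10_000_000_000   # assumed daily communications swept up
flag_rate = 0.001                   # assumed fraction a keyword filter flags
analysts = 20_000                   # upper end of the "10-20 thousand" cited
reviews_per_analyst_day = 40        # assumed leads one analyst can vet daily

flagged_per_day = messages_per_day * flag_rate
review_capacity = analysts * reviews_per_analyst_day

print(f"Flagged per day: {flagged_per_day:,.0f}")    # 10,000,000
print(f"Review capacity: {review_capacity:,.0f}")    # 800,000
print(f"Shortfall: {flagged_per_day / review_capacity:.1f}x capacity")

Under these assumptions the filter flags twelve times more leads each day than the entire analyst corps could possibly review, which is exactly the failure mode Binney describes.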
In separate comments to Zero Hedge, Binney added:
I always like to point to the obvious. Look at what is happening
in France and Belgium after the attack in Paris. They are going after
targeted individuals, who they knew were related to the killers before
the attack. And, it’s working!!! So, this is what I have been saying
they should do all along.
Do a targeted selection of data from the communications based on
known people and their attributes and you can succeed (as now in France
and Belgium) instead of the bulk collection on everyone which buries
them in data and they fail. After the attack and people die, they do the
right thing. This should make it obvious what route to take.
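As a toy sketch of the distinction Binney is drawing (every record, name, and keyword below is invented for illustration), bulk collection keyword-matches everyone and drowns in false hits, while targeted selection starts from known persons of interest and expands through their actual contacts:

# Toy contrast between bulk collection and targeted selection. The records,
# names, and keywords are all invented for illustration.

records = [
    {"sender": "suspect_a", "recipient": "contact_x", "text": "meet friday"},
    {"sender": "alice",     "recipient": "bob",       "text": "the party was a blast"},
    {"sender": "contact_x", "recipient": "contact_y", "text": "bring the package"},
    {"sender": "carol",     "recipient": "dave",      "text": "that new bomb recipe site"},
]

# Bulk approach: keyword-match everyone; innocuous chatter floods the queue.
keywords = {"blast", "package", "bomb"}
bulk_hits = [r for r in records if keywords & set(r["text"].split())]

# Targeted approach: start from known persons of interest and expand one hop
# through their actual contacts, ignoring everybody else.
known = {"suspect_a"}
one_hop = known | {r["recipient"] for r in records if r["sender"] in known}
targeted_hits = [r for r in records if r["sender"] in one_hop]

print(len(bulk_hits), "bulk hits, mostly noise")          # 3
print(len(targeted_hits), "targeted hits, all relevant")  # 2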
Interestingly, following the San Bernardino attacks, we learned that the suspects were actually under FBI surveillance, and yet were never dealt with in time to prevent the attacks that left 14 dead and more than 20 wounded.
“The recent massacre at the Inland Regional Center in San Bernardino,
Calif., in which a married Muslim couple killed 14 people and wounded
21 others is further proof that the surveillance state does not work.
Americans have surrendered 100 percent of their privacy for a
surveillance state that continues to fail them,” wrote Julie Wilson for NaturalNews Dec. 4.
Unfortunately, there is no shortage of intelligence experts and politicians crying for more
mass surveillance. In fact, following the mid-November ISIS-connected
terrorist attacks in Paris, which left 130 people dead, the Obama
administration called for new powers to access via “back doors” all cell phones and other devices, Cyberwar.news reported.
Richard Clarke, counterterrorism czar under Presidents Bill Clinton
and George W. Bush, agrees that mass surveillance is a loser in terms of
protecting the country, and that surveillance ought to be more
targeted, premised upon specific information, and above all
constitutional.
“I am troubled by the precedent of stretching a law on domestic
surveillance almost to the breaking point. On issues so fundamental to
our civil liberties, elected leaders should not be so needlessly
secretive,” he told Washington’s Blog in June 2013.
“The argument that this sweeping search must be kept secret from the
terrorists is laughable. Terrorists already assume this sort of thing is
being done. Only law-abiding American citizens were blissfully ignorant
of what their government was doing,” he added.
CONCERNING CERN: ARTIFICIAL INTELLIGENCE TO UNPLUG DATA FLOOD AT CERN?
This is one of the articles that so many of you shared that I simply have to
comment about it. There seems to be an expanded role for AI envisioned
at CERN, which is the world's largest single-entity generator of
"data", according to this article from Scientific American:
Artificial Intelligence Called In to Tackle LHC Data Deluge
Now, before I venture into my high octane speculation of the day, I
want the reader to focus on the following paragraphs, which summarize
the data filtration and collection system currently in use at CERN's Large Hadron
Collider, and which I reviewed in my most recent book, The Third Way:
Driven by an eagerness to make discoveries and the
knowledge that they will be hit with unmanageable volumes of data in ten
years’ time, physicists who work on the Large Hadron Collider (LHC),
near Geneva, Switzerland, are enlisting the help of AI experts.
On November 9-13, leading lights from both communities attended a
workshop—the first of its kind—at which they discussed how advanced AI
techniques could speed discoveries at the LHC. Particle physicists have
“realized that they cannot do it alone”, says Cécile Germain, a computer
scientist at the University of Paris South in Orsay, who spoke at the
workshop at CERN, the particle-physics lab that hosts the LHC.
Computer scientists are responding in droves. Last year, Germain
helped to organize a competition to write programs that could ‘discover’
traces of the Higgs boson in a set of simulated data; it attracted
submissions from more than 1,700 teams.
Particle physics is already no stranger to AI. In particular, when ATLAS and CMS, the LHC’s two largest experiments, discovered the Higgs boson in 2012, they did so in part using machine learning—a
form of AI that ‘trains’ algorithms to recognize patterns in data.
The algorithms were primed using simulations of the debris from particle
collisions, and learned to spot the patterns produced by the decay of
rare Higgs particles among millions of more mundane events. They were
then set to work on the real thing.
But in the near future, the experiments will need to get smarter at
collecting their data, not just processing it. CMS and ATLAS each
currently produces hundreds of millions of collisions per second, and
uses quick and dirty criteria to ignore all but 1 in 1,000 events.
Upgrades scheduled for 2025 mean that the number of collisions will grow
20-fold, and that the detectors will have to use more sophisticated
methods to choose what they keep, says CMS physicist María Spiropulu of
the California Institute of Technology in Pasadena, who helped to
organize the CERN workshop. “We’re going into the unknown,” she says.
Inspiration could come from another LHC experiment, LHCb, which is
dedicated to studying subtle asymmetries between particles and their
antimatter counterparts. In preparation for the second, higher-energy
run of the LHC, which began in April, the LHCb team programmed its detector to use machine learning to decide which data to keep.
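For readers curious what the "train on simulation, then turn it loose on real data" workflow in the excerpt looks like in practice, here is a loose sketch in Python, using scikit-learn and synthetic numbers as stand-ins for the actual experiment software and collision features:

# Rough sketch of the workflow described in the excerpt: train a classifier
# on simulated collision events, then apply it to real ones. The features
# and data here are synthetic stand-ins, not actual LHC quantities.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Simulated background events and (rarer) simulated signal-like events,
# each described by a handful of kinematic-style features.
background = rng.normal(0.0, 1.0, size=(10_000, 5))
signal = rng.normal(0.8, 1.0, size=(500, 5))
X_sim = np.vstack([background, signal])
y_sim = np.array([0] * len(background) + [1] * len(signal))

# "Prime" the algorithm on simulation...
clf = GradientBoostingClassifier().fit(X_sim, y_sim)

# ...then set it to work on "real" events, keeping high-scoring candidates.
real_events = np.vstack([
    rng.normal(0.0, 1.0, size=(100_000, 5)),  # mundane events
    rng.normal(0.8, 1.0, size=(300, 5)),      # a few signal-like events mixed in
])
scores = clf.predict_proba(real_events)[:, 1]
candidates = real_events[scores > 0.5]
print(f"{len(candidates)} candidate events kept of {len(real_events)}")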
In effect, what all this means is that the enormous mountain of data generated by CERN's collider is first filtered by computer algorithms, programmed to sift through it and pull out only those events that conform to the programmed filter, for human analysis and review.
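A minimal sketch of that kind of keep/discard "trigger" logic might look like the following; the event field and threshold are made up, tuned only so that roughly 1 in 1,000 events survives, echoing the ratio quoted in the excerpt:

# Minimal sketch of the keep/discard filtering described above. The event
# field and cut are invented; real LHC triggers run in hardware and firmware
# against far richer event records.
import random

def quick_and_dirty_trigger(event):
    # Cheap first-pass cut: keep only unusually high-energy events.
    return event["total_energy"] > 6.9

random.seed(0)
events = [{"total_energy": random.expovariate(1.0)} for _ in range(100_000)]
kept = [e for e in events if quick_and_dirty_trigger(e)]
print(f"kept {len(kept)} of {len(events)} events")  # roughly 1 in 1,000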
It was this fact that led me to propose, in The Third Way, the hypothesis that there could be hidden
algorithms, buried in all the millions of lines of code, designed to pull
anomalous or other types of events and shunt them into a covert program
with its own covert analysts. Additionally, I
suggested that one such program would consist not so much of
the experiments themselves, but rather of "data correlation" experiments,
pulling data not only from the collider, but from concurrent
events, be they geophysical events, events in the earth's magnetosphere
that occur during collider runs, solar events, and so on.
In other words, I was, and am, proposing the idea that in addition to
the public story of "particle physics," there might be hidden
experiments, only revealed by means of such data correlation algorithms, dealing with the macro-systemic effects of the collider's operation.
With that in mind, consider the very opening paragraph of the article:
The next generation of particle-collider experiments will
feature some of the world’s most advanced thinking machines, if links
now being forged between particle physicists and artificial intelligence
(AI) researchers take off. Such machines could make discoveries with little human input—a prospect that makes some physicists queasy. (Emphasis added)
Such a statement seems to imply the possibility of a hidden program,
but more importantly, it throws an interesting and intriguing light on my
"correlation" experiment idea, for such an experiment would seem,
perforce, to demand vast computational powers that only an AI could
provide: sifting through reams of data not only from particle
collisions, but also from "concurrent" data seemingly unrelated save for
its occurrence in time frames when the collider is active and its absence
when it is not, from human behavior trends (if any), from alterations
in the magnetosphere's shape and behavior (and there is some
suggestive stuff out there), and from other types of data. This would require
enormous computational ability and considerable skill in designing the
algorithms.
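Purely to make that speculation concrete, and claiming nothing about real data, the simplest version of such a "correlation" experiment reduces to testing whether some external time series co-varies with collider activity; in the sketch below both series are randomly generated stand-ins:

# Toy illustration of the "data correlation" idea described above. Both time
# series are randomly generated stand-ins; no real collider or magnetosphere
# data is used, and no actual correlation is being claimed.
import numpy as np

rng = np.random.default_rng(42)
hours = 24 * 365

# Hypothetical collider duty cycle: 1 when the machine is active, else 0.
collider_active = (rng.random(hours) < 0.4).astype(float)

# Hypothetical external measurement (say, a magnetospheric index), built
# here with a small injected dependence on collider activity, plus noise.
external = 0.3 * collider_active + rng.normal(0.0, 1.0, hours)

r = np.corrcoef(collider_active, external)[0, 1]
print(f"Pearson correlation: {r:.3f}")  # small but nonzero by construction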
So in my high octane speculation of the day, I suspect that perhaps
we've been given a hint of this, and of these types of possibilities, in
this article, for, reading between the lines a bit, it seems to point to these very
types of possibilities.