Monday, April 13, 2015

Twenty Years Later: Facts About the Oklahoma Bombing That Go Unreported


Next week will mark the 20th anniversary of the terrorist bombing of the Murrah Federal Building in Oklahoma City, which killed 168 people, including 19 children. The mainstream media will undoubtedly focus its attention on Timothy McVeigh, who was put to death in June 2001 for his part in the crime. They might also mention Terry Nichols, who was convicted of helping McVeigh plan the bombing and is serving a life sentence without parole.
There will be less discussion of how the FBI spent years hunting for a man who witnesses say accompanied McVeigh on the day of the bombing. Investigators called this accomplice John Doe #2, and theories about his identity range from an Iraqi named Hussain Al-Hussaini, to a German national described below, to a neo-Nazi bank robber named Richard Guthrie. The Justice Department finally gave up its search and said it was all a mistake, that there was never any credible evidence of a John Doe #2 being involved.
That reversal exemplifies a pattern of official cover-up and limited media coverage in the years since the crime. This week's accounts will not repeat early reports of secondary devices found in the building, or of the involvement of unidentified Middle Eastern suspects. There will also be little if any mention of the extensive independent investigation into the crime conducted by leading members of the OKC community. Here are seven more facts that will probably not see much coverage on the 20th anniversary.
  1. Attorney Jesse Trentadue began investigating the case after his brother Kenney was killed in prison, apparently having been tortured to death by the FBI in its search for John Doe #2. Trentadue’s investigation led to a federal judge nearly finding the FBI in contempt of court for tampering with a key witness. Trentadue now says, “There’s no doubt in my mind, and it’s proven beyond any doubt, that the FBI knew that the bombing was going to take place months before it happened, and they didn’t stop it.”
  2. Judge Clark Waddoups, who presided over the case brought by Jesse Trentadue, ruled in 2010 that CIA documents associated with the case must be held secret. These documents show that the CIA was involved in the OKC bombing investigation and the prosecution of McVeigh. This means that foreign parties were involved, because the CIA is prohibited from interfering in purely domestic investigations.
  3. Andreas Strassmeir, a former German military officer, was suspected of being John Doe #2. Strassmeir became close friends with McVeigh, and they were both associated with a neo-Nazi organization located in Elohim City, OK. A retired U.S. intelligence official claimed that Strassmeir was “working for the German government and the FBI” while at Elohim City. Mainstream reports about the OKC bombing typically avoid reference to Strassmeir.
  4. Larry Potts was the FBI supervisor who was responsible for the tragedies at Ruby Ridge in 1992 and Waco in 1993. Potts was then given responsibility for investigating the OKC bombing. Terry Nichols claimed that McVeigh, who allegedly had been recruited as an undercover intelligence asset while in the Army, had been working under the supervision of Potts.
  5. Terry Yeakey, an officer of the OKC Police Department, was among the first to reach the scene, and he was heralded as a hero for rescuing many victims. Yeakey was also an eyewitness to conversations and physical evidence that convinced him there was a cover-up of the bombing by federal agents. He was committed to getting to the truth about what happened, but a year after the bombing he was found dead off the side of a rural road. His death was ruled a suicide despite overwhelming evidence that he was murdered. Authorities reported that Yeakey “slit his wrists and neck… then miraculously climbed over a barbed wire fence… walked over a mile’s distance, through a nearby field, and eventually shot himself in the side of the head at an unusual angle.” No weapon was found, no fingerprints were taken, no interviews were conducted, and no investigation followed. His family continues to fight for the truth about his death.
  6. Gene Corley, the engineer who was hired by the government to support its claims about the structural fire at the Branch Davidian complex in Waco, was brought in to investigate the destruction of the Murrah Building. Corley brought along three other engineers: Charles Thornton, Mete Sozen, and Paul Mlakar. Their investigation was conducted from half a block away, where they could not observe any of the damage directly, yet their conclusions supported the pre-existing official account. A few years later, within 72 hours of the 9/11 attacks, these same four men were on site leading the investigations at the World Trade Center and the Pentagon.
  7. There are many other links between OKC and 9/11. For example, the alleged hijackers visited the OKC area many times and even stayed in the same motel that was frequented by McVeigh and Nichols. After both the OKC bombing and 9/11, building surveillance videos went missing, witnesses reported FBI harassment, and officials ignored evidence that did not support the official story. Additionally, numerous oddities link the OKC area to al Qaeda. In 2002, OKC resident Nick Berg was interrogated by the FBI for lending his laptop and internet password to alleged “20th hijacker” Zacarias Moussaoui. Two years after this interrogation, Berg became world famous as a victim of beheading in Iraq. Investigators looking for clues about these connections will be particularly interested in two airports in OKC, the president of the University of Oklahoma, and the CIA leader who both monitored the alleged hijackers in Germany and was hired at the university just before 9/11.
On April 19, 2015, the 20th anniversary of one of the worst terrorist attacks in history, citizens should be reminded that we don’t know what happened that day. We don’t know because officials have covered up the crime for reasons that remain unknown, and most media sources will not challenge that cover-up.
Kevin Ryan blogs at Dig Within.

The End of Theory: The Data Deluge Makes the Scientific Method Obsolete

Illustration: Marian Bantjes
"All models are wrong, but some are useful."
So proclaimed statistician George Box 30 years ago, and he was right. But what choice did we have? Only models, from cosmological equations to theories of human behavior, seemed to be able to consistently, if imperfectly, explain the world around us. Until now. Today companies like Google, which have grown up in an era of massively abundant data, don't have to settle for wrong models. Indeed, they don't have to settle for models at all.
Sixty years ago, digital computers made information readable. Twenty years ago, the Internet made it reachable. Ten years ago, the first search engine crawlers made it a single database. Now Google and like-minded companies are sifting through the most measured age in history, treating this massive corpus as a laboratory of the human condition. They are the children of the Petabyte Age.
The Petabyte Age is different because more is different. Kilobytes were stored on floppy disks. Megabytes were stored on hard disks. Terabytes were stored in disk arrays. Petabytes are stored in the cloud. As we moved along that progression, we went from the folder analogy to the file cabinet analogy to the library analogy to — well, at petabytes we ran out of organizational analogies.
At the petabyte scale, information is not a matter of simple three- and four-dimensional taxonomy and order but of dimensionally agnostic statistics. It calls for an entirely different approach, one that requires us to lose the tether of data as something that can be visualized in its totality. It forces us to view data mathematically first and establish a context for it later. For instance, Google conquered the advertising world with nothing more than applied mathematics. It didn't pretend to know anything about the culture and conventions of advertising — it just assumed that better data, with better analytical tools, would win the day. And Google was right.
Google's founding philosophy is that we don't know why this page is better than that one: If the statistics of incoming links say it is, that's good enough. No semantic or causal analysis is required. That's why Google can translate languages without actually "knowing" them (given equal corpus data, Google can translate Klingon into Farsi as easily as it can translate French into German). And why it can match ads to content without any knowledge or assumptions about the ads or the content.
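As a toy illustration of that philosophy, ranking pages purely by the statistics of incoming links, with no semantic analysis at all, might look like the sketch below. The link graph is invented, and this is a deliberate simplification, not Google's actual ranking algorithm, which weights links far more subtly.

```python
from collections import Counter

# Hypothetical link graph: each page maps to the pages it links to.
# Invented data; a crude stand-in for ranking by incoming-link statistics.
links = {
    "page_a": ["page_b", "page_c"],
    "page_b": ["page_c"],
    "page_c": ["page_a"],
    "page_d": ["page_c", "page_a"],
}

# Count incoming links for every page; nothing here inspects page content.
incoming = Counter(target for targets in links.values() for target in targets)

# Rank pages by how many other pages point to them.
for page, count in incoming.most_common():
    print(f"{page}: {count} incoming links")
```

The point of the toy is that the ranking emerges entirely from the structure of the data, with no judgment about what any page says.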
Speaking at the O'Reilly Emerging Technology Conference this past March, Peter Norvig, Google's research director, offered an update to George Box's maxim: "All models are wrong, and increasingly you can succeed without them."
This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.
The big target here isn't advertising, though. It's science. The scientific method is built around testable hypotheses. These models, for the most part, are systems visualized in the minds of scientists. The models are then tested, and experiments confirm or falsify theoretical models of how the world works. This is the way science has worked for hundreds of years.
Scientists are trained to recognize that correlation is not causation, that no conclusions should be drawn simply on the basis of correlation between X and Y (it could just be a coincidence). Instead, you must understand the underlying mechanisms that connect the two. Once you have a model, you can connect the data sets with confidence. Data without a model is just noise.
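To make the X-and-Y language concrete, here is a minimal sketch, with made-up numbers, of the Pearson correlation coefficient scientists are warned not to over-interpret:

```python
import math

# Toy data: two variables that happen to move together.
# The coefficient says nothing about *why* they do; that is the model's job.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((ai - mean_a) * (bi - mean_b) for ai, bi in zip(a, b))
    var_a = sum((ai - mean_a) ** 2 for ai in a)
    var_b = sum((bi - mean_b) ** 2 for bi in b)
    return cov / math.sqrt(var_a * var_b)

print(f"r = {pearson(x, y):.3f}")  # close to 1.0, yet proves no mechanism
```

The coefficient comes out near 1.0, yet nothing in the calculation says which variable drives the other, or whether a third factor drives both.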
But faced with massive data, this approach to science — hypothesize, model, test — is becoming obsolete. Consider physics: Newtonian models were crude approximations of the truth (wrong at the atomic level, but still useful). A hundred years ago, statistically based quantum mechanics offered a better picture — but quantum mechanics is yet another model, and as such it, too, is flawed, no doubt a caricature of a more complex underlying reality. The reason physics has drifted into theoretical speculation about n-dimensional grand unified models over the past few decades (the "beautiful story" phase of a discipline starved of data) is that we don't know how to run the experiments that would falsify the hypotheses — the energies are too high, the accelerators too expensive, and so on.
Now biology is heading in the same direction. The models we were taught in school about "dominant" and "recessive" genes steering a strictly Mendelian process have turned out to be an even greater simplification of reality than Newton's laws. The discovery of gene-protein interactions and other aspects of epigenetics has challenged the view of DNA as destiny and even introduced evidence that environment can influence inheritable traits, something once considered a genetic impossibility.
In short, the more we learn about biology, the further we find ourselves from a model that can explain it.
There is now a better way. Petabytes allow us to say: "Correlation is enough." We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.
The best practical example of this is the shotgun gene sequencing by J. Craig Venter. Enabled by high-speed sequencers and supercomputers that statistically analyze the data they produce, Venter went from sequencing individual organisms to sequencing entire ecosystems. In 2003, he started sequencing much of the ocean, retracing the voyage of Captain Cook. And in 2005 he started sequencing the air. In the process, he discovered thousands of previously unknown species of bacteria and other life-forms.
If the words "discover a new species" call to mind Darwin and drawings of finches, you may be stuck in the old way of doing science. Venter can tell you almost nothing about the species he found. He doesn't know what they look like, how they live, or much of anything else about their morphology. He doesn't even have their entire genome. All he has is a statistical blip — a unique sequence that, being unlike any other sequence in the database, must represent a new species.
This sequence may correlate with other sequences that resemble those of species we do know more about. In that case, Venter can make some guesses about the animals — that they convert sunlight into energy in a particular way, or that they descended from a common ancestor. But besides that, he has no better model of this species than Google has of your MySpace page. It's just data. By analyzing it with Google-quality computing resources, though, Venter has advanced biology more than anyone else of his generation.
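A deliberately crude sketch of that kind of lookup, using invented sequences and a naive per-position similarity score rather than the alignment tools real metagenomics relies on, shows the shape of the idea:

```python
# Naive sketch: compare a sequencing read against a database of known
# sequences and call it novel if nothing is similar enough.
# Sequences and threshold are invented for illustration.
known_sequences = {
    "species_x": "ACGTACGTGGTTACGT",
    "species_y": "TTGACCGTAAGGCTAA",
}

def similarity(a, b):
    """Fraction of positions that match between two sequences."""
    return sum(c1 == c2 for c1, c2 in zip(a, b)) / min(len(a), len(b))

def classify(read, database, threshold=0.8):
    """Return the best-matching known species, or 'novel' if none is close."""
    best_name, best_score = None, 0.0
    for name, seq in database.items():
        score = similarity(read, seq)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else "novel"

print(classify("ACGTACGTGGTTACGA", known_sequences))  # close to species_x
print(classify("GGGGCCCCAAAATTTT", known_sequences))  # likely "novel"
```

Everything the program "knows" about a new species is that its sequence fails to match anything already on file, which is essentially the statistical blip described above.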
This kind of thinking is poised to go mainstream. In February, the National Science Foundation announced the Cluster Exploratory, a program that funds research designed to run on a large-scale distributed computing platform developed by Google and IBM in conjunction with six pilot universities. The cluster will consist of 1,600 processors, several terabytes of memory, and hundreds of terabytes of storage, along with the software, including IBM's Tivoli and open source versions of Google File System and MapReduce.[1] Early CluE projects will include simulations of the brain and the nervous system and other biological research that lies somewhere between wetware and software.
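MapReduce itself is a simple programming model. A minimal single-machine sketch of the canonical word-count example, standing in for the distributed versions such a cluster would run, looks something like this:

```python
from collections import defaultdict

# Single-process sketch of the MapReduce model: a map phase emits key/value
# pairs, a shuffle groups them by key, and a reduce phase combines each group.
# Real frameworks run these phases across many machines; documents are invented.
documents = [
    "correlation is enough",
    "all models are wrong",
    "correlation supersedes causation",
]

def map_phase(doc):
    # Emit (word, 1) for every word in a document.
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # Group values by key, as the framework would between map and reduce.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Combine all counts for a single word.
    return key, sum(values)

mapped = [pair for doc in documents for pair in map_phase(doc)]
results = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(results)  # e.g. {'correlation': 2, 'is': 1, ...}
```

The appeal for data-heavy science is that the same two functions, map and reduce, scale from a laptop to thousands of machines without the researcher rethinking the analysis.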
Learning to use a "computer" of this scale may be challenging. But the opportunity is great: The new availability of huge amounts of data, along with the statistical tools to crunch these numbers, offers a whole new way of understanding the world. Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.
There's no reason to cling to our old ways. It's time to ask: What can science learn from Google?
Chris Anderson (canderson@wired.com) is the editor in chief of Wired.

[1] This story originally stated that the cluster software would include the actual Google File System.
http://archive.wired.com/science/discoveries/magazine/16-07/pb_theory