If you thought being an anime director was all cosplay groupies and
cool action figures, think again. Turns out it’s long, long hours in
front of a computer, less-than-fancy convenience store dinners and tons
of office all-nighters.
We recently caught up with the anime director of acclaimed Dream Link Entertainment (DLE),
Azuma Tani, whose cool name is rivaled only by his dedication to
creating the best animated films he can. The man recently spent nearly
three months locked in his office to complete the recent Glass Kamen Desu Ga
(“I’m Glass Mask, So What?”) film, and for some reason, instead of
going on a much deserved vacation, Tani lent us his time to give us a
glimpse into the busy, bizarre world of an anime director.

What is it like to be an anime director? Give us a day in the life.
When you’re making an anime, there are two phases: writing and animating. During production of Glass Kamen,
I would wake up around 7:30 or so, watch one of my favorite TV shows –
“Ama-chan” – then start writing and drawing. At around 10 a.m., people
start showing up in the office, so it gets harder to concentrate on
writing. That’s when I’d usually switch to animating. My team filters in
and shows me the work they’re doing, so throughout the day people are
asking questions. I give directions, tell them what’s good and what
needs work, etc. I’ll go to lunch around 2 p.m. most days. Most of the
time I’ll grab a bento from a convenience store or, if I’m feeling fancy, I’ll get ramen at a place on the corner near the office.

So you’re in the office most of the time?
During production of Glass Kamen, I slept in the office most nights. I’d have dinner at around 10 p.m. – again, ramen or bento – then get back to work. I’d sleep around 2 a.m.

Did you ever get out of the office?
I would go to the recording studio to supervise voice actors a couple
times, but I would leave the office, bike out there, spend just a few
minutes listening, give my approval and bike back to the office.

Where did you sleep?
I have a tatami mat I’d lay out on the floor in one of the meeting
rooms, and I’d use a plush toy of one of DLE’s characters as a pillow.
When you sleep on the floor, you get itchy all over, so it wasn’t the
most pleasant of sleeping arrangements. Glass Kamen took about three months to complete, and I was in the office the majority of that time
– but if you spend too many days in a row in the office, you start
going crazy. So, towards the end of production, I’d go home most days.
But, my wife and kids were always asleep when I got home at night, and I
felt bad waking them up in the morning.
▼ To an anime director, this is luxury accommodation.
So, how often did you get to see your family?
When we were making Glass Kamen, I had dinner with my family once.

Did you have any time for hobbies?
Not really. But, I’m lucky in that my main hobby is riding road
bicycles. So, when I go to and from work, and to and from the recording
studio, I get some time to enjoy biking. I use a mobile app called Strava;
it records your route, measures your time and average speed, etc. and
then compares your performance to other bikers in the area. I’m proud to
say I’m ranked first on every route around my office. Once, I got lucky with traffic lights and recorded an average speed of 62 km/hour [editor's note: this is faster than the speed limit for cars in Japan].
When you’re riding, are you thinking about the anime?
No, no. That’s my time to totally clear my head. I literally think about nothing when I bike.

So, when are you at your most creative?
After 10 p.m., people leave the office and it’s just me and a few
others, so things are quiet and that’s the time I really get to think
creatively. I also drink a lot of energy drinks and chew a lot of gum – I
find it helps me think. But, once I drank too much caffeine and my chest started to hurt. I thought I might have a heart attack if I kept it up, so I cut back on the energy drinks.
At this point in the interview, one of Tani-san’s interns frantically
interrupted – something about walk cycles – and Tani had to rush to the
rescue, cutting the interview short and exemplifying the die-hard
devotion of a pro anime director.
If you’re in Japan, you can see Tani’s newest film in theaters.
Japanese speakers can enjoy the short comedy anime version on YouTube,
although there are unfortunately no English subtitles available. Yet.

Glass Mask is the second-best-selling girls’ manga series of all time. It’s been adapted into several anime and live-action series, but Glass Kamen Desu Ga is the first feature-length film adaptation of the venerated series.
Tani tells us his biggest aspiration is to create an anime from the
ground up for the North American market. Our fingers are crossed he gets
his wish.
BAE Systems was awarded a $34.5 million contract from the
Office of Naval Research (ONR) for the development of the
Electromagnetic (EM) Railgun under Phase 2 of the Navy’s Innovative
Naval Prototype (INP) program.
The focus of Phase 2 is to advance the Railgun technology by maturing
the launcher and pulsed power from a single shot operation to a
multi-shot capability, and incorporating auto-loading and thermal
management systems.
“We’re committed to developing this innovative and game changing
technology that will revolutionize naval warfare,” said Chris Hughes,
vice president and general manager of Weapon Systems at BAE Systems.
“The Railgun’s ability to defend against enemy threats from distances
greater than ever before improves the capabilities of our armed forces.”
In 2012, during Phase 1 of the INP program, engineers at the Naval
Surface Warfare Center in Dahlgren, Virginia successfully fired BAE
Systems’ EM Railgun prototype at tactical energy levels. The recently
awarded ONR contract marks the completion of Phase 1 and the selection
of BAE Systems as the developer for the Phase 2 launcher prototype.
Phase 2 is anticipated to begin immediately with initial prototypes to
be delivered in 2014. The Railgun development will be carried out by BAE
Systems in Minneapolis, Minnesota and by teammates IAP Research in
Dayton, Ohio and SAIC in Marietta, Georgia.
The EM Railgun
is a long-range weapon technology that uses high-power electromagnetic
energy instead of explosive chemical propellants to launch a projectile
farther and faster than any gun before it. When fully weaponized, a
Railgun will deliver hypervelocity projectiles on target, at ranges far
exceeding the Navy’s current capability.
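For a feel for the physics, here is a back-of-the-envelope sketch using the standard textbook railgun force formula. The numbers are illustrative assumptions of ours, not figures from BAE Systems or ONR:

```latex
% Force on a railgun armature (standard lumped-parameter form):
%   F = (1/2) L' I^2,
% where L' is the inductance gradient of the rails and I is the drive current.
% Assumed illustrative values: L' = 0.5 microhenry/m, I = 5 MA.
\[
  F = \tfrac{1}{2} L' I^{2}
    = \tfrac{1}{2}\,(0.5\times10^{-6}\,\mathrm{H/m})\,(5\times10^{6}\,\mathrm{A})^{2}
    \approx 6.3\,\mathrm{MN}.
\]
% A force of that order accelerates a 10 kg projectile at roughly
% 6 x 10^5 m/s^2, i.e., to about 2.5 km/s over 5 m of rail (v = sqrt(2as)).
```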
Editor’s Note… Semi-official reports over the past 36 hours describe a mysterious blast at a Syrian arms depot in the city of Latakia, located well within the Alawite enclave in northwestern Syria. FSA sources claimed responsibility for the blast, but Western observers cast doubt on these claims due to geographical and tactical constraints. According to some estimates, the blasts were caused by missiles arriving from the west, that is, from the Mediterranean Sea.
This would suggest that if the Israelis are behind this event, they may have used a hard-to-detect maritime platform, like the Dolphin-class submarine, to bypass the recently deployed S-300 air defense systems, operated in Syria by Russian teams. Sea-skimming cruise missiles are hard to detect, as are submarine-borne naval commandos conducting nocturnal raids on enemy ports. If so, this would indicate a decision to refrain from risking a confrontation with the Russian military, which would surely break out in the case of a direct attack on the S-300 batteries. There is no ‘smoking gun’ to support this version yet, but the refusal of Israeli officials to comment on this event may suggest a repetition of the standard mode of operation used during the IAF bombing raids on Hezbollah targets and biochemical facilities in Damascus over the past year: maintaining two-fold plausible deniability, which enables both governments (of Israel and Syria) to limit armed clashes to the clandestine level and avoid the unpredictable results of a full-scale war.
*** Ha’aretz
Several powerful blasts were heard at a weapons depot belonging to
the Syrian military late on Thursday night, according to reports
gradually streaming in from Syria. BBC Arabic radio reported overnight Thursday that the explosions took place near the port of Latakia in Syria’s north.
Subsequent reports offered few new details and drew limited
attention. Among them was a statement by the London-based Syrian
Observatory for Human Rights, which said that “huge explosions shook the
area where a large Syrian army base and weapons depots are located.”
According to the group, residents in the area where the blasts were
heard say they were caused by missile fire of unknown origin. However,
according to other reports that have reached the rights group, fighter
jets were seen in the skies in the area of the city of Al-Haffah. It was further reported that several troops were killed or wounded in the explosions. Fires broke out in the region.
A similar report carried by the Lebanese TV station Al-Manar said the
blasts were caused by rocket or missile fire at a military base near a
village some 20 kilometers from Latakia. Al-Manar cited a “military
source” as saying that the fire came from the direction of a northern
suburb of the city, where rebels and regime forces have been clashing
for days.
The same source said that the base contains large stockpiles of
weapons. The anonymous source denied the possibility that the explosions
were caused by an air or sea strike targeting the Syrian regime’s arms
store. It remains unclear whether the source was Syrian.
Opposition websites said the weapons depot was attacked by the Free
Syrian Army, and that, according to eye-witnesses, the blasts took place
at around 2 A.M. Flames could be seen from afar. There were also
reports of heavy exchanges of gunfire in the area after the explosions.
The reports cast blame for the blasts upon Syrian opposition groups.
The source of the strike, however, remains unclear, as do the details
about the damage that has been caused.
Latakia is located in an Alawite enclave in northern Syria. The city,
as well as the nearby port city of Tartus, houses the artificial
respiration system that is holding Syrian President Bashar Assad‘s
regime alive despite the bloody civil war that has claimed the lives of
more than 100,000 Syrians over the course of more than two years.
Recently, in a speech at the Washington Institute, Israel’s defense minister, Moshe Ya’alon, warned that Israel would respond harshly if Assad orders border attacks against the country.

Threat from Sinai
The situation developing in Sinai in the wake of Egyptian President Mohammed Morsi‘s
ousting is also of concern to Israel. On Friday night a radical
Islamist group, Ansar Bayt Al-Maqdis, claimed responsibility for rockets
fired from Sinai toward the Israeli city of Eilat on Thursday night. No rockets were found within Israel’s territory after the attack, and it is possible they landed in Sinai. The sound of the blasts echoed through Eilat.
Meanwhile, Islamist groups have also raided Egyptian army posts near
the Sinai town of El Arish, killing a senior Egyptian officer. Israeli
officials have postulated that the groups are retaliating against the
toppling of the Muslim Brotherhood government in Cairo. The Egyptian security forces may not be able to dedicate as much time and effort to monitoring Sinai at this time, and Israel has to take into account that the
violence in the peninsula could turn into terror attacks within
Israel’s borders.
US President Barack Obama, June 17, 2013. Photo: Reuters
US foreign policy is failing worldwide.
The Russian and Chinese embrace
of indicted traitor Edward Snowden is just the latest demonstration of the
contempt in which the US is held by an ever increasing number of adversarial
states around the world.
Iran has also gotten a piece of the
action.
As part of the regime’s bread and circuses approach to its
subjects, supreme dictator Ali Khamenei had pretend reformer Hassan Rohani win
the presidential election in a landslide two weeks ago. Rohani has a long record
of advancing Iran’s nuclear program, both as a national security chief and as a
senior nuclear negotiator. He also has a record of deep involvement in acts of
mass terror, including the 1994 bombing of the AMIA Jewish center in Buenos
Aires that killed 85 people and wounded hundreds.
Yet rather than
distance itself from Rohani the phony, the Obama administration has celebrated
Iranian democracy and embraced him as a reformer. Obama’s spokesmen say they
look forward to renewing nuclear talks with Rohani, and so made clear – yet
again – that the US has no intention of preventing Iran from becoming a nuclear
power.
Rohani responded to the administration’s embrace by stating
outright he will not suspend Iran’s nuclear enrichment activities. In other
words, so great is Iran’s contempt for President Barack Obama and his
administration, that it didn’t even pay lip service to the notion of cutting a
deal.
And that makes sense. Obama only has one card he is willing to play
with Iran – appeasement. And so that is the card he plays. His allies are
already talking about containing a nuclear Iran. But that’s not an
option.
A government’s ability to employ a strategy of nuclear
containment is entirely dependent on the credibility of its nuclear threats.
Obama is slashing the US nuclear arsenal, and Snowden reportedly just gave the
Russians and the Chinese the US’s revised nuclear war plans. Obama has no
credibility in nuclear games of chicken. He has no chance of containing Khamenei
and his apocalyptic jihad state.
Iran, its Russian ally and its Lebanese
Hezbollah proxy now have the upper hand in the Syrian civil war. In large part
due to Obama’s foreign policy, the war is spilling into Lebanon and threatening
Jordan and Iraq – not to mention Israel. In response to this state of affairs,
Obama has decided to begin arming the al-Qaida-dominated Syrian opposition
forces. Now it’s true, Obama is planning to transfer US arms to the Supreme
Military Council of the Free Syrian Army that is recognized by the US. But that
is no reason not to worry.
The Free Syrian Army is dominated by the
Muslim Brotherhood. It condemned the US’s decision to designate the Syrian
al-Qaida affiliate, Jabhat al-Nusra, a foreign terrorist organization. FSA
fighters and commanders regularly collaborate with (and sometimes fight)
Al-Nusra. At a minimum, there is no reason to believe that these US arms will
not be used in conjunction with al-Qaida forces in Syria.
In truth, there
is little reason from a US perspective to view a Syria dominated by any of the
warring parties – including the FSA – as amenable to US interests or values.
There is no ideological distinction between the goals of the Muslim Brotherhood
and those of al-Qaida, or Hamas or a dozen other jihadist armed groups that were
formed by Muslim Brotherhood members. Like Iran and its proxies, they all want
to see Western civilization – led by the US – destroyed. And yes, they all want
to destroy Israel, and Europe.
But for the Obama administration, this
ideological affinity is not relevant.
The only distinction they care
about is whether a group just indoctrinates people to become jihadists, or
whether they are actively engaged – at this minute – in plotting or carrying out
terrorist attacks against the US. And even then, there are
exceptions.
For instance, the Taliban are actively waging war against the
US in Afghanistan. But since the Obama administration has no will to defeat the
Taliban, it is begging them to negotiate with US officials.
Obama’s
default position in the Muslim world is to support the Muslim Brotherhood.
Egypt’s Muslim Brotherhood is the wellspring of the Sunni jihadist movement. And
Obama is the Brotherhood’s greatest ally. He facilitated the Brotherhood’s rise
to power in Egypt, at the expense of the US’s most important Arab ally, Hosni
Mubarak.
He even supported them at the expense of American citizens
employed in Egypt by US government-supported NGOs. Forty-three Americans were
arrested for promoting democracy, and all the administration would do was
facilitate their escape from Egypt. Robert Becker, the one US aid worker who
refused to flee, was abandoned by the State Department. He just escaped from
Egypt after being sentenced to two years in prison.
The Obama
administration supports the Morsi government even as it persecutes Christians.
It supports the Muslim Brotherhood even though the government has demonstrated
economic and administrative incompetence, driving Egypt into failed state
status. Egypt is down to its last few cans of fuel. It is facing the specter of
mass starvation. And law and order have already broken down entirely. It has
lost the support of large swathes of the public. But still Obama maintains
faith.
Then there are the Palestinians.
Next week John Kerry will
knock on our door, again in an obsessive effort to restart the moribund, phony
peace process. For its part, as The Jerusalem Post’s Khaled Abu Toameh reported
this week, the supposedly moderate Fatah-ruled Palestinian Authority has adopted
a policy of denying Jews entrance to PA-ruled areas. Jewish reporters – Israeli
and non-Israeli – are barred from covering the PA or speaking with Fatah and PA
officials.
Jewish diplomats are barred from speaking to PA officials or
joining the entourage of diplomats who speak with them. Jewish businessmen are
barred from doing business in the PA.
As for the radical Hamas terror
group that rules Gaza, this week Hamas again reiterated its loyalty to its
covenant which calls for the obliteration of Israel and the annihilation of
world Jewry.
But Kerry is coming back because he’s convinced that the
reason there’s no peace process is that Israelis are too rich, and too happy,
and too stingy, and too suspicious, and too lacking empathy for the Palestinians
who continue to teach their children to murder our children.
You might
think that this pile-on of fiascos would lead Obama and his advisers to
reconsider their behavior.
But you’d be wrong. If Obama were asked his
opinion of his foreign policy he would respond with absolute conviction that his
foreign policy is a total success – everywhere. And by his own metrics, he’d be
right.
Obama is a man of ideas. And he has surrounded himself with men
and women who share his ideas. For Obama and his advisers, what matters are not
the facts, but the theoretical assumptions – the ideas – that determine their
policies. If they like an idea, if they find it ideologically attractive, then
they base their policies on it. Consequences and observable reality are no match
for their ideas. To serve their ideas, reality can be deliberately distorted.
Facts can be ignored, or denied.
Obama has two ideas that inform his
Middle East policy. First, the Muslim Brotherhood is good. And so his policy is
to support the Muslim Brotherhood, everywhere. That’s his idea, and as long as
the US continues to support the Brotherhood, its foreign policy is successful.
For Obama it doesn’t matter whether the policy is harmful to US national
security. It doesn’t matter if the Brotherhood slaughters Christians and
Shi’ites and persecutes women and girls. It doesn’t matter if the Brotherhood’s
governing incompetence transforms Egypt – and Tunisia, Libya, and the rest – into
hell on earth. As far as Obama is concerned, as long as he is true to his idea,
his foreign policy is a success.
Obama’s second idea is that the root
cause of all the problems in the region is the absence of a Palestinian state on
land Israel controls. And as a consequence, Israel is to blame for everything
bad that happens because it is refusing to give in to all of the Palestinians’
demands.
Stemming from this view, the administration can accept a nuclear
Iran. After all, if Israel is to blame for everything, then Iran isn’t a threat
to America.
This is why Fatah terrorism, incitement and anti-Semitism are
ignored.
This is why Hamas’s Deputy Foreign Minister Ghazi Hamad reported
that he met with senior US officials two weeks ago.
This is why Kerry is
coming back to pressure the rich, stingy, paranoid, selfish Jews into making
massive concessions to the irrelevant Palestinians.
Obama’s satisfaction
with his foreign policy is demonstrated by the fact that he keeps appointing
likeminded ideologues to key positions.
This week it was reported that
Kerry is set to appoint Robert Malley to serve as deputy assistant secretary of
state for Near Eastern affairs. Malley has built his career out of advancing the
ideas Obama embraces.
In 2001, Malley authored an article in The New York
Times where he blamed Israel for the failure of the Camp David peace summit in
July 2000. At that summit, Israel offered the Palestinians nearly everything
they demanded. Not only did Palestinian leader Yasser Arafat refuse the offer.
He refused to make a counteroffer.
Instead he went home and ordered his
deputies to prepare to initiate the terror war against Israel which he started
two months later.
As Lee Smith wrote in a profile of Malley in Tablet in
2010, Malley’s article, and subsequent ones, “created a viable interpretative
framework for continuing to blame both sides for the collapse of the peace
process even after the outbreak of the second intifada. If both sides were at
fault, then it would be possible to resume negotiations once things calmed down.
If, on the other hand, the sticking point was actually about existential issues
– the refusal to accept a Jewish state – and the inability, or unwillingness, of
the Palestinians to give up the right of Arab refugees to return to their pre-1948 places of residence, then Washington would have been compelled to abandon
the peace process after Clinton left office.”
In other words, Malley
shared the idea that Israel was to blame for the pathologies of the Arabs.
Stemming from this view, Malley has been meeting with Hamas terrorists for
years. He belittled the threat posed by a nuclear Iran and accused Prime
Minister Binyamin Netanyahu of exaggerating the Iranian nuclear threat to divert
attention away from the Palestinians. He has also met with Hezbollah, and has
been an outspoken supporter of Syrian President Bashar Assad.
After the
September 11 attacks, the US pledged to wage a war of ideas in the Muslim world.
And in Obama’s foreign policy, we have such a war of ideas.
The only
problem is that all of his ideas are wrong.
Regulation:
No longer the stuff of science fiction, a little-noticed change in
energy-efficiency requirements for appliances could lead to the government controlling the power used in your home and how you set your thermostat.
In a seemingly innocuous revision of its Energy Star efficiency
requirements announced June 27, the Environmental Protection Agency
included an "optional" requirement for a "smart-grid" connection for
customers to electronically connect their refrigerators or freezers with
a utility provider.
The feature lets the utility provider regulate the appliances' power
consumption, "including curtailing operations during more expensive
peak-demand times."
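To make the mechanism concrete, here is a minimal sketch of how such demand-response curtailment could work in principle. This is our own hypothetical illustration; the Energy Star language does not publish a protocol, and the class and field names below are invented:

```python
# Hypothetical sketch of demand-response curtailment for a "connected" fridge.
# Invented for illustration; not the EPA's or any manufacturer's actual design.
from dataclasses import dataclass

@dataclass
class GridSignal:
    price_cents_per_kwh: float  # utility price signal
    peak_event: bool            # utility-declared peak-demand event

class SmartFridge:
    NORMAL_SETPOINT_C = 3.0
    CURTAILED_SETPOINT_C = 5.0  # let the temperature drift up during peaks

    def __init__(self, consumer_opt_in: bool):
        # Per the EPA language, the consumer must turn the feature on.
        self.consumer_opt_in = consumer_opt_in
        self.setpoint_c = self.NORMAL_SETPOINT_C

    def on_grid_signal(self, signal: GridSignal) -> None:
        if self.consumer_opt_in and signal.peak_event:
            # Curtail: defer compressor cycles by relaxing the setpoint.
            self.setpoint_c = self.CURTAILED_SETPOINT_C
        else:
            self.setpoint_c = self.NORMAL_SETPOINT_C

fridge = SmartFridge(consumer_opt_in=True)
fridge.on_grid_signal(GridSignal(price_cents_per_kwh=42.0, peak_event=True))
print(fridge.setpoint_c)  # 5.0 -> the compressor runs less during the peak
```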
So far, manufacturers are not required to include the feature, only
"encouraged," and consumers must still give permission to turn it on.
But with the Obama administration's renewed focus on fighting mythical
climate change, we expect it to become mandatory to save the planet from
the perils of keeping your beer too cold.
"Manufacturers that build in and certify optional 'connected
features' will earn a credit towards meeting the Energy Star efficiency
requirements," according to an EPA email to CNSNews.com.
We are both intrigued and bothered by the notion that a utility
company, the regulated energy sock-puppet of government, could and
probably will have the power to regulate the power we use and how we use
it, even as we’re paying our electricity bills, to the point of
turning these devices and appliances off at will.
We're reminded that former EPA director Carol Browner was a big fan
of the smart grid and its potential ability to monitor and control power
usage down to the thermostat in your home.
"Eventually," she told U.S. News & World Report in 2009, "we can
get to a system when an electric company will be able to hold back some
of the power so that maybe your air-conditioner won't operate at its
peak."
The same year, President Obama, who had won election on a pledge to
"fundamentally transform America," signed a stimulus bill to "transform
the way we use energy" with a smart grid that would abandon "a grid of
lines and wires that dates back to Thomas Edison — a grid that can't
support the demands of clean energy."
To make sure we respond properly to the "demands of clean energy,"
Obama said this "investment will place Smart Meters in homes to make our
energy bills lower, make outages less likely and make it easier to use
clean energy." It'll also make it easier to monitor and control our
energy use.
So if one day you keep your house too cool or your beer too cold and
the dials start moving, don’t be surprised. It’s just a campaign promise
being kept.
In more than a dozen classified rulings, the nation’s surveillance court has created a secret body of law giving the National Security Agency the
power to amass vast collections of data on Americans while pursuing not
only terrorism suspects, but also people possibly involved in nuclear
proliferation, espionage and cyberattacks, officials say.
The rulings, some nearly 100 pages long, reveal that the court has
taken on a much more expansive role by regularly assessing broad
constitutional questions and establishing important judicial precedents,
with almost no public scrutiny, according to current and former
officials familiar with the court’s classified decisions.
The 11-member Foreign Intelligence Surveillance Court, known as the
FISA court, was once mostly focused on approving case-by-case
wiretapping orders. But since major changes in legislation and greater judicial oversight of intelligence operations were instituted six years ago,
it has quietly become almost a parallel Supreme Court, serving as the
ultimate arbiter on surveillance issues and delivering opinions that
will most likely shape intelligence practices for years to come, the
officials said.
Last month, a former National Security Agency contractor, Edward J.
Snowden, leaked a classified order from the FISA court, which authorized
the collection of all phone-tracing data from Verizon business customers. But the court’s still-secret decisions go far beyond any single surveillance order, the officials said.
“We’ve seen a growing body of law from the court,” a former
intelligence official said. “What you have is a common law that develops
where the court is issuing orders involving particular types of
surveillance, particular types of targets.”
In one of the court’s most important decisions, the judges have
expanded the use in terrorism cases of a legal principle known as the
“special needs” doctrine and carved out an exception to the Fourth
Amendment’s requirement of a warrant for searches and seizures, the
officials said.
The special needs doctrine was originally established in 1989 by the
Supreme Court in a ruling allowing the drug testing of railway workers,
finding that a minimal intrusion on privacy was justified by the
government’s need to combat an overriding public danger. Applying that
concept more broadly, the FISA judges have ruled that the N.S.A.’s
collection and examination of Americans’ communications data to track
possible terrorists does not run afoul of the Fourth Amendment, the
officials said.
That legal interpretation is significant, several outside legal
experts said, because it uses a relatively narrow area of the law — used
to justify airport screenings, for instance, or drunken-driving
checkpoints — and applies it much more broadly, in secret, to the
wholesale collection of communications in pursuit of terrorism suspects.
“It seems like a legal stretch,” William C. Banks,
a national security law expert at Syracuse University, said in response
to a description of the decision. “It’s another way of tilting the
scales toward the government in its access to all this data.”
While President Obama and his intelligence advisers have spoken of the surveillance programs leaked by Mr. Snowden mainly in terms of combating terrorism,
the court has also interpreted the law in ways that extend into other
national security concerns. In one recent case, for instance,
intelligence officials were able to get access to an e-mail attachment
sent within the United States because they said they were worried that
the e-mail contained a schematic drawing or a diagram possibly connected
to Iran’s nuclear program.
In the past, that probably would have required a court warrant
because the suspicious e-mail involved American communications. In this
case, however, a little-noticed provision in a 2008 law, expanding the
definition of “foreign intelligence” to include “weapons of mass
destruction,” was used to justify access to the message.
The court’s use of that language has allowed intelligence officials
to get wider access to data and communications that they believe may be
linked to nuclear proliferation, the officials said. They added that
other secret findings had eased access to data on espionage,
cyberattacks and other possible threats connected to foreign
intelligence.
“The definition of ‘foreign intelligence’ is very broad,” another
former intelligence official said in an interview. “An espionage target,
a nuclear proliferation target, that all falls within FISA, and the
court has signed off on that.”
The official, like a half-dozen other current and former national
security officials, discussed the court’s rulings and the general trends
they have established on the condition of anonymity because they are
classified. Judges on the FISA court refused to comment on the scope and
volume of their decisions.
Unlike the Supreme Court, the FISA court hears from only one side in the case — the government — and its findings are almost never made public.
A Court of Review is empaneled to hear appeals, but that is known to
have happened only a handful of times in the court’s history, and no
case has ever been taken to the Supreme Court. In fact, it is not clear
in all circumstances whether Internet and phone companies that are
turning over the reams of data even have the right to appear before the
FISA court.
Created by Congress in 1978 as a check against wiretapping abuses by
the government, the court meets in a secure, nondescript room in the
federal courthouse in Washington. All of the current 11 judges, who
serve seven-year terms, were appointed to the special court by Chief
Justice John G. Roberts Jr., and 10 of them were nominated to the bench
by Republican presidents. Most hail from districts outside the capital
and come in rotating shifts to hear surveillance applications; a single
judge signs most surveillance orders, which totaled nearly 1,800 last
year. None of the requests from the intelligence agencies was denied,
according to the court.
Beyond broader legal rulings, the judges have had to resolve
questions about newer types of technology, like video conferencing, and
how and when the government can get access to them, the officials said.
The judges have also had to intervene repeatedly when private
Internet and phone companies, which provide much of the data to the
N.S.A., have raised concerns that the government is overreaching in its
demands for records or when the government itself reports that it has
inadvertently collected more data than was authorized, the officials
said. In such cases, the court has repeatedly ordered the N.S.A. to
destroy the Internet or phone data that was improperly collected, the
officials said.
The officials said one central concept connects a number of the
court’s opinions. The judges have concluded that the mere collection of
enormous volumes of “metadata” — facts like the time of phone calls and
the numbers dialed, but not the content of conversations — does not
violate the Fourth Amendment, as long as the government establishes a
valid reason under national security regulations before taking the next
step of actually examining the contents of an American’s communications.
This concept is rooted partly in the “special needs” provision the
court has embraced. “The basic idea is that it’s O.K. to create this
huge pond of data,” a third official said, “but you have to establish a
reason to stick your pole in the water and start fishing.”
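To make the metadata-versus-content distinction concrete, here is a hypothetical sketch of what a call record in that “pond” might contain, and what it omits. This is our illustration, not the N.S.A.’s actual schema:

```python
# Hypothetical call "metadata" record: the facts described above (time,
# numbers, duration), with no field for the conversation's content.
from dataclasses import dataclass

@dataclass
class CallMetadata:
    caller_number: str     # originating number
    callee_number: str     # number dialed
    start_time_utc: str    # when the call began
    duration_seconds: int  # how long it lasted
    # Deliberately absent: audio or transcript. Per the rulings described
    # above, examining content requires a separately established reason.

record = CallMetadata("202-555-0101", "703-555-0199", "2013-07-06T14:32:00Z", 418)
print(record)
```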
Under the new procedures passed by Congress in 2008 in the FISA
Amendments Act, even the collection of metadata must be considered
“relevant” to a terrorism investigation or other intelligence
activities.
The court has indicated that while individual pieces of data may not
appear “relevant” to a terrorism investigation, the total picture that
the bits of data create may in fact be relevant, according to the
officials with knowledge of the decisions. Geoffrey R. Stone,
a professor of constitutional law at the University of Chicago, said he
was troubled by the idea that the court is creating a significant body
of law without hearing from anyone outside the government, forgoing the
adversarial system that is a staple of the American justice system.
“That whole notion is missing in this process,” he said.
The FISA judges have bristled at criticism that they are a rubber
stamp for the government, occasionally speaking out to say they apply
rigor in their scrutiny of government requests. Most of the surveillance
operations involve the N.S.A., an eavesdropping behemoth that has
listening posts around the world. Its role in gathering intelligence
within the United States has grown enormously since the Sept. 11
attacks.
Soon after, President George W. Bush, under a secret wiretapping
program that circumvented the FISA court, authorized the N.S.A. to
collect metadata and in some cases listen in on foreign calls to or from
the United States. After a heated debate, the essential elements of the
Bush program were put into law by Congress in 2007, but with greater
involvement by the FISA court.
Even before the leaks by Mr. Snowden, members of Congress and civil
liberties advocates had been pressing for declassifying and publicly
releasing court decisions, perhaps in summary form.
Reggie B. Walton, the FISA court’s presiding judge, wrote in March that he recognized the “potential benefit of better informing the public”
about the court’s decisions. But, he said, there are “serious
obstacles” to doing so because of the potential for misunderstanding
caused by omitting classified details.
Gen. Keith B. Alexander, the N.S.A. director, was noncommittal when he
was pressed at a Senate hearing in June to put out some version of the
court’s decisions.
While he pledged to try to make more decisions public, he said, “I
don’t want to jeopardize the security of Americans by making a mistake
in saying, ‘Yes, we’re going to do all that.’ ”
Source: Sun Daily
Japan is planning to launch satellites to monitor the world’s oceans,
a report said Sunday, as Chinese government ships plied waters around
islands controlled by Tokyo and claimed by Beijing.
The Cabinet Office plans to launch nine satellites in the next five
years to counter piracy and monitor the movements of foreign ships
intruding into Japanese territorial waters, the business daily Nikkei
reported.
They will also collect data for forecasting natural disasters such as tsunamis, it said.
The report, which cabinet ministry officials could not immediately
confirm, came as Japan’s coastguard said three Chinese government ships
entered waters around the Senkaku islands in the East China Sea.
The maritime surveillance vessels entered the 12-nautical-mile zone
of Uotsurijima, one of the Senkaku islands, which China calls the
Diaoyus, at about 9:30 am (0030 GMT), the coastguard said.
Ships from the two countries have for months traded warnings over
intrusions into what both regard as their territory as Beijing and Tokyo
jostle over ownership of the strategically sited and resource-rich
islands.
The territorial row that dates back four decades reignited last
September when Tokyo nationalised three islands in the chain, in what it
said was a mere administrative change of ownership.
Former Japanese prime minister Yukio Hatoyama came under fire in June
after he said he understood China’s claim to the islands. – AFP
LOOKING DOWN FROM 500 MILES
above Earth’s surface, you could watch the FedEx Custom Critical
Delivery truck move across the country along 3,140 miles of highway in
47 and a half hours of nonstop driving. Starting off in Wilmington,
Massachusetts, the truck merges south onto I-95 and keeps right at the
fork for I-90. Then it winds its way across the width of New York State,
charging past the airport in Toledo, through the flatlands of Indiana,
Illinois, Iowa, Nebraska, and Wyoming, snaking down the mountain passes
and switchbacks above Salt Lake City, across the Nevada deserts and over
to Sacramento, then down the highway toward San Jose and off at the
California 237 exit, headed for Mountain View.
Neither Jim nor Carla Cline, a married couple who take turns at the
wheel, has the slightest inkling that the large wooden crate in the back
of their truck might radically change how we see our world. When they
finally pull into the parking lot of a low warehouse-like structure
around the corner from a Taco Bell, more than a hundred engineers,
coders, and other geeks who work for a startup called Skybox Imaging are
there to cheer the Clines’ arrival. He and Carla delivered some
dinosaur bones once, Jim tells me, leaning out the window as he idles by
the curb. Elvis’ Harley too. “Never saw anything get the attention this
got,” he says.
Dan Berkenstock, executive VP and chief product officer of Skybox, is
in the cheering crowd, fidgeting with his half-filled coffee mug. In
worn Converse sneakers, short-sleeved blue oxford shirt, jeans, and
glasses, he looks younger than most of the employees at the company he
founded, which has been his passion ever since he dropped out of
Stanford’s engineering school in 2009. Berkenstock’s idea for a startup
was far outside the mainstream of venture capital investment in the
Valley, with its penchant for “lean” software plays and quick-hit social
apps. But his company got funded nevertheless, and now Skybox has
designed and built something unprecedented—the kind of
once-in-a-lifetime something that makes the hearts of both engineers and
venture capitalists beat faster. The Clines have just delivered the
final piece: a set of high-end custom optics, which will be inserted
into an unassuming metal box the size of a dorm-room minifridge.
“What would you say,” I ask Jim, “if I told you that you had a
satellite in the back of your truck, and these guys were going to launch
it into space?” He grins.
“I’d say that’s pretty damn cool,” he answers. “If they can get it up there.”
Data From Above
What can you really learn from 500 miles above Earth? Quite a lot,
it turns out. Already, our limited commercial services for satellite
imaging are providing crucial data to companies, scientists, and
governments. —Sara Breselor
PARKING PATTERNS
Chicago-based Remote Sensing Metrics tracks the number of cars in parking lots to forecast retail performance.
DATA MINES
A view of the size of pits and slag heaps around a mine can allow for an estimate of its productivity.
BLEAK HOUSES
Insurance companies look at damaged property from above to validate claims and flag potential fraud.
CRUDE MEASUREMENTS
After an oil spill, the National Oceanic and Atmospheric Administration tracks the size and movement of oil slicks.
Forty years after humans first saw
pictures of a blue and white marble taken from space, it’s remarkable
how few new images of Earth we get to lay eyes on. Of the 1,000 or more
satellites orbiting the planet at any given time, there are perhaps 100
that send back visual data. Only 12 of those send back high-resolution
pictures (defined as an image in which each pixel represents a square
meter or less of ground), and only nine of the 12 sell into the
commercial space-based imaging market, currently estimated at $2.3
billion a year. Worse still, some 80 percent of that market is
controlled by the US government, which maintains priority over all other
buyers: If certain government agencies decide they want satellite time
for themselves, they can simply demand it. Earlier this year, after the
government cut its imaging budget, the market’s two biggest
companies—DigitalGlobe and GeoEye, which between them operate five of
the nine commercial geoimaging satellites—were forced to merge. Due to
the paucity of satellites and to the government’s claim on their
operations, ordering an image of a specific place on Earth can take
days, weeks, even months.
Because so few images make their way down from space every day, and
even fewer reach the eyes of the public—remember how dazzled we were
when Google Earth first let us explore one high-definition image of the
planet?—we can fool ourselves into thinking that the view from space
barely changes. But even with the resolutions allowed by the government
for commercial purposes, an orbiting satellite can clearly show
individual cars and other objects that are just a few feet across. It
can spot a FedEx truck crossing America or a white van driving through
Beirut or Shanghai. Many of the most economically and environmentally
significant actions that individuals and businesses carry out every day,
from shipping goods to shopping at big-box retail outlets to cutting
down trees to turning out our lights at night, register in one way or
another on images taken from space. So, while Big Data companies scour
the Internet and transaction records and other online sources to glean
insight into consumer behavior and economic production around the world,
an almost entirely untapped source of data—information that companies
and governments sometimes try to keep secret—is hanging in the air right
above us.
Here is the soaring vision that Skybox’s founders have sold the
Valley: that kids from Stanford, using inexpensive consumer hardware,
can ring Earth with constellations of imaging satellites that are
dramatically cheaper to build and maintain than the models currently
aloft. By blanketing the exosphere with its cameras, Skybox will quickly
shake up the stodgy business (estimated to grow to $4 billion a year by
2018) of commercial space imaging. Even with six small satellites
orbiting Earth, Skybox could provide practically real-time images of the
same spot twice a day at a fraction of the current cost.
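For a sense of scale, a quick back-of-the-envelope of our own (not Skybox’s published math): a satellite at roughly 500 miles (about 800 km) altitude circles the planet in about 101 minutes, by Kepler’s third law:

```latex
% Orbital period at ~800 km altitude: semi-major axis a = R_E + h,
% with R_E ~ 6371 km, so a ~ 7.171*10^6 m; mu = 3.986*10^14 m^3/s^2.
\[
  T = 2\pi\sqrt{\frac{a^{3}}{\mu}}
    = 2\pi\sqrt{\frac{(7.171\times10^{6}\,\mathrm{m})^{3}}{3.986\times10^{14}\,\mathrm{m^{3}/s^{2}}}}
    \approx 101\ \text{minutes},
\]
% i.e., roughly 14 orbits per satellite per day. With several satellites in
% staggered orbital planes, twice-daily revisits of one spot become plausible.
```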
But over the long term, the company’s real payoff won’t be in the
images Skybox sells. Instead, it will derive from the massive trove of
unsold images that flow through its system every day—images that, when
analyzed by computer vision or by low-paid humans, can be transmogrified
into extremely useful, desirable, and valuable data. What kinds of
data? One sunny afternoon on the company’s roof, I drank beers with the
Skybox employees as they kicked around the following hypotheticals:
— The number of cars in the parking lot of every Walmart in America.
— The number of fuel tankers on the roads of the three fastest-growing economic zones in China.
— The size of the slag heaps outside the largest gold mines in southern Africa.
— The rate at which the wattage along key stretches of the Ganges River is growing brighter.
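As a toy illustration of the kind of counting those hypotheticals imply, here is a sketch in Python with OpenCV: threshold a parking-lot crop and count car-sized blobs. It is our own simplification, not Skybox’s pipeline, and the filename and size thresholds are assumptions:

```python
# Toy sketch: count car-sized bright blobs in a parking-lot image crop.
# An illustration of the concept only, not Skybox's actual analysis code.
import cv2  # pip install opencv-python

def count_cars(image_path: str, min_area_px: int = 20, max_area_px: int = 200) -> int:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Cars on dark asphalt read as bright blobs at roughly 1 m/pixel.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep only blobs whose footprint is plausibly car-sized.
    return sum(1 for c in contours if min_area_px <= cv2.contourArea(c) <= max_area_px)

print(count_cars("walmart_lot.png"))  # hypothetical image file
```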
Such bits of information are hardly trivial. They are digital gold
dust, containing clues about the economic health of countries,
industries, and individual businesses. (One company insider confided to
me that they have already brainstormed entirely practical ways to
estimate major economic indicators for any country, entirely based on
satellite data.) The same process will yield even more direct insight
into the revenues of a retail chain or a mining company or an
electronics company, once you determine which of the trucks leaving
their factories are shipping out goods or key components.
Plenty of people would want real-time access to that data—investors,
environmentalists, activists, journalists—and no one currently has it,
with the exception of certain nodes of the US government. Given that,
the notion that Skybox could become a Google-scale business—or, as one
guy on the roof that afternoon suggested to me, an insanely profitable
hedge fund—is not at all far-fetched. All they need to do is put enough
satellites into orbit, then get the image streams back to Earth and
analyze them. Which is exactly what Skybox is planning to do.

The most important thing to understand
about Skybox is that there is nothing wonderful or magical or even all
that interesting about the technology—no shiny new solar-reflecting
paint or radiation-proof self-regenerating microchip, not even a cool
new way of beaming signals down from orbit. Dozens of very smart people
work at Skybox, to be sure, but none of them are doing anything more
than making incremental tweaks to existing devices and protocols, nearly
all of which are in the public domain or can be purchased for
reasonable amounts of money by anyone with a laptop and a credit card.
There is nothing impressive about the satellites they are building until
you step back to consider the way that they plan to link them, and how
the resulting data can be used.
There are 1,000 satellites orbiting the planet at any given time, but only 12 send back hi-res images.
Berkenstock, John Fenwick, and Julian Mann first teamed up as grad
students at Stanford to compete for the Google Lunar X Prize, which
promised $20 million to the first group of contestants that could land a
rover on the moon and send back pictures. The stock market crash of
2008 killed their funding, but the germ of the Stanford team’s idea—to
use cheap off-the-shelf technology in space and make money doing
it—stuck with them, and they hit on the idea of building imaging
satellites along the same principles. “We looked around at our friends
and realized that we knew this unique group of people who had experience
building capable satellites at a fundamentally different price point,”
Berkenstock says. “The potential was not just to disrupt the existing
marketplace—we could potentially blow the roof off it and make it much,
much larger.”
The idea was to start with a CubeSat, a type of low-cost satellite
that aerospace-engineering grad students and DIY space enthusiasts have
been playing with for more than a decade. The CubeSat idea began in
1999, when two engineering professors, looking to encourage postgraduate
interest in space exploration, came up with a standard design for a
low-cost satellite that could be built entirely from cheap components or
prepackaged kits. The result was a cube (hence the name) measuring 10
centimeters on each side, just large enough to fit a basic sensor and
communications payload, solar panels, and a battery. The standardized
size meant that CubeSats could be put into orbit using a common
deployment system, thus bringing launch and deployment costs down to a
bare minimum that made it feasible for a group of dedicated hobbyists in
a university lab or even a high school to afford. All told, a CubeSat
could be built and launched for less than $60,000—an unheard-of price
for getting anything into orbit.
The first CubeSats launched on June 30, 2003, on a Russian rocket
from the Plesetsk site, and entirely transformed the world of amateur
space exploration. A group of Stanford students worked with a private
earthquake-sensing company to put up something called Quakesat, which
aimed to measure ultralow-frequency magnetic signals that have been
associated by some researchers with earthquakes. One team sponsored by
NASA sought to study the growth of E. coli bacteria. (True to
form, the NASA team reportedly spent $6 million on its first CubeSat
mission.) Other teams launched CubeSats to study and improve the CubeSat
design itself. The concept proved to be so simple and robust that a
website called Cubesatshop.com sprang up to help even the laziest team
of grad students build a cheap satellite of their very own: Just click
on each of the tabs (Communication Systems, Power Systems, Solar Panels,
Attitude Control Systems, Antenna Systems, Ground Stations, CubeSat
Cameras) to order the necessary parts.
Skybox headquarters and staff in Mountain View, California. Photo: Spencer Lowell
After 10 years of CubeSat
experimentation, it was left to Berkenstock, Fenwick, and Mann to
realize that the basic principles of DIY satellite construction might be
put to extremely profitable use. As the three men saw it, massive
advances in processing power and speed meant not only that they could
build a Sputnik-type satellite from cheap parts but that they could pack
it with computing ability, making it more powerful than Sputnik could
ever be. By extending the craft beyond the CubeSat’s 10-centimeter limit
to roughly a meter tall, they could expand the payload to include the
minimal package of fine optics able to capture commercial-grade images.
Sure, it would be significantly heavier: Whereas the smallest CubeSat
weighs 2.2 pounds, the Skybox satellite would weigh 220 pounds. But
Skybox’s “MiniFridgeSat” could use software-based systems to relay
imagery and hi-def video back to Earth, where large amounts of data
could be stored and processed and then distributed over the web.
When Mann and Berkenstock first brought up this idea with Fenwick—a
spectral guy with a shaved head who vibrates at a Pynchonesque level of
intensity—it turned out that he knew a lot more about satellites than
they did. One of his jobs before Stanford had been as a liaison in
Congress for the National Reconnaissance Office, the ultrasecret spy
agency that manages much of America’s most exotic space toys. A graduate
of the Air Force Academy and MIT, he took the job at the NRO after a
series of laser eye surgeries failed to qualify him as an Air Force
pilot. Even if Fenwick couldn’t talk about everything he knew, he could
help do the math and hook the team up with other smart people. More
important, he understood not just the value the US government might see
in Mann and Berkenstock’s idea but also the threat. When I ask him
whether his government experience came in handy in helping to design and
build Skybox, he pauses and raises a hand to his head. “Every day I
bite my tongue so I don’t go to jail,” he says, quite seriously.
Soon, in a Stanford management class, the three founders met the
woman who would become their fourth—Ching-Yu Hu, a former J.P. Morgan
analyst with experience in crunching big data sets—and together they
wrote up a business plan. The four enrolled in Formation of New
Ventures, a course taught by Mark Leslie, founder of Veritas Software.
Leslie was impressed enough to get in touch with Vinod Khosla, of Khosla
Ventures, who handed them off to Pierre Lamond, a partner of his at the
firm. Lamond had been given a $1 billion fund to invest, roughly a
quarter of which was supposed to go to “black swan” science projects—the
sorts of ideas that would probably fail spectacularly but might pay off
big, and at the very least would be fun to talk about at dinner
parties. And sure enough, Lamond, who served as an intelligence officer
in the French army before coming to California and ran half a dozen
Silicon Valley companies over the past four decades, gave Skybox its
first $3 million. With the money, what had been a space
company of young outsiders soon got a serious injection of Big Aerospace
expertise. Worried about future fund-raising, Lamond soon felt (to
Berkenstock’s huge disappointment) that Skybox needed an experienced
CEO. So he brought in Tom Ingersoll, a former McDonnell Douglas
executive who had left to start a ground-operations outsourcing firm,
Universal Space Network, that sold its services largely to NASA and the
Defense Department. Ingersoll, in turn, recruited a host of scientific
advisers who had spent their lives in the traditional aerospace industry
and government-sponsored big science programs.
Chief among these advisers was Joe Rothenberg, who ran NASA’s human
space exploration programs and the Goddard Space Flight Center.
Rothenberg’s leadership of the effort to fix the Hubble Space Telescope
had made him a legend in the small fraternity of men who ran America’s
space programs back in the days when they spent real money. When I first
met Rothenberg, it was hard to understand just what he was doing
there—despite the fact that he had no stake in the company, Rothenberg
was working at Skybox two full weeks a month, looking for bugs in its
systems. I soon realized that, to my surprise, he was there not to get
rich but to help revolutionize space exploration.
Today’s NASA, Rothenberg freely admits, has failed to build and
maintain the qualified workforce it needs, “and a large fraction of
them, quite frankly, are aging people who should be retired or in
different jobs.” Rothenberg looks at the young software engineers at
Skybox and sees that they think in a fundamentally different way about
how to solve problems, and he wants NASA to take note. “If you took
somebody my age, 50 to 70,” he says, “then took these guys and gave them
the same mission, you’d get two totally different spacecraft. And the
price difference between them would be 10 to one.” The possibility that
Skybox might serve as a model for a different way of doing things in
space is a big reason why Rothenberg is there.
The Washington pedigrees of old heads like Rothenberg and Ingersoll
might also come in handy. The disruptive threat that Skybox poses to the
space-based commercial imaging market might also annoy some powerful
people in the US government who could deny the company licenses, seize
its technology or bandwidth, and place restrictions on the frequency and
users of its service. Skybox has come as far as it has, Fenwick says,
because the right people in Washington can see the use of its service.
“If the wrong person gets pissed, they’ll shut us down in an instant,”
he admits.
On one recent trip to Washington, Ingersoll says, a high-ranking
government technologist warned him that “the antibodies are starting to
form.” On the same trip, a senior Defense Department official took him
aside and counseled, “You better be thinking about the role you want the
government to play in your company.” To avoid any military-industrial
squelching of its technology before launch, Skybox has loaded up on
advisers and board members with high-level defense connections,
including Jeff Harris, former president of Lockheed Martin Special
Programs, and former Air Force lieutenant general David Deptula, who
captained the Air Force’s use of drones and who may see similar utility
in a constellation of cheap satellites sending back timely video from
above Earth’s trouble spots. In the end, the government will likely
commandeer some of Skybox’s imaging capabilities under terms similar to
those imposed on other vendors. But Skybox feels confident that its
network will be so wide and so nimble that there will be plenty of
images—and data—left over for everyone else.
Mission control—someday. Photo: Spencer Lowell
Building SkySat-1 in the clean room. Photo: Spencer Lowell
The Skybox clean room, where the
company’s first satellite, SkySat-1, is being made, is a
Plexiglas-walled rectangle the size of a suburban living room; it’s also
a place where any precocious 10-year-old with a few years of
model-rocket experience might feel immediately at home. Fred Villagomez,
a technician in his midforties, sits at one of three stations at a
workbench examining the payload antenna feed through a pair of
protective goggles and making small adjustments with an X-Acto knife. To
the right of his work area is a bottle of acetone, of the kind that any
mildly advanced basement model-builder might use to remove excess globs
of glue. At the end of the bench are three surplus movie lights, which
he is using to test solar arrays.
To an outsider’s eye, there is something sweet and almost cartoonlike
about how Skybox is hand-producing homemade satellites with a hobby
knife, all in an effort to launch a multibillion-dollar business. Before
coming to Skybox, though, Villagomez worked at Space Systems Loral,
which produces high-end space behemoths on classified budgets. Kelly
Alwood, the satellite’s project manager, also worked at Loral after
graduating from Stanford, and before that at NASA’s Jet Propulsion Lab.
Her boss, Mike Trela, who oversees both the satellites and the launches,
worked at the space program lab at Johns Hopkins.
Ronny Votel, who looks like a blond USC frat boy minus the letter
jacket and who codes in a graphical environment called Simulink, wrote
much of the early part of the software that will help the satellite
track objects on the ground and manage large-angle maneuvers. He met
Berkenstock at Stanford and was the second person hired after Skybox
received its initial $3 million in funding. “My first month on the job, I
was vetting out telescope and optics packages,” he recalls. “I had no
training in optics. But we knew the math and how to order a book off of
Amazon and how to write code and do sanity checks. I think it was fear
that drove us to do a good job.” The ground software alone will have
200,000 lines of original code, of which approximately 180,000 are
already written.
That focus on software permeates Skybox’s business. Take the cameras:
Compared with most satellites, they are cheap, lo-res, unsophisticated.
“One of the image-processing guys once joked that the images from the
satellite are equivalent to those from a free cell phone that you would
have given away in Rwanda,” says Ollie Guinan, Skybox’s VP of ground
software. But by building homegrown algorithms to knit dozens of those
images together, Skybox can create “one super-high-quality image where
suddenly you can see things that you can’t see in any one of the
individual pictures.” That focus on off-board processing means less work
has to be done in the satellite itself, allowing it to be lighter and
cheaper. “Think about your iPhone,” Ingersoll explains to me during my
second visit. “There was a time you had a phone, a Palm, a PC, and also a
camera. Now the computing capability has improved to the point where it
is fast enough, with a low enough power, at a low enough price, that
you can integrate these functions into much smaller packages at a much
lower cost.”
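The article doesn't spell out Skybox's algorithms, but the simplest version of that trick is easy to illustrate: average many co-registered noisy frames so the sensor noise cancels out. The following Python sketch is a hypothetical toy, not Skybox's actual pipeline, which would also need sub-pixel registration of the frames before combining them:

import numpy as np

def stack_frames(frames):
    """Average a list of co-registered grayscale frames (2-D arrays).

    Averaging N frames cuts uncorrelated sensor noise by roughly sqrt(N),
    which is how many cheap, noisy shots can add up to one usable image.
    """
    stack = np.stack(frames, axis=0).astype(np.float64)
    return stack.mean(axis=0)

# Toy usage: 30 noisy exposures of the same 64x64 scene.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, size=(64, 64))          # stand-in for the ground truth
frames = [scene + rng.normal(0, 25, scene.shape)    # add per-frame sensor noise
          for _ in range(30)]
combined = stack_frames(frames)
print(np.abs(frames[0] - scene).mean())   # error in any single frame
print(np.abs(combined - scene).mean())    # several times smaller after stacking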
Sending Just Enough Space Into Space
To cover the whole Earth with imaging satellites, Skybox needs to break free from the design patterns that have defined commercial satellite construction to date. This chart shows the relative scale of SkySat-1, set against its high-end (and low-end) alternatives. —Sara Breselor
CUBESAT: Skybox’s founders were inspired by the CubeSat, a tiny DIY satellite design—buildable for less than $60K—that roughly 100 teams have launched.
SKYSAT-1: Essentially, Skybox scaled a CubeSat up to the size of a minifridge, packing it with computing. Total cost: under $50 million for a satellite that will last four years.
WORLDVIEW-2: Launched by DigitalGlobe in 2009, this satellite takes ultra-hi-res photos and will last for nearly eight years. The downside: It cost an estimated $400 million to build.
Illustrations by Remie Geoffroi
Guinan is a black-haired Irishman who grew up poor and spent nearly a
decade working in the Valley on visas with short-term expiry dates
before eventually landing a good job at Yahoo. When he fled for Skybox,
he took five of his best engineers with him, as well as a healthy
respect for the elegant and powerful architectures that can wring
information and intelligence from good enough hardware. The more
emphasis the design team placed on software, the smaller and cheaper the
hardware became—and the less power the satellite required, which helped
with the rest of the design, mainly by making it possible to carry a
high-enough-quality optics package at a ridiculously low weight.
Skybox also found ways of piggybacking on other people’s technology.
The image-reception system is built on top of a satellite TV broadcast
protocol, the same one that allows DirecTV signals to get through an
electrical storm or heavy rain. “They’ve put hundreds of millions of
dollars into building these systems and making them as perfect as they
can be,” Guinan points out. “We took advantage of that.” This means that
Skybox will be able to use a 6-and-a-half-foot antenna to reach a dish
the size of a dinner plate on the SkySat instead of the much more
expensive, 30-foot antenna that commercial satellite-image companies
typically require. Between now and launch day, the real
question is whether Skybox’s VCs will be able to fund the company long
enough to get SkySat-1 into space. Eight months after the satellite was
complete, the team is still waiting for its launch provider, the Russian
government, to deliver it to orbit. “The one piece of advice we got
from everybody who came in here was ‘Oh, don’t worry about the launch
vehicle,’” Berkenstock says with a wry look. After dallying with Elon
Musk’s SpaceX, the company decided to go with the far less expensive
Russian plan, which would launch SkySat-1 on a decommissioned Soviet
ICBM.
It was only after signing the agreement and paying part of the cost
of the berth that Skybox discovered the catch: The actual launch date
depends on both the Russian defense ministry and the office of President
Vladimir Putin signing off. That paperwork has stalled in the Russian
bureaucracy, and so the former Soviet ICBM has remained in its silo—and
the Russians have no intention of giving Skybox its money back. But in
May, the Russians finally approved the launch. The team is cautiously
optimistic about a September date, with a second satellite heading up
perhaps four months later.
For now, the would-be kings of space are forced to wait. One
afternoon, Guinan takes me upstairs to see where the Skybox team will
sit when the first satellite finally launches. “The NASA guys came
around and said, ‘You need more than a closet for an operations room,’”
he says, as he shows me around the half-finished setup, which looks like
something between a Monday Night Football broadcast booth and the floor of a call center.
As he shows me where the launch will be broadcast and where the racks
of servers will go, it’s obvious that his heart lies not in space but
here on Earth, where he will stitch together the images as they flood
in. In its own weird way, this vision of the future is just as inspiring
as sending men to the moon. Yes, Skybox is planning to put the
equivalent of cheap cell phone cameras into space, to beam the pictures
down via something that is more or less DirecTV, to use cheap eyeballs
to count cars or soybeans or whatever someone will pay to count. But the
data those cameras provide might save the Amazon basin or the global
coffee market—the uses are thrillingly infinite and unpredictable.
Yes, it takes astronauts to plant flags on the moon. But what the
Skybox team has built is effectively a new kind of mirror, reflecting
the entire planet in a continuous orbital data stream that will show us
to ourselves in new and useful ways.
Provided, of course, that they can get it off the ground.
David Samuels (dsamuels1@gmail.com) is a contributing editor at Harper’s and author of The Runner and Only Love Can Break Your Heart.
CREDITS Opening image: Corbis; courtesy of Skybox Imaging
When we first looked
at the report of the bigfoot genome, it was an odd mixture of things:
standard methods and reasonable looking data thrown in with unusual
approaches and data that should have raised warning flags for any
biologist. We just couldn't figure out the logic of why certain things
were done or the reasoning behind some of the conclusions the authors
reached. So, we spent some time working with the reported genome
sequences themselves and talked with the woman who helped put the
analysis together, Dr. Melba Ketchum. While it didn't answer all of our
questions, it gave us a clearer picture of how the work came to be.
The biggest clarification concerned what the team behind the results
considered their scientific reasoning, which helps explain how they ran
past warning signs that they were badly off track. It also hinted at
what motivated them to push the results into a publication they knew
would cause them grief.
Melba Ketchum and the bigfoot genome
The public face of the bigfoot genome has been Melba Ketchum, a Texas-based forensic scientist. It was Ketchum who first announced
that a genome was in the works, and she was the lead author of the
paper that eventually described it. That paper became the first
publication of the online journal De Novo; it's still the only one to appear there.
The paper itself is an odd mix of things. There's a variety of fairly
standard molecular techniques mixed in with a bit of folklore and a
link to a YouTube video that reportedly shows a sleeping Sasquatch. In
some ways, the conclusions of the paper are even odder than the video.
They suggest that bigfeet aren't actually an unidentified species of ape
as you might have assumed. Instead, the paper claims that bigfeet are
hybrids, the product of humans interbreeding with a still unknown
species of hominin.
As evidence, it presents two genomes that purportedly came from
bigfoot samples. The mitochondrial genome, a small loop of DNA that's
inherited exclusively from mothers, is human. The nuclear genome, which
they've only sequenced a small portion of, is a mix of human and other
sequences. Some are closely related, others quite distant.
But my initial analysis
suggested that the "genome sequence" was an artifact, the product of a
combination of contamination, degradation, and poor assembly methods.
And every other biologist I showed it to reached the same conclusion.
Ketchum couldn't disagree more. "We've done everything in our power to
make sure the paper was absolutely above-board and well done," she told
Ars. "I don't know what else we could have done short of spending
another few years working on the genome. But all we wanted to do was
prove they existed, and I think we did that."
How do you get one group of people who looks at the evidence and sees
contamination, while another decides "The data conclusively prove that
the Sasquatch exists"? To find out, we went through the paper's data
carefully, then talked to Ketchum to understand the reasoning behind the
work.
Why they think it was genuine
Fundamentally, the scientific problems with the work seem to go back
to the fact that some of the key steps—sample processing and
preparation—were done by forensic scientists. Forensic science is, as
the name implies, heavily focused on evidence, reproducibility, and
other qualities it shares with less applied sciences. But unlike
genetics, for example, forensic science is very goal-oriented. That
seems to be what caused the problems here.
Over the decades that DNA has been used as forensic evidence, people
in the field have come up with a variety of procedures that have been
validated repeatedly. By following those procedures, they know the
evidence they generate is likely to hold up in court. And, to an extent,
it seems like the people behind the bigfoot genome wanted it to hold up
in court.
Many of the samples they had were clumps of hair of various sizes.
Hair is a common item in forensic analysis, where people have to
identify whether the hair is human, whether it is a possible match for a
suspect's, etc. In this case, the team was able to determine that the
hair was not human. So far, so good.
In cases where the hair comes attached to its follicle, it's possible
to extract DNA from its cells. And that is exactly what the bigfoot
team did, using a standard forensic procedure that was meant to remove
any other DNA that the hair had picked up in the interim. If everything
worked as expected, the only DNA present should be from whatever
organism the fur originated from.
And, in Ketchum's view, that's exactly what happened. They worked
according to procedure, isolating DNA from the hair follicles and taking
precautions to rule out contamination by DNA from anyone who was
involved in the work. Because of this, Ketchum is confident that any DNA
that came from the samples once belonged to whatever creature deposited
the fur in the woods—no matter how confusing the results it produced
were. "The mito [mitochondrial DNA results] should have done it," she
argued. "It's non-human hair—it's clearly non-human hair—it was washed
and prepared forensically, and it gave a human mitochondrial DNA result.
That just doesn't happen."
Ketchum was completely adamant that contamination wasn't a
possibility. "We had two different forensics labs extract these samples,
and they all turned out non-contaminated, because forensics scientists
are experts in contamination. We see it regularly, we know how to deal
with mixtures, whether it's a mixture or a contaminated sample, and we
certainly know how to find it. And these samples were clean."
But note the key phrase a few paragraphs up: "if everything worked as
expected." Anyone who's done much biology (or presumably, much science
in general) knows that everything typically does not work as expected.
In fact, things go badly wrong for all sorts of reasons. Sometimes it's
obvious they went wrong, sometimes results look pretty reasonable but
fall apart on careful examination.
In this case, there was no need for careful examination; the results
the team got from the DNA were a mix of warning signs that things weren't
right (internally inconsistent information) and things that simply
didn't make any sense. But Ketchum believed so strongly in the rigor of
the forensic procedures that she went with the results regardless of the
problems. In fact, it seemed as if almost everything unusual about the
samples was interpreted as a sign that there was something special about
them.
Warning signs
Potential problems with the samples were apparent in what were likely
the first experiments done with the DNA isolated from them. These were
amplifications of specific human DNA sequences using a technique called
the polymerase chain reaction, or PCR. By using short DNA sequences that
match parts of the human genome, it's possible to start with a single
DNA molecule and create many copies of it, which makes it simple to
detect its presence. In this case, the PCR reactions targeted sequences
that are known to vary in length in the human population—a feature that
makes them useful for forensic identification.
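To get a feel for why PCR can detect even a single starting molecule, the idealized arithmetic is simple: each cycle doubles the number of copies of the target. A few lines of Python make the point (an idealized sketch; real reactions fall short of perfect doubling, especially with degraded DNA like the samples at issue here):

# Idealized PCR arithmetic: each cycle doubles the number of copies of the
# targeted sequence, so even one starting molecule becomes easy to detect.
copies = 1
for cycle in range(30):
    copies *= 2          # perfect doubling; real cycles fall a bit short
print(f"{copies:,} copies after 30 cycles")   # 1,073,741,824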
If the DNA was human and had not degraded much during its time in the
environment, then most of these reactions should produce a clear,
human-like signal. The same would be true if, as Ketchum concluded, the
samples contained DNA from a close relative of humans (remember, chimps'
DNA is over 95 percent identical to ours). If the animal were more
distantly related, you might expect some reactions to work and some to
fail, with the percentage of failures going up as the degree of
relatedness fell. In some cases, you might expect the reactions to
produce a PCR product that was the wrong size due to changes in DNA
content that occur during evolution.
But you can't necessarily expect the DNA to sit outdoors and remain
intact. DNA tends to break into fragments, with the size of the
fragments shrinking over time. Depending on how degraded the sample is,
you might see more or fewer reactions failing.
What they saw was a chaotic mix of things. As Ketchum herself put it,
"We would get these crazy different variants of sequence." Some
reactions produced the expected human-sized PCR products. Others
produced products with unexpected sizes. Still others produced the sorts
of things you'd expect to see if the PCR had failed entirely or there
was no DNA present. "We would get these things that were novel in
GenBank. We would get a lot of failure, and we'd get some that would
have regular human sequence," Ketchum said. "We could not account for
this, and it was repeatable."
All of which suggested that there was likely to be DNA present that
was only distantly related to humans; anything that was from a human or
close relative was probably seriously degraded. In fact, the team did an experiment that suggested this was exactly
what they were dealing with: they imaged the DNA using electron
microscopy. The images showed short fragments of DNA, some of it
single- (rather than double-) stranded, along with strands that paired
nicely for some stretches, diverged into single-stranded sections, and
then paired again with a completely separate molecule. This sort of pattern is what you might see
if there were some distantly related mammals present, where the
protein-coding sequences would match fairly well, but the intervening
sequences would probably be very different.
So all the initial data suggested that the DNA was badly preserved
and probably contaminated. That in turn suggests that whatever
techniques they used to get DNA from a single, uncontaminated source
just weren't sufficient for the samples they were working with. But
instead of reaching that conclusion, the bigfoot team had an
alternative: their technique worked perfectly fine. It was the sample
that was unusual.
The problem is that it simply couldn't be that unusual. The
idea is that there was some other primate that was still capable of
interbreeding with humans. In the cases where we know this happened
(semi-modern humans like Neanderthals and Denisovans), the DNA sequences
are so similar that it's quite hard to tell them apart. Here, the team
was seeing indications that human DNA was mixed with something that was
really quite distant—probably not even one of the great apes.
These were far from the last results that should have told them they were on the wrong track.
Looking suspiciously human
Nevertheless, the authors plowed on. And one of the first things they
found was that at least some of the DNA was human. This, as it turned
out, was the foundation for their conclusion that the DNA was from a
human-primate hybrid.
It's often overlooked that human cells actually have two genomes. One
lives in the chromosomes stored in the nucleus, and that's the one
we're typically concerned with. But a second resides in our
mitochondria, small compartments in the cell that provide most of the
cell's ATP. These are the remains of free-living bacteria that took up
symbiotic residence inside our cells well over a billion years ago;
they still have a small genome of their own (circular, like a
bacterium's) with a handful of essential genes on it.
There are a few things that make mitochondrial DNA effective for
tracking populations of humans and other species. Because this genome
doesn't have a full DNA repair machinery at hand, and because it can't
undergo recombination, it tends to pick up mutations far more rapidly
than the nuclear genome. That means that even closely related
populations are likely to have some differences in their mitochondrial
DNA. There are also hundreds of mitochondria in each cell, and each of
these may have dozens of copies of the genome. So it's relatively easy
to get samples, even from badly degraded and/or contaminated DNA like
that found in ancient bones.
So team bigfoot sequenced the mitochondrial genome of several of
their samples. And rather than a novel primate sequence that was
distantly related to humans, the sequences were human. Which is
what you might expect if the species is a hybrid as the authors
concluded. What you wouldn't expect is that the sequences would come
from multiple humans—from the wrong side of the planet.
All indications are that successful interbreeding between humans and
closely related groups like Neanderthals and Denisovans was relatively
rare. You'd expect that something that looks like a walking shag carpet
would be more distantly related, and that it would be much, much harder
to successfully interbreed. This makes the hybrids even rarer. Instead,
each sample tested produced a different mitochondrial DNA sequence,
which implies the interbreeding had to have taken place many, many
times. (And that the hybrids never bred with females of whatever the
primate in question was. And that said primate is, apparently, extinct,
since none of its mitochondrial DNA showed up.)
Who were these human females that ostensibly did the interbreeding?
If you wanted to make a scientifically plausible guess, you'd bet on the
mitochondrial DNA lineages that originate in Asia (most likely those
branches that expanded into the Americas). Those are the only humans
likely to have been in North America until a few hundred years ago. And
that's exactly what they didn't find. Instead, most of the sequences originated in the human populations of Europe, with an African sample or two.
And at least one of them was recent—Ketchum described one of the
mitochondrial sequences in detail, saying, "about 13,000 years ago is
when that haplotype came into existence. It was in Spain, basically,
where it originated. So the hybridization could not have occurred before
that haplotype came into existence." In her view, that put an upper
limit on when these sequences made it to North America. "It couldn't
have been longer than 13,000 years ago," she told Ars.
On the face of it, there's simply no way to make sense of this—the
European and African DNA, the recent time frame for its arrival, the
fact that there must have been so many interbreedings.... The obvious
interpretation is that the samples were all from humans or contaminated
with human DNA, which nicely explains the diversity and modernity of the
sequences.
But remember, to Ketchum, that possibility had been ruled out. In the
absence of the obvious, her team went with a far less obvious
suggestion: sometime during the last glacial period, a diverse group of
Europeans and Africans got together and wandered across the vast empty
spaces of the Greenland ice sheet and found themselves in North America.
"Several of the Smithsonian scientists even wrote a book about it,
where they've gone below the Clovis layer and found artifacts that they
feel came from [an] area in France," she said. But she wasn't committed
to that idea and later suggested that the interbreeding might have taken
place in Europe... after which the Sasquatch left to cross the Bering
land bridge before the Ice Age ended. "It's feasible they could have
crossed the world, basically," she said. "They're very fast."
Ultimately, though, Ketchum indicated these are just technical
details. She wasn't especially interested in sorting them out. "We don't
know how they got here, we just know they did."
A problem of technique
Most of the problems so far weren't really experimental ones; rather,
they were problems with interpretation. It's only when the team went
after sequences from the genome that things got a bit strange. A few of
their samples appeared to have sufficient DNA to send them for
sequencing on one of the current high-throughput sequencing platforms.
The quality score assigned to the sequencing runs was good, meaning that
they had lots of DNA sequence data to assemble into a genome (although,
oddly, the team interpreted this to mean that the sample came from a
single individual, which it does not).
The challenge is that the high-throughput machines typically produce
short sequences that are about 100 bases long. Even the smallest human
chromosome is over 40 million bases long. There are programs that are
able to recognize when two of these 100 base-long fragments partly
overlap and combine their sequences to create a longer sequence (say 150
bases). By searching for further partial overlaps, the programs can
gradually build up longer and longer stretches, sometimes ranging into
the millions of base pairs. Although this software will still leave gaps
where sequences don't exist or show up at multiple places in the
genome, it's still the standard way of assembling genomes from short,
100-base-long reads.
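To make the overlap idea concrete, here is a deliberately naive toy assembler in Python. It uses greedy overlap merging, which is not what production assemblers actually do at scale (they also have to handle sequencing errors, repeats, and billions of reads), but it captures the basic logic described above:

def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that is also a prefix of b."""
    for i in range(len(a) - min_len + 1):
        if b.startswith(a[i:]):
            return len(a) - i
    return 0

def greedy_assemble(reads):
    """Repeatedly merge the pair of reads with the longest overlap."""
    reads = list(reads)
    while len(reads) > 1:
        best_len, best_a, best_b = 0, None, None
        for a in reads:
            for b in reads:
                if a is not b:
                    olen = overlap(a, b)
                    if olen > best_len:
                        best_len, best_a, best_b = olen, a, b
        if best_len == 0:    # no overlaps left; disjoint contigs remain
            break
        reads.remove(best_a)
        reads.remove(best_b)
        reads.append(best_a + best_b[best_len:])
    return reads

# Three overlapping "reads" reassemble into one longer sequence.
print(greedy_assemble(["ATTAGACCTG", "CCTGCCGGAA", "GCCGGAATAC"]))
# -> ['ATTAGACCTGCCGGAATAC']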
For some unfathomable reason, team bigfoot didn't use that standard
approach. Instead, they took a single human chromosome and got some
software to line up as much of their sequence data as it could against it.
There are a number of serious problems with this approach. You could
have an entirely different genome present in the sequences, and the
software would ignore most of it. Most of the gene coding regions are
highly conserved among mammals, so they'd line up nicely against the
human chromosome—in fact, they might be difficult to distinguish from
it. But the entire rest of the genome would be ignored by the software.
By taking this approach, the authors pretty much guaranteed they'd get
something out that looked a lot like a human genome.
The other problem here is that the software will typically treat the
human chromosomal sequence as a target that it attempts to recreate. If
it can't find a good match, it will stick the best match available where
it's needed. Sometimes, the match will be fairly good. Other times, the
sequence will be barely related to the template it's supposed to match.
Even given all these advantages, the software still couldn't assemble
an entire chromosome. Instead, it ended up matching sequences to three
different stretches of the chromosome, each a few hundred thousand base
pairs long. Remember, the human genome is over three billion
base pairs total. This only represents a tiny fraction of it. Given that
the quality score provided for the DNA sequencing run was high, this
tells us one of two things: either the software was woefully incapable
of assembling a genome, even when given a template; or there was very
little human DNA there in the first place. As we'll see, it might be a
little bit of both.
A hypothetical hybrid
At this point, it's worth stepping back to try to figure out what it
would look like if the authors' ideas were correct, and some humans
interbred with an unidentified hominin species to produce what are now
bigfeet. There are two groups that humans are known to have interbred
with: Neanderthals and Denisovans. But, obviously, anything that would
have given us a bigfoot must have been quite different from the
Neanderthals and Denisovans, which largely looked human. So, we can
probably assume that it had diverged from our lineage for longer, but
not as long as chimps.
What would the genome of such a hominin look like? Well, for
Neanderthals and Denisovans, the genomes mostly look human. If there's a
difference between humans and chimps, in most cases, these other groups
have the human sequence. Hominin X's genome would be more distantly
related. But the chimp genome puts a very strict limit on how different
it could be. In terms of large-scale structure, the chimp and human are
almost identical; there are only six locations with major structural
differences between the two, involving a total of 11
breakpoints. Unless you happen to be looking at one of those, you'd
typically see the same genes in the same order. None of the breakpoints
happens to be on Chromosome 11, which is what the authors were looking
at, so this is a non-issue.
Smaller-scale insertions and deletions are more common, though still infrequent. Even when you take them into account, the human-chimp sequence identity is over 95 percent.
If you only focus on the areas of the genome where things line up
without major rearrangements, then the identity is 99 percent. So any
hominin that we can interbreed with would have a genome that is almost
certainly in the area of 97-98 percent identical to our own. Sequences
that lined up would be even higher than that.
The first generation of hybrids would have a 50/50 split between
these two nearly identical genomes, after which they'd start randomly
assorting. Some areas would undoubtedly be favored or disfavored by
various forms of natural selection. But about 90 percent of the human
genome doesn't seem to be under any selective pressure at all, and most
of the remainder of the genome wouldn't be under selective pressure
simply because it's identical in the two species. As a result, all but
one or two percent of the genome would probably be inherited randomly
from one species or the other.
Of course, after the first generation, the two genomes would start
undergoing recombination, scrambling them at a finer scale. The
probability of recombination roughly scales with the length of DNA you
have. The basic measure of recombination, the centimorgan, represents a
one percent probability of a recombination in each
generation. In humans, a centimorgan is about a million base pairs. So,
if you had 50 million base pairs of DNA, then you'd have even odds that a
recombination would take place every generation. In humans, the
generation time averages out to be about 29 years; in chimps, it's 25. We'll assume bigfeet are in the neighborhood of 27 years per generation.
If bigfeet got started more recently than 13,000 years ago (based on
the Spanish mitochondrial DNA, as mentioned above), that means there
have been approximately 481 generations since. In half of these, there
would be a recombination within our 50 million base pairs, meaning 241
recombinations. That means, on average, we'd see a recombination every
200,000 base pairs or so.
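For anyone who wants to check the arithmetic, here it is as a few lines of Python, using only the assumptions stated above (hybridization no earlier than 13,000 years ago, a 27-year generation time, and one centimorgan per million base pairs):

# The back-of-envelope recombination estimate, step by step.
years = 13_000
generation_time = 27
generations = years / generation_time            # about 481

region_bp = 50_000_000                           # a 50-million-base-pair stretch
cm = region_bp / 1_000_000                       # 50 cM: even odds per generation
recombinations = generations * cm / 100          # about 241

print(round(generations))                        # 481
print(round(recombinations))                     # 241
print(round(region_bp / recombinations))         # roughly one event per ~208,000 bp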
With that, we know what the hybrid genome should look like: stretches of
DNA, over 100,000 bases long, that are human, alternating with equally
long stretches of something that looks almost human but not quite. In
fact, the identity between the two sequences should be strong enough
that it would be difficult to say where one ended and the next started
with any greater resolution than about 1,000 base pairs. And because
there were apparently a number of distinct interbreeding events (again,
based on the mitochondrial DNA), no two bigfeet are likely to have
the same combinations of human and nonhuman stretches.
You call that a genome?
This is, of course, nothing at all like what the genome that's been
published looks like. The paper itself indicates that regions of clearly
human DNA are typically only a few hundred base pairs long.
And interspersed with those are equally short pieces of DNA that appear
to look little to nothing like the stretch of the human genome that
they're supposed to be aligned to. If the genome is viewed as a test of
the hybrid hypothesis, then the hypothesis fails. When asked about this,
Ketchum just returned to the mitochondrial data. "I know there are
ways, like you said, to figure out the nuclear age of things, but the
bottom line is it couldn't have been longer than 13,000 years ago."
What actually is this? To find out, I started with the Ensembl genome
website, which provides a convenient view of a variety of animal
genomes. I then selected a large region (about 10,000 bases) from the
purported bigfoot genome and used software called BLAST to align it
against the human genome. The best match was invariably chromosome 11,
which made sense, because that's what the authors used to build their
sequence. And as described in the paper, the sequence was a mix of
perfect matches to the human sequence along with intervening sequences
that the software indicated didn't match.
I then selected each of the intervening sequences that were over 100
base-pairs-long and used the BLAST software hosted by the National
Institutes of Health at NCBI. This would test the sequence against any
genome that we've tried to sequence, even if the project wasn't
complete.
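I ran those searches by hand through NCBI's web pages, but the same check can be scripted. Here is a rough sketch using Biopython's wrapper around NCBI's BLAST servers; the query string is a placeholder, not an actual sequence from the paper:

from Bio.Blast import NCBIWWW, NCBIXML

# Placeholder: paste one of the >100-base intervening sequences here.
query = "REPLACE_WITH_SEQUENCE"

# blastn against "nt", NCBI's full nucleotide collection, which includes
# every genome project, complete or not.
handle = NCBIWWW.qblast("blastn", "nt", query)
record = NCBIXML.read(handle)

if not record.alignments:
    print("No match to anything in the databases.")
for alignment in record.alignments[:5]:
    hsp = alignment.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    print(f"{identity:.0f}% identical: {alignment.title[:60]}")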
If the hybrid model were correct, and these sequences were derived
from another hominin, then they should look largely human. But in the
first 10,000 bases, most of them failed to match anything in the databases,
even though the search's settings allowed some mismatch. Other
sequences came from different locations in the human genome; another
matched the giant panda genome (and presumably represents contamination
by a bear). Similar things happened in the next 10,000 bases, with a mix of
human sequences, one that matched to mice and rats, and then a handful
of sequences with no match to anything whatsoever. And so it went for
another 24,000 bases before I gave up.
Ketchum's team had done the same and found similar results. "We had
one weird sequence that we blasted in the genome BLAST, and we got
closest to polar bear of all things," she told Ars. "And then we'd turn
around and blast [unclear] and get 70 percent rhesus monkey with a bunch
of SNPs [single base changes] out. Just weird, weird stuff."
Clearly, the DNA that was sequenced came from a mix of sources, some
human, some from other animals you might find in the North American
woodlands. (Recently, a researcher who was given a sample of the DNA by
Ketchum announced
that it was a mix of "opossum and other species," consistent with this
analysis.) Clearly, there was human DNA present, but it was either
degraded or present in relatively low amounts.
When asked to align this sequence to a human chromosome, the software
did the best that it could by picking out the human sequences when and
where they were available. When they weren't, it filled the gaps with
whatever it could—sometimes human, sometimes not.
A question of motivation
In science, it's usually best to start with the evidence. But when
the vast majority of the evidence points to one conclusion, and someone
insists on reaching a different one, then it can be worth stepping back
and trying to understand what might motivate them to do so. In Ketchum's
case, the motivations weren't hard to discern; she offered them up
without being prompted, even when the discussion was focused on the
science.
This was clearest when Ketchum suggested that North America's bigfeet
could have European mitochondrial DNA because interbreeding took place
there, after which the hybrids crossed Siberia and into Alaska. As noted
above, this seemed possible to her because "They're very fast." What
wasn't noted above is that she followed that up with, "I've seen them,
that's why I can say that." This was followed by a pretty detailed
description of how this came about.
There's groups of people called habituators. They have
them living around their property. And they interact with them, but
they're highly secretive because one, people think they're crazy when
they say they interact with bigfoot—and I prefer Sasquatch by the way,
but bigfoot's easier to say. Finally a group of them came by and said
"you want to see 'em? we'll take you and show you." And they did. The
clan I was around was used to people and they were just very, very easy
to be around—they're real curious about us, and they'd come and look at
us, and we'd look at them.
With that experience and others that followed (several of which she
described), Ketchum says she switched from skepticism to a desire to
protect what she had seen. Several groups, including Spike TV, have
offered rewards for anyone who could shoot a bigfoot, a prospect that
genuinely seems to horrify Ketchum. "They are a type of human and we
want them protected," Ketchum told Ars. "That's been the whole point of
this once we realized what we had. And I've known what we had for
several years now. Within the first year, we knew that we had them, it
was just a matter of accumulating enough proof to satisfy science."
In terms of knowing what she had, Ketchum returned to the forensic
evidence, which showed human mitochondrial DNA in a hair sample that had
been identified as non-human. "One thing I'm sure of is we've proven
they exist. We should have been able to do it with just human mito with
non-human hair, thoroughly washed and done by two labs." At a different
point, she said, "All we wanted to do with the paper was to prove there
was something novel out there that was basically Homo, and the mitochondrial DNA placed it clearly in Homo."
With that clearly established, all the apparently contradictory
results simply become points of confusion. When asked about the
discrepancy between the young mitochondrial age and the nuclear genome,
Ketchum just said it was a mystery. Referring to the apparent age
difference, she said, "It would look that way but it's not, that's the
problem. I don't know how to rectify that other than they are what they
are, and the data is what it is." Later, she suggested that the
creatures might simply experience an extremely high rate of mutation.
Ultimately, she saw the collection of contradictions as a sign of her
own sincerity. "I'm not sure why they're like they are. I don't think
anybody is, and I think that gives people a real problem. But we can't
change how the results came out. And I'm not going to lie about them,
and I'm not going to try to make them fit a scientific model when it
doesn't."
After an hour-long phone conversation, there was no question that
Ketchum sincerely believes bigfoot exists and that her data
conclusively proves it's worthy of protection. But, at the
same time, it's almost certainly this same sincerity that drove her to
look past the clear problems with her proof.