Saturday, March 1, 2014

MORE FOR THE GMO SCRAPBOOK: RUSSIA’S STATE DUMA CONSIDERS A TOTAL BAN

While the Ukraine is for the moment locked into the grip of the banksters (who’ve already, it seems, begun to plunder the country [we'll have more on that tomorrow]), there are other moves afoot, this time concerning Russia and GMOs. Many of you sent me the following article, and it’s worth posting here and commenting on, because it is symptomatic, I think, of larger cultural and economic warfare issues that might also be taking place. According to the following article from RT, Russia’s State Duma is considering a moratorium on the production of GMOs in that country:
Duma seeks moratorium on GMO production in Russia
There are four paragraphs in this important story I want to draw your attention to, because they seem to corroborate something I have been suspecting for a long time might be in the works:
“The State Duma’s Agriculture Committee has decided that Russia needs a ban on the registration and turnover of genetically modified organisms, local mass media reported on Wednesday. It is suggested that until specialists develop a working system of control over the GMO effects on humans and the natural environment, the government should impose a moratorium on breeding and growth of genetically modified plants, animals and microorganisms.
“Russia’s Agriculture Ministry is supporting the parliament’s position. Deputy Agriculture Minister Aleksandr Petrikov has told the MPs that the reasons behind the conservative stance on the issue are a lack of research into the various effects of GMO cultures, the absence of a working monitoring system and the fact that spreading of GMO crops could harm the biodiversity in whole regions.
“He also noted that hasty introduction of GMO cultures carries economic risks – Russia cannot compete with foreign producers when it comes to costs, but still can position itself as a producer of high-quality, GMO-free agricultural goods. Thus, any use of GMO cultures would harm the national export potential, Petrikov said.
“The ministry supports a complete ban on growing and using genetically altered organisms in the country, with the exception of those used in scientific research.”
Note what this article is really saying:
  1. It implies there is no system of adequate “control” over the effects of GMOs on humans, a tacit admission that, as far as Russia’s Ministry of Agriculture is concerned, the growing body of evidence of GMO effects on humans has reached the stage that it has become a concern for that nation and its substantial agriculture industry;
  2. It implies further that Russia’s Ministry of Agriculture has tacitly accepted the argument of many concerned voices in the West and elsewhere that there never was any adequate inter-generational testing of the effects of GMOs on human health, on overall agricultural production itself, or on the greater environment (a matter which I blogged about recently on this site. Regular readers may recall that recent studies indicated a growing number of farmers in North America are turning away from GMOs as they inhibit productivity over the long term and decrease profit margins). These admissions are hugely significant, since they virtually echo the findings and concerns of the Western anti-GMO movement.
  3. The article implies – in the fourth paragraph cited above – that Russia intends to completely ban the planting of GMOs (and hence their production in and importation to that country), and to conduct scientific tests on them. This may be construed, in a certain sense, as an open declaration of agro-economic war on the agribusiness giants of the West and on America in particular;
  4. In the third paragraph cited above, the article clearly indicates Russia’s intention to “position itself as a producer of high-quality, GMO-free agricultural goods. Thus, any use of GMO cultures would harm the national export potential.” (Emphasis added.)
It is this last point which I find incredibly significant and intriguing, for regular readers here, and followers of my little video commentaries (the News and Views from the Nefarium), will recall that I have been arguing for some time that one area in which we can expect the BRICSA nations eventually to “push back” against growing Euro-American economic dominance is precisely to enter the agribusiness market as major exporters of natural (or so-called “heirloom”) seeds. What this statement thus presages is that such considerations are being carefully weighed in Moscow, and, if the rest of the article is any indicator, Russia also intends to put its considerable scientific talents to a careful study of the effects of GMOs on the environment, on the long-term productivity of agriculture, and on human health.
In short, in my opinion we are already looking at more major pushback, and it is only going to increase. The next step? We can expect Russia or other concerned nations to begin to organize and host conferences on the whole GMO issue, and this will play very well in Europe and in other regions and nations where GMOs – and the heavy-handed tactics of American agribusiness giants – have created growing opposition and concerns over the issue.

Essay: Anatomy of the Deep State

The U.S. Capitol is seen in Washington, Monday, June 17, 2013. (AP Photo/J. Scott Applewhite)
Rome lived upon its principal till ruin stared it in the face. Industry is the only true source of wealth, and there was no industry in Rome. By day the Ostia road was crowded with carts and muleteers, carrying to the great city the silks and spices of the East, the marble of Asia Minor, the timber of the Atlas, the grain of Africa and Egypt; and the carts brought out nothing but loads of dung. That was their return cargo.
The Martyrdom of Man by Winwood Reade (1871)

There is the visible government situated around the Mall in Washington, and then there is another, more shadowy, more indefinable government that is not explained in Civics 101 or observable to tourists at the White House or the Capitol. The former is traditional Washington partisan politics: the tip of the iceberg that a public watching C-SPAN sees daily and which is theoretically controllable via elections. The subsurface part of the iceberg I shall call the Deep State, which operates according to its own compass heading regardless of who is formally in power. [1]
During the last five years, the news media has been flooded with pundits decrying the broken politics of Washington. The conventional wisdom has it that partisan gridlock and dysfunction have become the new normal. That is certainly the case, and I have been among the harshest critics of this development. But it is also imperative to acknowledge the limits of this critique as it applies to the American governmental system. On one level, the critique is self-evident: In the domain that the public can see, Congress is hopelessly deadlocked in the worst manner since the 1850s, the violently rancorous decade preceding the Civil War.
As I wrote in The Party is Over, the present objective of congressional Republicans is to render the executive branch powerless, at least until a Republican president is elected (a goal that voter suppression laws in GOP-controlled states are clearly intended to accomplish). President Obama cannot enact his domestic policies and budgets: Because of incessant GOP filibustering, not only could he not fill the large number of vacancies in the federal judiciary, he could not even get his most innocuous presidential appointees into office. Democrats controlling the Senate have responded by weakening the filibuster of nominations, but Republicans are sure to react with other parliamentary delaying tactics. This strategy amounts to congressional nullification of executive branch powers by a party that controls a majority in only one house of Congress. Despite this apparent impotence, President Obama can liquidate American citizens without due process, detain prisoners indefinitely without charge, conduct dragnet surveillance on the American people without judicial warrant and engage in unprecedented — at least since the McCarthy era — witch hunts against federal employees (the so-called “Insider Threat Program”). Within the United States, this power is characterized by massive displays of intimidating force by militarized federal, state and local law enforcement. Abroad, President Obama can start wars at will and engage in virtually any other activity whatsoever without so much as a by-your-leave from Congress, such as arranging the forced landing of a plane carrying a sovereign head of state over foreign territory. Despite the habitual cant of congressional Republicans about executive overreach by Obama, the would-be dictator, we have until recently heard very little from them about these actions — with the minor exception of comments from gadfly Senator Rand Paul of Kentucky. Democrats, save a few mavericks such as Ron Wyden of Oregon, are not unduly troubled, either — even to the extent of permitting seemingly perjured congressional testimony under oath by executive branch officials on the subject of illegal surveillance.
These are not isolated instances of a contradiction; they have been so pervasive that they tend to be disregarded as background noise. During the time in 2011 when political warfare over the debt ceiling was beginning to paralyze the business of governance in Washington, the United States government somehow summoned the resources to overthrow Muammar Ghaddafi’s regime in Libya, and, when the instability created by that coup spilled over into Mali, provide overt and covert assistance to French intervention there. At a time when there was heated debate about continuing meat inspections and civilian air traffic control because of the budget crisis, our government was somehow able to commit $115 million to keeping a civil war going in Syria and to pay at least £100m to the United Kingdom’s Government Communications Headquarters to buy influence over and access to that country’s intelligence. Since 2007, two bridges carrying interstate highways have collapsed due to inadequate maintenance of infrastructure, one killing 13 people. During that same period of time, the government spent $1.7 billion constructing a building in Utah that is the size of 17 football fields. This mammoth structure is intended to allow the National Security Agency to store a yottabyte of information, the largest numerical designator computer scientists have coined. A yottabyte is equal to 500 quintillion pages of text. They need that much storage to archive every single trace of your electronic life.
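A quick back-of-the-envelope check of that equivalence (my own arithmetic, not the original reporting): one yottabyte is 10^24 bytes, and 500 quintillion is 5 x 10^20, so the claim implies about 2,000 bytes per page, a plausible size for a page of plain text.

```python
# Sanity check of the yottabyte-to-pages equivalence (my arithmetic, not the essay's).
YOTTABYTE_BYTES = 10**24        # one yottabyte, SI definition
PAGES = 500 * 10**18            # "500 quintillion" pages (short-scale quintillion = 10^18)

print(YOTTABYTE_BYTES / PAGES)  # 2000.0 bytes per page, roughly one page of plain text
```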
Yes, there is another government concealed behind the one that is visible at either end of Pennsylvania Avenue, a hybrid entity of public and private institutions ruling the country according to consistent patterns in season and out, connected to, but only intermittently controlled by, the visible state whose leaders we choose. My analysis of this phenomenon is not an exposé of a secret, conspiratorial cabal; the state within a state is hiding mostly in plain sight, and its operators mainly act in the light of day. Nor can this other government be accurately termed an “establishment.” All complex societies have an establishment, a social network committed to its own enrichment and perpetuation. In terms of its scope, financial resources and sheer global reach, the American hybrid state, the Deep State, is in a class by itself. That said, it is neither omniscient nor invincible. The institution is not so much sinister (although it has highly sinister aspects) as it is relentlessly well entrenched. Far from being invincible, its failures, such as those in Iraq, Afghanistan and Libya, are routine enough that it is only the Deep State’s protectiveness towards its higher-ranking personnel that allows them to escape the consequences of their frequent ineptitude. [2]
How did I come to write an analysis of the Deep State, and why am I equipped to write it? As a congressional staff member for 28 years specializing in national security and possessing a top secret security clearance, I was at least on the fringes of the world I am describing, if neither totally in it by virtue of full membership nor of it by psychological disposition. But, like virtually every employed person, I became, to some extent, assimilated into the culture of the institution I worked for, and only by slow degrees, starting before the invasion of Iraq, did I begin fundamentally to question the reasons of state that motivate the people who are, to quote George W. Bush, “the deciders.”
Cultural assimilation is partly a matter of what psychologist Irving L. Janis called “groupthink,” the chameleon-like ability of people to adopt the views of their superiors and peers. This syndrome is endemic to Washington: The town is characterized by sudden fads, be it negotiating biennial budgeting, making grand bargains or invading countries. Then, after a while, all the town’s cool kids drop those ideas as if they were radioactive. As in the military, everybody has to get on board with the mission, and questioning it is not a career-enhancing move. The universe of people who will critically examine the goings-on at the institutions they work for is always going to be a small one. As Upton Sinclair said, “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”

A more elusive aspect of cultural assimilation is the sheer dead weight of the ordinariness of it all once you have planted yourself in your office chair for the 10,000th time. Government life is typically not some vignette from an Allen Drury novel about intrigue under the Capitol dome. Sitting and staring at the clock on the off-white office wall when it’s 11:00 in the evening and you are vowing never, ever to eat another piece of takeout pizza in your life is not an experience that summons the higher literary instincts of a would-be memoirist. After a while, a functionary of the state begins to hear things that, in another context, would be quite remarkable, or at least noteworthy, and yet that simply bounce off one’s consciousness like pebbles off a steel plate: “You mean the number of terrorist groups we are fighting is classified?” No wonder so few people are whistle-blowers, quite apart from the vicious retaliation whistle-blowing often provokes: Unless one is blessed with imagination and a fine sense of irony, growing immune to the curiousness of one’s surroundings is easy. To paraphrase the inimitable Donald Rumsfeld, I didn’t know all that I knew, at least until I had had a couple of years away from the government to reflect upon it.
The Deep State does not consist of the entire government. It is a hybrid of national security and law enforcement agencies: the Department of Defense, the Department of State, the Department of Homeland Security, the Central Intelligence Agency and the Justice Department. I also include the Department of the Treasury because of its jurisdiction over financial flows, its enforcement of international sanctions and its organic symbiosis with Wall Street. All these agencies are coordinated by the Executive Office of the President via the National Security Council. Certain key areas of the judiciary belong to the Deep State, such as the Foreign Intelligence Surveillance Court, whose actions are mysterious even to most members of Congress. Also included are a handful of vital federal trial courts, such as the Eastern District of Virginia and the Southern District of New York, where sensitive proceedings in national security cases are conducted. The final government component (and possibly last in precedence among the formal branches of government established by the Constitution) is a kind of rump Congress consisting of the congressional leadership and some (but not all) of the members of the defense and intelligence committees. The rest of Congress, normally so fractious and partisan, is mostly only intermittently aware of the Deep State and when required usually submits to a few well-chosen words from the State’s emissaries.
I saw this submissiveness on many occasions. One memorable incident was passage of the FISA Amendments Act of 2008. This legislation retroactively legalized the Bush administration’s illegal and unconstitutional surveillance first revealed by The New York Times in 2005 and indemnified the telecommunications companies for their cooperation in these acts. The bill passed easily: All that was required was the invocation of the word “terrorism” and most members of Congress responded like iron filings obeying a magnet. One who responded in that fashion was Senator Barack Obama, soon to be crowned as the presidential nominee at the Democratic National Convention in Denver. He had already won the most delegates by campaigning to the left of his main opponent, Hillary Clinton, on the excesses of the global war on terror and the erosion of constitutional liberties.
As the indemnification vote showed, the Deep State does not consist only of government agencies. What is euphemistically called “private enterprise” is an integral part of its operations. In a special series in The Washington Post called “Top Secret America,” Dana Priest and William K. Arkin described the scope of the privatized Deep State and the degree to which it has metastasized since the September 11 attacks. There are now 854,000 contract personnel with top-secret clearances — a number greater than that of top-secret-cleared civilian employees of the government. While they work throughout the country and the world, their heavy concentration in and around the Washington suburbs is unmistakable: Since 9/11, 33 facilities for top-secret intelligence have been built or are under construction. Combined, they occupy the floor space of almost three Pentagons — about 17 million square feet. Seventy percent of the intelligence community’s budget goes to paying contracts. And the membrane between government and industry is highly permeable: The Director of National Intelligence, James R. Clapper, is a former executive of Booz Allen Hamilton, one of the government’s largest intelligence contractors. His predecessor as director, Admiral Mike McConnell, is the current vice chairman of the same company; Booz Allen is 99 percent dependent on government business. These contractors now set the political and social tone of Washington, just as they are increasingly setting the direction of the country, but they are doing it quietly, their doings unrecorded in the Congressional Record or the Federal Register, and are rarely subject to congressional hearings.
Washington is the most important node of the Deep State that has taken over America, but it is not the only one. Invisible threads of money and ambition connect the town to other nodes. One is Wall Street, which supplies the cash that keeps the political machine quiescent and operating as a diversionary marionette theater. Should the politicians forget their lines and threaten the status quo, Wall Street floods the town with cash and lawyers to help the hired hands remember their own best interests. The executives of the financial giants even have de facto criminal immunity. On March 6, 2013, testifying before the Senate Judiciary Committee, Attorney General Eric Holder stated the following: “I am concerned that the size of some of these institutions becomes so large that it does become difficult for us to prosecute them when we are hit with indications that if you do prosecute, if you do bring a criminal charge, it will have a negative impact on the national economy, perhaps even the world economy.” This, from the chief law enforcement officer of a justice system that has practically abolished the constitutional right to trial for poorer defendants charged with certain crimes. It is not too much to say that Wall Street may be the ultimate owner of the Deep State and its strategies, if for no other reason than that it has the money to reward government operatives with a second career that is lucrative beyond the dreams of avarice — certainly beyond the dreams of a salaried government employee. [3]

The corridor between Manhattan and Washington is a well-trodden highway for the personalities we have all gotten to know in the period since the massive deregulation of Wall Street: Robert Rubin, Lawrence Summers, Henry Paulson, Timothy Geithner and many others. Not all the traffic involves persons connected with the purely financial operations of the government: In 2013, General David Petraeus joined KKR (formerly Kohlberg Kravis Roberts) of 9 West 57th Street, New York, a private equity firm with $62.3 billion in assets. KKR specializes in management buyouts and leveraged finance. General Petraeus’ expertise in these areas is unclear. His ability to peddle influence, however, is a known and valued commodity. Unlike Cincinnatus, the military commanders of the Deep State do not take up the plow once they lay down the sword. Petraeus also obtained a sinecure as a non-resident senior fellow at the Belfer Center for Science and International Affairs at Harvard. The Ivy League is, of course, the preferred bleaching tub and charm school of the American oligarchy. [4]
Petraeus and most of the avatars of the Deep State — the White House advisers who urged Obama not to impose compensation limits on Wall Street CEOs, the contractor-connected think tank experts who besought us to “stay the course” in Iraq, the economic gurus who perpetually demonstrate that globalization and deregulation are a blessing that makes us all better off in the long run — are careful to pretend that they have no ideology. Their preferred pose is that of the politically neutral technocrat offering well considered advice based on profound expertise. That is nonsense. They are deeply dyed in the hue of the official ideology of the governing class, an ideology that is neither specifically Democrat nor Republican. Domestically, whatever they might privately believe about essentially diversionary social issues such as abortion or gay marriage, they almost invariably believe in the “Washington Consensus”: financialization, outsourcing, privatization, deregulation and the commodifying of labor. Internationally, they espouse 21st-century “American Exceptionalism”: the right and duty of the United States to meddle in every region of the world with coercive diplomacy and boots on the ground and to ignore painfully won international norms of civilized behavior. To paraphrase what Sir John Harrington said more than 400 years ago about treason, now that the ideology of the Deep State has prospered, none dare call it ideology. [5] That is why describing torture with the word “torture” on broadcast television is treated less as political heresy than as an inexcusable lapse of Washington etiquette: Like smoking a cigarette on camera, these days it is simply “not done.”
After Edward Snowden’s revelations about the extent and depth of surveillance by the National Security Agency, it has become publicly evident that Silicon Valley is a vital node of the Deep State as well. Unlike military and intelligence contractors, Silicon Valley overwhelmingly sells to the private market, but its business is so important to the government that a strange relationship has emerged. While the government could simply dragoon the high technology companies to do the NSA’s bidding, it would prefer cooperation with so important an engine of the nation’s economy, perhaps with an implied quid pro quo. Perhaps this explains the extraordinary indulgence the government shows the Valley in intellectual property matters. If an American “unlocks” his smartphone (i.e., modifies it so that it can use a service provider other than the one dictated by the carrier), he could receive a fine of up to $500,000 and several years in prison; so much for a citizen’s vaunted property rights to what he purchases. The libertarian pose of the Silicon Valley moguls, so carefully cultivated in their public relations, has always been a sham. Silicon Valley has long been tracking for commercial purposes the activities of every person who uses an electronic device, so it is hardly surprising that the Deep State should emulate the Valley and do the same for its own purposes. Nor is it surprising that it should conscript the Valley’s assistance.

Still, despite the essential roles of lower Manhattan and Silicon Valley, the center of gravity of the Deep State is firmly situated in and around the Beltway. The Deep State’s physical expansion and consolidation around the Beltway would seem to make a mockery of the frequent pronouncement that governance in Washington is dysfunctional and broken. That the secret and unaccountable Deep State floats freely above the gridlock between both ends of Pennsylvania Avenue is the paradox of American government in the 21st century: drone strikes, data mining, secret prisons and Panopticon-like control on the one hand; and on the other, the ordinary, visible parliamentary institutions of self-government declining to the status of a banana republic amid the gradual collapse of public infrastructure.
The results of this contradiction are not abstract, as a tour of the rotting, decaying, bankrupt cities of the American Midwest will attest. It is not even confined to those parts of the country left behind by a Washington Consensus that decreed the financialization and deindustrialization of the economy in the interests of efficiency and shareholder value. This paradox is evident even within the Beltway itself, the richest metropolitan area in the nation. Although demographers and urban researchers invariably count Washington as a “world city,” that is not always evident to those who live there. Virtually every time there is a severe summer thunderstorm, tens — or even hundreds — of thousands of residents lose power, often for many days. There are occasional water restrictions over wide areas because water mains, poorly constructed and inadequately maintained, have burst. [6] The Washington metropolitan area considers it a Herculean task just to build a rail link to its international airport — with luck it may be completed by 2018.
It is as if Hadrian’s Wall were still fully manned and the fortifications along the border with Germania were never stronger, even as the city of Rome disintegrates from within and the life-sustaining aqueducts leading down from the hills begin to crumble. The governing classes of the Deep State may continue to deceive themselves with their dreams of Zeus-like omnipotence, but others do not. A 2013 Pew Poll that interviewed 38,000 people around the world found that in 23 of 39 countries surveyed, a plurality of respondents said they believed China already had or would in the future replace the United States as the world’s top economic power.
The Deep State is the big story of our time. It is the red thread that runs through the war on terrorism, the financialization and deindustrialization of the American economy, the rise of a plutocratic social structure and political dysfunction. Washington is the headquarters of the Deep State, and its time in the sun as a rival to Rome, Constantinople or London may be term-limited by its overweening sense of self-importance and its habit, as Winwood Reade said of Rome, to “live upon its principal till ruin stared it in the face.” “Living upon its principal,” in this case, means that the Deep State has been extracting value from the American people in vampire-like fashion.
We are faced with two disagreeable implications. First, that the Deep State is so heavily entrenched, so well protected by surveillance, firepower, money and its ability to co-opt resistance that it is almost impervious to change. Second, that just as in so many previous empires, the Deep State is populated with those whose instinctive reaction to the failure of their policies is to double down on those very policies in the future. Iraq was a failure briefly camouflaged by the wholly propagandistic success of the so-called surge; this legerdemain allowed for the surge in Afghanistan, which equally came to naught. Undeterred by that failure, the functionaries of the Deep State plunged into Libya; the smoking rubble of the Benghazi consulate, rather than discouraging further misadventure, seemed merely to incite the itch to bomb Syria. Will the Deep State ride on the back of the American people from failure to failure until the country itself, despite its huge reserves of human and material capital, is slowly exhausted? The dusty road of empire is strewn with the bones of former great powers that exhausted themselves in like manner.
But, there are signs of resistance to the Deep State and its demands. In the aftermath of the Snowden revelations, the House narrowly failed to pass an amendment that would have defunded the NSA’s warrantless collection of data from US persons. Shortly thereafter, the president, advocating yet another military intervention in the Middle East, this time in Syria, met with such overwhelming congressional skepticism that he changed the subject by grasping at a diplomatic lifeline thrown to him by Vladimir Putin. [7]

Has the visible, constitutional state, the one envisaged by Madison and the other Founders, finally begun to reassert itself against the claims and usurpations of the Deep State? To some extent, perhaps. The unfolding revelations of the scope of the NSA’s warrantless surveillance have become so egregious that even institutional apologists such as Senator Dianne Feinstein have begun to backpedal — if only rhetorically — from their knee-jerk defense of the agency. As more people begin to waken from the fearful and suggestible state that 9/11 created in their minds, it is possible that the Deep State’s decade-old tactic of crying “terrorism!” every time it faces resistance is no longer eliciting the same Pavlovian response of meek obedience. And the American people, possibly even their legislators, are growing tired of endless quagmires in the Middle East.
But there is another more structural reason the Deep State may have peaked in the extent of its dominance. While it seems to float above the constitutional state, its essentially parasitic, extractive nature means that it is still tethered to the formal proceedings of governance. The Deep State thrives when there is tolerable functionality in the day-to-day operations of the federal government. As long as appropriations bills get passed on time, promotion lists get confirmed, black (i.e., secret) budgets get rubber-stamped, special tax subsidies for certain corporations are approved without controversy, as long as too many awkward questions are not asked, the gears of the hybrid state will mesh noiselessly. But when one house of Congress is taken over by tea party Wahhabites, life for the ruling class becomes more trying.
If there is anything the Deep State requires it is silent, uninterrupted cash flow and the confidence that things will go on as they have in the past. It is even willing to tolerate a degree of gridlock: Partisan mud wrestling over cultural issues may be a useful distraction from its agenda. But recent congressional antics involving sequestration, the government shutdown and the threat of default over the debt ceiling extension have been disrupting that equilibrium. And an extreme gridlock dynamic has developed between the two parties, such that continuing some level of sequestration is politically the least bad option for each, albeit for different reasons. As much as many Republicans might want to give budget relief to the organs of national security, they cannot fully reverse sequestration without the Democrats demanding revenue increases. And Democrats wanting to spend more on domestic discretionary programs cannot void sequestration on either domestic or defense programs without Republicans insisting on entitlement cuts.
So, for the foreseeable future, the Deep State must restrain its appetite for taxpayer dollars. Limited deals may soften sequestration, but agency requests will not likely be fully funded anytime soon. Even Wall Street’s rentier operations have been affected: After helping finance the tea party to advance its own plutocratic ambitions, America’s Big Money is now regretting the Frankenstein’s monster it has created. Like a child playing with dynamite, the tea party, with its compulsion to drive the nation into credit default, has alarmed the grown-ups commanding the heights of capital; the latter are now telling the politicians they thought they had hired to knock it off.
The House vote to defund the NSA’s illegal surveillance programs was equally illustrative of the disruptive nature of the tea party insurgency. Civil liberties Democrats alone would never have come so close to victory; tea party stalwart Justin Amash (R-MI), who has also upset the business community for his debt-limit fundamentalism, was the lead Republican sponsor of the NSA amendment, and most of the Republicans who voted with him were aligned with the tea party.
The final factor is Silicon Valley. Owing to secrecy and obfuscation, it is hard to know how much of the NSA’s relationship with the Valley is based on voluntary cooperation, how much is legal compulsion through FISA warrants and how much is a matter of the NSA surreptitiously breaking into technology companies’ systems. Given the Valley’s public relations requirement to mollify its customers who have privacy concerns, it is difficult to take the tech firms’ libertarian protestations about government compromise of their systems at face value, especially since they engage in similar activity against their own customers for commercial purposes. That said, evidence is accumulating that Silicon Valley is losing billions in overseas business from companies, individuals and governments that want to maintain privacy. For high tech entrepreneurs, the cash nexus is ultimately more compelling than the Deep State’s demand for patriotic cooperation. Even legal compulsion can be combatted: Unlike the individual citizen, tech firms have deep pockets and batteries of lawyers with which to fight government diktat.

This pushback has gone so far that on January 17, President Obama announced revisions to the NSA’s data collection programs, including withdrawing the agency’s custody of a domestic telephone record database, expanding requirements for judicial warrants and ceasing to spy on (undefined) “friendly foreign leaders.” Critics have denounced the changes as a cosmetic public relations move, but they are still significant in that the clamor has gotten so loud that the president feels the political need to address it.
When the contradictions within a ruling ideology are pushed too far, factionalism appears and that ideology begins slowly to crumble. Corporate oligarchs such as the Koch brothers are no longer entirely happy with the faux-populist political front group they helped fund and groom. Silicon Valley, for all the Ayn Rand-like tendencies of its major players, its offshoring strategies and its further exacerbation of income inequality, is now lobbying Congress to restrain the NSA, a core component of the Deep State. Some tech firms are moving to encrypt their data. High tech corporations and governments alike seek dominance over people through collection of personal data, but the corporations are jumping ship now that adverse public reaction to the NSA scandals threatens their profits.
The outcome of all these developments is uncertain. The Deep State, based on the twin pillars of national security imperative and corporate hegemony, has until recently seemed unshakable and the latest events may only be a temporary perturbation in its trajectory. But history has a way of toppling the altars of the mighty. While the two great materialist and determinist ideologies of the twentieth century, Marxism and the Washington Consensus, successively decreed that the dictatorship of the proletariat and the dictatorship of the market were inevitable, the future is actually indeterminate. It may be that deep economic and social currents create the framework of history, but those currents can be channeled, eddied, or even reversed by circumstance, chance and human agency. We have only to reflect upon defunct glacial despotisms such as the USSR or East Germany to realize that nothing is forever.
Throughout history, state systems with outsized pretensions to power have reacted to their environments in two ways. The first strategy, reflecting the ossification of its ruling elites, consists of repeating that nothing is wrong, that the status quo reflects the nation’s unique good fortune in being favored by God and that those calling for change are merely subversive troublemakers. As the French ancien régime, the Romanov dynasty and the Habsburg emperors discovered, the strategy works splendidly for a while, particularly if one has a talent for dismissing unpleasant facts. The final results, however, are likely to be thoroughly disappointing.

The second strategy is one embraced, to varying degrees and with differing goals, by figures of such contrasting personalities as Mustafa Kemal Atatürk, Franklin D. Roosevelt, Charles de Gaulle and Deng Xiaoping. They were certainly not revolutionaries by temperament; if anything, their natures were conservative. But they understood that the political cultures in which they lived were fossilized and incapable of adapting to the times. In their drive to reform and modernize the political systems they inherited, their first obstacles to overcome were the outworn myths that encrusted the thinking of the elites of their time.
As the United States confronts its future after experiencing two failed wars, a precarious economy and $17 trillion in accumulated debt, the national punditry has split into two camps. The first, the declinists, sees a broken, dysfunctional political system incapable of reform and an economy soon to be overtaken by China. The second, the reformers, offers a profusion of nostrums to turn the nation around: public financing of elections to sever the artery of money between the corporate components of the Deep State and financially dependent elected officials, government “insourcing” to reverse the tide of outsourcing of government functions and the conflicts of interest that it creates, a tax policy that values human labor over financial manipulation and a trade policy that favors exporting manufactured goods over exporting investment capital.
All of that is necessary, but not sufficient. The Snowden revelations (the impact of which has been surprisingly strong), the derailed drive for military intervention in Syria and a fractious Congress, whose dysfunction has begun to be a serious inconvenience to the Deep State, show that there is now a deep but as yet inchoate hunger for change. What America lacks is a figure with the serene self-confidence to tell us that the twin idols of national security and corporate power are outworn dogmas that have nothing more to offer us. Thus disenthralled, the people themselves will unravel the Deep State with surprising speed.


[1] The term “Deep State” was coined in Turkey and is said to be a system composed of high-level elements within the intelligence services, military, security, judiciary and organized crime. In British author John le Carré’s latest novel, A Delicate Truth, a character describes the Deep State as “… the ever-expanding circle of non-governmental insiders from banking, industry and commerce who were cleared for highly classified information denied to large swathes of Whitehall and Westminster.”  I use the term to mean a hybrid association of elements of government and parts of top-level finance and industry that is effectively able to govern the United States without reference to the consent of the governed as expressed through the formal political process.

[2] Twenty-five years ago, the sociologist Robert Nisbet described this phenomenon as “the attribute of No Fault…. Presidents, secretaries and generals and admirals in America seemingly subscribe to the doctrine that no fault ever attaches to policy and operations. This No Fault conviction prevents them from taking too seriously such notorious foul-ups as Desert One, Grenada, Lebanon and now the Persian Gulf.” To his list we might add 9/11, Iraq, Afghanistan and Libya.

[3] The attitude of many members of Congress towards Wall Street was memorably expressed by Rep. Spencer Bachus (R-AL), the incoming chairman of the House Financial Services Committee, in 2010: “In Washington, the view is that the banks are to be regulated, and my view is that Washington and the regulators are there to serve the banks.”

[4] Beginning in 1988, every US president has been a graduate of Harvard or Yale. Beginning in 2000, every losing presidential candidate has been a Harvard or Yale graduate, with the exception of John McCain in 2008.

[5] In recent months, the American public has seen a vivid example of a Deep State operative marketing his ideology under the banner of pragmatism. Former Secretary of Defense Robert M. Gates — a one-time career CIA officer and deeply political Bush family retainer — has camouflaged his retrospective defense of military escalations that have brought us nothing but casualties and fiscal grief as the straight-from-the-shoulder memoir from a plain-spoken son of Kansas who disdains Washington and its politicians.

[6] Meanwhile, the US government took the lead in restoring Baghdad’s sewer system at a cost of $7 billion.

[7] Obama’s abrupt about-face suggests he may have been skeptical of military intervention in Syria all along, but only dropped that policy once Congress and Putin gave him the running room to do so. In 2009, he went ahead with the Afghanistan “surge” partly because General Petraeus’ public relations campaign and back-channel lobbying on the Hill for implementation of his pet military strategy pre-empted other options. These incidents raise the disturbing question of how much the democratically elected president — or any president — sets the policy of the national security state and how much the policy is set for him by the professional operatives of that state who engineer faits accomplis that force his hand.
Mike Lofgren is a former congressional staff member who served on both the House and Senate budget committees. His book about Congress, The Party is Over: How Republicans Went Crazy, Democrats Became Useless, and the Middle Class Got Shafted, appeared in paperback on August 27, 2013.

This escort made an infographic of how she spent her time in 2013



If you’ve ever seen The Girlfriend Experience, you know that escorting can be just as mundane as any other job (also, that Sasha Grey has the emotional range of a teak coffee table, but that’s beside the point). To illustrate this point, professional companion Avery Moore has made an infographic (inspired by the Feltron Report) showing how she spent her time in the last year, from the number of hours she spent traveling to the emails she received to the number of meals she shared with her “friends” (here, a euphemism for clients).
The result is a fascinating, borderline obsessive-compulsive peek into the day-to-day life of a professional escort, sort of like Secret Diary of a Call Girl meets Monk:

Via Avery Playful Blog
While most people think being a professional escort is all jet-setting to exotic locales and spooning caviar into the mouths of sheikhs and having multiple orgasms on diplomats’ private sex planes, Moore’s infographic indicates that’s only partly true. It’s true that she spends a lot of time with “company” (1,394 hours, to be exact), which involves traveling to various U.S. cities (19 in all), attending the theater (29 plays in a year), and other assorted “mystery activities.”
But she also devotes an awful lot of time to “administrative” duties, such as checking emails and catching up on blog posts. In this respect, her work life seems basically identical to that of any other profession, including, well, mine (I’m sure I’m not the only person who’s felt like they should update their job description on LinkedIn to “professional email responder”).
Another thing that’s surprising about the infographic is how much time Moore reports spending on self-beautification: 201 hours on salon visits and exercise. Given her $1,000-an-hour rate, it’s probably to be expected that a high-end escort like Moore would devote a fair amount of time to primping, but in all honesty 201 hours seems like it’s kinda on the low end of the spectrum; in the Upper East Side of New York City, there are probably non-escort housewives who spend three times that many hours on their nail care alone.
On her blog, Moore says that the infographic was more for her benefit than that of her readers, but in an email to the Daily Dot, she admits that it’s also a corrective for people’s misguided beliefs about the nature of her work:
I thought it would be enlightening for other people. Everyone's always so curious about what an escort's life is ‘really like’ and there are so many incorrect assumptions. I can't speak for anyone but me, but my days often involve a lot of planes, trains, and automobiles, and fantastic getaways with wonderful people. I've been given such a fun life, I get pleasure out of giving others a glimpse into it.
So, OK, being a high-end escort is still probably more glamorous than being, say, a traveling medical sales representative. But doesn’t it make you feel better to know that even someone who’s getting $1,000 an hour has to reply to their email as much as you do?
H/T Avery Playful Blog | Photo by iamtheo/Flickr (CC BY 2.0)

Tor is building an anonymous instant messenger


Forget the $16 billion romance between Facebook and WhatsApp. There’s a new messaging tool worth watching.
Tor, the team behind the world’s leading online anonymity service, is developing a new anonymous instant messenger client, according to documents produced at the Tor 2014 Winter Developers Meeting in Reykjavík, Iceland.
The Tor Instant Messaging Bundle (TIMB) is set to work with the open-source Instantbird messenger client in experimental builds released to the public by March 31, 2014. The developers aim to build in encrypted Off-the-Record (OTR) chat and then bundle the client with the general Tor Launcher in the following months.
Pidgin, an older and more popular open-source chat client, was originally considered as the foundation of the TIMB but was dropped in favor of Instantbird. However, Tor still plans to hire independent security contractors to audit the new software and test its mettle so that “people in countries where communication for the purpose of activism is met with intimidation, violence, and prosecution will be able to avoid the scrutiny of criminal cartels, corrupt officials, and authoritarian governments.”
Over the long term, TIMB will likely become the messenger of choice for Tor users. Tools such as TorChat and Bitmessage already have significant user bases and smart advocates, but with the full weight of the Tor Launcher and team behind it, there’s little reason to imagine TIMB won’t succeed.
The creation of the TIMB is yet another step in what has been a years-long improvement in Tor software. A decade ago, the anonymity program was available only to tech-savvy users who knew enough to dive into their operating system’s command line.
Now, the Tor user interface has progressed to the point that almost anyone can anonymously surf the Web with just a few clicks. If TIMB follows in those footsteps, it will be another powerful anonymity tool at the fingertips of both the tech literate and humanity at large.
The Tor Project, a $2 million per year nonprofit consisting of 30 developers spread out over 12 countries, is pushing forward on TIMB as part of an overall initiative to make Tor even easier to use for the average person. Also in the pipeline are more localized support staff as well as “point-click-publish Hidden Services,” to make it extremely easy for anyone to create a Deep Web site.
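That “point-click-publish” goal is less exotic than it sounds: Tor already exposes the necessary plumbing through its control port, and the Tor Project’s own stem Python library can publish a hidden service in a few lines. A minimal sketch, assuming a recent Tor daemon with its control port enabled on 9051 and a local web server already listening on port 5000:

```python
from stem.control import Controller  # stem is the Tor Project's Python controller library

# Connect to a locally running Tor daemon (requires "ControlPort 9051" in torrc).
with Controller.from_port(port=9051) as controller:
    controller.authenticate()  # cookie or password authentication, per the torrc settings

    # Publish an ephemeral hidden service: .onion port 80 forwards to local port 5000.
    # The service disappears when this controller connection closes.
    response = controller.create_ephemeral_hidden_service(
        {80: 5000}, await_publication=True
    )
    print("Hidden service published at %s.onion" % response.service_id)
```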
When it comes to the sort of security that Tor provides, ease of use is of paramount importance. Many users can’t or won’t take the time to learn about encryption programs like Pretty Good Privacy (PGP), leaving themselves open to surveillance.
Even many patrons of the Deep Web black market Silk Road don’t bother with the simplest encryption tools.
“I post my PGP key everywhere and beg my customers to use it but the majority don't..... including for some pretty big orders!,” popular Silk Road ecstasy vendor DrMDA wrote late last year.
“Something like 80 percent of SR users don't use PGP,” wrote astor, another longtime Silk Roader.
Many people need encryption served up to them on a silver platter to even consider it. TIMB is the waiter that plans to deliver.
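For perspective on how small an ask “use PGP” really is, here is a minimal sketch of encrypting a message to a vendor’s published public key with the python-gnupg wrapper (the key file name and message text are placeholders, and a local GnuPG installation is assumed):

```python
import gnupg  # the python-gnupg wrapper around a locally installed GnuPG binary

gpg = gnupg.GPG()  # uses the default GnuPG home directory, e.g. ~/.gnupg

# Import the vendor's published public key (placeholder file name).
with open("vendor_pubkey.asc") as key_file:
    result = gpg.import_keys(key_file.read())
fingerprint = result.fingerprints[0]

# Encrypt the message so only the holder of the matching private key can read it.
encrypted = gpg.encrypt("placeholder message text", fingerprint, always_trust=True)
print(str(encrypted))  # ASCII-armored ciphertext, safe to paste into a web form
```

From the user’s side the whole exercise is two steps, import a key and encrypt a string; the friction that keeps adoption low is key management, not the cryptography itself.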
Photo via Michelangelo Carrieri/Flickr (CC-BY-ND 2.0)

6 Examples of Media Manipulation

Is Everything in the Mainstream Media Fake?
Revisiting the power of Nazi propaganda (Credit: Kunstbibliothek Berlin/BPK, Berlin/Art Resource, New York)
Sigmund Fraud
Activist Post

The world of television and modern media has become a tool of de-evolution, propaganda and social control. Since the reign of Edward Bernays and the rise of the Tavistock Institute in the early 20th century, nearly unlimited resources have been applied to understanding how to manipulate the human psyche through television and other forms of mass media.

What we have today is an increasingly sophisticated full-spectrum assault on free will and psychological well-being, and we have come to a point where it is no longer even necessary for media institutions to attempt to hide their blatant work of manipulating public opinion, manufacturing consent, and creating winners and losers in the minds of the already brainwashed public.

Here are six examples showing that the impression the media conveys to a dumbed-down, unsuspecting public differs greatly from what is actually happening behind the scenes. By looking at these examples in a single location, it is easy to see how the mainstream media pushes ulterior motives on the public, and how important it is to be vigilant when consuming their information.

News Media Lies, Scripting, Omissions and Obfuscations

1. Time Magazine sanitizes its covers for American consumption – Time Magazine is considered a leader in national news, yet it consistently portrays a dumbed-down, frivolous image of life in America while presenting an entirely different message to the rest of the world. In the two examples below, the American edition features a softer cover story than the editions sold to the rest of the world.



Time Magazine Covers 1

Time Magazine Covers 2

Source: Buzzfeed

2. CNN is the leader in fake war news coverage – Here are just two examples of how CNN has scripted and staged live war coverage to create a sense of drama and danger around people who were not in harm’s way.

CNN Caught Staging News Segments on Syria With Actors


Anderson Cooper fakes Syria war footage by dubbing in sound effects and playing chaotic video next to a Syrian correspondent. In the video, you first see raw tape of the correspondent in a safe environment, then the footage as CNN aired it, with dubbed-in theatrical effects:



In this CNN clip of coverage of the first Gulf War in the 1990s, anchorman Charles Jaco makes a joke of war coverage in Saudi Arabia and demonstrates how the news is overly sensationalized for American audiences, and how hosts pretend to be in danger when they are not. At 7:00 in this clip Jaco quickly puts on a ridiculous gas mask for obvious theatrics while talking to a guest:

CNN's Fake Newscast From The First Gulf War




Manufactured ‘Reality’ TV 

3. The Biggest Loser – Americans should by now already know that ‘reality shows’ are staged. The popular TV show The Biggest Loser, in which overweight people compete to lose the most weight for the entertainment of viewers, lies about the circumstances of its contestants’ training and weigh-in regimen while using underhanded methods to generate phony emotional responses from them.
They want the drama, the tears, the fights, the tears, the triumphs and the tears. Producers would push you to cry because that’s what makes good TV. They continually asked questions like “Do you miss your kids?” Needless to say, I broke down more than once…
Have you ever wondered how the contestants manage to lose a staggering 12 kilos in a single week? We don’t. In my series a weekly weigh-in was NEVER filmed after just one week of working out. In fact the longest gap from one weigh-in to the next was three and a half weeks. That’s 25 days between weigh-ins, not seven. -Andrew ’Cosi’ Costello, former contestant
Source: Courier Mail
It turns out that many of the so-called ‘reality’ shows are actually scripted theatrical presentations that count on the viewer’s suspension of disbelief to garner ratings. More info on the many fake ‘reality’ shows can be found here and here.

4. Nationally scripted local ‘news’ – The comedian and talk show host Conan O’Brien has done a service for America by compiling overtly ridiculous cases of local ‘news’ broadcasts that were simultaneously repeated verbatim in dozens of markets nationwide. This is proof that you simply cannot trust the authenticity of what you are seeing on news broadcasts. Watch these stunning and blatant examples of this in the following videos:

Newscasters Agree: A Christmas Present Or Two Or Ten Edition




Newscasters Agree: Don't Worry, Be Happy Edition



Digital Image Manipulation is Ubiquitous in the Media 

5. Manipulating Images as War Propaganda – There are many examples of how newspapers around the world photoshop images of war in order to influence public opinion. All sides in a conflict will do this as propaganda, so it is important to remember this when consuming news on international conflicts and to be alert for fakes. The images below of the recent conflict in Syria show how easily images are manipulated for the purposes of propaganda:

Syria Photoshop

6. Advertising Industry ‘Touches Up’ Images of People – The practice of digitally ‘touching up’ actors and models in images and videos is an overt industry standard. Yet most people, whether they’re aware of image manipulation or not, still process television and print images on a subconscious level as if they were real, primarily because everyone is doing it, and our bias for ‘normal’ has been socially reconstructed to adopt advertising lies as normal.

This process is demonstrated in the following video, where an ordinary-looking woman is transformed into a lusty beauty queen for the purposes of selling more products to consumers.

Body Evolution - Model Before and After




Conclusion

Whether for marketing or for manufacturing consent, the media industry is guilty of using subtle and not-so-subtle tactics that work on our conscious and subconscious minds to shape our opinions and behavior. There are countless other examples of these practices; and discerning, awake people would be well served to stay vigilant when consuming modern media in any form.

You don’t have to be paranoid these days to acknowledge that you’re being lied to and that the institutions we should be able to depend on for bringing us an objective view of the world are anything but objective.

Sigmund Fraud is a survivor of modern psychiatry and a dedicated mental activist. He is a staff writer for WakingTimes.com, where this first appeared. Sigmund indulges in the possibility of a massive shift towards a more psychologically aware future for mankind.

They're Watching You at Work




What happens when Big Data meets human resources? The emerging practice of "people analytics" is already transforming how employers hire, fire, and promote.

In 2003, thanks to Michael Lewis and his best seller Moneyball, the general manager of the Oakland A’s, Billy Beane, became a star. The previous year, Beane had turned his back on his scouts and had instead entrusted player-acquisition decisions to mathematical models developed by a young, Harvard-trained statistical wizard on his staff. What happened next has become baseball lore. The A’s, a small-market team with a paltry budget, ripped off the longest winning streak in American League history and rolled up 103 wins for the season. Only the mighty Yankees, who had spent three times as much on player salaries, won as many games. The team’s success, in turn, launched a revolution. In the years that followed, team after team began to use detailed predictive models to assess players’ potential and monetary value, and the early adopters, by and large, gained a measurable competitive edge over their more hidebound peers.
That’s the story as most of us know it. But it is incomplete. What would seem at first glance to be nothing but a memorable tale about baseball may turn out to be the opening chapter of a much larger story about jobs. Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.
Yes, unavoidably, big data. As a piece of business jargon, and even more so as an invocation of coming disruption, the term has quickly grown tiresome. But there is no denying the vast increase in the range and depth of information that’s routinely captured about how we behave, and the new kinds of analysis that this enables. By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007. Ordinary people at work and at home generate much of this data, by sending e-mails, browsing the Internet, using social media, working on crowd-sourced projects, and more—and in doing so they have unwittingly helped launch a grand new societal project. “We are in the midst of a great infrastructure project that in some ways rivals those of the past, from Roman aqueducts to the Enlightenment’s Encyclopédie,” write Viktor Mayer-Schönberger and Kenneth Cukier in their recent book, Big Data: A Revolution That Will Transform How We Live, Work, and Think. “The project is datafication. Like those other infrastructural advances, it will bring about fundamental changes to society.”
Some of the changes are well known, and already upon us. Algorithms that predict stock-price movements have transformed Wall Street. Algorithms that chomp through our Web histories have transformed marketing. Until quite recently, however, few people seemed to believe this data-driven approach might apply broadly to the labor market.
But it now does. According to John Hausknecht, a professor at Cornell’s School of Industrial and Labor Relations, in recent years the economy has witnessed a “huge surge in demand for workforce-analytics roles.” Hausknecht’s own program is rapidly revising its curriculum to keep pace. You can now find dedicated analytics teams in the human-resources departments of not only huge corporations such as Google, HP, Intel, General Motors, and Procter & Gamble, but also companies like McKee Foods, the Tennessee-based maker of Little Debbie snack cakes. Even Billy Beane is getting into the game. Last year he appeared at a large conference for corporate HR executives in Austin, Texas, where he reportedly stole the show with a talk titled “The Moneyball Approach to Talent Management.” Ever since, that headline, with minor modifications, has been plastered all over the HR trade press.
The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught. And it can’t help but feel a little creepy. It requires the creation of a vastly larger box score of human performance than one would ever encounter in the sports pages, or that has ever been dreamed up before. To some degree, the endeavor touches on the deepest of human mysteries: how we grow, whether we flourish, what we become. Most companies are just beginning to explore the possibilities. But make no mistake: during the next five to 10 years, new models will be created, and new experiments run, on a very large scale. Will this be a good development or a bad one—for the economy, for the shapes of our careers, for our spirit and self-worth? Earlier this year, I decided to find out.
Ever since we’ve had companies, we’ve had managers trying to figure out which people are best suited to working for them. The techniques have varied considerably. Near the turn of the 20th century, one manufacturer in Philadelphia made hiring decisions by having its foremen stand in front of the factory and toss apples into the surrounding scrum of job-seekers. Those quick enough to catch the apples and strong enough to keep them were put to work.
In those same times, a different (and less bloody) Darwinian process governed the selection of executives. Whole industries were being consolidated by rising giants like U.S. Steel, DuPont, and GM. Weak competitors were simply steamrolled, but the stronger ones were bought up, and their founders typically were offered high-level jobs within the behemoth. The approach worked pretty well. As Peter Cappelli, a professor at the Wharton School, has written, “Nothing in the science of prediction and selection beats observing actual performance in an equivalent role.”
By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential. “P&G picks its executive crop right out of college,” BusinessWeek noted in 1950, in the unmistakable patter of an age besotted with technocratic possibility. IQ tests, math tests, vocabulary tests, professional-aptitude tests, vocational-interest questionnaires, Rorschach tests, a host of other personality assessments, and even medical exams (who, after all, would want to hire a man who might die before the company’s investment in him was fully realized?)—all were used regularly by large companies in their quest to make the right hire.
The process didn’t end when somebody started work, either. In his classic 1956 cultural critique, The Organization Man, the business journalist William Whyte reported that about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles. “Should Jones be promoted or put on the shelf?” he wrote. “Once, the man’s superiors would have had to thresh this out among themselves; now they can check with psychologists to see what the tests say.”
Remarkably, this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,” Peter Cappelli told me—the days of testing replaced by a handful of ad hoc interviews, with the questions dreamed up on the fly. Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased. Instead, companies came to favor the more informal qualitative hiring practices that are still largely in place today.
But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific. Some were based on untested psychological theories. Others were originally designed to assess mental illness, and revealed nothing more than where subjects fell on a “normal” distribution of responses—which in some cases had been determined by testing a relatively small, unrepresentative group of people, such as college freshmen. When William Whyte administered a battery of tests to a group of corporate presidents, he found that not one of them scored in the “acceptable” range for hiring. Such assessments, he concluded, measured not potential but simply conformity. Some of them were highly intrusive, too, asking questions about personal habits, for instance, or parental affection. Unsurprisingly, subjects didn’t like being so impersonally poked and prodded (sometimes literally).
For all these reasons and more, the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much-wider-ranging than what we had before. For better or worse, a new era of technocratic possibility has begun.
Consider Knack, a tiny start-up based in Silicon Valley. Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.
When Hans Haringa heard about Knack, he was skeptical but intrigued. Haringa works for the petroleum giant Royal Dutch Shell—by revenue, the world’s largest company last year. For seven years he’s served as an executive in the company’s GameChanger unit: a 12-person team that for nearly two decades has had an outsize impact on the company’s direction and performance. The unit’s job is to identify potentially disruptive business ideas. Haringa and his team solicit ideas promiscuously from inside and outside the company, and then play the role of venture capitalists, vetting each idea, meeting with its proponents, dispensing modest seed funding to a few promising candidates, and monitoring their progress. They have a good record of picking winners, Haringa told me, but identifying ideas with promise has proved to be extremely difficult and time-consuming. The process typically takes more than two years, and less than 10 percent of the ideas proposed to the unit actually make it into general research and development.
When he heard about Knack, Haringa thought he might have found a shortcut. What if Knack could help him assess the people proposing all these ideas, so that he and his team could focus only on those whose ideas genuinely deserved close attention? Haringa reached out, and eventually ran an experiment with the company’s help.
Over the years, the GameChanger team had kept a database of all the ideas it had received, recording how far each had advanced. Haringa asked all the idea contributors he could track down (about 1,400 in total) to play Dungeon Scrawl and Wasabi Waiter, and told Knack how well three-quarters of those people had done as idea generators. (Did they get initial funding? A second round? Did their ideas make it all the way?) He did this so that Knack’s staff could develop game-play profiles of the strong innovators relative to the weak ones. Finally, he had Knack analyze the game-play of the remaining quarter of the idea generators, and asked the company to guess whose ideas had turned out to be best.
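For readers who want the shape of that experiment made concrete, here is a minimal Python sketch of a hold-out test like the one described: profile strong versus weak innovators on three-quarters of the contributors, then rank the remaining quarter sight unseen. Everything below — the feature names, the nearest-centroid scoring, the numbers — is invented for illustration; Knack’s actual models are proprietary and far richer.

```python
# A minimal sketch (invented data, invented features) of the hold-out
# experiment's shape: profile strong vs. weak innovators on 75% of the
# contributors, then rank the remaining 25% sight unseen.
import random

random.seed(0)

def gameplay_features():
    """Stand-in for the telemetry a real 20-minute game session yields."""
    return [random.random() for _ in range(6)]  # e.g. persistence, task-switching

# 1,400 contributors: gameplay features plus a known outcome
# (True = the idea advanced far, False = it stalled).
people = [(gameplay_features(), random.random() < 0.1) for _ in range(1400)]

split = int(len(people) * 0.75)
train, holdout = people[:split], people[split:]

def centroid(rows):
    """Average feature vector of a group of players."""
    return [sum(col) / len(col) for col in zip(*rows)]

strong = centroid([f for f, ok in train if ok])
weak = centroid([f for f, ok in train if not ok])

def score(features):
    """Higher when a profile sits closer to past strong innovators."""
    d_strong = sum((a - b) ** 2 for a, b in zip(features, strong))
    d_weak = sum((a - b) ** 2 for a, b in zip(features, weak))
    return d_weak - d_strong

ranked = sorted(holdout, key=lambda p: score(p[0]), reverse=True)
top_decile = ranked[: len(ranked) // 10]
print("successes in predicted top 10%:", sum(ok for _, ok in top_decile))
```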
When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process. Knack identified six broad factors as especially characteristic of those whose ideas would succeed at Shell: “mind wandering” (or the tendency to follow interesting, unexpected offshoots of the main task at hand, to see where they lead), social intelligence, “goal-orientation fluency,” implicit learning, task-switching ability, and conscientiousness. Haringa told me that this profile dovetails with his impression of a successful innovator. “You need to be disciplined,” he said, but “at all times you must have your mind open to see the other possibilities and opportunities.”
What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out. If he and his colleagues were no longer mired in evaluating “the hopeless folks,” as he put it to me, they could solicit ideas even more widely than they do today and devote much more careful attention to the 20 people out of 100 whose ideas have the most merit.
Haringa is now trying to persuade his colleagues in the GameChanger unit to use Knack’s games as an assessment tool. But he’s also thinking well beyond just his own little part of Shell. He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers. Shell goes to extremes to try to make itself the world’s most innovative energy company, he told me, so shouldn’t it apply that spirit to developing its own “human dimension”?
“It is the whole man The Organization wants,” William Whyte wrote back in 1956, when describing the ambit of the employee evaluations then in fashion. Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.
It’s natural to worry about such things. But consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.
What really distinguishes CEOs from the rest of us, for instance? In 2010, three professors at Duke’s Fuqua School of Business asked roughly 2,000 people to look at a long series of photos. Some showed CEOs and some showed nonexecutives, and the participants didn’t know who was who. The participants were asked to rate the subjects according to how “competent” they looked. Among the study’s findings: CEOs look significantly more competent than non-CEOs; CEOs of large companies look significantly more competent than CEOs of small companies; and, all else being equal, the more competent a CEO looked, the fatter the paycheck he or she received in real life. And yet the authors found no relationship whatsoever between how competent a CEO looked and the financial performance of his or her company.
Examples of bias abound. Tall men get hired and promoted more frequently than short men, and make more money. Beautiful women get preferential treatment, too—unless their breasts are too large. According to a national survey by the Employment Law Alliance a few years ago, most American workers don’t believe attractive people in their firms are hired or promoted more frequently than unattractive people, but the evidence shows that they are, overwhelmingly so. Older workers, for their part, are thought to be more resistant to change and generally less competent than younger workers, even though plenty of research indicates that’s just not so. Workers who are too young or, more specifically, are part of the Millennial generation are tarred as entitled and unable to think outside the box.
Malcolm Gladwell recounts a classic example in Blink. Back in the 1970s and ’80s, most professional orchestras transitioned one by one to “blind” auditions, in which each musician seeking a job performed from behind a screen. The move was made in part to stop conductors from favoring former students, which it did. But it also produced another result: the proportion of women winning spots in the most-prestigious orchestras shot up fivefold, notably when they played instruments typically identified closely with men. Gladwell tells the memorable story of Julie Landsman, who, at the time of his book’s publication, in 2005, was playing principal French horn for the Metropolitan Opera, in New York. When she’d finished her blind audition for that role, years earlier, she knew immediately that she’d won. Her last note was so true, and she held it so long, that she heard delighted peals of laughter break out among the evaluators on the other side of the screen. But when she came out to greet them, she heard a gasp. Landsman had played with the Met before, but only as a substitute. The evaluators knew her, yet only when they weren’t aware of her gender—only, that is, when they were forced to make not a personal evaluation but an impersonal one—could they hear how brilliantly she played.
We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.
I talked with Mullainathan about the study. All of the hiring managers he and Bertrand had consulted while designing it, he said, told him confidently that Lakisha and Jamal would get called back more than Emily and Greg. Affirmative action guaranteed it, they said: recruiters were bending over backwards in their search for good black candidates. Despite making conscious efforts to find such candidates, however, these recruiters turned out to be excluding them unconsciously at every turn. After the study came out, a man named Jamal sent a thank-you note to Mullainathan, saying that he’d started using only his first initial on his résumé and was getting more interviews.
Perhaps the most widespread bias in hiring today cannot even be detected with the eye. In a recent survey of some 500 hiring managers, undertaken by the Corporate Executive Board, a research firm, 74 percent reported that their most recent hire had a personality “similar to mine.” Lauren Rivera, a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests. “The best way I could describe it,” one attorney told her, “is like if you were going on a date. You kind of know when there’s a match.” Asked to choose the most-promising candidates from a sheaf of fake résumés Rivera had prepared, a manager at one particularly buttoned-down investment bank told her, “I’d have to pick Blake and Sarah. With his lacrosse and her squash, they’d really get along [with the people] on the trading floor.” Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.
Given this sort of clubby, insular thinking, it should come as no surprise that the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team. A survey by Gallup this past June, meanwhile, found that only 30 percent of American workers felt a strong connection to their company and worked for it with passion. Fifty-two percent emerged as “not engaged” with their work, and another 18 percent as “actively disengaged,” meaning they were apt to undermine their company and co-workers, and shirk their duties whenever possible. These headline numbers are skewed a little by the attitudes of hourly workers, which tend to be worse, on average, than those of professional workers. But really, what further evidence do we need of the abysmal status quo?
Because the algorithmic assessment of workers’ potential is so new, not much hard data yet exist demonstrating its effectiveness. The arena in which it has been best proved, and where it is most widespread, is hourly work. Jobs at big-box retail stores and call centers, for example, warm the hearts of would-be corporate Billy Beanes: they’re pretty well standardized, they exist in huge numbers, they turn over quickly (it’s not unusual for call centers, for instance, to experience 50 percent turnover in a single year), and success can be clearly measured (through a combination of variables like sales, call productivity, customer-complaint resolution, and length of tenure). Big employers of hourly workers are also not shy about using psychological tests, partly in an effort to limit theft and absenteeism. In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.
Teri Morse, the vice president for recruiting at Xerox Services, oversees hiring for the company’s 150 U.S. call and customer-care centers, which employ about 45,000 workers. When I spoke with her in July, she told me that as recently as 2010, Xerox had filled these positions through interviews and a few basic assessments conducted in the office—a typing test, for instance. Hiring managers would typically look for work experience in a similar role, but otherwise would just use their best judgment in evaluating candidates. In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention. Distance between home and work, on the other hand, is strongly associated with employee engagement and retention.)
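As a rough illustration of how such an evaluation might reduce to a color-coded rating, here is a toy screening function. The feature names, weights, and cutoffs are all assumptions made for the sketch, not Xerox’s or its vendor’s actual criteria:

```python
# Illustrative only: a toy red/yellow/green screen in the spirit of the
# one described above. Features, weights, and thresholds are invented.
def screen(applicant: dict) -> str:
    score = 0.0
    # Scenario judgment and cognitive scores, already normalized to 0..1.
    score += 0.5 * applicant["scenario_judgment"]
    score += 0.3 * applicant["cognitive"]
    # The article notes that 1-4 social-network memberships correlated
    # with success; treat membership inside that band as a mild positive.
    score += 0.2 if 1 <= applicant["social_networks"] <= 4 else 0.0
    if score >= 0.7:
        return "green"   # hire away
    return "yellow" if score >= 0.4 else "red"

print(screen({"scenario_judgment": 0.9, "cognitive": 0.8, "social_networks": 2}))
```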
When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore”—they just want to hire the people with the highest scores.
The online test that Xerox uses was developed by a small but rapidly growing company based in San Francisco called Evolv. I spoke with Jim Meyerle, one of the company’s co‑founders, and David Ostberg, its vice president of workforce science, who described how modern techniques of gathering and analyzing data offer companies a sharp edge over basic human intuition when it comes to hiring. Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”
Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organizational psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building). And the company can continually tweak its questions, or add new variables to its model, to seek out ever stronger correlates of success in any given job. For instance, the browser that applicants use to take the online test turns out to matter, especially for technical roles: some browsers are more functional than others, but it takes a measure of savvy and initiative to download them.
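The core statistical move described here — scanning a complete employee data set for attributes that track a success measure — can be sketched in a few lines. The attributes and figures below are hypothetical, a real analysis would involve hundreds of thousands of records and many controls, and the snippet assumes Python 3.10+ for statistics.correlation:

```python
# A minimal sketch of hunting for correlates of success across a
# complete data set; attributes and values are invented for illustration.
from statistics import correlation  # available in Python 3.10+

# Each row: normalized attribute scores plus a success measure
# (here, months of tenure).
employees = [
    {"decisiveness": 0.8, "persuasiveness": 0.7, "tenure_months": 18},
    {"decisiveness": 0.4, "persuasiveness": 0.9, "tenure_months": 7},
    {"decisiveness": 0.9, "persuasiveness": 0.5, "tenure_months": 22},
    {"decisiveness": 0.3, "persuasiveness": 0.6, "tenure_months": 5},
]

tenure = [e["tenure_months"] for e in employees]
for attr in ("decisiveness", "persuasiveness"):
    values = [e[attr] for e in employees]
    # Pearson correlation between the attribute and the success measure.
    print(attr, round(correlation(values, tenure), 2))
```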
There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people. The distance an employee lives from work, for instance, is never factored into the score given each applicant, although it is reported to some clients. That’s because different neighborhoods and towns can have different racial profiles, which means that scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.” Citing client confidentiality, he wouldn’t say more.
Meyerle told me that what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company. This is a task that Evolv now performs for Transcom, a company that provides outsourced customer-support, sales, and debt-collection services, and that employs some 29,000 workers globally. About two years ago, Transcom began working with Evolv to improve the quality and retention of its English-speaking workforce, and three-month attrition quickly fell by about 30 percent. Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.
The potential power of this data-rich approach is obvious. What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management. In theory, this approach enables companies to fast-track workers for promotion based on their statistical profiles; to assess managers more scientifically; even to match workers and supervisors who are likely to perform well together, based on the mix of their competencies and personalities. Transcom plans to do all these things, as its data set grows ever richer. This is the real promise—or perhaps the hubris—of the new people analytics. Making better hires turns out to be not an end but just a beginning. Once all the data are in place, new vistas open up.
For a sense of what the future of people analytics may bring, I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.
Pentland’s initial goal was to shed light on what differentiated successful teams from unsuccessful ones. As he described last year in the Harvard Business Review, he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk. In a development that will surprise few readers, Pentland and his fellow researchers created a company, Sociometric Solutions, in 2010, to commercialize his badge technology.
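A hedged sketch of the inverted-U relationship Pentland describes, in which an interior optimum of face-to-face exchanges predicts performance and both extremes hurt. The optimum and the team counts below are invented for the sketch, not taken from his study:

```python
# Illustrative only: an inverted-U mapping from face-to-face exchange
# counts to predicted team performance. The optimum is an assumption.
OPTIMAL_EXCHANGES_PER_DAY = 60  # invented, not from Pentland's research

def predicted_performance(exchanges_per_day: float) -> float:
    """Score falls off as a team drifts from the sweet spot in either direction."""
    gap = (exchanges_per_day - OPTIMAL_EXCHANGES_PER_DAY) / OPTIMAL_EXCHANGES_PER_DAY
    return max(0.0, 1.0 - gap * gap)

# Too few exchanges (alpha) and too many (gamma) both score below beta.
for team, count in {"alpha": 15, "beta": 60, "gamma": 140}.items():
    print(team, round(predicted_performance(count), 2))
```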
Pentland told me that no business he knew of was yet using this sort of technology on a permanent basis. His own clients were using the badges as part of consulting projects designed to last only a few weeks. But he doesn’t see why longer-term use couldn’t be in the cards for the future, particularly as the technology gets cheaper. His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.
Whether or not we all come to wear wireless lapel badges, Star Trek–style, plenty of other sources could easily serve as the basis of similar analysis. Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior. As technologies that analyze language become better and cheaper, companies will be able to run programs that automatically trawl through the e-mail traffic of their workforce, looking for phrases or communication patterns that can be statistically associated with various measures of success or failure in particular roles.
When I brought this subject up with Erik Brynjolfsson, a professor at MIT’s Sloan School of Management, he told me that he believes people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency. And at mid-century, there was that remarkable spread of data-driven assessment. But there’s an obvious and important difference between then and now, Brynjolfsson said. “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”
It’s in the inner workings of organizations, says Sendhil Mullainathan, the economist, where the most-dramatic benefits of people analytics are likely to show up. When we talked, Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or what our pattern of information-gathering looks like, much less which parts of that pattern should be reinforced and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”
The prospect of tracking that function through people analytics excites Mullainathan. He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded The 7 Habits of Highly Effective People, custom-written for each of us, or at least each type of job, in the workforce.
Perhaps the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.
This past summer, I sat in on a sales presentation by Gild, a company that uses people analytics to help other companies find software engineers. I didn’t have to travel far: Atlantic Media, the parent company of The Atlantic, was considering using Gild to find coders. (No sale was made, and there is no commercial relationship between the two firms.)
In a small conference room, we were shown a digital map of Northwest Washington, D.C., home to The Atlantic. Little red pins identified all the coders in the area who were proficient in the skills that an Atlantic Media job announcement listed as essential. Next to each pin was a number that ranked the quality of each coder on a scale of one to 100, based on the mix of skills Atlantic Media was looking for. (No one with a score above 75, we were told, had ever failed a coding test by a Gild client.) If we’d wished, we could have zoomed in to see how The Atlantic’s own coders scored.
The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.
The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
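One plausible, and entirely hypothetical, way to reduce signals like these to a 1-to-100 score is a weighted blend, with the weights re-fit as correlations drift. Nothing below reflects Gild’s actual model; the signal names and weights are invented:

```python
# Invented sketch of blending Gild-style signals (code quality, forum
# reputation, language cues) into a 1-100 score. Weights are hypothetical
# and would in practice be re-estimated as correlations change.
def coder_score(signals: dict) -> int:
    weights = {
        "code_elegance": 0.35,       # from evaluating open-source output
        "adoption_by_others": 0.25,  # how often the code is reused
        "forum_reputation": 0.25,    # e.g. popularity of Stack Overflow answers
        "language_cues": 0.15,       # phrases correlated with expert output
    }
    raw = sum(weights[k] * signals.get(k, 0.0) for k in weights)  # 0..1
    return max(1, min(100, round(raw * 100)))

print(coder_score({"code_elegance": 0.9, "adoption_by_others": 0.8,
                   "forum_reputation": 0.7, "language_cues": 0.6}))
```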
Here’s the part that’s most interesting: having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.
Why would good coders (but not bad ones) be drawn to a particular manga site? By some mysterious alchemy, does reading a certain comic-book series improve one’s programming skills? “Obviously, it’s not a causal relationship,” Ming told me. But Gild does have 6 million programmers in its database, she said, and the correlation, even if inexplicable, is quite clear.
Gild treats this sort of information gingerly, Ming said. An affection for a Web site will be just one of dozens of variables in the company’s constantly evolving model, and a minor one at that; it merely “nudges” an applicant’s score upward, and only as long as the correlation persists. Some factors are transient, and the company’s computers are forever crunching the numbers, so the variables are always changing. The idea is to create a sort of pointillist portrait: even if a few variables turn out to be bogus, the overall picture, Ming believes, will be clearer and truer than what we could see on our own.
Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire. Donald Kluemper, a professor of management at the University of Illinois at Chicago, has found that professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.
These aspects of people analytics provoke anxiety, of course. We would be wise to take legal measures to ensure, at a minimum, that companies can’t snoop where we have a reasonable expectation of privacy—and that any evaluations they might make of our professional potential aren’t based on factors that discriminate against classes of people.
But there is another side to this. People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call (because they’ve been unusually active on their professional-networking sites, or because there’s been an exodus from their corner of their company, or because their company’s stock is tanking). As with Gild, the service benefits the worker as much as the would-be employer.
Big tech companies are responding to these incursions, and to increasing free agency more generally, by deploying algorithms aimed at keeping their workers happy. Dawn Klinghoffer, the senior director of HR business insights at Microsoft, told me that a couple of years ago, with attrition rising industry-wide, her team started developing statistical profiles of likely leavers (hires straight from college in certain technical roles, for instance, who had been with the company for three years and had been promoted once, but not more than that). The company began various interventions based on these profiles: the assignment of mentors, changes in stock vesting, income hikes. Microsoft focused on two business units with particularly high attrition rates—and in each case reduced those rates by more than half.
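A minimal sketch of profile-based attrition flagging of the kind described, assuming a simple match-the-profile rule; the profile fields, threshold, and interventions are invented for illustration, not Microsoft’s:

```python
# Illustrative only: flag "likely leavers" by how closely an employee
# matches a statistical risk profile. Fields and threshold are invented.
RISK_PROFILE = {
    "hired_from_college": True,
    "technical_role": True,
    "tenure_years": 3,
    "promotions": 1,
}

def attrition_risk(employee: dict) -> float:
    """Fraction of the risk-profile fields this employee matches."""
    hits = sum(employee.get(k) == v for k, v in RISK_PROFILE.items())
    return hits / len(RISK_PROFILE)

staff = [
    {"name": "A", "hired_from_college": True, "technical_role": True,
     "tenure_years": 3, "promotions": 1},
    {"name": "B", "hired_from_college": False, "technical_role": True,
     "tenure_years": 7, "promotions": 3},
]
for e in staff:
    if attrition_risk(e) >= 0.75:
        print(e["name"], "-> candidate for mentoring or vesting review")
```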
Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job. Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.
Knack and Gild are very young companies; either or both could fail. But even now they are hardly the only companies doing this sort of work. The digital trail from assessment to hire to work performance and work engagement will quickly discredit models that do not work—but will also allow the models and companies that survive to grow better and smarter over time. It is conceivable that we will look back on these endeavors in a decade or two as nothing but a fad. But early evidence, and the relentlessly empirical nature of the project as a whole, suggests otherwise.
When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers. For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers. And yet the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.
One of the tragedies of the modern economy is that because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.
But this relationship is likely to loosen in the coming years. I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history, and that allow companies to confidently hire workers with pedigrees not typically considered impressive or even desirable. Neil Rae, an executive at Transcom, told me that in looking to fill technical-support positions, his company is shifting its focus from college graduates to “kids living in their parents’ basement”—by which he meant smart young people who, for whatever reason, didn’t finish college but nevertheless taught themselves a lot about information technology. Laszlo Bock told me that Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.
This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming. (Is it meaningful that a candidate finished in the top 10 percent of students in a particular online course, or that her work gets high ratings on a particular crowd-sourcing site?) But it’s completely irrelevant in the field of people analytics, where sophisticated screening algorithms can easily make just these sorts of judgments. That’s not only good news for people who struggled in school; it’s good news for people who’ve fallen off the career ladder through no fault of their own (older workers laid off in a recession, for instance) and who’ve acquired a sort of professional stink that is likely undeserved.
Ultimately, all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic. But most of the people I interviewed for this story—who, I should note, tended to be psychologists and economists rather than philosophers—share that feeling.
Scholarly research strongly suggests that happiness at work depends greatly on feeling a sense of agency. If the tools now being developed and deployed really can get more people into better-fitting jobs, then those people’s sense of personal effectiveness will increase. And if those tools can provide workers, once hired, with better guidance on how to do their jobs well, and how to collaborate with their fellow workers, then those people will experience a heightened sense of mastery. It is possible that some people who now skate from job to job will find it harder to work at all, as professional evaluations become more refined. But on balance, these strike me as developments that are likely to make people happier.
Nobody imagines that people analytics will obviate the need for old-fashioned human judgment in the workplace. Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.
One only has to look to baseball, in fact, to see where this all may be headed. In their forthcoming book, The Sabermetric Revolution, the sports economist Andrew Zimbalist and the mathematician Benjamin Baumer write that the analytical approach to player acquisition employed by Billy Beane and the Oakland A’s has continued to spread through Major League Baseball. Twenty-six of the league’s 30 teams now devote significant resources to people analytics. The search for ever more precise data—about the spin rate of pitches, about the muzzle velocity of baseballs as they come off the bat—has intensified, as has the quest to turn those data into valuable nuggets of insight about player performance and potential. Analytics has taken off in other pro sports leagues as well. But here’s what’s most interesting. The big blind spots initially identified by analytics in the search for great players are now gone—which means that what’s likely to make the difference again is the human dimension of the search.
The A’s made the playoffs again this year, despite a small payroll. Over the past few years, the team has expanded its scouting budget. “What defines a good scout?,” Billy Beane asked recently. “Finding out information other people can’t. Getting to know the kid. Getting to know the family. There’s just some things you need to find out in person.”