In a turn of phrase that seems designed to provoke headlines, the US Department of Defense this week said one of its primary goals is to “take the ‘man’ out of unmanned” combat. This quote and much more comes from the latest in the Department’s ongoing series of Roadmap to the Future reports, which seek to lay out both the current realities and future plans of the US military and defense industry. This time, the topic was ripped straight from the headlines: remote combat systems.
While the American military has for a long time remained static in terms of overall manpower, one type of recruit it just can’t seem to get enough of is drone pilots. It’s not just the US, either; in the UK, they’re so desperate to meet their need for highly skilled cyber-warriors that they recently threw out the physical fitness requirements for those positions. However, as much potential as there is for an unmanned future, the most recent update (PDF) on unmanned systems policy shows that it’s autonomy that really interests the DoD.
The word “drone” does not necessarily have to denote an aerial system. Naval drones are also a major area of interest.
And why not? After all, while drone pilots are far less likely to require long-term medical care than a soldier in the field, paying and feeding troops (not to mention taking care of their pensions) is still one of the most expensive aspects of running a military. Additionally, the precision of computerized war brings the frailty of the human element into sharp relief; Britain recently threw up its hands in frustration when it lost 12 of the 26 British drones deployed in Afghanistan, many due to pilot error.
Additionally, exposés like the Collateral Murder video that brought Wikileaks to prominence have stirred up significant criticism for the program. A computer might not shoot at the wrong time, and if it does it will not need therapy afterward. From a purely utilitarian perspective, why not cut the pilots out altogether, if we can? To this question, the US Department of Defense has no answer.
This report looks up to 25 years into the future, beginning by pointing out that the only true autonomy in the US military today is designed to take over during an emergency like a lost connection to control. At most, an autopilot executes a very limited set of instructions under close supervision — say, to fly in a circle over a particular stretch of Pakistani desert and report any movement. Real autonomy, says DoD, would involve recording, playback, projection, and parsing of data, followed by delivery of “actionable” intelligence to the appropriate overseer. For an autonomous combat robot, direct mention of which is mostly avoided in this document, the requirements would be even stricter.
One of the only mentions of kill-bots is a reference to the DoD’s official kill-bot policy, DoD Directive 3000.09 (PDF). This lays out only a few concrete rules beyond basically requiring them to be rigorously tested, though it does make sure to point out that robots should not start indiscriminately killing civilians upon losing a connection to command. Interestingly, all legal language is phrased in relation to a hypothetical human overseer; it’s the humans who launch the robots that are bound by the treaties and the generally agreed-upon rules of war, not the robots themselves. This is essentially a “guns don’t kill people” sort of idea, but if a gun is incapable of taking responsibility for an action then perhaps the gun should be restricted from taking that action at all.
Robots have been helping clear explosives and blind corners for years — but they still need human drivers.
In the end, this comes down to budget constraints. Under the original rules of sequestration, DoD faced up to $500 billion in cuts over the next 10 years, and even with new reforms it could face cuts of as much as $50 billion in 2014 alone. Still, it’s not sequestration that seems to be driving this push for autonomy, but a more general implication that manpower is the bottleneck in, at this point, too many efficiency reports. This report readily admits that a set of algorithms with human-level versatility is but a pipe dream today, but takes it as a foregone conclusion that there is no way to both increase global dominance and decrease spending without significant cuts to (and replacement of) manpower.
“One of the largest cost drivers in the budget of DoD,” it says, “is manpower… Therefore, of utmost importance for DoD is increased system, sensor, and analytical automation…” Though automated drones will certainly cut away at the need for regular soldiers in the numbers seen today, the primary short-term target of these austerity measures is the drone programs themselves. If unmanned systems are about to become the order of the day, then DoD wants to shrink the teams necessary to direct them — preferably to as near zero as possible.
Bear in mind that when this report says “unmanned systems” it means a whole lot more than just predator drones. In terms of unmanned systems, DoD prioritizes air first, maritime second, and ground last. In 2014, DoD expects about $4 billion will go to unmanned aerial systems, while maritime funding won’t exceed $350 million.
In terms of autonomy, the problems with communication make submerged vehicles a prime candidate — though shrinking the often enormous naval crews needed for modern American vessels would certainly please the budget obsession on display in this report. Ground automation will be largely ignored for several years, and won’t exceed $50 million in funding until 2017 — though much of the grunt work for an autonomous ground unit is being done by private industry.
Though BigDog will be a helpful load bearer, the robot will be far more useful when it can follow a unit without the need for any human attention.
The report makes only a few concrete predictions. From now to 2017, the department will focus on moving its complement of unmanned systems from automated to autonomous. The distinction between an autopilot and a robot pilot is in the ability to make decisions, and that’s one of DoD’s biggest priorities.
Its predictions for 2020 and beyond are slightly more insightful, predicting “smart teams of unmanned systems operating autonomously to conduct operations in contested environments,” among other things. Then there is the idea of autonomous “loyal wingmen” to escort and assist manned elements. This could include autonomous quadcopters to scout ahead of a unit or even hunt targets through indoor areas. From a tireless BigDog pack mule to a literal robot wingman, the DoD wants to make robots the American soldier’s best friend.
In terms of current capabilities, if DoD is conducting any meaningful field tests of truly autonomous vehicles, it’s not talking about it — and since software can be discreetly switched on and off without alerting any outside forces, the department certainly has no reason to start. Automated defense systems have only the most limited ability to act without permission, and then only when the need to act quickly (say, if a supersonic missile is headed for American shores) is extreme.
Everyone from search and rescue teams to Kickstarter entrepreneurs is working on autonomous quadcopter software, but their work likely pales next to what DARPA (the military’s moonshot division) has behind closed doors. Though details are scarce, we do know the agency is hard at work making pilots, drivers, and helmsmen obsolete, having just completed its Grand Challenge in robot AI. Very little such work has seen the light of day, but that’s precisely what DoD wants to change over the coming 3-5 years.
DARPA’s Transformer X concept, currently under development, sees autonomy applied to a troop transport that can both fly and drive.
Potential challenges to this future are many, and they are more than just technological. The United Nations recently advised the world that autonomous war machines are a de facto threat to humanity, and should be banned. Additionally, while responsibility for a single case of human error, negligence, or malevolence ultimately falls on the soldier alone, a computational error that leads to a number of dead civilians could cast doubt on every unit under the control of that software. And, frankly, the algorithms of today simply cannot measure up to the flexibility of a human actor — and while it might seem short-sighted to say so, we should consider the possibility that they never will. At the very least, our justified squeamishness about killer robots could restrict precisely those robot freedoms that would allow autonomy to truly come into its own.
As this report points out, we are at a turning point in the development of autonomous systems. What we have today are highly sophisticated automatic functions that are closely monitored and directed by human beings. Over the next three to four years, DoD plans to redirect a significant portion of its R&D efforts toward phasing out most of those humans, and reducing the role of those that remain. When the first autonomous robots begin publicly taking to the field for test runs, that will be an interesting time indeed for international politics, and yet another (inevitably failed) test of the United Nations’ ability to control its most powerful member states.
So, does austerity breed autonomy? That is certainly the DoD’s view, as seen in this report. It mentions the pinch of the sequester and its roughly $500 billion in cuts more than 60 times in 168 pages, and autonomy is mostly framed in terms of maintaining operational ability in the face of cutbacks, rather than expanding those abilities outright. Still, the budget constraints don’t seem to warrant this much long-term panic, especially since the sequester is (hypothetically) a temporary concern. We must ensure that the quest to hit a budget, or improve efficiency, or even keep American soldiers out of harm’s way, doesn’t make the US military rush heedlessly into a moral and political quagmire with no end in sight.