Monday, January 7, 2013

Could Human Enhancement Turn Soldiers Into Weapons That Violate International Law? Yes

http://www.theatlantic.com/technology/archive/2013/01/could-human-enhancement-turn-soldiers-into-weapons-that-violate-international-law-yes/266732/      
- Dr. Patrick Lin is the director of the Ethics + Emerging Sciences Group, based in the philosophy department at California Polytechnic State University, San Luis Obispo. He is also lead editor of the book Robot Ethics.


By Patrick Lin
New technologies reveal ambiguities and hidden assumptions in international humanitarian law.
Image credit: Alexis C. Madrigal
Science fiction, or actual U.S. military project? Half a world away from the battlefield, a soldier controls his avatar-robot that does the actual fighting on the ground. Another one wears a sticky fabric that enables her to climb a wall like a gecko or spider would. Returning from a traumatic mission, a pilot takes a memory-erasing drug to help ward off post-traumatic stress disorder. Mimicking the physiology of dolphins and sled dogs, a sailor is able to work his post all week without sleep and with only a few meals.
All of these scenarios are real military projects currently in various stages of research. These are the front lines of the Human Enhancement Revolution -- we now know enough about biology, neuroscience, computing, robotics, and materials to hack the human body, reshaping it in our own image. And defense-related applications are a major driver of science and technology research.
But, as I reported earlier, we also face serious ethical, legal, social, and operational issues in enhancing warfighters. Here, I want to drill down on what the laws of war say about military human enhancements, since other technologies such as robotics and cyberweapons also run into serious problems in this area.
Should enhancement technologies -- which typically do not directly interact with anyone other than the human subject -- nevertheless be subject to a legal weapons review? That is, is there a sense in which enhancements could be considered "weapons" and therefore fall under the authority of certain laws?
In international humanitarian law (IHL), also known as the laws of war, the primary instruments relevant to human enhancements include the Hague Conventions (1899 and 1907), the Geneva Conventions (1949, with Additional Protocols I, II, and III), the Biological and Toxin Weapons Convention (1972), the Chemical Weapons Convention (1993), and other bodies of law. Below, I discuss these agreements and what their implications may be for human enhancement.
1. Would human enhancements count as weapons under the Geneva Conventions?
Let's start with the basic requirement that new weapons must conform to IHL. Article 36 of the Geneva Conventions, Additional Protocol I of 1977, specifies:
In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.
But does Article 36 apply to human enhancement technologies? That is, should they be considered as a "weapon" or "means or method of warfare" in the first place? Unlike other weapons contemplated by IHL, enhancements usually do not directly harm others, so it is not obvious that Article 36 of Additional Protocol I would apply here. If anyone's safety is immediately at risk, it would seem to be that of the individual warfighter -- thereby turning the debate into one about bioethics. To that extent, warfighters, whether enhanced or not, are not weapons as typically understood.
Yet in a broader sense, the warfighter is not only a weapon but perhaps a military's best and oldest weapon. Warfighters carry out missions, they sometimes kill enemies, and they represent one of the largest expenditures or investments of a military. They have cognitive and physical capabilities that no other technology currently has, and this can make them ethical, lethal, and versatile. The human fighter, engaged in hand-to-hand combat, would be the last remaining weapon when all others have been exhausted. So in this basic sense, the warfighter is undeniably a weapon or instrument of war.
Still, should Article 36 be interpreted to include warfighters themselves as weapons subject to regulation? There could be several reasons to think so. First, other organisms are plausibly weapons subject to an Article 36 review. Throughout history, humans have employed animals in the service of war, such as dogs, elephants, pigeons, sea lions, dolphins, and possibly rhinoceroses. Dogs, the most commonly used animal, undergo rigorous training, validation, and inspections. If a military were to field a weaponized rhino in an urban battlefield that contains innocent civilians, we would be reasonably worried that the war-rhino does not comply with Article 36: rhinos cannot reliably discriminate friend from foe, so a rhino might target and charge a noncombatant child in violation of the principle of distinction. A similar charge would apply to autonomous robots in any general environment in which distinction is important, as opposed to a "kill box" or an area of such fierce fighting that all noncombatants can be presumed to have fled. Are weaponized humans any different, legally speaking?
If autonomous robots are clearly regulatable weapons, then consider the spectrum of cyborgs -- part-human, part-machine -- that exists between robots and unenhanced humans. Replacing one body part, say a human knee, with a robotic part starts us on the cybernetic path. And as other body parts are replaced, the organism becomes less human and more robotic. Finally, after (hypothetically) replacing every body part, including the brain, the organism is entirely robotic with no trace of the original human. If we want to say that robots are weapons but humans are not, then we would be challenged to identify the point on that spectrum at which the human becomes a robot or a weapon.
The inability to draw such a line may not be a fatal blow to the claim that humans should be treated as weapons; after all, we cannot draw a precise line at which a man who is losing his hair becomes "bald", yet there's clearly a difference between a bald man and one who has a head full of hair. But a simpler solution may be to say that humans are weapons, especially given the reasons offered previously.
As it applies to military enhancements, integrated robotics may be one form of enhancement, but we can also consider scenarios involving biomedical enhancements such as pharmaceuticals and genetic engineering. Again, on one end of the spectrum would stand a normal, unenhanced human. One step along the path to being fully enhanced may be a warfighter who drinks coffee or pops amphetamines ("go pills" in military-speak) as a cognitive stimulant or enhancer. Another step may be taking drugs that increase strength, erase fear, or eliminate the need for sleep. At the far, more radical end may be a warfighter so enhanced that s/he no longer resembles a human being, such as a creature with four muscular arms, fangs, fur, and other animal-like features. If a war-rhino should be subject to Article 36, then so, it would seem, should this radically enhanced human animal. And to avoid the difficult question of drawing the line at which the enhanced human becomes a weapon, a more intuitive position would be that the human animal is a weapon all along, at every point in the spectrum, especially given the previous reasons that are independent of this demarcation problem.
If we agree that enhanced human warfighters could be properly weapons subject to Article 36, what are the implications? Historically, new weapons and tactics needed to conform to at least the following: (1) principle of distinction, (2) principle of proportionality, and (3) prohibition on superfluous injury or unnecessary suffering, often abbreviated as SIrUS.
To explain, first, the principle of distinction demands that a weapon must be discriminating enough to target only combatants and never noncombatants. Biological weapons and most anti-personnel landmines, then, are indiscriminate and therefore illegal in that they cannot distinguish whether they are about to infect or blow up a small child versus an enemy combatant. Unintended killings of noncombatants -- or "collateral damage" -- may be permissible, but not their deliberate targeting; but to the extent that biological weapons today target anyone, they also target everyone. (If they don't target anyone in particular but still kill people, then immediately they would seem to be indiscriminate.) However, future biological weapons, e.g., a virus that attacks only blue-eyed people or a certain DNA signature, may be discriminate and therefore would not violate this principle (but could violate others).
Second, the principle of proportionality demands that the use of a weapon be proportional to the military objective, so as to keep civilian casualties to a minimum. For instance, dropping a nuclear bomb to kill a hidden sniper would be a disproportionate use of force, since other, less drastic methods could have been used.
Third, the SIrUS principle is related to proportionality in that it requires methods of attack to be minimally harmful in rendering a warfighter hors de combat, or unable to fight. This prohibition has led to the ban of such weapons as poison, exploding bullets, and blinding lasers, which cause more injury or suffering than needed to neutralize a combatant.
However implausible, we can imagine a human enhancement that violates these and other provisions -- for instance, a hypothetical "berserker" drug would likely be illegal if it causes the warfighter to be inhumanely vicious, aggressive, and indiscriminate in his attacks, potentially killing children. (For the moment, we will put aside enhancements that are directed at adversaries, such as a mood-enhancing gas to pacify a riotous crowd and a truth-enhancing serum used in interrogations; the former would be prohibited outright by the Chemical Weapons Convention in warfare, partly because it is indiscriminate, and the latter may be prohibited by laws against torturing and mistreating prisoners of war.) The point here is that it's theoretically possible, even if unlikely, for a human enhancement to be in clear violation of IHL.
But let us assume that the human enhancement technologies generally conform to these basic principles. (If they do not, then there's already strong prima facie reason to reject those technologies as unlawful under IHL; those are the easy cases that do not need to be examined here.) Given this assumption, are there other, less-obvious international laws that could prohibit military enhancements? Let's examine a few more possible areas of concern:
2. Would human enhancement count as a biological weapon under the Biological and Toxin Weapons Convention?
First, the above discussion on whether enhancements are weapons is relevant not only to Article 36 of Additional Protocol I but also arguably to the Biological and Toxin Weapons Convention (BTWC). The first article of the BTWC states that:
Each State Party to this Convention undertakes never in any circumstances to develop, produce, stockpile or otherwise acquire or retain: (1) microbial or other biological agents, or toxins whatever their origin or method of production, of types and in quantities that have no justification for prophylactic, protective or other peaceful purposes; (2) weapons, equipment or means of delivery designed to use such agents or toxins for hostile purposes or in armed conflict.
Whether or not they are properly weapons, are military human enhancements "biological agents" in any reasonable sense? The BTWC is silent on this question, though it does anticipate unforeseen developments in genetic engineering, biotechnology, synthetic biology, and other scientific fields. The usual assumption is that these "agents" are limited both to substances that are roughly microbial in size and to biological substances directed at adversaries, not at the enhancement of one's own military personnel. This assumption, unfortunately, is not made explicit in the BTWC, which does not define what a biological agent is. As a result, it is still an open question whether the BTWC applies to human enhancement technologies.
To answer this open question, let's try to better understand what a "biological agent" is. This seems to mean an agent that is biological in nature (e.g., anthrax bacteria), as opposed to purely chemical (e.g., chlorine gas) or physical (e.g., a falling object); and an agent is a substance or actor employed for some effect or purpose (e.g., LSD is a psychotropic agent). In a broader but consistent sense, agents can be persons too (e.g., a government spy is a "secret agent"). If so, then enhanced warfighters can be agents. Even if we reject this understanding and stipulate that biological agents must be nonperson substances -- an interpretation that is not explicit in the BTWC -- we can still consider the enhancement technology itself as an agent, apart from the warfighter it enhances.
Again, insofar as the BTWC does not specify that biological agents must be of the kind that directly harms adversaries, then some human enhancements -- such as anabolic steroids for increased strength -- would seem to count as biological agents: they are substances employed for some effect and are biological in nature. They would serve "hostile purposes" in that they create a warfighter more capable of defeating adversaries and fulfilling military missions; so these enhancements would at least indirectly harm others.
With respect to scale, it is difficult to see why size would matter for the BTWC, which again is not explicit on the issue. If we understand the BTWC to be interested in only microbial-sized agents -- and returning to the position that humans can be agents -- then consider a hypothetical process that can shrink a human soldier to the size of bacteria, such as in the theatrical film Fantastic Voyage: If size matters, then the BTWC would seek to regulate the microscopic soldier, but not the full-sized soldier who has the exact same capabilities. Why the difference in concern here? It may be that the microscopic soldier can be stealthier, infiltrate more places, and so on, but none of these concerns is cited in the BTWC as a motivating reason for regulation.
Related to enhancements, the BTWC arguably would have something to say about bioengineered insects and animals, for instance, that are used as weapons. Like pathogens, insects and most animals do not obey human orders and would therefore be unpredictable and indiscriminate as weapons -- and tiny attack-insects do not seem significantly different in kind from microscopic organisms also designed for attack. One possible difference is that microorganisms typically harm us from the inside out, and somehow this could be less humane and more frightening than an attack that bites our bodies from the outside in. Yet we can also envision bioengineered animals that operate from the inside out too, as tapeworms and mosquitoes do (or at least the diseases they transmit into our bloodstreams). So if it's not unreasonable to think that bioengineered insects would be subject to the BTWC, then size does not matter for the BTWC, or at least its interest is not limited to microscopic organisms.
As for other qualifiers in the BTWC, some enhancements could be noncompliant in that they have no "prophylactic, protective or other peaceful purposes." A hypothetical berserker drug could be an example: its only obvious function is to make a person a fiercer, rampaging combatant. This is to say that, under some plausible understanding of the BTWC, at least some possible warfighter enhancements could count as "biological agents" and therefore subject to the BTWC. If the BTWC intends or ought to rule out enhancements under its purview, then its language needs to be made more explicit.
3. Could human enhancements violate international humanitarian law because they are "repugnant to the conscience of mankind"?
Contributing to the above problem with the BTWC -- i.e., what counts as a "biological agent" -- is also a lack of specificity on the motivating reasons for the BTWC in the first place. That is, the convention is unclear on why we should want to prohibit biological and toxin weapons. But there are some clues. In the preamble to the BTWC, state parties to the convention declare they are:
Convinced of the importance and urgency of eliminating from the arsenals of States, through effective measures, such dangerous weapons of mass destruction as those using chemical or bacteriological (biological) agents, ...
Convinced that such use would be repugnant to the conscience of mankind and that no effort should be spared to minimize this risk,
That is, biological agents, such as highly infectious bacteria or viruses, are difficult to control in their propagation and therefore are indiscriminate to use as a weapon. Anthrax spores, for instance, may be carried by the wind and can infect a child or entire populations just as easily and likely as a soldier. This would be a clear violation of the principle of distinction in IHL.
If this were the only motivating reason for the BTWC, then perhaps we can conclude that human enhancements are not the biological agents that the convention intends to address; enhancements generally are not infectious or "weapons of mass destruction." But this cannot be the only reason. In its categorical prohibition of biological and toxin weapons, the BTWC does not distinguish between infectious and noninfectious ones. For instance, a poison dart that can be used only once in a precisely targeted attack would still be banned, even though it is not a weapon of mass destruction, given that it is a toxin and especially if there were no "prophylactic, protective or other peaceful purposes" for the poison.
To explain why the prohibition is categorical, we can examine the next clue, that the BTWC is motivated by "the conscience" of humanity. That is, some methods of killing are more insidious and repugnant than others. Biological and toxin weapons, then, are of special concern, because they are usually silent, invisible, and indiscriminate ways of killing people -- often with horrific, painful medical symptoms over the course of several days or weeks.
But is any of this relevant to human enhancements? Again, enhancements usually do not directly harm others, much less kill people in "repugnant" ways. Even if we say that enhancements indirectly harm others, they do not typically do so in ways more repugnant than conventional means, since an enhanced warfighter is still bound by IHL to never use certain weapons and tactics against adversaries.
Like the "weapons of mass destruction" clue, that a biological agent is "repugnant to the conscience of mankind" also does not seem to be a necessary requirement for prohibition, just a sufficient one. Consider that some poisons or pathogens may kill quickly and painlessly, such as those administered in death-penalty executions: they seem to be much more humane than conventional means, such as shooting bullets and dropping bombs that render an adversary hors de combat through massive, bloody injury to human bodies and brains. Nevertheless, these "clean" poisons are prohibited by the BTWC and elsewhere, such as the Hague Conventions. So, even if human enhancements are not repugnant in the same ways that anthrax or arsenic may be, and even if they are not weapons of mass destruction, they could still fall under the authority of the BTWC, again since the convention is not explicit on its motivating reasons.
In any event, enhancements could be repugnant in different ways. We previously mentioned the possibility of creating a "berserker" drug, as well as a warfighter so enhanced that s/he no longer resembles a human being, such as a creature with four muscular arms, fangs, fur, and other animal-like features. If this sounds far-fetched, we need only look at the history of warfare to see that intimidation is a usual part of it. From fierce Viking helmets, to samurai armor designed to resemble demons, to tigers and sharks painted onto warplanes, to ominous names for drones (e.g., "Predator" and "Reaper"), scaring adversaries can demoralize them and make them easier to defeat. This suggests that it may be neither irrational nor inconsistent with customary practices to design enhancements to be inhuman and therefore perhaps inhumane.
Further, biomedical research is presently ongoing with "chimeras", or animals composed of genes or cells from other organisms that are not involved in the reproduction of those animals. These may include animals created with human genes, for instance, in order to grow transplantable organs in vivo and for research to find medical cures. Manipulation of human embryos, too, can lead to human-animal chimeras, a possibility that has caused so much ethical concern and debate that U.S. legislation -- the Human Chimera Prohibition Act of 2005 -- was proposed to prohibit this line of research, calling it an affront to human dignity as well as an existential threat.
Not all enhancements, of course, are as fanciful as a human-chimeric warrior or a berserker mode, nor am I suggesting that any military has plans to do anything that extreme. Most, if not all, enhancements will likely not be as obviously inhuman. Nonetheless, the "conscience of mankind" is sometimes deeply fragmented, especially on ethical issues. So what is unobjectionable to one person or culture may be obviously objectionable to another.
Something as ordinary as, say, a bionic limb or exoskeleton could be viewed as unethical by cultures that reject technology or such manipulation of the human body. This is not to say that ethics is subjective and we can never resolve this debate, but only that the ethics of military enhancements -- at least with respect to the prohibition against inhumane weapons -- requires specific details about the enhancement and its use, as well as the sensibilities of the adversary and international community. That is, we cannot generalize that all military enhancements either comply or do not comply with this prohibition.
Beyond the BTWC, inhumanity as a prohibitory reason is a common theme that underlies IHL. In the preamble to the first Hague Convention in 1899:
Until a more complete code of the laws of war is issued, the High Contracting Parties think it right to declare that in cases not included in the Regulations adopted by them, populations and belligerents remain under the protection and empire of the principles of international law, as they result from the usages established between civilized nations, from the laws of humanity and the requirements of public conscience.
Known as "the Martens Clause", this basic principle is found throughout the laws of armed conflict, such as the Geneva Conventions and their Additional Protocols, as well as in opinions issued by the International Court of Justice. As one would expect, much debate has occurred on what the "laws of humanity" and "requirements of public conscience" are, especially related to the actual or even threatened use of nuclear weapons. And the same debate could be applied to emerging technologies, most notably in a recent report by Human Rights Watch on attack drones.
I won't engage that lengthy and unresolved debate here, except to note that a prohibition against inhumane weapons and methods is a fundamental principle, sometimes explicit and sometimes implied, that underwrites the laws of war and is therefore relevant to an ethics assessment of military enhancements. This is also to say that an ethics assessment of new weapons, such as military enhancements, seems to be legally required by IHL, at least in the context of the Martens Clause if not also Article 36 of the Geneva Conventions, Additional Protocol I.
4. How will human enhancement redefine the ethical limits on how combatants may be treated?
The concept of inhumanity is important to clarify, not just for the legal evaluation of weapons but also for the ethical limits on how combatants may be treated. The prohibition on torture, for instance, presumes certain facts about the human condition, such as the kinds of treatment that cause pain, how much pain a person can withstand, how much sleep a person needs, and so on. For instance, if our tolerance for pain could be dramatically elevated, then what counts as torture today may no longer be so -- and therefore such behavior may become morally permissible.
More generally, ethics itself also presumes a similar set of facts about the human condition, such as that we are fairly susceptible to being killed. These facts inform our ethics, for instance, when self-sacrifice is permitted or prohibited and, again, what kinds of action toward others are unethical. If we change these presumed facts about human bodies and minds, then ethical prohibitions and permissions may also be affected. This gives us reason to believe that an ethical code of behavior for robots could very well be different from how humans ought to behave; for instance, robots -- to the extent that they have no instinct for self-preservation, cannot feel pain, etc. -- may be permitted to sacrifice themselves in more trivial scenarios than human ethics might allow.
At the beginning of this section, I suggested that there is a continuum from a fully human animal to a cybernetic organism to a fully robotic machine. This spectrum is perhaps defined by how many human body parts we replace with mechanical ones, ranging from zero to all. Enhanced warfighters, then, could fall somewhere in the middle of this continuum. If "robot ethics" is different from human ethics, at least where relevant facts about humans and robots differ, then it seems that "cyborg ethics" too would diverge from human ethics where there's a relevant difference in construction and abilities between cyborgs and humans. Though not all enhanced persons are cyborgs, e.g., if the enhancements are genetic, pharmacological, or otherwise not robotic, we can also reasonably conclude that ethics for enhanced persons generally may be different from standard human ethics.
So it becomes an interesting question of whether it would still be illegal or inhumane to whip a prisoner of war, or deprive him of food or sleep, if the individual can better withstand a whipping or does not have the same food or sleep requirements that normal people typically do. These actions possibly would not cause pain or suffering, or at least as much of it, to the enhanced subject; therefore, it would be difficult to count those actions as torture.
Beyond prisoners of war, questions about inhumane treatment could be directed at how we treat our own enhanced warfighters. For instance, drill sergeants may be tempted to push an enhanced soldier harder than those without augmented strength and endurance, and perhaps reasonably so. But where there are prohibitions on what military trainers are permitted to do, we may need to reevaluate those rules where an enhancement changes the presuppositions about human limits that motivated those rules in the first place.
Conclusion
The above discussion certainly does not exhaust all the legal issues that will arise from military human enhancements. In our new report, funded by The Greenwall Foundation and co-written with Maxwell Mehlman (Case Western Reserve University) and Keith Abney (California Polytechnic State University), we launch an investigation into these and other issues in order to identify problems that policymakers and society may need to confront.
Beyond IHL, we also examine in the report US domestic law, military policy, bioethics, and risk assessments; we then offer a new framework for evaluating human enhancement technologies in a military context. As an initial model, we also discuss further considerations -- related to virtues and emotions, as well as broader social impacts -- that can be integrated into this evaluative framework. (This essay is adapted from that report.)
Given the significant lag time between ethics and technology, it is imperative to start considering their impacts before technologies fully arrive on the scene and in the theater of war. Consider, for instance, the explosion in the number of robots in war: in its invasion of Iraq, the US had zero ground robots in 2003 and suddenly about 12,000 in 2008, and its inventory of aerial robots multiplied 40-fold between 2002 and 2010. This report, therefore, is intended to anticipate ethical, legal, and policy surprises from new technologies -- surprises that, in the case of military drones, have already led to international outcry, as well as harm to reputations and real lives. With emerging human enhancements, we can think first before we act.
