Thursday, June 27, 2024

Review of Strawser’s "Moral Predators"

Historical Introduction

Remote-controlled weapons are not a new technology: the Soviets used “teletanks” – radio-controlled tanks with a range of just under a mile – during the Winter War, and during World War II the Nazis experimented with the “Goliath” tracked mine, which had a similar range. Going back further, in World War I the Germans developed the “Fernlenkboot” – a remote-controlled boat with a range of approximately 12 miles – for use against British ships in the English Channel and off the coast of Flanders.

Three aspects make contemporary unmanned aerial vehicles (UAVs) different from these earlier unmanned weapons: first, the pilot of a UAV can be on the other side of the world from the aircraft itself; second, UAVs are a proven and effective weapon system; third, they were developed and are heavily used by the United States. This last difference is the main reason why UAVs have drawn the attention of just war theorists – notice the lack of outcry over the suicide drone boats used by Iran and the Houthi rebels.

In his 2010 paper “Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles,”1 Bradley Jay Strawser argues that there is nothing wrong in principle with the use of UAVs and that, in fact, their use should be obligatory. He initially frames the “obligatory” part simply as a duty, though he offers a better explanation later in the paper2. To show that UAVs are not wrong in principle, he addresses six objections that can be made to their use.

A General Atomics MQ-1 Predator, armed with AGM-114 Hellfire missiles, piloted by Lt. Col. Scott Miller on a combat mission over southern Afghanistan. (U.S. Air Force Photo / Lt. Col. Leslie Pratt)

Objections Addressed by Strawser

The first objection runs as follows: the adoption of UAVs will lead to fully autonomous weapons, or what Strawser calls IAWs (independent autonomous weapons); IAWs are morally impermissible; therefore, UAV development is impermissible. Strawser raises this issue to limit the scope of his paper strictly to “human-in-the-loop” or “human-on-the-loop” weapon systems and to exclude “human-out-of-the-loop” systems. He rejects the argument because there is no proof that the development of UAVs necessarily leads to IAWs, and because it is possible to continue UAV development and usage while banning IAW development – which he recommends Western nations do3.

The second objection is that UAVs lead to violations of the jus in bello Principle of Discrimination, the idea being that poor-quality or unreliable video feeds would limit the ability of the pilot to discriminate between combatant and non-combatant. Strawser responds that this limitation no longer holds: he cites statistics showing that, in comparison to conventional forces, UAV strikes resulted in far fewer noncombatant deaths relative to combatant deaths. Further, because UAVs can fly at lower altitudes than manned aircraft, more accurate targeting is to be expected. Finally, Strawser notes that the manufacturer of a missile designed for use on UAVs has achieved “urban warfare precision,” and that the missile can be controlled after it has been launched (“fire, observe, and update” as opposed to “fire and forget”).

It mustn’t be forgotten that UAVs are still piloted vehicles; their pilots are simply not aboard them. This can lead, so the third objection goes, to cognitive dissonance on the part of UAV operators. The pilot can “kill the enemy from their ‘desk’ at work and then go home to dinner and their child’s soccer match,” and this somehow places unnecessary psychological stress on the pilot. In addition, since the pilot can come to view his profession as a video game, this would lead either to “weakening the operator’s will to fight” or to frequent violations of the Principle of Discrimination.

Strawser responds to this by noting that since the pilot is in no personal danger, the temptation to commit jus in bello violations is lessened. Also, the pilot can evaluate the target before firing. All UAV action can be recorded and monitored, and this provides an additional layer of accountability: “an entire team of officers and human rights lawyers could oversee every single lethal decision made by a UAV.” Finally, UAV-pilot-specific cognitive dissonance can be lessened by moving the pilot in-theater.

The fourth objection raised is that targeted killings (assassinations) fall outside the bounds of acceptable Just War Theory4, and UAVs somehow make the practice too easy. The issue here, Strawser notes, is one of policy and not of technology. There are three adjacent concerns: sovereignty issues, “ignoble warfare,” and the extent to which UAVs make assassinations easy.

The first corollary is really a legal issue – lawyers can argue that sending UAVs into another country’s airspace is not the same as sending a manned airplane or an agent – and is not relevant to most drone strikes. The third concern (that UAVs make assassinations not only possible but easy) is not a strike against UAVs themselves but rather against their usage.

The second corollary concern – that using UAVs constitutes a form of “ignoble warfare” – is perhaps the most interesting objection one can make. Strawser states this objection in full as follows: “the battle for the ‘hearts and minds’ of local nationals in a given theater is significantly worsened by what they view as ignoble warfare; UAVs are thought to be ‘cowardly.’”

“So be it,” Strawser responds. He should have stopped there, but he goes on to admit that if studies prove that the winning of ‘hearts and minds’ is indeed made more difficult, then UAVs should not be used.

The fifth objection is that UAVs create an unjust asymmetry in combat, in that “one side literally does not take any life-or-death risks whatsoever (or nearly so, since its warfighters are not even present in the primary theater of combat) whereas the opposing side carries all the risk of combat.” His response (subsequent over-analysis aside) is perhaps the strongest part of the paper:

[T]here is no chivalrous reason for a just combatant to ‘equal the playing field’ or ‘fight fair.’ If combatant A fights under a just cause, while combatant B fights for an unjust cause, combatant A owes nothing to combatant B by way of exposing his/herself to some minimal threshold of risk. Thus, it is right for combatant A to reduce the risk in an engagement with the unjust enemy.

The sixth and final objection Strawser addresses is that UAVs lower the jus ad bellum threshold, in that UAVs improve the likelihood of success. This objection fails in the same way that the fifth objection does. There is nothing special here about the use of UAVs: any difference in technology between two enemy nations would be exploited to increase the probability of success of any military action.

Two Further Objections

Two additional objections, not addressed by Strawser, can be made to the use of UAVs.

Objection: UAVs are strictly a rich nation’s weapon5.

Response: This is simply not the case. Ukraine (though certainly not a poor nation, thanks to American taxpayer support) is effectively utilizing off-the-shelf, commercially available drones in its war against Russia. These drones do not cost a fortune; they can be obtained for at most a few hundred dollars each.

Objection: the drone pilot’s location is… problematic.

Response: At some point, the range of American artillery exceeded that of its enemies. Does this mean that the enemy surrenders without a fight? No: the enemy will simply attack from a position so close that using our artillery would be prohibitive because of the likelihood of harming our own troops.

Now consider what an enemy would do when facing off against a UAV system. There are at least four points of attack: the drone itself, the takeoff and landing site(s) of the drone, the communication system controlling the drone, and the drone’s pilot. Just as in the case of the artillery, the enemy will move close to attack. This means that the pilot is a legitimate target. In the case of drones, though, the pilot could be located within the United States. Thus, there is a legitimate target within our own borders. This is not to say that this renders the use of UAVs unethical, but it does raise an issue that policy makers do not seem to consider.

Conclusions

As with any weapons system, the use of UAVs introduces a range of problems. This is exemplified by the 30 September 2011 drone strike that killed the terrorist Anwar al-Awlaki. Al-Awlaki was an American citizen; he was killed while in Yemen; the drone strike was a CIA operation; and the pilot was located at Creech Air Force Base outside Las Vegas, Nevada6.

Strawser separates the legal, political, and technological problems raised by incidents like this from the ethical problems associated with UAVs, and focuses on the latter. He considers the use of UAVs within the framework of Just War Theory and finds that, by themselves, UAVs do not alter the justness or unjustness of either a war or an action taken during that war.


Footnotes

  1. Bradley Strawser, “Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles.”
  2. See Strawser’s response to the fifth objection, below.
  3. Does he also recommend non-Western nations ban the development of IAWs, too?
  4. A person holding a “Realist War Theory” would not subscribe to that position.
  5. This wording is used at the start of Simpson & Müller, “Just War and Robots’ Killings.”
  6. Amy Zegart, Spies, Lies, and Algorithms.

Bibliography

Simpson, T. & Müller, V. “Just War and Robots’ Killings.” The Philosophical Quarterly 66 no. 263, April 2016. https://www.jstor.org/stable/24672810

Strawser, B. “Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles.” Journal of Military Ethics 9 no. 4, 2010. https://doi.org/10.1080/15027570.2010.536403

Zegart, A. Spies, Lies, and Algorithms: The History and Future of American Intelligence. Princeton University Press, 2022.
