An ethical perspective on autonomous weapon systems

As designers endow more weapon systems with degrees of autonomy and other characteristics intended to equal or even surpass human capacities for discernment, serious ethical concerns persist over the delegation of life-and-death decisions to autonomous weapon systems. Responses to such concerns often emphasize a system's ability or inability to comply with principles of international humanitarian law (distinguishing civilians from military targets while ensuring that harm to the former is in acceptable proportion to the value of the latter), but such analyses regularly fall victim to one of two blind spots: either taking ethical conduct by humans for granted ("humans are ethical, and robots are not") or giving up on human morality altogether ("humans fail to act ethically, so we need ethical robots"). Rather than addressing the specific characteristics of prospective autonomous weapon systems, this chapter examines concepts such as the human-machine analogy in considering what ethical principles should inform an inherently human act.

Related Subject(s): Disarmament