This post was originally written for an assignment under a different name.
It should not be surprising that the rise of digital computing has sparked conversations about the ethics of autonomous systems. Autonomous systems have crept into many facets of our society, often improving our quality of life, but also holding unchecked power over us. But the relationship between ethics and autonomous systems is not unidirectional: automation affects ethics even as ethics inform our social understanding of automation.
Autonomous systems are our tools, but they differ in a deep way from the tools of our past. One might say it is trivial to understand the ethical reality of a hammer. A hammer is wielded. Whether it is used to build or to destroy, fundamentally it is used by a moral agent. One does not praise the hammer for its creations nor condemn it for its part in destruction. Instead, we praise or criticize the hammer, and perhaps its crafter, for its capacity to serve its intended purpose. The hammer is ethically trivial in the sense that it neither takes on nor approximates moral agency, and the true moral agent wielding it for good or ill is almost always clear. This ethical understanding of our tools does not clearly apply to our newer autonomous tools.
Autonomous systems can at least approximate moral actions, if not truly take them on, and these moral questions are not purely hypothetical. We rely on the moral (or apparently moral) behaviors of autonomous systems regularly in the modern world for our privacy, finances, and safety. And as we prepare to delegate more of our moral decision-making to our machines, allowing them to drive our cars and diagnose our illnesses, these issues become even more pressing.
Some issues in this space have straightforward answers most of us can endorse, particularly in cases involving explicitly bad actors who disregard the rights or well-being of the people their systems affect. But it may not always be fair or rational to assign the moral responsibility, good or bad, for the actions of an autonomous system to those who created it. These moral issues have no single, future-proof solution, but they demonstrate that our ethical understanding of the world is mutable, and that our tools can change not just in form or utility, but in moral meaning.