This post was originally written for an assignment under a different name.
Existing ethical systems are sufficiently robust to address most issues raised by cyber technology that do not involve these kinds of entities. As an example, consider intellectual property, a field undeniably challenged by cyber technologies. Computers enable the swift copying and distribution of information, unmasking intellectual property as non-rivalrous. However, this does not prove the CEIU hypothesis. In reality, information has always been non-rivalrous in some sense and never perfectly analogous to other kinds of property, a difference which society’s conception of copyright has never fully accounted for. Digital computing merely exacerbates this issue. Fundamentally, this fails to demonstrate unique cyber ethics because the endeavor to understand data as property exists within the “gated domain” of intellectual property. Reasoning within this domain must ensure that data as property makes sense regardless of how technology, real or hypothetical, can interface with that data. The field of intellectual property could only evidence unique cyber ethics if cyber technologies fundamentally altered one or more of the axioms upon which the concept of intellectual property is built, for example, if it had been impossible to reproduce information before the advent of computing. Many candidates for demonstrating the CEIU hypothesis fail in this way, such as the impact of computing on accessibility or professional responsibility.
There is a more substantial candidate, adjacent to the convergence between person and tool, which fails by similar but more subtle reasoning. One can argue that there has been a convergence between tools and moral agents in a very limited sense of the term. Autonomous systems can make decisions independently of their designers, and there is an incongruity in how responsibility is assigned for those decisions. If an autonomous system is found to exhibit a racial bias due to its training data, the system may be condemned for a racist action while those who designed the system may be condemned for an irresponsible action. Even if the developers are responsible for the action, they are not responsible in the same way, suggesting that the system is a synthetic moral agent. This may seem novel, but like non-rivalrous property, synthetic moral agents are not actually unique to computing. They can be observed in bureaucracy, a type of system which can make decisions independently of the intentions or values of any single person within it. Cyber technology’s ability to generate synthetic moral agents does not suggest CEIU; rather, it suggests that ethical concepts within the gated domains of tool use and extended responsibility have been made more important by computing and may now need closer inspection.
Cyber technology might someday fundamentally challenge the axioms of one of these gated domains, but there’s a deeper and clearer reason that cyber ethics is unique: it introduces the possibility of synthetic people. A possible counterargument is that the set of entities considered to be people has expanded over the course of human history. However, synthetic people differ categorically. First, synthetic people are created by the minds of other people, a property which usually classifies an object as a tool in the general sense. Second, dehumanized groups in history were always demonstrably people whose rights were willfully denied, whereas machines today are not demonstrably people at all. A class of objects transitioning from non-person to person would be unprecedented in human history. Unlike synthetic moral agents, which challenge concepts regarding tools and responsibility, synthetic people challenge foundational axioms of these domains and others. They destroy the concepts that man is his own end and that what man creates exists for his own use. They break society’s ability to unambiguously distinguish people from non-people. They challenge people’s ability to determine their responsibilities to, and the rights of, other entities. For those who use moral systems predicated on subjective experience, they shatter the ability to determine which entities actually have subjective experiences. The major ethical systems we have today are unprepared to accommodate these novel entities unique to cyber technology.
Synthetic people do not exist. They might never exist. It is not yet known whether cyber technologies could create them. Whether one thinks it likely that cyber technology will create these entities is not relevant to demonstrating that cyber ethics is unique. What matters is that our knowledge of computing today demands we rationally acknowledge the possibility that synthetic people could emerge from the field. Though humanity has imagined creating a being in its own image throughout history, cyber technologies have uniquely made this a real possibility. Before such entities exist, we must be prepared to interact ethically with constructed entities that demonstrate an ability to act toward their own ends and a capacity to rationally understand us as people. The possibility of synthetic people demonstrates that computing represents a unique and unprecedented challenge to ethics, one we must take seriously.