While the future of humankind may well be artificial intelligence, what exactly is the future of artificial intelligence? This is the central ethical dilemma of the field. These systems are autonomous and self-learning, and there is no question that artificial intelligence will lead to many positive outcomes. The flip side, however, is the challenge of ensuring that machines driven by artificial intelligence will “behave” ethically towards living things.
The classic example of an ethical challenge arising from artificial intelligence involves the self-driving car. How do we program the car when a crash into a crowd of people is imminent? Do we program it to protect the passengers or the crowd? What if there is only one bystander in danger while the car holds four passengers? What about a mother dog and her seven puppies in the road? Each of these scenarios, and a plethora of others, presents an ethical dilemma.
Who is liable?
There is also an enormous question of liability. If the car’s “decision” leads to the harm or death of innocent bystanders or pups, who is liable?
An artificial intelligence system fueled by data can act in unpredictable ways, yet predictability is essential to legal systems. With a self-learning system that acts autonomously and independently, the liability question looms large.
One response to this dilemma would be to halt the development of artificial intelligence altogether, which might be compared to halting the industrial revolution because of the labor abuses it brought with it. The more realistic approach is to accept that advances in technology are inevitable, and generally positive, and for humankind to create a set of rules that addresses these ethical dilemmas.
An artificial intelligence system’s decisions rest on the work of many parties, such as designers, developers, financiers, and users. This distributed agency dictates that responsibility and liability are also distributed. The issue is that traditional ethical systems address human behavior on an individual basis.
Only recently has the law established constructs to address liability involving multiple responsible parties. These include contractual, tort, and strict liability models as well as no-fault models, all of which hold every party responsible.
Current Ethical Dilemma Issues
Still another ethical dilemma lies in the fact that “things” driven by artificial intelligence will become so invisibly ubiquitous that we may be in danger of losing human self-determination. Take the case of Cambridge Analytica, an example “of the potential of AI to capture users’ preferences and characteristics and hence shape their goals and nudge their behavior to an extent that may undermine their self-determination,” according to an article by Mariarosaria Taddeo and Luciano Floridi.
Artificial intelligence applications are becoming pervasive very quickly. These applications lower costs, reduce risks, increase consistency and reliability, and enable new solutions to complex problems. However, this era in the development of artificial intelligence still has obstacles to surmount. Take the case of COMPAS, a system intended to help courts assess the likelihood of recidivism among defendants. The decisions that COMPAS made turned out to be biased against African-American men.
To help overcome these and other ethical dilemmas, nations are developing strategies that allow for the healthy development of artificial intelligence that is benevolent to humans and animals alike.
Working Together for Better AI
The goal of AI4People, launched by the European Parliament, “is to create a common public space for laying out the founding principles, policies, and practices on which to build a ‘good AI society,’” according to its website.
The IEEE project, Ethically Aligned Design, v2, aims to “advance a public discussion, Standards and Policy about how we can establish ethical and social implementations for intelligent and autonomous systems and technologies, aligning them to moral values and ethical principles that prioritize human well-being,” according to its website.
The European Union is developing a strategy for artificial intelligence, called the EU Declaration of Cooperation on Artificial Intelligence. This initiative has several goals, including “Ensuring an adequate legal and ethical framework, building on EU fundamental rights and values, including privacy and protection of personal data, as well as principles such as transparency and accountability,” according to its website. Most of the goals focus on increasing the technology’s capabilities and the human good its applications can bring.
Many other nations and organizations are likewise developing ethical standards as artificial intelligence technology, and its contributions to society, continue to expand. At this point, the United States is not among them.
Artificial intelligence holds incredibly powerful potential for the advancement of humankind. It is the next step in our intellectual evolution. However, we hold a profound responsibility to the future of humankind. We must get this right the first time around and make every effort to answer these questions of ethics before we go much further.