Ethics of Automated Vehicles by Abdullah Metin ÖZTÜRK 040090522 04.19.2015

 

The last fifty years or so have seen incredibly rapid development in computer technology. The rise of computers and the internet has pushed the computer industry to expand into other sectors, in this case the automotive industry. As stated in the Driverless Car article, "An autonomous vehicle is fundamentally defined as a passenger vehicle that drives by itself. An autonomous vehicle is also referred to as an autopilot, driverless car, auto-drive car, or automated guided vehicle (AGV)" (as cited in Forrest & Konca, 2007, p. 4). These cars will make their own decisions in order to drive themselves, and the way they do so is through computer systems.

An autonomous car needs two main things to be able to drive itself: a hardware system to gather environmental data and a software system to interpret that data. According to Guizzo (2011), Google's car works roughly like this:

Urmson, who is the tech lead for the project, said that the "heart of our system" is a laser range finder mounted on the roof of the car. The device, a Velodyne 64-beam laser, generates a detailed 3D map of the environment. The car then combines the laser measurements with high-resolution maps of the world, producing different types of data models that allow it to drive itself while avoiding obstacles and respecting traffic laws. (p. 2)
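The pipeline Urmson describes, fusing laser measurements with high-resolution prior maps to detect what must be avoided, can be illustrated with a deliberately simplified sketch. All names below are hypothetical illustrations of the idea, not Google's actual code: points the prior map marks as free but the sensor sees as occupied are treated as new obstacles.

```python
# Hypothetical sketch: fuse a LIDAR scan with a prior occupancy map to
# flag unexpected (dynamic) obstacles, as in the pipeline quoted above.

def grid_cell(point, resolution=0.5):
    """Snap an (x, y) point to a coarse grid cell key."""
    x, y = point
    return (round(x / resolution), round(y / resolution))

def fuse_scan_with_map(scan_points, prior_map, threshold=0.5):
    """Return scan points the prior map does not explain, i.e. candidate
    obstacles the planner must avoid. prior_map maps cells to prior
    occupancy probability (0.0 = believed free)."""
    obstacles = []
    for point in scan_points:
        expected = prior_map.get(grid_cell(point), 0.0)
        if expected < threshold:      # map says free, sensor says occupied
            obstacles.append(point)   # treat as a new obstacle
    return obstacles
```

A real system would of course use probabilistic filtering over many beams and frames; the sketch only shows the map-versus-measurement comparison at the heart of the quoted description.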

The laser subsystem of the hardware is surely the most important part and arguably the most difficult to get right. These lasers must be accurate and agile for the car to drive safely. Obtaining exact calibration for lasers with several concurrent beams is exhausting and far more difficult than doing it for single-beam lasers (Levinson et al., 2011). Currently, the hardware side can be considered sufficient for the technology. However, there are problems on the software side. Since these cars will use artificial intelligence for decision making, programmers have to develop reliable AI that is also remarkably fast. There are partial solutions for these software problems; for example, "... a safety driver is always present behind the wheel, taking over whenever there is a software issue or unexpected event" (Levinson et al., 2011).
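The safety-driver fallback just quoted amounts to a watchdog pattern: if the autonomy software stops responding within its deadline, control is handed back to the human. A minimal sketch, with all names hypothetical:

```python
import time

class AutonomyWatchdog:
    """Hypothetical sketch of a safety-driver fallback: hand control to
    the human if the self-driving software misses its heartbeat deadline."""

    def __init__(self, timeout_s=0.2):
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()
        self.human_in_control = False

    def heartbeat(self):
        """Called by the autonomy stack on every healthy control cycle."""
        self.last_heartbeat = time.monotonic()

    def check(self):
        """Called periodically; returns True if the human must take over."""
        if time.monotonic() - self.last_heartbeat > self.timeout_s:
            self.human_in_control = True  # software issue: disengage autopilot
        return self.human_in_control
```

In a deployed vehicle the takeover would also involve alerting the driver and degrading gracefully (e.g., slowing down), which this sketch omits.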

The idea behind this technology is to reduce the number of accidents and, once the technology is perfected, to prevent all of them. However, the programmers who create this technology are human, and humans make mistakes. The Association for Safe International Road Travel claims in its Annual U.S. Road Crash Statistics that 37,000 people die and around 2 million people are injured each year (as cited in Jackson, 2015). Until autonomous car technology is perfected, there will still be many accidents, and the ethical aspect of this topic should be discussed. Who should be held accountable or responsible: the manufacturer, the state, the owner of the car, or the autonomous system itself? If the driving system of the car is flawed, the answer is of course that the manufacturer should be held accountable. On the other hand, if the road the car is driven on is defective, or the traffic light that should warn the car is broken, then the state that constructed the road or installed the traffic light should be held accountable. According to Hevelke & Nida-Rümelin (2014), a different approach is to blame the buyer of the autonomous car for potential crashes. One way to do this could involve giving the 'driver' of such a car a task: watching the road and traffic, and intervening when a risk occurs (pp. 5-6). Then, if the 'driver' cannot intervene in time (which is unlikely), the owner of the car should be held responsible. Another interesting point is that the system itself could be held responsible. One day a robot may reach a stage of advancement at which its engineers and programmers are no longer liable for its actions, just as parents are generally no longer liable for their children's actions once the children are mature enough (Asaro, 2006). Holding a computer system responsible might sound ridiculous, but we are not far from that level of advancement.

In case of an accident, the question of how responsibility should be handled brings us to the decision-making mechanism of these cars. If everything goes according to plan, there will be no accidents at all; however, when the unexpected occurs, these self-driving cars may have to choose what to crash into. Therefore, there must be a decision-making mechanism that follows human ethics in some sense. According to Asaro (2006):

There are at least three distinct things we might think of as being the focus of "ethics in robotics." First, we might think about how humans might act ethically through, or with, robots. In this case, it is humans who are the ethical agents. Further, we might think practically about how to design robots to act ethically, or theoretically about whether robots could be truly ethical agents. Here robots are the ethical subjects in question. Finally, there are several ways to construe the ethical relationships between humans and robots: Is it ethical to create artificial moral agents? Is it unethical not to provide sophisticated robots with ethical reasoning capabilities? Is it ethical to create robotic soldiers, or police officers, or nurses? How should robots treat people, and how should people treat robots? Should robots have rights? (p. 10)

The utilitarian approach simply suggests that the car should save as many people as possible. Nonetheless, Lin (2013) asks: who is worth rescuing, an adult or a child? What if the numbers differ, say two adults compared to one child? Humans do not want to consider these distressing and tough options; however, programmers may have to (para. 16). Deontological ethics, on the other hand, approaches the topic differently: we have a duty to protect our children; therefore, in Lin's example, the car should save the child.
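The utilitarian mechanism described above can be made concrete with a crude, purely illustrative sketch: pick the crash option with the lowest weighted harm. The function and its weighting parameter are hypothetical; the point is that the adult-versus-child question from Lin's dilemma becomes an explicit, contestable parameter rather than a hidden assumption in the code.

```python
# Hypothetical sketch of a utilitarian crash-option chooser.
# options: list of (label, adults_harmed, children_harmed) tuples.

def choose_crash_option(options, child_weight=1.0):
    """Return the label of the option with the lowest weighted harm.
    child_weight > 1.0 encodes a (deontology-flavored) duty to protect
    children; 1.0 is the plain head-count utilitarian position."""
    def harm(option):
        _, adults, children = option
        return adults + child_weight * children
    return min(options, key=harm)[0]
```

With the default weight the car swerves toward the smaller head count; raising `child_weight` flips the choice in Lin's two-adults-versus-one-child case, showing how the ethical theory is literally a tunable parameter.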

Being responsible or accountable for an accident is one thing, but what about legal liability for these accidents? In this case it surely does not fall on the autonomous system itself. If the hardware of the car is not working properly, the manufacturer should be held liable for the accident. However, since courts have not applied the manufacturing-defect concept to software, because there is no tangible product, manufacturing-defect allegations in the self-driving-car context are quite intricate (Gurney, 2013). If the cause of the accident is software related, the owners of these cars can sue the manufacturer, but in this situation, "A traditional manufacturing defect claim will not help plaintiffs with algorithm defects because of the malfunction doctrine's limitations; so, plaintiffs will likely assert design defects" (Gurney, 2013, p. 260). On the other hand, the user of the car can be held liable. Hevelke & Nida-Rümelin (2014) state that:

The liability of the driver in the case of an accident would be based on his failure to pay attention and intervene. Autonomous vehicles would thereby lose much of their utility. It would not be possible to send the vehicle off to look for a parking place by itself or call for it when needed. One would not be able to send children to school with it, use it to get safely back home when drunk or take a nap while traveling. However, these matters are not of immediate ethical relevance. (p. 6)

If the users of these cars are going to be held liable, then a solution for this suggested by Hevelke & Nida-Rümelin (2014):

In the case of a responsibility of the driver as a form of a ‘‘strict liability’’ ... It is justifiable to hold users of autonomous cars collectively responsible for any damage caused by such vehicles–even if they had no way of influencing the cars behaviour. However, this responsibility should not exceed a responsibility for the general risk taken by using the vehicle. A tax or a mandatory insurance seems the easiest and most practical means to achieve that. (p. 11)

Under these circumstances, governments should amend their laws to address the self-driving car concept. The responsibility, accountability, and liability questions surrounding autonomous vehicles are not yet settled; they need to be researched by professionals, and decisions have to be made.

The heavy use of automobiles in our daily lives leads to the next topic: what happens when all of those automobiles are replaced by autonomous cars? Lin (2013) asks to what degree autonomous vehicles are vulnerable to hacking; practically every piece of technology mankind has produced has been hacked. If the government and the holder (e.g., a rental car company) can manipulate the car from a distance, the system provides a simple way in for cyber-carjackers (para. 25). Even if the hacking of these cars can be dealt with, there are still problems with giving the government or a rental car company the ability to intervene with these cars remotely. According to Gough, "In moral and political philosophy, the social contract ... is a theory or a model ... that typically addresses the questions of the origin of society and the legitimacy of the authority of the state over the individual" (as cited in "Social Contract," n.d.). Giving some authority the capability to manipulate all the cars can therefore cause problems. Furthermore, computer systems are known for keeping log files. Since these cars use computer systems to drive themselves, there are further problems, such as personal privacy. What should be kept in the log file? If the system keeps too much information, a third party may reach this information and violate the privacy of the car's user. Discarding the data as soon as possible is the safest thing to do to ensure the user's privacy (van de Voort, Pieters & Consoli, 2015). Moreover, according to Jackson (2015), another privacy problem is:

...if autonomous cars do become mainstream, will personal privacy be traded for convenience? In the scenario where self-driving cars have become predominant modes of transportation, all movement in the car may be tracked for insurance and legal purposes (Lin, 2013). This means, that when and where a person goes would be subject for review by insurance companies and law enforcement. For some people, this is a violation of personal privacy, and thus presents an ethical dilemma. (p. 35)
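The "discard data as soon as possible" principle cited above is, in engineering terms, a data-retention policy. A minimal sketch of such a policy, with all names hypothetical, keeps location records only within a short retention window so that there is little for insurers, law enforcement, or attackers to review:

```python
from collections import deque

class TripLog:
    """Hypothetical privacy-aware trip log: location records are kept
    only for a short retention window, then discarded."""

    def __init__(self, retention_s=60.0):
        self.retention_s = retention_s
        self.records = deque()  # (timestamp, location) pairs, oldest first

    def record(self, location, now):
        """Store a location at time `now` (seconds) and purge old data."""
        self.records.append((now, location))
        self._purge(now)

    def _purge(self, now):
        # Drop anything older than the retention window.
        while self.records and now - self.records[0][0] > self.retention_s:
            self.records.popleft()
```

The design choice here is that privacy is enforced structurally: data older than the window simply does not exist to be subpoenaed or stolen, which is stronger than an access-control policy on a permanent log.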

Although autonomous cars would reduce the number of accidents, there are plenty of other problems with the concept. The public and the government should embrace these cars together.

Finally, how are these autonomous cars going to be treated by humans? Coeckelbergh (2010) states that:

For robots, one can make a similar demand for consistency coupled with an emancipatory claim that can also be found in the animal rights movement broadly understood (based on deontological and utilitarian arguments): if (in the future) it turns out that robots share features with humans such as rationality or consciousness, then if we hold these features as a basis for human rights, why restrict those rights to humans?

 

If they might one day become sentient, then why neglect their interests in avoidance of suffering? Why continue to treat artificially intelligent robots as things we can use or abuse if we have good reasons to include them in our community of moral consideration and rights? (p. 211)

If these self-driving vehicles are no longer mere things but are considered community members, then of course they will be held responsible for the crashes they cause; but what about punishment? Can the community punish these autonomous vehicles? Moreover, will these autonomous cars 'choose' to drive a human to a given location? These questions have not been answered yet, and they are far from being answered.

In conclusion, autonomous cars are going to improve quality of life and make everyday living easier. However, the use of these cars raises many questions and problems, as discussed above. These questions and problems must be worked out before putting these cars into action. Self-driving cars are on the horizon, and they are coming fast.

 

References
Asaro, P. M. (2006). What should we want from a robot ethic? International Review of Information Ethics, 6, 9-16.

Coeckelbergh, M. (2010). Robot rights? Towards a social-relational justification of moral consideration. Ethics and Information Technology, 12(3), 209-221.

Forrest, A., & Konca, M. (2007). Autonomous cars and society. Retrieved from (IQP OVP 06B1)

Guizzo, E. (2011). How Google's self-driving car works. IEEE Spectrum. Retrieved from

Gurney, J. K. (2013). Sue my car not me: Products liability and accidents involving autonomous vehicles. Journal of Law, Technology & Policy, 2013(2), 247-423.

Hevelke, A., & Nida-Rümelin, J. (2014). Responsibility for crashes of autonomous vehicles: An ethical analysis. Science and Engineering Ethics, 20(2), 1-12.

Jackson, E. S. (2015). Technology and ethics. Journal of Information Technology, 1(7), 30-37.

Levinson, J., Askeland, J., Becker, J., Dolson, J., Held, D., Kammel, S., ... Thrun, S. (2011). Towards fully autonomous driving: Systems and algorithms. Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, 163-168.

Lin, P. (2013). The ethics of autonomous cars. The Atlantic. Retrieved from

Social contract. (n.d.). Retrieved April 19, 2015, from

Van de Voort, M., Pieters, W., & Consoli, L. (2015). Refining the ethics of computer-made decisions: A classification of moral mediation by ubiquitous machines. Ethics and Information Technology, 17(1), 41-56.
