
ETHICAL DILEMMA | Baby or grandma: who should a self-driving car run over?


In a self-driving car, it's not a human being who assesses the situation before an impending accident – it's a machine. Who will it save?


There are two popular contenders for taking over humankind: zombies and robots. If it were a race today, the robots would be winning, but we're just glad that, conceptually, zombies can't drive.

Robots, however, have been rehearsing on our streets in the form of self-driving cars. Uber and Tesla have been very vocal about their intention to populate the roads with autonomous vehicles. Google's Waymo recently boasted of gaining 8 million miles of driving experience.

Naturally, humans – and future passengers of these robot cars – are concerned about safety. And to this, car makers have responded with platitudes about high-technology sensors, lightning-quick response times, and the advanced artificial intelligence of their respective cars. That's great, the humans respond.

But in the event of a crash, who is responsible?

The good thing about self-driving cars is that they don't drink and drive. They are never tired or sleepy. And they don't get distracted when their favorite song comes on the radio. We can take driver-caused crashes out of the equation.

But what if, for some mechanical reason, the car's brakes fail and its AI has to make a decision: continue straight and crash into a grandma crossing the street, or swerve into the other lane and hit a child chasing a ball?

Waymo self-driving car

Solving the Trolley Problem

As human beings, we're still not sure how we would answer this "trolley problem", a popular thought experiment in ethics. The general form goes like this: a runaway trolley is speeding towards five people tied up on the tracks. You only have time to pull a lever that controls which track the trolley will take. On the other track, one other person is also tied up.

You can choose to do nothing, the trolley will continue on its path, and five people will die. Or you can switch the trolley’s course and it will take the other track, where a single person will die. Do you save five lives or do you save one?
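
If we strip the dilemma down to numbers alone, the utilitarian calculus is trivial to write down. Here is a minimal, purely illustrative sketch in Python – the function name and casualty counts are inventions of this article, not anyone's actual control logic:

```python
# A naive utilitarian rule reduced to code: pick whichever action
# costs fewer lives. Hypothetical and purely illustrative -- no real
# driving stack decides this way.

def choose_action(deaths_if_nothing: int, deaths_if_switch: int) -> str:
    """Return the action that minimizes the body count."""
    if deaths_if_switch < deaths_if_nothing:
        return "switch"      # sacrifice the one to save the five
    return "do nothing"      # staying the course costs fewer (or equal) lives

print(choose_action(deaths_if_nothing=5, deaths_if_switch=1))  # -> switch
```

The whole moral weight of the problem hides in those two integers – which is exactly why the scenarios below complicate them.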

It gets complicated when you put faces to these options: man or woman, doctor or thief, baby or grandma. If the single person tied up is your friend, do you automatically save the friend?

But in a self-driving car, it's not a human being who assesses the situation before pulling the lever – it's a machine. Will our machines be intelligent enough to take ethics into account at all?

Hopefully, autonomous cars will have time to scour the internet for solutions and opinions to weigh in their decisions, as we did for this article.

Moral Machine

The Moral Machine is a platform created by researchers at MIT. It presents the user with scenarios equivalent to our self-driving car dilemma above and asks them to choose which lives should be spared and which will not be so lucky. Young vs old. Male vs female. Pets vs humans.

Some scenarios probe the user's potential biases: would you hit someone fit or someone fat; a person crossing the road legally or a jaywalker; an upright citizen like a doctor or a criminal like a robber?

And some scenarios are quite the puzzler: would you save the pedestrians or yourself?

At the end of the test, the user is presented with an analysis of their leanings, which Moral Machine then compares with those of users from the rest of the world. For example, individualistic countries tend to spare the lives of the young, while countries that revere the aged, like Japan, tend to spare the lives of the old.

However, the research does not really answer the question "what is ethical?" It only tallies responses to see what most people in a given location find acceptable.
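
In that spirit, the aggregation itself is simple bookkeeping. A toy sketch of that tallying, with invented response data (the real platform's data and methodology are far richer):

```python
from collections import Counter

# Toy aggregation in the spirit of Moral Machine's tallying: count which
# group respondents in each location chose to spare. The data is invented.
responses = [
    ("Japan", "old"), ("Japan", "old"), ("Japan", "young"),
    ("USA", "young"), ("USA", "young"), ("USA", "old"),
]

tallies = {}
for country, spared in responses:
    tallies.setdefault(country, Counter())[spared] += 1

for country, tally in tallies.items():
    majority, votes = tally.most_common(1)[0]
    print(f"{country}: most respondents spare the {majority} ({votes} of {sum(tally.values())})")
```

A majority vote per location describes local preference; it does not prove what is right – which is the point.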

Do we then program our self-driving cars according to our cultural backgrounds? Should governments then be making laws that will govern these cars?

Jaguar iPace SUV as a Waymo driverless car service

The good of many versus the good of one

In the simplest form of the trolley problem, we only need to decide between saving the lives of five or the life of one – otherwise known as the utilitarian approach to the dilemma. Sounds simple, but it sees humans as only a body count.

As early as 2015, people have been contemplating this approach. In the article "Self-driving cars and the Trolley problem", the writer posed a scenario wherein the five people tied up are robbers and the one person on the other track is a scientist (let's assume the cancer-researching kind, not the nuclear-bomb-creating kind). Here, saving the scientist means he or she will have the opportunity to save millions more. Therefore, the scientist's life must be saved.

Do we then create a Black Mirror-ish world wherein each of us is rated according to our contributions to society, our popularity, perhaps even how many followers we have on Instagram?

If so, then China's social credit system is readily compatible with self-driving cars. Easy – if the total social credit of the passengers exceeds that of a group of joggers in the car's path, the passengers' safety is prioritized. Now would be a good time to befriend some scientists before your next road trip.
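
To make the absurdity concrete, here is a hypothetical sketch of that scoring rule – the function, names, and scores are all invented for illustration, not drawn from any real system:

```python
# A deliberately dystopian sketch of the rule described above: spare
# whichever group carries the higher total social credit. Every name
# and number here is hypothetical.

def who_is_spared(passenger_scores, pedestrian_scores):
    """Compare aggregate social credit and spare the higher-scoring group."""
    if sum(passenger_scores) > sum(pedestrian_scores):
        return "passengers"
    return "pedestrians"

# Befriending a high-scoring scientist tips the scales on your road trip.
print(who_is_spared(passenger_scores=[720, 950], pedestrian_scores=[600, 610, 400]))
# -> passengers
```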

In many ways, this will not end beautifully. Governments can control the scoring system, as in China, and use it to dictate the actions of citizens, all for the sake of keeping a score high enough to be saved by cars. It's disturbing to think that the same ranking system could be used in other applications – by airlines, banks, or government agencies – to determine your priority for being saved or getting a loan.

And to what extremes do we go to make sure our individual social credit is sufficiently high?

No judgment

It turns out that a governing body has already attempted to set guidelines on how autonomous transport systems should behave. Germany’s Federal Ministry of Transport and Digital Infrastructure (BMVI) released its first report in 2017.

Some key points:

-> The primary purpose of automated transport systems is to improve safety for all road users. Another purpose is to improve mobility and to make way for other possible benefits.

-> In unavoidable accident situations, machines are not allowed to make distinctions based on personal features such as age, gender, or physical or mental constitution (a constraint sketched in code after this list). Those parties involved in the generation of mobility risks must not sacrifice non-involved parties.

-> Human safety takes precedence over damage to property or to animals.

-> The public sector is responsible for licensing and monitoring automated vehicles. Damage caused by an automated driving system will be treated as product liability and therefore falls to the manufacturer.

-> Machines should not be put in situations where they have to decide on a dilemma. There is no standardized way of programming a solution to such problems. An independent group should be in charge of processing the lessons learned from such situations.
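
Of the five points, the non-discrimination rule is the one that maps most directly onto something implementable: a validation step that rejects any decision input derived from personal features. A hedged sketch, with hypothetical feature names – the BMVI report prescribes the principle, not this code:

```python
# A sketch of how the BMVI non-discrimination rule might be enforced:
# an unavoidable-accident decision may not depend on personal features.
# All feature names here are hypothetical.

FORBIDDEN_FEATURES = {"age", "gender", "physical_constitution", "mental_constitution"}

def validate_decision_inputs(features_used):
    """Reject decision inputs that distinguish people by personal features."""
    violations = set(features_used) & FORBIDDEN_FEATURES
    if violations:
        raise ValueError(f"decision may not use personal features: {sorted(violations)}")

validate_decision_inputs({"distance_to_impact", "closing_speed"})  # passes silently
# validate_decision_inputs({"age", "closing_speed"})  # would raise ValueError
```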

The last bullet point clearly states that machines are not capable of making the complex judgments a human driver may face on the road – and they shouldn't have to. If a person momentarily decides to protect himself at the expense of a crossing pedestrian, he may be brought to court for his decision, a case in which he may or may not be found culpable.

Whom do we bring to court in the age of self-driving cars? Based on the initial guidelines of Germany's Federal Ministry of Transport, the manufacturer is accountable. Knowing that these manufacturers are big corporations, what chance do we average human beings stand?

Another clause in the BMVI report deserves attention: “Those parties involved in the generation of mobility risks must not sacrifice non-involved parties.”

Buyers of autonomous cars clearly accepted the implied mobility risks of their transport of choice. Does this mean that innocent pedestrians, who had no part in that choice, must be saved first?

Will you take a ride in a self-driving car?

Imagine your self-driving car going up against Metro Manila's veteran bus drivers, daily commuters, and jaywalkers – what lessons will your machine learn, and how can it possibly apply them in unpredictable situations inside and outside the metro? Not to mention that autonomous cars, like Siri and OK Google, will rely on a fast mobile internet connection if they are to retrieve data and use the lessons learned by other cars in the system. How can they possibly make split-second decisions then?

However, we can look at autonomous transport as a system – that is, it is not a question of which manufacturer makes the best and safest self-driving car. Naturally, companies like Mercedes and Volvo will make promises of safety and innovation – they want to sell the most cars, and they want to sell them first. But the BMVI guidelines show some promise here: self-driving cars should not be put in a situation where they have to decide between two human lives.

Therefore, it should be the initiative of the government to embrace the potential benefits of an autonomous transport system.

Perhaps pedestrians shouldn't be crossing roads at all, so that computers never have to choose between running over a dog or a jogger, a criminal or a doctor. Perhaps our whole concept of how pedestrians cross the street will have to change.

In a world of self-driving cars, perhaps it's smarter to let only computers drive, without any human intervention. That way, "defensive driving" will mean the same thing across the board.

Then there's the matter of standardization. Autonomous vehicles will have to read road signs and lane partitions, know when to keep right or left, and recognize one-way roads and no-parking areas. How will cars distinguish the rules of Quezon City from those of Makati City? And how will they navigate outside the metro? Will cars be programmed differently in different countries, and will those local standards include ethical decisions?

There are more questions than answers at this point, and given the risks people are still weighing, here in Manila it will be a while before we can jump into a self-driving cab and confidently nap on the way to work.
