
Random Access #1: Ethics of driverless cars


Hello world.

Have you ever had a thought, out of nowhere, that seems to challenge everything? Or one that takes you places you’ve never had occasion to go…within your mind? I think we all have. What do we do with them? In my case, mostly nothing. Google them, at most. Over the years, they’ve accumulated, and the rattling in my head is getting pretty intense.

So, I’m trying something new. I’m going to attempt to organize those thoughts (attempt is the key word) and just lay them out here on the blog.

Some of you may be experts in the field, or have perspectives that totally obliterate my concerns. Great! Other topics may lead to conversation. Also great!

So, here goes…

Random Access #1: The ethics of driverless cars.

The recent death of a Tesla Model S driver in autopilot mode got me thinking: who decides who lives and who dies?

Now, in this particular case, there really wasn’t a decision to be made. Many factors contributed to a “perfect storm” of conditions that the autopilot system (and the driver) could not adequately cope with. The cameras did not see the trailer as it pulled across the road against the brightly lit sky, and the vehicle collided with the truck. (Although I don’t see how the radar sensors would have been affected by the glare; but back to the point.)

To get to the core of my questions, let’s start modifying the situation in very slight and very plausible ways.

First, let’s get rid of the blinding sun. Now the autopilot system is working in optimal conditions. Second, let’s say the trailer does not cross the road. Instead, it starts to cross, sees the Tesla, and stops so that it is blocking the rightmost lane of the divided highway. (A divided highway means two lanes going in one direction, a median, and two lanes in the opposite direction.)

The autopilot (and/or the driver) now has two options: brake as hard as possible, or swerve to avoid the collision.

No objections to my assumptions so far, I hope.

Let’s add a third assumption: braking will NOT prevent the collision. The truck is too close, the Tesla is too fast, or the road is too slick.
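Just to put rough numbers on that (my own back-of-the-envelope sketch, not anything from the accident report): with idealized constant-deceleration braking, stopping distance is d = v²/(2μg), and even on dry pavement a highway-speed car needs a surprising amount of road.

```python
# Back-of-the-envelope stopping distance, assuming idealized physics:
# constant deceleration, zero reaction time, d = v^2 / (2 * mu * g).
G = 9.81  # gravity, m/s^2

def stopping_distance_m(speed_mph: float, mu: float) -> float:
    v = speed_mph * 0.44704  # mph -> m/s
    return v ** 2 / (2 * mu * G)

print(round(stopping_distance_m(65, mu=0.7), 1))  # ~61.5 m on dry asphalt
print(round(stopping_distance_m(65, mu=0.4), 1))  # ~107.6 m on a slick road
```

If the truck stops 50 meters ahead, no amount of brake force saves you; the only option left is to steer.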

The autopilot is (probably) programmed to swerve into the next lane (or onto the shoulder) to avoid the collision.

Wow! Close one! But we survived. Driverless cars are amazing!

OK, let’s make a couple more assumptions and get into the real meat of the problem.

Now, imagine that there is another vehicle to our left, trying to overtake us. The truck pulls out, the brakes are applied, but the car cannot stop in time. Will the autopilot (and should the autopilot) swerve to save the owner’s life at the risk of the other driver’s? The second vehicle would be pushed onto the shoulder or into the median and could lose control. Or it could instantly lose control altogether if contact is made between the cars. And if the second vehicle is a motorcycle, there is a fair chance the maneuver will lead to a death.

To further complicate things, we can start considering the passengers. Babies, retirees, presidents, ex-cons, the Pope, friends, rapists, transplant organs, drugs, da Vinci works, a trailer full of nuclear waste, seeing-eye dogs, a delivery of rubber Walmart flip-flops: any of these could be in one vehicle or the other. Oh, and pedestrians in the surrounding environment. Don’t forget about them.

In the conventional (manual pilot) sense, we could think of these decisions as a type of social interaction, colored (for better or for worse) by driving skill, experience, social class, personality, values, priorities, emotional state, compassion, and so on. The challenge is how to make these decisions and write them into the operational algorithm before they ever occur. How will these priorities be encoded in a socially acceptable manner, and, more importantly, who gets to encode them? Is it even possible?
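To see why that is so uncomfortable, here is a deliberately crude sketch of what “writing the decision into the algorithm” actually means. Every name and number in it is made up by me, not taken from any real autopilot; the point is that somebody has to pick the weights.

```python
# Hypothetical harm-minimizing chooser. All names and numbers are
# invented -- and having to invent them is exactly the problem.
HARM_WEIGHTS = {
    "own_passenger": 1.0,
    "motorcyclist": 1.0,  # should this be higher? lower? who says?
}

def expected_harm(outcome: dict) -> float:
    """Sum of (probability of serious harm) x (weight) per affected party."""
    return sum(HARM_WEIGHTS[who] * p for who, p in outcome.items())

# The truck scenario: brake and almost surely hit it, or swerve into
# the overtaking motorcycle's lane. Probabilities are made up too.
outcomes = {
    "brake":  {"own_passenger": 0.9},
    "swerve": {"own_passenger": 0.1, "motorcyclist": 0.6},
}
print(min(outcomes, key=lambda a: expected_harm(outcomes[a])))
# -> "swerve" under equal weights. Change the weights, change who gets hurt.
```

Change one constant and a different person bears the risk. That decision is being made today, in somebody’s source code, long before the crash.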

Ultimate responsibility for vehicle operation still falls on the driver, regardless of autopilot or any other system that is active. This has been true for decades. If you are going 10 mph over the limit because your speedometer or cruise control is out of whack, it’s still your responsibility to obey the speed limit. You cannot blame it on faulty equipment. These are just tools. The final responsibility to drive safely lies with you.

But the point remains: how should the autopilot be programmed? It doesn’t make its own decisions. Its operations are prescribed by programs and algorithms that humans wrote in advance. In a sense, every choice it makes is premeditated.

I think this is the real challenge. I have little doubt self-driving cars will play a significant role in our lives in the fairly near future. Is it possible to live with a clear conscience in that world if these ethical and intellectual obstacles are not overcome?

Afternote
So far, I can think of two ways an autopilot could be programmed that don’t require nuanced decision making and could still be considered “fair.” They are “fair,” but not necessarily compassionate.

Braking-only model
In the example above, when there is risk to other vehicles or pedestrians, all cars of all makes are prohibited from veering off course and interfering with the paths of other vehicles. The damage is contained to the primary vehicles. So the driver would die, but others are not affected. How polite!

Every-car-for-itself model
The autopilot prioritizes the safety of its own passengers over everything else. Defensive driving, at all costs. If every car maker were allowed to program its autopilot this way, including algorithms for evading primary threats (like the truck) and secondary threats (like cars swerving to avoid trucks), this would also be “fair” (I think???). But it would inevitably lead to a “battle-bots”-like arms race…a survival of the fittest autopilot. For example, a car might swerve to avoid a minor collision that would have cost its driver a broken ankle, and in the process take out two bikers and sideswipe a minivan that careens into a group of schoolchildren crossing the street.
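For contrast, here is how the two “fair” policies might look side by side. Again, this is a minimal sketch with invented names and grossly simplified inputs, not any real manufacturer’s logic:

```python
from dataclasses import dataclass

@dataclass
class Situation:
    braking_avoids_collision: bool  # can we stop in time?
    adjacent_lane_occupied: bool    # is someone beside us?

def braking_only(s: Situation) -> str:
    # Never leave the lane: damage stays with the primary vehicles.
    return "brake"

def every_car_for_itself(s: Situation) -> str:
    # Protect our own passengers, whatever the cost to others.
    return "brake" if s.braking_avoids_collision else "swerve"

# The scenario above: braking won't save us, and a motorcycle is alongside.
s = Situation(braking_avoids_collision=False, adjacent_lane_occupied=True)
print(braking_only(s))          # "brake"  -> the driver takes the hit
print(every_car_for_itself(s))  # "swerve" -> the motorcyclist takes the risk
```

Both policies are simple enough to be applied uniformly, which is what makes them “fair.” Neither one weighs who is actually in the other lane.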

Note to the afternote
The overarching assumption is that the autopilot can always be overridden by manual input from the driver, so some of these effects could be avoided. But the programs still need to be written.