
Should an autonomous car swerve to avoid a bike?

Discussion in 'The Pub' started by A boy named Sue, Oct 25, 2015.

  1. Why Self-Driving Cars Must Be Programmed to Kill | MIT Technology Review

    Why Self-Driving Cars Must Be Programmed to Kill
    Self-driving cars are already cruising the streets. But before they can become widespread, carmakers must solve an impossible ethical dilemma of algorithmic morality.

    When it comes to automotive technology, self-driving cars are all the rage. Standard features on many ordinary cars include intelligent cruise control, parallel parking programs, and even automatic overtaking—features that allow you to sit back, albeit a little uneasily, and let a computer do the driving.

    So it’ll come as no surprise that many car manufacturers are beginning to think about cars that take the driving out of your hands altogether (see “Drivers Push Tesla’s Autopilot Beyond Its Abilities”). These cars will be safer, cleaner, and more fuel-efficient than their manual counterparts. And yet they can never be perfectly safe.

    And that raises some difficult issues. How should the car be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it choose between these extremes at random? (See also “How to Help Self-Driving Cars Make Ethical Decisions.”)

    The answers to these ethical questions are important because they could have a big impact on the way self-driving cars are accepted in society. Who would buy a car programmed to sacrifice the owner?

    So can science help? Today, we get an answer of sorts thanks to the work of Jean-Francois Bonnefon at the Toulouse School of Economics in France and a couple of pals. These guys say that even though there is no right or wrong answer to these questions, public opinion will play a strong role in how, or even whether, self-driving cars become widely accepted.

    So they set out to discover the public’s opinion using the new science of experimental ethics. This involves posing ethical dilemmas to a large number of people to see how they respond. And the results make for interesting, if somewhat predictable, reading. “Our results provide but a first foray into the thorny issues raised by moral algorithms for autonomous vehicles,” they say.

    Here is the nature of the dilemma. Imagine that in the not-too-distant future, you own a self-driving car. One day, while you are driving along, an unfortunate set of events causes the car to head toward a crowd of 10 people crossing the road. It cannot stop in time but it can avoid killing 10 people by steering into a wall. However, this collision would kill you, the owner and occupant. What should it do?

    One way to approach this kind of problem is to act in a way that minimizes the loss of life. By this way of thinking, killing one person is better than killing 10.
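The "minimize the loss of life" rule being described here can be sketched as a toy decision function. The action names and casualty counts below are hypothetical and purely illustrative; no real vehicle controller reasons over numbers like these.

```python
# Toy sketch of a utilitarian "minimize the death toll" rule.
# Scenario names and counts are invented for illustration only.

def choose_action(outcomes):
    """Return the action with the lowest predicted death toll.

    outcomes maps an action name to its expected number of deaths.
    """
    return min(outcomes, key=outcomes.get)

dilemma = {
    "continue_straight": 10,  # plough into the crowd
    "swerve_into_wall": 1,    # sacrifice the occupant
}

print(choose_action(dilemma))  # -> swerve_into_wall
```

The whole debate in the article is, in effect, about whether a rule this simple is acceptable when the "1" is the car's own occupant.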

    But that approach may have other consequences. If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents. The result is a Catch-22 situation.

    Bonnefon and co are seeking to find a way through this ethical dilemma by gauging public opinion. Their idea is that the public is much more likely to go along with a scenario that aligns with their own views.

    So these guys posed these kinds of ethical dilemmas to several hundred workers on Amazon’s Mechanical Turk to find out what they thought. The participants were given scenarios in which one or more pedestrians could be saved if a car were to swerve into a barrier, killing its occupant or a pedestrian.

    At the same time, the researchers varied some of the details such as the actual number of pedestrians that could be saved, whether the driver or an on-board computer made the decision to swerve and whether the participants were asked to imagine themselves as the occupant or an anonymous person.

    The results are interesting, if predictable. In general, people are comfortable with the idea that self-driving vehicles should be programmed to minimize the death toll.

    This utilitarian approach is certainly laudable but the participants were willing to go only so far. “[Participants] were not as confident that autonomous vehicles would be programmed that way in reality—and for a good reason: they actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves,” conclude Bonnefon and co.

    And therein lies the paradox. People are in favor of cars that sacrifice the occupant to save other lives—as long they don’t have to drive one themselves.

    Bonnefon and co are quick to point out that their work represents the first few steps into what is likely to be a fiendishly complex moral maze. Other issues that will need to be factored into future thinking are the nature of uncertainty and the assignment of blame.

    Bonnefon and co say these issues raise many important questions: “Is it acceptable for an autonomous vehicle to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the car, than for the rider of the motorcycle? Should different decisions be made when children are on board, since they both have a longer time ahead of them than adults, and had less agency in being in the car in the first place? If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?”

    These problems cannot be ignored, say the team: “As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent.”
    • Like x 1
    • Informative x 1
  2. Seems perfectly logical to me, and anyone else who has experienced a computer malfunction, especially in a car.

    The only reason that I'd be in favour of computer controlled cars on the road is that it might replace some/many of the rubbish drivers currently on the loose.
  3. They'll have to change the programming for Australian conditions anyway...

    ie, if there is a hazard ahead.... car will need to start texting and then swerve TOWARDS the pedestrian/cyclist/motorcyclist :D
    otherwise, you will have heaps of people deliberately swerving toward autonomous cars, just to make them crash... :p

    but seriously though, if the autonomous car can detect hazards far enough ahead to be able to stop in time, and a group of 10 drunk Rugby League fans jumps out in front of it... then they deserve to get squished :D
    would have thought best option would almost always be to hit the brakes to reduce severity of impact, rather than trying to avoid the collision entirely.
    • Agree x 2
    • Like x 1
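The point above about braking rather than swerving has a simple kinematic basis: impact energy scales with the square of speed, so every metre of braking distance helps even when the collision can't be avoided. A rough sketch, where the deceleration figure is an assumed ballpark for hard braking on dry bitumen, not real vehicle data:

```python
import math

def impact_speed(initial_speed_ms, distance_m, decel_ms2=8.0):
    """Speed remaining after braking over a given distance.

    Uses v^2 = u^2 - 2*a*d. The default deceleration of ~8 m/s^2
    is an illustrative guess for hard braking on dry bitumen.
    """
    v_squared = initial_speed_ms ** 2 - 2 * decel_ms2 * distance_m
    return math.sqrt(max(0.0, v_squared))

# 60 km/h (~16.7 m/s) with only 15 m of braking room before impact:
u = 60 / 3.6
v = impact_speed(u, 15.0)
print(f"impact at {v * 3.6:.0f} km/h")  # -> impact at 22 km/h
```

Shedding speed from 60 km/h to around 22 km/h cuts the impact energy to roughly an eighth, which is why "just brake hard" is a defensible default even when stopping in time is impossible.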
  4. Could become the next internet phenomenon, like planking or the cinnamon challenge.
    • Like x 1
    • Funny x 1
It's actually not that difficult. Program the vehicle to sacrifice the at-fault party.
    I know some of you are going to suggest that it won't work because some at-fault parties will be disabled or children. But they aren't at fault by definition, so the fall-back position is that the vehicle identifies non-responsible parties and saves them at the expense of all others.
    So just as soon as we can prove that the system has infallible moral discrimination we can start using it, and not a minute before. Simple.
    • Like x 1
The road trials to date have had accidents only from other people crashing into the autonomous cars, not the other way around. I'd suggest no swerving allowed as a simple solution. People have been living with trains for hundreds of years without ethical problems (and those things can barely brake). I've also yet to see a situation where I could push a fat person in the way of a speeding train to save a bus load of children.
  7. what will happen is that the overriding algorithm will be to adjust the speed such that the car can always stop in time if a hazard is observed. This will result in the car driving vveeerrrrrrryyyy sssllloooowwwllllyyyyyy past any parked cars just in case a drunken rugby team pops out suddenly.
    • Funny x 2
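The "always able to stop in time" rule in the post above amounts to capping speed by sight distance: reaction distance plus braking distance must fit inside the distance at which a hazard can first be seen. A rough sketch, where the deceleration and reaction-time figures are assumptions for illustration, not real sensor specs:

```python
import math

def max_safe_speed(sight_distance_m, decel_ms2=6.0, reaction_s=0.5):
    """Largest speed v (m/s) satisfying v*t + v^2/(2a) <= sight distance.

    Solves the quadratic v^2/(2a) + v*t - d = 0 for v. The deceleration
    and reaction-time defaults are illustrative guesses.
    """
    a, t, d = decel_ms2, reaction_s, sight_distance_m
    return a * (-t + math.sqrt(t * t + 2 * d / a))

# Only 8 m of clear view past a line of parked cars:
v = max_safe_speed(8.0)
print(f"{v * 3.6:.0f} km/h")  # -> 26 km/h
```

With 8 m of visibility the cap comes out around 26 km/h, which is exactly the crawl-past-parked-cars behaviour the post is joking about.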
  8. "But officer, he was eyeing off a schoolbus and looked hungry!"
    • Funny x 2
Seems pretty easy to me. The car will be designed in such situations to brake only. Whatever happens after that is it. There's no way a company is going to program an active feature that deliberately changes the direction of a car in a way that results in the death of someone who wouldn't otherwise have been killed.

    It's a bit like the whole train going down a track. 20 people on the line who are going to be killed. Someone has the power to change the track the train is on, but it will end up hitting a wall and killing the engineer of the train. 20 lives to 2 sounds like an easy comparison - but no company would actively make that decision if the situation wasn't caused by them in the first place. They'd stand by and stay 'out of fault' rather than actively be involved in taking someone's life.
[image]
    • Like x 1
  11. Shoot the man on A and let the train run over his family on B :)
    • Funny x 1
    • Winner x 1
  12. why should you care where the train goes? there's like 12 clones, and some dopey guy and his family that somehow got chained to train tracks... they probably deserve whatever they get :p
    unless your cunning plan is to pull the lever and "comfort" the guy's grieving wife?

    who the hell ties people to train tracks in this day and age anyway?
  13. I really don't see the appeal of autonomous cars in the first place. For people who don't want control of their destiny there is already a more efficient system. It's called public transport.
    • Agree x 3
    • Funny x 1
  14. I find it hard to believe that any commercially available (at whatever point in the future) autonomous car technology will be any worse or more dangerous than the horde of bored, uninterested, indifferent and barely competent humans who make up perhaps 80-85% of current drivers. Bring it on I say.
    • Agree x 5
That does seem the likely outcome. Or they go through the stats and roll with the most common human reaction to a given circumstance ("It's no worse than what a human would do, and it means the autocar will be doing what other road users would expect.")... which will probably turn out to be panic braking :p.

    Situations like this are exactly why everyone in the world needs to be armed!
  16. I suspect that one of the earlier adopters of self driving cars will be companies/groups/individuals that use lots of taxis/uber trips.

    A self driving taxi/uber trip (that you can book to pick you up to go somewhere at a particular time) instead of a somewhat unreliable person seems like a good idea to me.

Imagine a fleet of Google cars garaged at Melbourne or Sydney Airport that you can book using a smartphone app to take you between the airport and your door, instead of an airport train, an airport shuttle bus, or a taxi/Uber driven by a guy with BO.

Or railway stations with ride-sharing self-driving vans that you can book using an app to pick you up at a particular time around a suburb and take you to the nearest railway station, so you don't have to drive there.

    I'd use those!
    • Agree x 3
Good idea, Geoff3DMN. Starting up a business?
    • Like x 1
  18. I manage a transport company, I could manage an auto-car fleet but my plate is full at the moment.
Moot point I think. If incidents where the autonomous cage has to make such a choice become common, then it is quite clearly an underdeveloped piece of shit. I think autonomous cages are the most disgustingly misanthropic idea anyway; they reek of throwing one's hands in the air and giving up on being good at caging. Will flee if I encounter one in traffic and will not ride in one.
  20. In 30-50 years, the only people driving will be the minority choosing to do so for a hobby. Everyone else will have many hours, once wasted menially operating vehicles, liberated to spend more usefully (that time will probably mostly be stolen by employers, but still ..). Road deaths will plummet. Emissions will be less of a threat to the climate stability agricultural civilisation depends on. Urban planning will no longer be grossly perverted by the need to accommodate ridiculous bloated vehicles in or near every building. Those attached to 20th century technology will howl at the moon exactly as people always do when new technology displaces old. The howls will have precisely the same effect.
    • Agree x 2