    • This is not surprising. "Morals" are slippery suckers at the best of times, and they get even more so when one's personal wellbeing is in play.

      Expecting an AI, no matter how complex and well trained, to make morals- and ethics-based decisions is naive at best. An AI can't grok empathy, and without that, all decisions are simply logic.

    • It would be a simple matter to program different rules in different countries, so I don't see regional variation as anything to be concerned about. More difficult is defining any set of rules to begin with. Some of the dimensions covered in the Nature study--like social status and possibly even age--would be a huge challenge to detect with today's auto technology and are probably impossible for the near term.

      Apart from the ethical rules, there is the question of who gets to make the rules. Is it governments? The manufacturers? How about giving the individual driver the power to set them as preferences?

      Trolley problems are interesting philosophical exercises but probably rarely occur in real life, and perhaps will become even more uncommon with safer autonomous vehicles. I think it would be a mistake to slow implementation because we can't agree on whether it's better to kill someone in a wheelchair or someone in a Bentley. We probably ought to make a collective choice about rules governing protecting those in the car versus those who are not, but beyond that, perhaps when a true trolley problem is detected the decision should simply be randomized for now. We can always revisit the problem once we have sufficient real-world experience--it may turn out that the number of cases is vanishingly small and not worth addressing. We should also keep in mind that adding complexity to any system increases the chances for bugs or unforeseen consequences.
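
      For what it's worth, the "randomize for now" rule is easy to express in code. A minimal Python sketch, where the harm-scoring function and tolerance are hypothetical stand-ins for whatever estimates a real planner already produces:

        import random

        def choose_maneuver(candidates, estimated_harm, tolerance=0.05):
            # Rank candidate maneuvers by the planner's harm estimate; if several
            # are effectively tied (a "true trolley problem"), pick among the
            # tied options at random for now.
            best = min(estimated_harm(m) for m in candidates)
            tied = [m for m in candidates if estimated_harm(m) - best <= tolerance]
            return random.choice(tied)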

    • We, human drivers, are biased towards saving ourselves. Survival of the fittest is still going strong. It's hard to imagine getting it so wrong that a machine is "morally" worse than the average human.

    • Expecting an AI, no matter how complex and well trained, to make morals- and ethics-based decisions is naive at best.

      Having worked in a field where artificial intelligence was part of the gig, I've always wondered where people get the idea that morals and ethics are things we can and should embed into our machines when we can't even consistently convey them in a persuasive way to each other. Why should we be any better at teaching machines to be moral than we are at teaching children to be moral?

      I mean that seriously. Take any statement about artificial intelligence, or the limitations thereof, and simply ask why we should be allowed to create children, who are natural intelligences and often have the same failings: no one wants to deal with that. And yet it's a far more compelling construct.

      Humans have a vast array of complicated, conflicting morals and ethics. There is not one moral. There is not one ethic. Why, then, should any machine share any of ours?

      Perhaps we should simply question whether morals and ethics are real responses to the world and not just failed shortcuts of reasoning which machines should, reasonably, avoid.

    • More difficult is defining any set of rules to begin with. Some of the dimensions covered in the Nature study--like social status and possibly even age--would be a huge challenge to detect with today's auto technology and are probably impossible for the near term.

      If a human can't reliably determine these things in the moment, and yet somehow makes decisions in that moment, why do we expect to be able to create a machine which does so?

      It's not an issue of machine autonomy. It's an issue of people engaging in wishful thinking – and in particular, a type of wishful thinking which ultimately means "I wish everyone just believed and acted the same way that I do." It's just a lot more polite to phrase it as a criticism of a made-up machine than to state it honestly and up front.

      I see that as a problem, in that it leads to this one:

      We probably ought to make a collective choice about rules governing
      protecting those in the car versus those who are not but beyond that,
      perhaps when a true trolley problem is detected the decision should
      simply be randomized for now.

      Who is "we"? If "we" make a collective choice about rules governing those in the car versus those who are not, who is making the decision: those in the car or those who are not? From a game-theoretic and economic position, those in the car have real skin in the game; those not in the car, who didn't pay for the car, don't.

      If I pay for a vehicle, and it drives itself, I'm paying for it to protect me. Just as, when I'm driving a vehicle, I'm driving it in a way that protects me. The alternative leads to a lot less me, and, in the long run, no decision-making that mandates a lot less me is going to be particularly well supported by me. My gut says that is a line of reasoning that maps pretty reliably onto the bulk of the species; otherwise we wouldn't have much of a species.

      In the main, when I hear someone say "we," what I really hear is "people who agree with me," and as someone who prefers to think for myself, I have to say it's not a very comfortable place to be.

    • By "we" I really meant government regulation as opposed to leaving it up
      to individual manufacturers or owners. I agree that most people would
      likely opt for their own protection. But car owners are also pedestrians
      at times and probably ought to take that into consideration as well.

      While I generally think that these trolley problems are more theoretical than
      practical, consider the following situation: A car is traveling at 30
      MPH when a pedestrian abruptly steps in its path so close that it can't
      stop in time. It could swerve to avoid the pedestrian, which would
      result in damage to the vehicle, but serious injury to its passengers
      would be unlikely due to its sound design. What should the car do? If it
      were mine, I'd choose to have it avoid running the pedestrian over. I'd
      like to think that's what I would do if I were driving, but of course, I
      can't be sure. OTOH, I suspect that a car could be programmed to always make
      that choice. Should it be?

    • If a machine becomes capable of ethical reasoning, then I would (tentatively) agree to let the machine share with us the outcome of its logic. However, in the situation where the machine is operating from programmed instructions, it is more ethical *on the part of the designer* to provide morality-oriented decision routines (albeit the morality the designer chooses) than simply to choose randomly. If we discover that our morality is "failed" or that our decision routines are ill-considered, we can presumably upgrade them later. We make the best choices we can and move forward, no?

    • If a machine becomes capable of ethical reasoning, then I would (tentatively) agree to let the machine share with us the outcome of its logic. However, in the situation where the machine is operating from programmed instructions, it is more ethical *on the part of the designer* to provide morality-oriented decision routines (albeit the morality the designer chooses) than simply to choose randomly.

      Please differentiate for the class between programmed instructions and ethical reasoning in human beings. Not as a philosophical issue but as a mechanical issue.

      I don't think that you're going to be able to find a difference in any real sense.

      From the perspective of an engineer, the decisions he makes about how a vehicle acts are going to be based not on morality but on effectiveness, and we already know the answer to that question: you protect the person who paid for the vehicle. It's a question which is already well answered in the auto industry. Simply look at the design of crumple zones and the safety engineering that people are already paying for.

      Everything else is sophistry at best. At the very best.

      I am a firm believer in "making the best choices we can and moving forward," but the problem is that people seem to insist on other people making decisions which are not "best" but "perfect", where both terms are effective ciphers for "exactly what I would decide."

    • By "we" I really meant government regulation as opposed to leaving it up
      to individual manufacturers or owners.

      Just exactly the people I want making moral decisions for me on a regular basis. The exact same people who, when it comes to morals, go out of their way to express that they don't share any of mine – on either side of the aisle. In any government.

      You're going to have to forgive me if I find the prospect of such a group deciding how much my life should be protected distasteful.

      Should it be?

      This is a simple minimization problem. And it's a solved problem. But more importantly it has nothing to do with morality; it has to do with damage minimization.

      If you don't swerve to avoid the pedestrian, you are definitely going to have significant damage to the front end of the car. Crumple zones can only do so much, and humans are fairly high-energy-transfer obstacles, so at speed that's going to be a lot of damage. Vehicle design has already made the decision that absorbing damage to the car itself is far more desirable than harming the passengers, so the obvious outcome is that the vehicle should swerve.

      However, a smart designer would simply have it swerve the other way so that nobody and nothing takes damage.
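
      As a toy sketch of that damage-minimization framing, with entirely made-up numbers just to show the structure (the last option is the "swerve the other way" case):

        # Hypothetical expected-damage scores for the 30 MPH scenario above.
        # The numbers are invented purely to illustrate the comparison.
        maneuvers = {
            "brake only, strike pedestrian": {"bystanders": 9.0, "occupants": 0.5, "vehicle": 6.0},
            "swerve toward barrier":         {"bystanders": 0.0, "occupants": 1.0, "vehicle": 4.0},
            "swerve toward open shoulder":   {"bystanders": 0.0, "occupants": 0.1, "vehicle": 0.3},
        }

        def total_damage(costs):
            # Plain damage minimization: sum the expected harm, no moral weighting.
            return sum(costs.values())

        print(min(maneuvers, key=lambda m: total_damage(maneuvers[m])))
        # -> swerve toward open shoulder: nobody and nothing takes damage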

      All you've done is set up another trolley problem, except this one is less interesting. None of them are particularly interesting because they explicitly don't take into account the realities of a situation except to say, "I'm more moral than you because I made this choice." They have nothing to do with the development of artificial intelligence because they have nothing to do with the development of natural intelligence.

      All autonomous vehicles really have to do is be better at day-to-day driving than easily distracted, unfocused, often tired squishy humans. That's not particularly hard. Introducing "ethical dilemmas" which really aren't either just seems to serve as a proxy for people saying "I'm uncomfortable working alongside something so different from myself" without the actual self reflection necessary to see that it is just as applicable to other human beings.

      That is in no small way frustrating.

    • It's not about getting the AI to make moral decisions. Rather, they are programmed right into the decisions the car would make. It is already being done, at least to some extent.

    • Which brings us back to: whose morals?

      If people are programming decisions into vehicles based on their morals, who decides? I personally don't see morals being part of a car's programming. I see logical decisions being made by the AI on a practical basis, with no morals or ethics involved.

    • In this case, I'm not sure you can separate the logical/practical from the implicitly ethical. What would a purely logical rule look like?

    • In this case, I'm not sure you can separate the logical/practical from the implicitly ethical. What would a purely logical rule look like?

      "Minimize damage to those in the car. Following that, minimize collateral damage to surrounding objects and entities. Following that, minimize damage to the car."

      That's logical and practical. Ethics are how you feel about those logical, practical directives. They have nothing to do with making that set of decisions in the first place.
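
      For concreteness, that ordered set of directives is just a lexicographic comparison. A sketch in Python (the field names are invented; the predicted-harm values would come from whatever the planner already computes):

        from typing import NamedTuple

        class Outcome(NamedTuple):
            # Predicted consequences of one candidate maneuver.
            occupant_harm: float
            collateral_harm: float
            vehicle_damage: float

        def directive_key(o: Outcome):
            # Ordered priorities: those in the car, then everything around it,
            # then the car itself. Python compares tuples left to right.
            return (o.occupant_harm, o.collateral_harm, o.vehicle_damage)

        def pick(outcomes):
            return min(outcomes, key=directive_key)

      Swapping the tuple's elements changes which directive wins first; the comparison mechanism itself is unchanged.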

      A cynic might suggest that morals and ethics are societal conventions that we pretend have objective meaning in order to constrain the behavior of people we've never met in ways that we can predict. They exist in the nebulous liminal space between people and the world. They are not actually real things, only convenient abstractions.

      Or, teal deer, you don't have to like practical engineering concerns but you likewise don't get a say in them.

    • "Minimize damage to those in the car. Following that, minimize collateral damage to surrounding objects and entities. Following that, minimize damage to the car."

      "Minimize damage to the car. Following that, minimize collateral damage to the surrounding objects and entities. Following that, minimize damage to those in the car."

      Equally objective, but ethically dubious, right? What you see as clearly logical has no more logic than my restatement, but merely makes sense to you because you have internalized the commonly accepted ethical values. Ones which most engineers share as well, which is why it seems like the obvious choice. Nevertheless, it's grounded in ethics, not logic.

    • It's only ethically dubious if you believe in ethics. I thought I'd made it clear that I don't.

      However, it's a trivial computation to figure out that no one will buy a product that doesn't protect them if they're in it, so making that the hierarchy of directives is going to make sure that you minimize sales of your product.

      See, that's the problem with trying to introduce the idea that someone else can't possibly believe otherwise than yourself. They do, and they often have really good reasons for doing so.

      You might be able to make a solid argument that the car minimizing damage to itself is likely to minimize damage to the inhabitants. I would accept that to be a reasonable logical argument. But if your goal is to sell a device to the market, you won't get any traction by telling the purchasers that they are less important than the people who didn't purchase the product.

      Now, if you were to simply come out and state, outright, that you don't care if anyone has automated transportation, I would accept that to be true. If you don't care that a machine can do a job better than a person in the widest possible set of conditions in which humans do a reasonable job, then just say that. But there are going to be some logical cascades from that statement.

      After all, the majority of the operation of the vehicle doesn't involve any sort of decision between the inhabitants of the vehicle and those outside. Everyone involved in the situation would much rather have no interaction between them at all. There's no ethical question about that; it's just what is reasonable and efficient. It's what everybody wants.

      All the machine has to do is be better at avoiding the situation than a human being, and at least as good as a human being in situations where it has to decide between its occupants' safety and that of somebody outside the car. Given that human beings apparently can't even make a consistent judgment in that case that everyone agrees with, that shouldn't be too hard to do. Arguably, even those who would be horrified by a coin flip to decide would be equally horrified by human acts of self-preservation, which break toward protecting the self even more often than a coin does. So if humans aren't good enough and a machine isn't good enough, then the only conclusion to come to is that you have given up driving and being near roads in order to preserve your ethical purity.

      I doubt that to be the case.

      If the value you want to maximize is the preservation of human survival and quality of life, then bumbling around with questions of ethics can only have one outcome: the decrease of that value. The question you really want to ask is, "In the vast majority of conditions, can this machine do a better job than human beings?" Because if so, standing against the development of such technology because once in a very rare while a difficult decision needs to be made that you might not agree with would be disgustingly unethical. How many lives which might otherwise have been saved by avoiding accidents in the first place must be sacrificed to appease your ethical qualms?

      From my perspective, it's sort of like campaigning to ban seatbelts because once in a great while, when conditions are just so, and the vehicle is hanging upside down, someone might asphyxiate because their seatbelt is holding them in position across their throat.

      I think we can both agree that would be asinine and incredibly stupid.

      That's how I see this entire debate.

      It is the ethical equivalent of banning trolleys because someone might leave a baby stroller on one track and tie a grandmother to another.

    • Please differentiate for the class between programmed instructions and ethical reasoning in human beings. Not as a philosophical issue but as a mechanical issue.

      I think we will agree that the programmed instructions will follow a hierarchical evaluation that corresponds to human reasoning on ethical matters. Faced with imminent collision, for example, the human will first react instinctively to save themselves. Then, if there is more time to evaluate, the human will give further analysis to the situation and may conclude that self-preservation is not the best outcome (sometimes soldiers fall on a live grenade to save the rest of the squad). In auto accidents, the time for ethical evaluation is too brief for much human consideration, so we do not have stories of drivers steering into bridge abutments to avoid parked school buses. But if human thought were a bit faster, we might. We think an AI would be able to complete the more elaborate decision tree we impose (in a mechanical fashion) more quickly. The difference, in other words, is not one of outcome, but one of speed.
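
      To picture that difference of speed concretely, here is a toy sketch; the time budgets and names are invented purely for illustration:

        def choose_response(time_available_s, reflex, deliberate, deliberation_budget_s):
            # The same decision tree for human and machine; what differs is how
            # much time the fuller evaluation needs before the reflex takes over.
            if time_available_s < deliberation_budget_s:
                return reflex()        # instinctive self-preservation
            return deliberate()        # the more elaborate, programmed evaluation

        # With ~0.8 s to impact, a human needing ~1.5 s of deliberation falls back
        # on reflex, while a machine needing ~0.05 s reaches the fuller evaluation.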

      The AI won't follow an "ethical decision tree" unless we direct it to do so - not any AI likely to be in a vehicle in the next two or three decades, anyway. And although the engineer may be the one who directly "informs" the AI of the logic, that does not mean the engineer should be the one who decides the logic, any more than today's automotive engineers choose the safety standards.

      Of course, engineering in a vehicle DOES affect the vehicle's safety. More safety costs money and can command a market profit premium. But what some would call "wise" design may include provisions for safety that would not make it to market if not government-imposed (three-point restraint belts being a good example). When an AI is driving the car, it should have a transparent set of collision-management decision guidelines. Government-imposed standards, like so many human endeavors, are fraught with failures and bad outcomes. That does not mean it would be ethical to eschew standards.

    • you don't have to like practical engineering concerns but you likewise don't get a say in them.

      Of course we have a say in practical engineering concerns! Wastewater plants, air pollution scrubbers, and trash incinerators aren't designed by engineers to look pretty - they are designed to meet environmental protection regulations. Medicines are not just made to make a profit; they have to be shown to actually have a reasonable chance of improving health and not causing harm. Your house is full of electronics which not only do whatever electronic thing they are sold to do, but also don't cause interference with the other electronic things around them. The engineered products around us today are like they are in no small part because they met standards of design that were imposed on engineers (who, I know from experience, scream and yell to each other about how impractical such impositions are, then buckle down and overcome those obstacles).

    • It seems you haven't read the article. Morals vary depending on various factors. Recognizing different moral choices in different areas around the world should impact how the AI bases its decisions.
