The family of Walter Huang sued Tesla today over his fatal crash in California, which happened while his Tesla was driving on Autopilot. Their claim is that Tesla is using its customers to beta test the feature. They have a point.

Many people say Elon Musk talks about Autopilot and demos it in ways far from the guidelines the company provides, like taking his hands off the wheel during the 60 Minutes interview. Employees at Tesla reportedly try to rein him in, but he is unreinable (is that a word?).

Elon keeps saying that when Tesla can show self-driving is safer than human driving (soon, he says), the world will want it. Here is the moral question I have: if the company can show Autopilot to be safer than humans, what do they do if they keep getting sued after every crash? Do they let the lawsuits pile up, knowing they are saving lives, or do they protect their brand and bank account by pulling back and letting humans keep crashing their cars?
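To make that trade-off concrete, here is a minimal sketch of the expected-value arithmetic behind the question. Every number is an invented placeholder, not a real Tesla or NHTSA figure; the point is only the structure of the comparison.

```python
# Hypothetical sketch of the "safer but sued more" dilemma.
# All rates and costs below are made-up placeholders.

HUMAN_FATALITIES_PER_BILLION_MILES = 12.0      # hypothetical human-driver rate
AUTOPILOT_FATALITIES_PER_BILLION_MILES = 8.0   # hypothetical, assumed safer
FLEET_BILLION_MILES_PER_YEAR = 10.0            # hypothetical fleet mileage
LAWSUIT_COST_PER_AUTOPILOT_FATALITY = 5_000_000  # hypothetical settlement

human_deaths = HUMAN_FATALITIES_PER_BILLION_MILES * FLEET_BILLION_MILES_PER_YEAR
autopilot_deaths = AUTOPILOT_FATALITIES_PER_BILLION_MILES * FLEET_BILLION_MILES_PER_YEAR
lives_saved = human_deaths - autopilot_deaths

# The company gets sued over the Autopilot deaths that happen, not over the
# human-driver deaths it prevented, so liability scales with the crashes that
# occur while the moral benefit scales with the difference.
legal_exposure = autopilot_deaths * LAWSUIT_COST_PER_AUTOPILOT_FATALITY

print(f"Lives saved per year (hypothetical): {lives_saved:.0f}")
print(f"Legal exposure per year (hypothetical): ${legal_exposure:,.0f}")
```

Even when the feature is safer on net, the lawsuits attach to the deaths that happen, not to the ones that don't, and that asymmetry is exactly what the question is pointing at.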

This Forbes article doesn't paint a pretty picture of Musk's honesty, but the author is torn too.

When I was touring business school programs, I sat in on a business ethics class that discussed the Ford Pinto and how executives decided that the cost of lawsuits over fatalities would be less than the cost of a recall. This video I found does a nice job of explaining the history of auto safety problems, from bugs living in the upholstery of Model Ts to Ralph Nader’s “Unsafe at Any Speed” to the Takata air bags.
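For reference, the back-of-the-envelope math usually attributed to Ford's internal cost-benefit memo runs roughly as follows. These are the widely reported figures from secondary accounts of the memo, quoted from memory, so treat them as illustrative rather than authoritative.

```python
# Widely reported figures from Ford's 1970s fuel-tank cost-benefit memo,
# as quoted in secondary sources; treat as illustrative.

# Cost of fixing the fuel-tank design:
vehicles = 12_500_000          # ~11M cars + ~1.5M light trucks
fix_per_vehicle = 11           # dollars per vehicle
recall_cost = vehicles * fix_per_vehicle          # $137,500,000

# Projected cost of leaving it alone:
burn_deaths, cost_per_death = 180, 200_000
burn_injuries, cost_per_injury = 180, 67_000
burned_vehicles, cost_per_burned_vehicle = 2_100, 700
lawsuit_cost = (burn_deaths * cost_per_death
                + burn_injuries * cost_per_injury
                + burned_vehicles * cost_per_burned_vehicle)  # $49,530,000

print(f"Recall cost:  ${recall_cost:,}")
print(f"Lawsuit cost: ${lawsuit_cost:,}")
```

On that math the recall looked nearly three times as expensive as the projected payouts, which is the comparison the ethics class was using as its cautionary example.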


Is more government regulation needed for self-driving cars? Or should we trust Tesla and Google to be responsible in introducing “leading edge” technology?