Stuart Russell, computer scientist at UC Berkeley, is co-author of Artificial Intelligence: A Modern Approach, the most widely used introductory CS textbook on the topic. In this 90-minute interview, Russell discusses what we must do to make sure that future AI systems stay within our control. He argues that we are not capable of specifying our goals with certainty, and that we therefore need to build uncertainty and human interaction into future AI systems. I hadn't heard this idea before, but since it comes from the guy who literally wrote the book, I think it's worth considering. He also touches on other aspects of what could happen if we actually succeed in creating a general artificial intelligence, and he believes many of his colleagues are in denial about the dangers.