He thinks the #1 future problem is that AI will be smarter than us in 30 years. No one knows what to do about that or what it means.
Well, maybe. It's already true in some domains, but I think there's general agreement that we are a long way from creating an artificial general intelligence. I think there's now a widespread understanding that here be dragons, and we'd best start thinking about them. One of Nick Bostrom's key points in his book on superintelligence is that even if we're 100 years away, it might take us that long to figure out how to make it safe. The ghost of nuclear physics and the bomb still haunts scientists. I find it reassuring that the problem is being analyzed by a variety of academic and professional institutions (OpenAI is among them).
What I'm less confident about is whether we will be able to deal appropriately with the less-than-existential risks of limited deployment of AI technology in specific areas, especially those exercising social control. China's experiments with social credit scores and widespread facial recognition scare the shit out of me.