    • Stuart Russell, a computer scientist at UC Berkeley, is co-author of Artificial Intelligence: A Modern Approach, the most widely used CS introduction to the topic. In this 90-minute interview, Russell discusses what we must do to make sure that future AI systems stay within our control. He argues that we are not capable of specifying our goals with certainty, and that we therefore need to build uncertainty and human interaction into future AI systems. I hadn't heard this idea before, but since it comes from the guy who literally wrote the book, I think it's worth considering. He also touches on other aspects of what could happen if we actually succeed in creating a general artificial intelligence, and he believes many of his colleagues are in denial about the dangers.

    • I was in the bookstore yesterday and was very surprised at how many AI books were on the shelves. I find this topic very interesting, so I took pictures of the book covers to investigate them later. Here are the ones I saw:

      Machines That Think by Toby Walsh

      2062: The World That AI Made by Toby Walsh

      Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark

      Do Robots Make Love?: From AI to Immortality - Understanding Transhumanism in 12 Questions by Jean-Michel Besnier and Laurent Alexandre

      The Sentient Machine: The Coming Age of Artificial Intelligence by Amir Husain

      Thinking Machines: The Quest for Artificial Intelligence and Where It's Taking Us Next by Luke Dormehl

      Related books: What Future: The Year's Best Writing on What's Next for People, Technology & the Planet, edited by Meehan Crist & Rose Eveleth, and The End of Life as We Know It by Michael Guillen

      It's not a huge bookstore, so it's notable how popular this topic has become in the last year.

    • Fascinating interview! Long listen but very worth it.

      Last week I listened to this interview with Sam Altman:

      Kara wanted to talk about today's problems: social media, etc. But Sam is the Chairman of OpenAI, and he said the problem is that we are good at talking about present-day problems but not good at thinking through future problems.

      He thinks the #1 future problem is that AI will be smarter than us in 30 years. No one knows what to do about that or what it means.

    • "He thinks the #1 future problem is that AI will be smarter than us in 30 years. No one knows what to do about that or what it means."

      Well, maybe. It's already true in some domains, but I think there's general agreement that we are a long way from creating an artificial general intelligence. There's now a widespread understanding that here be dragons, and we'd best start thinking about them. One of Nick Bostrom's key points in his book on superintelligence is that even if we're 100 years away, it might take us that long to figure out how to make it safe. The ghost of nuclear physics and the bomb still haunts scientists. I find it reassuring that the problem is being analyzed by a variety of academic and professional institutions (OpenAI among them).

      What I'm less confident about is whether we will be able to deal appropriately with the less-than-existential risks of limited deployment of AI technology in specific areas, especially those exercising social control. China's experiments with social credit scores and widespread facial recognition scare the shit out of me.

    • Dang, guys, I can't keep up with the number of AI books to be read, and you keep adding great AI videos and such. Not enough time!! We need time travel or augmented learning.