I find the study of AI and self-awareness in machines to be a fascinating one. Do you think humans should be concerned about machines becoming self-aware? Why or why not?

I share your fascination. I just finished reading Max Tegmark's Life 3.0: Being Human in the Age of Artificial Intelligence and recommend it highly. You would likely enjoy the chapter on consciousness. I'm not sure that everyone would agree that consciousness is identical with self-awareness, but I think they're close enough to treat them as interchangeable.

Tegmark would say that from the physicist's perspective, it's all just quarks and electrons arranged in a particular way, so there's no a priori reason to assume that a machine could not become conscious. Whether that will happen is an open question. Consciousness might be beside the point, however. The fundamental question is whether the goals of an AI are aligned with human goals, and whether we can guarantee that they stay aligned if the AI becomes capable of recursive self-improvement. An AI need not be conscious to become destructive, any more than being conscious would necessarily preclude it from being destructive.

Another thing to consider is that machine consciousness might be radically different from human consciousness. We evolved five senses to help ensure our survival in the natural world, but it's entirely possible that an AI would develop different forms of experience. It would probably have little use for a sense of taste, but perhaps it would directly experience logic instead of relying on symbolic processing. What if, instead of merely perceiving colors, it directly experienced an instant spectrogram? To paraphrase Thomas Nagel, what's it like to be an AI? Fun stuff to ponder.

Tegmark has a number of Life 3.0 lectures on YouTube that are worth watching. Here's one he gave at Google last year: https://www.youtube.com/watch?v=oYmKOgeoOz4