    • Indeed, think about phone updates: how many are installed before most of us even know an update exists? The speed of communication from device to device, and our need to have all our devices speak to each other, means Matrix-like instant learning is a high likelihood.
      The advancement in patient care through robotics will push us down the predicted path, I am certain.
      I want a desk mate that treats me nicely, even though I see the possible future issues.

    • I don't think AI can ever become "aware" in the way that you and I are aware. It depends how you define "awareness", but by my definition, I'd say no. There's so much that goes into awareness that we simply don't understand. What causes pain? Will AI ever become capable of feeling pain? I have a hard time seeing how that becomes possible. Will AI ever become self-aware such that it "knows" it's merely the creation of other humans and not here by any natural or biological means? I have a hard time seeing how that becomes possible as well.

    • I find the study of AI and self-awareness in machines to be a fascinating
      one. Do you think humans should be concerned about machines becoming
      self-aware? Why or why not?

      I share your fascination. I just finished reading Max Tegmark's Life 3.0: Being Human in the Age of Artificial Intelligence and recommend it highly. You would likely enjoy the chapter on consciousness. I'm not sure everyone would agree that consciousness is identical with self-awareness, but I think they're close enough to treat as interchangeable.

      Tegmark would say that from the physicist's perspective, it's all just quarks and electrons arranged in a particular way, so there's no a priori reason to assume that a machine could not become conscious. It's an open question whether that will happen or not. Consciousness might be beside the point, however. The fundamental question is whether the goals of an AI are aligned with human goals and whether we can guarantee that they stay aligned if the AI becomes capable of recursive improvement. It need not be conscious to become destructive any more than consciousness would necessarily preclude being destructive.

      Another thing to consider is that machine consciousness might be radically different from human consciousness. We evolved five senses to help ensure our survival in the natural world, but it's entirely possible that an AI would develop different forms of experience. It probably would have little use for a sense of taste, but perhaps it would directly experience logic instead of relying on symbolic processing. What if instead of merely perceiving colors, it directly experienced an instant spectrograph? To paraphrase Thomas Nagel, what's it like to be an AI? Fun stuff to ponder.

      Tegmark has a number of Life 3.0 lectures on YouTube that are worth watching. Here's one he gave at Google last year: