    • AI - will it kill us?

      Or, will we kill ourselves fighting over it?

      What if a government harnesses a super-intelligence? It could be more dangerous than any weapon ever created. It could own the internet and your bank account, figure out how to launch a nuke, even build its own, and surely conceive of something just as dangerous, like a rapidly spreading neurotoxin.

      Fighting for that AI could start WWIII. You'll always lose to the AI, but a Trump-like leader will try anyway. And we all know that WWIII could end in less than an hour...

    • You've made some good points. Many big players are already nervous about someone else getting AI before them, so they are investing big time. No worries though - they are a ways away, so you can relax this weekend.

    • But then again, if we survive the singularity and AI decides to aid humans, our lifetimes could be dramatically extended. That might happen by the end of my life... so there's really no grim ending for me. I'm excited to see how it plays out. And I will most definitely relax this weekend.

    • Syntax isn't sufficient for semantics. There's a special and unique component to human consciousness that we don't have the ingredients for. You can think of it as the "soul factor." We can't recreate that factor, whatever it is. So no, I don't think AI will become self-aware in the way that humans are.

    • You don't think they can ever become aware? What if we come to understand human consciousness? I get squeamish whenever someone starts talking about some aspect of being human that we don't understand and/or can never understand. When you bring up the "soul factor," I think that's hiding behind a veil of current scientific ignorance. An ignorance that is likely to one day fall into the realm of understanding. Once that happens (like a million other things science now understands), I think AI will have yet another avenue for achieving consciousness.

      I guess something of importance here is how many ways AI may learn to learn. Will it be truly open-ended learning in the sense that humans learn, or will it remain closed-circuit learning? I find it hard to believe that learning in AIs will forever remain closed to that which the programmer had in mind. There are already AI programs that have 'learned' things, at least sufficiently, in a way that the programmer cannot explain. This type of learning is of course not the type that will be needed to reach consciousness, but it at least hints at how AI could take over learning in a way we can't imagine, let alone understand.

    • One large concern for me is that when this happens, it won't be like humans passing knowledge and learning through old-fashioned means and understanding. Once one AI knows something, the possibility for all AI to know it is probably instant. **I want to be an actor**plug me in**

    • The AI won't necessarily know why it's conscious or how it happened. It therefore couldn't easily pass that ability on. The real concern may be an AI that becomes conscious and has the ability to teach itself at an extremely high speed. It may decide it does not want to pass on that ability to other AI. Really, there's no precedent for this stuff, and it's all speculation and guessing at this point. When the experts can't agree, you know the possibilities are wide open, or at least appear to be. The development of AI consciousness may be similar in some ways to how evolution works, in that it could happen many different ways in different places. I agree with you, though, that the speed at which this stuff could happen may far surpass our ability to do anything about it.

    • Indeed, think about an update for phones: how many are upgraded before many of us even know there is an update? The speed of communication from device to device, and our need to have them all speak to each other, means matrix-like instant learning is a high likelihood.
      The advancement in patient care with robotics will push us down the predicted path, I am certain.
      I want a desk mate that treats me nicely, even though I see the possible future issues.

    • I don't think AI can ever become "aware" in the way that you and I are aware. It depends how you define "awareness," but by my definition of "aware," I'd say no. There's so much that goes into awareness that we simply don't understand. What causes pain? Will AI ever become capable of feeling pain? I have a hard time seeing how that becomes possible. Will AI ever become self-aware such that it "knows" it's merely the creation of humans and not here by any natural/biological means? I have a hard time seeing how that becomes possible as well.

    • I find the study of AI and self-awareness in machines to be a fascinating
      one. Do you think humans should be concerned about machines becoming
      self-aware? Why or why not?

      I share your fascination. I just finished reading Max Tegmark's *Life 3.0: Being Human in the Age of Artificial Intelligence* and recommend it highly. You would likely enjoy the chapter on consciousness. I'm not sure that everyone would agree that consciousness is identical with self-awareness, but I think they're close enough to treat them as interchangeable.

      Tegmark would say that from the physicist's perspective, it's all just quarks and electrons arranged in a particular way, so there's no a priori reason to assume that a machine could not become conscious. It's an open question whether that will happen. Consciousness might be beside the point, however. The fundamental question is whether the goals of an AI are aligned with human goals, and whether we can guarantee that they stay aligned if the AI becomes capable of recursive self-improvement. It need not be conscious to become destructive, any more than consciousness would necessarily preclude being destructive.

      Another thing to consider is that machine consciousness might be radically different from human consciousness. We evolved five senses to help ensure our survival in the natural world, but it's entirely possible that an AI would develop different forms of experience. It probably would have little use for a sense of taste, but perhaps it would directly experience logic instead of relying on symbolic processing. What if instead of merely perceiving colors, it directly experienced an instant spectrograph? To paraphrase Thomas Nagel, what's it like to be an AI? Fun stuff to ponder.

      Tegmark has a number of Life 3.0 lectures on YouTube that are worth watching. Here's one he gave at Google last year: