    • Are people like Ray Kurzweil part of a religion by believing in the singularity? Yuval Noah Harari makes such an argument in Homo Deus, where he calls it a techno-religion. I heard of Kurzweil and the singularity many years ago and found it to be a very compelling idea and a real possibility. I still think it is a possibility, but I think Kurzweil was extremely off on his expectation of when it would occur and on the inevitability of such things. I am an optimist, and somewhat of a science and technology optimist, but I still think such things pose a clear and very serious risk to the human species. Certainly reading Our Final Invention: Artificial Intelligence and the End of the Human Era (by James Barrat) had something to do with that. I'm neither as fearful nor as cavalier about the perceived threat of AI, but I see it as a reason to think long and hard about the future of AI and how we will monitor and/or control the industry. What are your thoughts on the singularity? Is AI consciousness likely or inevitable? What are the chances AI will co-opt humans for its own needs? Will humans become cyborgs in a way that counters the threat of AI? Is the Singularity near? Do you hope it'll save you from certain death?

    • Other than confusing the remotely possible with the inevitable, I think the biggest problem I have with singularitarians (is that a word?) is that they neglect to consider the social context that may constrain technology. Presumably, the rich will be the first to benefit, but how will the rest of us react if Eric Trump becomes immortal? It might not be pretty. Owing to poor individual choices and the irrationality of the healthcare delivery system, actual life expectancy in the US is declining despite scientific breakthroughs. The social consequences of climate change may derail civilization before long, which is sure to alter the course of technological progress. So no, I don't think the singularity is near.

    • Ray and other 'singularitarians' believe that technological progress is exponential. But in reality it more often follows an S curve: a slow start, fast progress, and then a plateau (a small sketch contrasting the two curves follows this post). Look at processors, for example. It was a wild ride for a long time, but in the last decade or so progress slowed down considerably - hence the move to multi-core CPUs instead of just cranking out faster CPU cores. Now even that is hitting a wall (programs cannot take advantage of too many cores), so we are moving to ASICs - chips specialised for particular tasks - which is how we get GPUs, 'neural processors' and the like. Another 10x improvement is not on anyone's roadmap.

      A whole different problem is that general AI is predicted almost exclusively by people who are not in actual AI research. I ran into an interesting article about it yesterday: https://www.rlleaders.com/rll-insights/2018/10/17/ai-not-in-our-own-image.

      Omnipotent master-AI is the wrong thing to be afraid of. The right thing to be afraid of is regular, mundane AI as we have already, used incorrectly or for malicious purposes. Law-enforcement AIs with built-in racial bias (systemic racism, anyone?). Hiring AIs with bias against women (hello, Amazon!). AI weapons? AIs in the hands of ruthless dictators or totalitarian regimes can be a significant force-multiplier in perpetuating injustice. That is where our attention needs to be.
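      (Not part of the original post - a minimal Python sketch of the exponential-versus-S-curve point above, with arbitrary illustrative rate and ceiling values: both curves look the same early on, but the logistic one flattens as it approaches its ceiling.)

```python
# Contrast pure exponential growth with a logistic "S curve".
# Both start at 1 and grow at roughly the same rate early on,
# but the logistic curve plateaus near its ceiling.
import math

def exponential(t, rate=0.5):
    return math.exp(rate * t)

def logistic(t, rate=0.5, ceiling=100.0):
    # Standard logistic with initial value 1, capped at `ceiling`.
    return ceiling / (1.0 + (ceiling - 1.0) * math.exp(-rate * t))

for t in range(0, 21, 4):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):6.1f}")
```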

    • I look forward to your reply. I had a look at the link you posted and I'm not sure how serious the people who made that site really are. I'll wait for your thoughts on the longer reply.

    • So human ineptness will postpone or even eliminate the possibility. But the statistics show that the world is actually becoming a friendlier and better place. Yes, it's a somewhat subjective thing, but by many measurable and agreeable measures it is actually becoming a better place, despite what the media presents. Maybe it's closer than you think ;)

    • I certainly agree with your comments about the more immediate risks of non-conscious AI. I'm not so sure we can discount Moore's law just yet. Quantum computers seem reasonably close at hand...

    • True, a breakthrough might come from somewhere, but even if it does, it doesn't look like the path to general AI is just heaping more CPU power onto it. Consciousness is something science doesn't even understand, so attempting to replicate it is basically guesswork. We might end up with something radically different, which might bring its own, novel problems.

      But, in any case, I'm optimistic and looking forward to what will come.

    • Ever since the 2016 US election, I've gotten out of the prediction business. I'm aware of (and grateful for) the ways in which the world has improved in my lifetime, and though I take the warnings about climate change seriously, I don't pretend to know that extinction is around the corner. I guess my point was that technology does not exist in a vacuum and that social forces may alter its development course. It needn't be anything as extreme as a nuclear war or environmental disaster. Something with a great deal of promise like editing the human genome may be resisted and even prohibited because of associations with Nazi eugenics or religious objections to interfering with god's plan.

      I should add that I have other objections to the idea of the singularity, both technical and philosophical, but I'll save these for future posts.

    • As far as I know, quantum computing is only useful for a limited class of problems. It will not necessarily replace current computer architectures.

    • I agree with your comments, but I would point out that artificial general intelligence may not require consciousness. That doesn't make it an easier problem to solve, of course.

    • I agree with you about the computing power not being enough. What I'd add, however, is that we do NOT need to know how to create consciousness, nor do we need to understand it. We only need to create some program that starts teaching itself in such a way that it grows geometrically.

    • Of course it's all just speculation, but important speculation, because its risk is so great. Even if the probability is remotely small, the consequences could mean our extinction. I'd also agree with you about the social influences on the development of science and technology (that's what I did my MA thesis on), but social influences could become a moot point if some self-learning program with massive computing power started learning at an exponential rate. Unlikely but possible, imho.

    • Agreed. Something could destroy us without being conscious, aware or even thinking. It may just be that a conscious entity is a safer bet.

    • The idea of recursively self-improving AI leading to an exponential explosion is frequently mentioned, but not challenged very often. Perhaps I'm missing something fundamental, but it seems to me that we don't know what the shape of the curve will be. It seems to be steep now (at least in pattern recognition), but it could easily level off or become asymptotic to some decidedly non-spectacular level, perhaps well below human levels of general intelligence.

      But I agree that the possible consequences are dangerous enough that we need to pay heed. Nick Bostrom pointed out in his book on superintelligence that while it may take fifty years (or more) to create superintelligence, it might also take that long to figure out how to control it. It's not something we can retrofit, so we had better start now. A good first step would be to make existing AIs better able to explain their decisions. Since we're already starting to use the technology to make decisions that affect people's lives (hiring, credit and others), we have a right to transparency. But ultimately, we need to make sure that AI's goals are aligned with human goals, and I don't think we have much of an idea of how that can be guaranteed.

    • I agree with most of what you've said, but good luck getting an AI to explain how it's done something. We already have AI that does some things better than humans, but we don't know how it does them, and getting a dumb program to explain how is more complicated than creating the program in the first place.

      While I agree about not knowing what the curve is like, I find it hard to believe that AI wouldn't be smarter than humans - maybe smarter in some things and not so smart in others. I also think that if we look, say, 10,000 years into the future, the odds of these things happening are much higher, assuming of course that human society keeps trucking along without any major disruptions to our technological and scientific progress. Certainly there'll be bumps along the road, but it's likely that AI will on average be smarter than humans.
