    • Thanks for all the great responses on this topic; it's been interesting to read them all. I really like Yuval's writing, but I can see how some may find it a stretch too far. I agree with Elge though:

      I don't think Harari is being too dark or apocalyptic at all. He has said quite a few times that he actually feels very optimistic about the future of humankind, and that his "dark" tone is merely a speculative warning. And if it works, if it gets people thinking, that's awesome.

      I'm glad this opened up some debate though, it's something I find myself thinking about and, I guess, it really does depend on how we ultimately use this advancement of technology.

    • It's impossible to forecast the future after superintelligence because machines smarter than the collective human intelligence have the potential to shape the world in ways we can't fathom. The Internet changed the world in ways no one predicted, but I think this will be a much bigger surprise.

      What will the world look like when poverty is eliminated globally, a cure is developed for almost every disease, lifespans are significantly prolonged, and so on? I think the quality of life will be pretty good for everyone, if we can survive the threats to humanity before we get there.

    • Or we end up like WALL-E or Idiocracy :-) I certainly hope not, however. Perhaps more Hunger Games or I, Robot? Something I need to work on, as my glass is half empty when thinking about this area.

    • My bet is that productivity will dramatically increase in the coming decades, to the point where humans need to work less and less to survive. In theory, machines doing all our jobs is not a terrible thing, but the wealth generated by those machines needs to be fairly distributed among humanity.

      I'm pessimistic right now given the growing inequality, especially among Americans. More people are having to work harder and harder to provide a comfortable living environment for their families, despite record productivity levels. It's a sure bet that the rich and powerful will capitalize on the intelligent machines of the future. How do you give every human a fair stake? I think a universal basic income will be a necessity.

    • The trajectory is evident: we will reach the singularity in the coming decades if researchers and companies continue on their course. The financial motive to do so is huge.

      Extinction from a pandemic or nuclear war is a genuine threat. We can't get there if we're all dead.

      Then, if AI's interests diverge from human interests, it can either treat us like deer and let us coexist, or like ants and kill us. Hopefully it'll take us along. That's why finding a superintelligent solution that lets us evolve with the machines is critical.

    • Superintelligence will understand the brain better than any neurologist or psychiatrist, and would presumably diagnose and treat mental illness. If humans desire happiness and machines have our best interests at heart, those machines will keep us from entering a WALL-E dystopia.

      As for I, Robot -- let's just evolve with them to prevent them from killing us :)

    • The trajectory is evident: we will reach the singularity in the coming decades if researchers and companies continue on their course.

      What is your evidence?

    • Software-wise, we're getting there. Machines are progressively getting better at more complex tasks. Decades ago they became better than any human at arithmetic. Now intelligent machines can drive cars and play Jeopardy! or chess far better than any human.

      Hardware-wise, some forecast the singularity with Moore's law: if the exponential curve of increasing transistors per dollar continues, there's a critical point where a chip reaches the transistor equivalent of a human brain. A rough back-of-envelope version of that argument is sketched below.
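
      The numbers in this sketch are illustrative assumptions only (a ~2-year doubling period, ~10 billion transistors per chip in 2018, and ~100 trillion synapses as a crude "brain equivalent"); it just shows how a crossover year falls out of the exponential:

          import math

          # Illustrative assumptions, not settled facts.
          DOUBLING_YEARS = 2.0        # assumed Moore's-law doubling period
          START_YEAR = 2018           # assumed reference year
          START_TRANSISTORS = 1e10    # assumed transistors per chip in START_YEAR
          BRAIN_SYNAPSES = 1e14       # commonly cited rough human synapse count

          # Doublings needed to close the gap, then the projected crossover year.
          doublings = math.log2(BRAIN_SYNAPSES / START_TRANSISTORS)
          crossover = START_YEAR + doublings * DOUBLING_YEARS
          print(f"~{doublings:.1f} doublings -> crossover around {crossover:.0f}")
          # ~13.3 doublings -> crossover around 2045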

    • ... and yet a 4-year-old can beat the world's most advanced supercomputer at any creative task whatsoever, even something as simple as "draw your house for me."

      Humans have feelings (affect valence) which tell our organic neural networks what information to keep and what to discard. Feelings show us how important data is. What shows computers which data is important? In the current paradigm: humans.

      Andrew Ng (Stanford, Coursera) proposed the rule of thumb that AI is good at substituting for humans at one-second tasks (tasks that take a human about one second). You can chain several one-second tasks into more complicated tasks, like driving: 1) visually identifying an obstacle, 2) judging relative speed, 3) deciding the easiest way to avoid a collision (a toy sketch of this chaining appears at the end of this comment). Nobody has ever suggested that abstract reasoning, which would be required to draw any correlation to human IQ, can be broken down into one-second tasks.

      A computer cannot imagine, feel, explore, or even learn (no, it cannot). You can give a neural net a vast number of examples and it will do a good job "predicting" unseen ones. But the trained net is static, and the problem definition is fixed. I believe we agree that learning is the ability to create internal models of reality and adapt to your environment. Computers fail at every verb and noun in that sentence.

      To conclude, there is no path that takes us from the present paradigm into a future where a singularity is reached and AI "takes over".
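
      For concreteness, here is the toy chaining sketch referenced above. Every function name, data shape, and threshold is invented for illustration; in practice each step would be a separately trained model:

          # Hypothetical decomposition of one driving decision into
          # Ng-style "one-second tasks". All names and logic are invented.

          def identify_obstacle(frame):
              # ~1-second human task: "is something in my lane?"
              return frame.get("obstacle")   # None if the lane is clear

          def judge_closing_speed(obstacle, own_speed):
              # ~1-second human task: "how fast am I closing on it?"
              return own_speed - obstacle["speed"]

          def choose_maneuver(closing_speed):
              # ~1-second human task: "brake or change lanes?"
              return "brake" if closing_speed < 10 else "change_lane"

          def drive_step(frame, own_speed):
              # Chain the one-second tasks into one driving decision.
              obstacle = identify_obstacle(frame)
              if obstacle is None:
                  return "continue"
              closing = judge_closing_speed(obstacle, own_speed)
              return choose_maneuver(closing) if closing > 0 else "continue"

          # A car 15 mph slower than us is closing fast -> change lanes.
          print(drive_step({"obstacle": {"speed": 45}}, own_speed=60))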

    • While there has been good progress using machine learning for certain problems, it seems pretty clear (to me, at least) that superintelligence is pure science fiction for now. It's not just a matter of neuron and transistor counts--elephant brains have three times as many neurons as human brains. Intelligence depends on the organization of the information processing, and our understanding of how it works in humans is still quite limited, though improving. It's possible that we will create artificial intelligence that is based on some other form of information processing, but there's nothing inevitable about that.

      We have gotten pretty good at using massive amounts of data to train neural nets on classification tasks, but we have little idea of how to replicate any human's ability to learn from a single example--when is that appropriate and when is it not? Semantic understanding is elusive, which is why machine translation is still so crude. Basic science requires experimentation, not searching Google--are AI robots going to build the successor to the LHC? And, of course, we have barely started to address the need to align AI goals with human values, a critical requirement.

      AI will undoubtedly let us build increasingly powerful tools, but AGI is another matter. It remains to be seen whether the singularity is near.

    • Great points!

      This brings up some philosophical questions, like how does consciousness work and how does it affect human creativity? And consciousness is something a machine might never possess.

      The term singularity in the context of technology is hard to define. Roughly, it's the point when technology has advanced so far that what comes after is unpredictable.

    • AI will be controlled by small elite groups, since the costs are high and its interfaces into our daily lives will come through products. So even if AI itself doesn't control the masses, a smaller profit-driven group has big potential to.

      Sure, there will be some resistance lower down, but look at pretty much any time in history: the have-nots do not fare well in any situation. The court AI system is an example: https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses

    • I've been closely watching kids ages 2-10 in our house interacting with Alexa, and it's pretty surprising to hear what they ask her. Alexa, what are your favorite things? What's your favorite kind of hat? Alexa, I love you.

      Their mom says they have no real emotional attachment, but that it's just fun to ask human questions and hear her fairly human-sounding responses.

    • I'm optimistic that numerous countries are more seriously investigating guaranteed annual incomes (GAI). Unfortunately our new Ontario premier (a Conservative) just axed the test project the Liberals were running on it. It was nice to see people in other countries asking them not to cancel it because they wanted to see the results.

    • Kevin, I think you should write your own book on this topic. I enjoyed reading your comments and I'd probably change elements of my initial post after reading them.