    • I was in my car two weeks ago listening to Yuval Noah Harari on Russell Brand’s podcast, ‘Under the Skin’. The episode is titled, “Is Humanity Finished?” It got me thinking about the future of jobs in a world of Artificial Intelligence.

      I’ve been reading Yuval’s books and listening to his talks for the past couple of years. His words force me to think about the global perspective of life. In his new book, ’21 Lessons for the 21st Century’, he says:

      “My agenda here is global. I look at major forces that shape societies all over the world, and are likely to influence the future of our planet as a whole. Climate change may be far beyond the concerns of people in the midst of a life-or-death emergency, but it might eventually make the Mumbai slums uninhabitable, send enormous new waves of refugees across the Mediterranean, and lead to worldwide crises in healthcare… Reality is composed of many threads…”

      It makes you think, doesn’t it?

      What about the job market?

      In his guest appearance on Russell Brand’s podcast, Yuval talks about how nobody knows what the job market will look like in roughly 30 years. With Artificial Intelligence advancing, we’re at a point in our evolution where we cannot confidently predict the next few decades, or, at least, what they will mean for life as we know it. That’s quite a scary realisation. Historically, the generations before us had some idea of what the next 30–40 years would look like, but we don’t.

      We have no idea what the job market will look like in the next few decades. With that in mind, we’re faced with a dilemma over what to teach kids in schools. What skills will they need in 30 years? Is school a waste of time? Obsolete? Irrelevant? Not to be too dramatic: there’s obviously relevance in the social and basic educational aspects of school, but are the ‘real-world’ skills being taught really going to matter in 30 years’ time? We just don’t know, but probably not.

      Technological evolution and capitalism

      Is this advancement in technology laying the perfect foundation for the eternal existence of capitalism? If economics are to remain at the heart of all our systems, and this technological evolution is happening in tandem with capitalism, that can’t be a good thing, right? Or as Russell Brand put it, does that mean we’re “double f****d?”

      Artificial Intelligence doesn’t have empathy or a mind to care for humans, which means there would be no ethical or moral component in our economic system.

      One could argue this point. Capitalism has a unique dynamic based on the expansion of capital transforming into personal wealth and property. However, recent innovations in technology have delivered things like renewables, the possibility of returning sight to the blind, and large-scale wind turbines, all with the promise of securing a prosperous future for all mankind. Maybe this means our post-apocalyptic view of technological advancement is a mere fantasy? Maybe we’ll put it all to good use for the benefit of all?

      That’s a nice thought in theory but, we, as a species, seem to forever fail at ‘learning our lessons’. History tells us that we have a habit of screwing things up. Instead of creating a species of all-knowing masters of the universe, we might just end up jobless and aimless. Maybe we’ll all take to willing our lives away with a VR headset strapped to our faces, like that episode of Black Mirror. Here’s to the next revolution!

      Probably a little dramatic, but what’s more likely? Corporations and governments putting the world before personal gain and wealth? Or, using Artificial Intelligence to make more and more money, without pausing for a second to ask: “Wait, are we making ourselves redundant?”

      And that leads me to wonder, are we being so ignorant to the ‘bigger picture’ that we’re quickly creating a useless class?

      The useless class?

      ‘Useless class’ is another term I’ve pulled from Yuval Noah Harari. To explain, he says: “I choose this very upsetting term, useless, to highlight the fact that we are talking about useless from the viewpoint of the economic and political system, not from a moral viewpoint.”

      Once Artificial Intelligence advances to a point where it renders humans obsolete in the economic, political, and military systems, you have to wonder whether governments will still invest in things like healthcare and education. If you’re useless in the economic system, there’s no longer an incentive to invest in your healthcare and education. If the economy and the military don’t need you, neither does the government.

      When machines take over the role of workers and soldiers, our political and economic systems will simply detach from human values, rendering us, as Russell said, ‘double f****d’.

      Most jobs today may disappear in a matter of a few decades. What might a post-work world look like? What’s the meaning of life without work?

      We may see the evolution of new job roles, like ‘virtual-world designers’, but what will this mean for Bob the taxi driver in East London? And what about these virtual-world designers? With the pace of technology, might they too need to reinvent themselves at some point? Is work-life going to be more fluid than ever before? Are we facing a post-apocalyptic world of ‘survival of the fittest?’ Who can learn AI the quickest?

      Is Artificial Intelligence more intelligent than humans?

      Answer: it doesn’t need to be. AI doesn’t need to be more intelligent than humans to transform the job market and, scarily, it’s not far from that point. People who are kids today will face the consequences of AI and what they’re learning in school right now will probably have little, if any, real-world demand in 40 or 50 years. If they want to stay relevant in the working world and continue to have a job, they’ll most likely need to reinvent themselves several times. And quickly.

      What if you don’t like the prospect of a post-work world? Satisfaction will probably be a commodity that comes with a price. Your mood and happiness will be controlled by drugs; your excitement and emotional attachments found not in the world outside, but in immersive virtual reality.

      Maybe Charlie Brooker is really onto something with Black Mirror? If you think about it, a lot of those episodes actually already hold some truth. Harari says: “I’m aware that these kinds of forecasts have been around for at least 200 years, from the beginning of the Industrial Revolution, and they never came true so far. It’s basically the boy who cried wolf. But in the original story of the boy who cried wolf, in the end, the wolf actually comes, and I think that is true this time.”

      Are the kids alright?

      Some could argue that we’ve already been living immersed in fantasies for thousands of years. Things like religion, corporations, and money are all stories we’ve created along the way to enable the conformity of large groups.

      Are we going to be stuck in a virtual reality world, pursuing make-believe goals and obeying imaginary laws? Aren’t we doing that already on some scale?

      Virtual reality may be the key to providing life’s meaning to a post-work ‘useless’ class. These fantasy worlds might be generated inside computers, outside computers, or in the shape of new ideologies, maybe all of the above. The possibilities are endless right now, and nobody knows for sure what kind of world will engage us in 2050.

      All of this leads to one question: what should we do? Harari says: “First of all, take it very seriously. Make it a part of the political agenda, not only the scientific agenda. This is something that shouldn’t be left to scientists and private corporations. They know a lot about the technical stuff, the engineering, but they don’t necessarily have the vision and the legitimacy to decide the future course of humankind.”

      Artificial Intelligence is quickly giving new meaning to businesses and our lives. How this will unfold in the next few decades is anyone’s guess.

      Are the kids alright? For now, probably yes. But choosing a career in AI sounds like a good way to stay ahead of the curve.

      What are your thoughts on Artificial Intelligence and its eventual impact?

      Photo by Drew Graham on Unsplash

    • I haven't read Yuval's latest book, but I know after reading Sapiens I got the idea that he has a fairly negative outlook on humanity and our future in general.

      For a different perspective that takes into account not only the negative but also the positive future of AI and humanity, I highly recommend Max Tegmark's book Life 3.0. I think it gives a much more balanced outlook, that is not quite so dark or apocalyptic.

      As far as preparing our children, one of the things he talks about is focusing training on jobs that can't be automated. Anything that requires creative problem solving or human interaction is likely to be around for quite a while. Anything that involves diagnostics and repetitive tasks is going to go away soon. Surprisingly, it looks like doctors (diagnostics) are probably going to go away sooner rather than later, and nurses (human interaction) are probably going to be around for quite a while.

      Unfortunately, killer robots and dystopian futures sell books and bring eyeballs, so that's typically what people write about.

    • It would be interesting to see the swing of dark/light across today's top minds.

      In my view, most business minds are all for AI, full digital ahead, while most humanitarian minds may side with the dark. Almost a Star Wars storyline.

      The recent podcast with Elon touched on his view; it wasn't very hopeful.

      Money and greed are fueling, and will continue to fuel, the end. I am not so concerned for my kid (high school); I think their generation will only see the start, maybe a bit more. The first switch pulled to release a connected AI mind, in my view, spells the end. We are a weed on this planet; even though we have many positive qualities, I don't think they outweigh our destructive tendencies.

    • Yuval is everywhere! I just read a Wired story where he said science fiction is the most important genre.

      It links to a Geek's Guide to the Galaxy podcast where he says science fiction shapes the understanding of the public on things like artificial intelligence and biotechnology, which are likely to change our lives and society more than anything else in the coming decades.

      I agree that he can be pretty dark about the future and I end up hoping to God he's not right and this is like everything before that we thought would unemploy us all.

      The downside of our current education through science fiction is that the best stories come from the computer or robot gaining feelings, falling in love with you, or deciding you need to die. He doesn't think that will happen anytime soon; before then, what will really happen is that they will come for our jobs.

      Still, imagine how terrifying it would be to live in a Syrian neighborhood and have one of these walk down your street, sent by Assad.

    • The kids will be alright. Actually, they will likely be more than alright; they will be better off than you or I.

      Some say, including myself, that AI is overhyped. Geoffrey Hinton, one of the original creators of deep neural nets, says we have hit a wall with the current approach, which will never lead to the kind of “intelligence” that sci-fi authors and pundits imagine. Actually, if you ask me, we don’t even have a firm grasp of what intelligence is, or for that matter what the mind is, let alone how to create it.

      I work every day with deep neural nets and I can tell you from experience: they are pretty dumb. A deep net is simply a higher-order curve-fitting method. It does well in bounded problems with good labels (e.g. Chess, Go, vision), and performance deteriorates quickly from there.
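      To make the curve-fitting point concrete, here is a toy sketch (purely illustrative, not production code, and every name and hyperparameter below is made up for the example): a one-hidden-layer tanh network trained with plain batch gradient descent to fit sin(x). Strip away the terminology and it really is just fitting a flexible curve to labelled points.

```python
import numpy as np

# Toy illustration of "neural nets as curve fitting": a 1 -> 16 -> 1
# tanh network fit to sin(x) with plain batch gradient descent.
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# Small random initial weights for the two layers.
W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)        # hidden-layer activations
    return h, h @ W2 + b2           # network output

mse0 = float(((forward(x)[1] - y) ** 2).mean())  # error before training

lr = 0.1
for _ in range(3000):
    h, pred = forward(x)
    err = (pred - y) / len(x)       # gradient of 0.5 * mean squared error
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)  # backprop through tanh
    gW1 = x.T @ dh; gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((forward(x)[1] - y) ** 2).mean())   # error after training
```

      The fit improves because the problem is bounded and perfectly labelled, which is exactly the regime where this kind of method shines; outside that regime, as noted above, performance falls off quickly.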

      Yuval should refrain from writing more books and appearing in more shows. His work in Sapiens was interesting. It is a good book and it framed new perspectives from historians for general public consumption. Then he became famous and started writing about the future. In Homo Deus, Yuval makes it plenty clear he is not a scientist or an engineer and does not understand technology. Just like Al Gore writing about the future, Yuval should know better than to become a pundit. As Pinker noted recently, Tetlock’s research on forecasting explains the makeup of good forecasters and why pundits are no better than “chimps throwing darts.”

      AI will soon settle from its current state of hype into a niche in the technology landscape, right alongside the internet, APIs, and the microchip, helping drive the next breakthroughs in medicine, retail, agriculture, and whatnot. Instead of replacing us, AI will create many more jobs than it replaces, which has been and will continue to be the case with every advance in technology. Furthermore, those jobs will require more of our creativity and draw upon a wider range of disciplines, making education more, not less, important.

      The result will be more wealth, distributed far and wide.

      The kids will be alright.

    • I agree with Rodrigo that AI is overhyped, but that doesn't mean that it will have no impact on the job market. The truth is, we really don't know whether new sectors will emerge to absorb all the displaced workers. The past may or may not be a good guide to the future. What we do know is that the kids are going to live through a lot of rapid change, and that our current institutions are not going to provide an adequate safety net for people in transition. The best preparation is probably a broad education that develops critical thinking and communication skills, rather than a narrow curriculum specialized in some technical area that may turn out to be irrelevant in thirty years. Curiously, if the most radical visions come to pass, this could include AI itself, as AI will be self-improving. I think superintelligent AI is pure fantasy for the foreseeable future, but it's not impossible in principle.

      Values change more slowly than technology or macroeconomics. Today's kids should avoid defining themselves by their occupations, as many of us did in the past. Flexibility is the key. In the utopian case, technology could create a new era of abundance in which work becomes unnecessary, although there are many reasons to think this is unlikely. But if it were to happen, we would still need to address the mental shift required to free ourselves of the work ethic, which might take several generations. More realistically, I'm less concerned about artificial intelligence than about natural stupidity. If we can't get it together to eliminate daylight saving time, how in the world will we address climate change, growing inequality or the end of work?

    • It has always been difficult to say what the jobs of tomorrow will be, so with the knowledge you guys have, what will drive an increase in work with AI rather than a reduction?

      The lower-level jobs are under real threat and are already being replaced in several industries.

      We have started the full automation of farming, for example; even my local movie theater has minimal staff, as tickets are now automated and vending machines become more advanced.

    • There's a very real concern that superintelligent machines will diverge from human interests and pose a threat to our existence. Many argue that the safest way for the human race to survive past the singularity is to merge with AI to preserve human interests. Elon Musk is already working on a product called Neuralink, which will be a high-bandwidth link between the brain and machines. This is a building block to expand human consciousness with a rich layer of AI. This would give humans unimaginable intelligence.

      Those that possess such technology will have essentially limitless earning potential. So I wonder whether, in the longer term, AI will open up new industries and possibilities for everyone who gets AI. We can't fathom how people will earn their living or what that world will look like.

    • As I was rushing to my terminal in CDG airport last week, I could not help noticing the unmanned, automated train which, without the slightest hesitation, transports thousands of people daily. I think automation does not equate to AI, although the line blurs, and rightfully so. A machine is nothing but an extension of the human mind, and to me, it never will be anything else. What most fear is the replacement of one's craft and skills with the machine. And I think it should be not a reason for fear, but an incentive to progress in skills and knowledge, for all humans! Take all experts in all fields, and replicate them in machines! What happens next? Quite a riot!

    • I can't begin to fathom where all the jobs are coming from, how we got to such low unemployment, when for 200 years we've made extraordinary progress automating to eliminate humans.

      Are the weavers and blacksmiths and elevator operators making TV shows for Netflix now? Where did the travel agents go when Expedia came out? In the early 2000s, we called the Internet the great unemployer. In the 70s, all the talk was about a 4-day workweek. And yet here we are.

    • I don't think Harari is being too dark or apocalyptic at all. He has said quite a few times that he actually feels very optimistic about the future of humankind, and that his "dark" tone is merely speculation meant as a warning. And if it works - gets people thinking - that's awesome.

      He's also said a few times that he doesn't believe that AI will become "conscious" or anything like that, but that humans will gradually be upgraded more and more, and with the infotech and biotech merging, this might create an elite superhuman class and an inferior class of irrelevant humans. Kinda like in the movie Elysium, I guess, where the rich live in a paradise-like bubble with incredibly advanced medical care while the poor masses survive in filthy slums in the ruins of what was once Earth. This feels like a rather likely scenario.

    • I just finished reading Yuval's Homo Deus: A Brief History of Tomorrow. Who knows what the future will bring, but the issues he brings up are ones we've already been dealing with for years. Books such as his help us reflect on the type of future we want.

    • Thanks for all the great responses on this topic, it's been interesting to read them all. I really like Yuval's writing, but I can see how some may find it a stretch too far. I agree with Elge though:

      I don't think Harari is being too dark or apocalyptic at all. He has said quite a few times that he actually feels very optimistic about the future of humankind, and that his "dark" tone is merely speculation meant as a warning. And if it works - gets people thinking - that's awesome.

      I'm glad this opened up some debate though, it's something I find myself thinking about and, I guess, it really does depend on how we ultimately use this advancement of technology.

    • It's impossible to forecast the future after superintelligence because machines smarter than the collective human intelligence have the potential to shape the world in ways we can't fathom. The Internet changed the world in ways unpredicted, but I think this will be a much bigger surprise.

      What will a world look like when poverty is eliminated globally, a cure is developed for almost every disease, and lifespans are prolonged significantly, etc. etc.? I think the quality of life will be pretty good for everyone if we can survive the threats to humanity before we get there.

    • Or we end up like Wall-E or Idiocracy :-) I certainly hope not, however. Perhaps more Hunger Games or I, Robot? Something I need to work on, as my glass is half empty when thinking in this area.

    • My bet is that productivity will dramatically increase in the coming decades to where humans need to work less and less to survive. In theory, machines doing all our jobs is not a terrible thing, but the wealth generated by those machines needs to be fairly distributed among humanity.

      I'm pessimistic right now given the growing inequality, specifically among Americans. More people are having to work harder and harder to provide a comfortable living environment for their families, despite record productivity levels. It's a sure bet that the rich and powerful will capitalize on the intelligent machines of the future. How do you give every human a fair stake? I think a universal basic income will be a necessity.

    • The trajectory is evident that we will get to singularity in the coming decades if researchers and companies continue their course. The financial motive to do so is huge.

      Extinction from a pandemic and nuclear war is a genuine threat. We can't get there if we're all dead.

      Then, if AI's interests diverge from human interests, it can either treat us as deer and let us coexist, or as ants and kill us. Hopefully it'll take us with it. That's why finding a superintelligent solution that will let us evolve with the machines is critical.

    • Superintelligence will understand the brain better than any neurologist or psychiatrist, and would presumably diagnose and treat mental illness. If humans desire happiness and machines have our best interests at heart, those machines will keep us from entering a Wall-E dystopia.

      As far as I, Robot goes - let's just evolve with them to prevent them from killing us :)

    • The trajectory is evident that we will get to singularity in the coming decades if researchers and companies continue their course.

      What is your evidence?