I was in my car two weeks ago listening to Yuval Noah Harari on Russell Brand’s podcast, ‘Under the Skin’. The episode is titled, “Is Humanity Finished?” It got me thinking about the future of jobs in a world of Artificial Intelligence.
I’ve been reading Yuval’s books and listening to his talks for the past couple of years. His words force me to think about the global perspective of life. In his new book, ’21 Lessons for the 21st Century’, he says:
“My agenda here is global. I look at major forces that shape societies all over the world, and are likely to influence the future of our planet as a whole. Climate change may be far beyond the concerns of people in the midst of a life-or-death emergency, but it might eventually make the Mumbai slums uninhabitable, send enormous new waves of refugees across the Mediterranean, and lead to worldwide crises in healthcare… Reality is composed of many threads…”
It makes you think, doesn’t it?
What will the job market look like?
In his guest appearance on Russell Brand’s podcast, Yuval talks about how nobody knows what the job market will look like in 30 years. With Artificial Intelligence advancing, we’re at a point in our evolution where we cannot confidently predict the next few decades, or at least what they will mean for life as we know it. That’s quite a scary realisation. Historically, the generations before us had some idea of what the next 30–40 years would look like, but we don’t.
We have no idea what the job market will look like in the next few decades. With that in mind, we’re faced with a dilemma over what to teach kids in schools. What skills will they need in 30 years? Is school a waste of time? Is school obsolete? Irrelevant? Not to be too dramatic: schools obviously remain relevant for their social and basic educational roles, but are the ‘real-world’ skills they teach really going to matter in 30 years’ time? We just don’t know, but probably not.
Technological evolution and capitalism
Is this advancement in technology laying the perfect foundation for the eternal existence of capitalism? If economics is to remain at the heart of all our systems, and this technological evolution is happening in tandem with capitalism, that can’t be a good thing, right? Or as Russell Brand put it, does that mean we’re “double f****d?”
Artificial Intelligence doesn’t have empathy or a mind that cares for humans, which means there would be no ethical or moral component in our economic system.
One could argue this point. Capitalism has a unique dynamic based on the expansion of capital, transforming into personal wealth and property. However, recent innovations in technology have brought things like renewables, the possibility of returning sight to the blind, and large-scale wind turbines, all with the promise of securing a prosperous future for all mankind. Maybe this means our post-apocalyptic view of technological advancement is a mere fantasy? Maybe we’ll put it all to good use for the benefit of all?
That’s a nice thought in theory, but we, as a species, seem to forever fail at ‘learning our lessons’. History tells us we have a habit of screwing things up. Instead of creating a species of all-knowing masters of the universe, we might just end up jobless and aimless. Maybe we’ll all take to whiling our lives away with a VR headset strapped to our faces, like that episode of Black Mirror. Here’s to the next revolution!
Probably a little dramatic, but what’s more likely? Corporations and governments putting the world before personal gain and wealth? Or, using Artificial Intelligence to make more and more money, without pausing for a second to ask: “Wait, are we making ourselves redundant?”
And that leads me to wonder, are we being so ignorant to the ‘bigger picture’ that we’re quickly creating a useless class?
The useless class?
‘Useless class’ is another term I’ve pulled from Yuval Noah Harari. To explain, he says: “I choose this very upsetting term, useless, to highlight the fact that we are talking about useless from the viewpoint of the economic and political system, not from a moral viewpoint.”
Once Artificial Intelligence advances to the point where it renders humans obsolete in the economic, political, and military systems, you have to wonder whether governments will still invest in things like healthcare and education. If you’re useless to the economic system, there’s no longer an incentive to invest in your healthcare and education. If the economy and the military don’t need you, neither does the government.
When machines take over the role of workers and soldiers, our political and economic systems will simply detach from human values, rendering us, as Russell said, ‘double f****d’.
Most jobs today may disappear in a matter of a few decades. What might a post-work world look like? What’s the meaning of life without work?
We may see the evolution of new job roles, like ‘virtual-world designers’, but what will this mean for Bob the taxi driver in East London? And what about these virtual-world designers? With the pace of technology, might they too need to reinvent themselves at some point? Is work-life going to be more fluid than ever before? Are we facing a post-apocalyptic world of ‘survival of the fittest’? Who can learn AI the quickest?
Is Artificial Intelligence more intelligent than humans?
Answer: it doesn’t need to be. AI doesn’t need to be more intelligent than humans to transform the job market and, scarily, it’s not far from that point. Today’s kids will face the consequences of AI, and what they’re learning in school right now will probably have little, if any, real-world demand in 40 or 50 years. If they want to stay relevant in the working world and continue to have a job, they’ll most likely need to reinvent themselves several times. And quickly.
What if you don’t like the prospect of a post-work world? Satisfaction will probably be a commodity that comes with a price. Your mood and happiness will be controlled by drugs; your excitement and emotional attachments found not in the world outside, but in immersive virtual reality.
Maybe Charlie Brooker is really onto something with Black Mirror? If you think about it, a lot of those episodes actually already hold some truth. Harari says: “I’m aware that these kinds of forecasts have been around for at least 200 years, from the beginning of the Industrial Revolution, and they never came true so far. It’s basically the boy who cried wolf. But in the original story of the boy who cried wolf, in the end, the wolf actually comes, and I think that is true this time.”
Are the kids alright?
Some could argue that we’ve already been living immersed in fantasies for thousands of years. Religion, corporations, and money are all stories we’ve created along the way to enable the conformity of large groups.
Are we going to be stuck in a virtual reality world, pursuing make-believe goals and obeying imaginary laws? Aren’t we doing that already on some scale?
Virtual reality may be the key to providing life’s meaning to a post-work ‘useless’ class. These virtual worlds might be generated inside computers, outside computers, or take the shape of new ideologies, or maybe all of the above. The possibilities are endless right now, and nobody knows for sure what kind of world will engage us in 2050.
All of this leads to one question: what should we do? Harari says: “First of all, take it very seriously. Make it a part of the political agenda, not only the scientific agenda. This is something that shouldn’t be left to scientists and private corporations. They know a lot about the technical stuff, the engineering, but they don’t necessarily have the vision and the legitimacy to decide the future course of humankind.”
Artificial Intelligence is quickly giving new meaning to businesses and our lives. How this will unfold in the next few decades is anyone’s guess.
Are the kids alright? For now, probably yes. But choosing a career in AI sounds like a good way to stay ahead of the curve.
What are your thoughts on Artificial Intelligence and its eventual impact?