    • This tech is supposed to debut at CES, I think tomorrow. It is called NEON. These people are not real and are not motion-captured. We are seriously living in the future. Cool and scary at the same time. I really do feel we are going to see a radical shift in the storytelling process. Actors are going to hate this, but I would imagine directors and producers will begin to rely on it. As a creative, imagine being able to control the rate of a blink when trying to convey an emotion to carry a line. I would also imagine I am in the minority on this. It is probably due to my love of silent films, where the face was required to tell the story more than the dialogue. Sounds crazy, but I could see Chaplin experimenting with this. Haha.

    • The reporter sounds pretty congratulatory, but I'm wondering who exactly this technology benefits, whether we really need it, and why, as a tech reporter, he doesn't ask those questions himself.

    • An age-old question: the difference between needs and wants. Not sure anyone would need digital humans, but businesses may want them to communicate with their customers more effectively (too expensive to engage a human in a one-on-one chat, but not a digital character, which could decrease consumer dissonance), gaming companies may want them to create more realistic experiences, and filmmakers may want them to help express a scene exactly as they envision it playing out. Believe me, I would imagine the majority share your opinion today. As these start to fold into society, mindsets may change. We will have to see, though. Who knows, right?

    • I have the standard reaction: just because we CAN do something, it does not follow that we SHOULD.

      It all depends on whether, as a species, we are expecting to tend towards more or less "real" social interaction where parties are physically present. This technology will support less physical interaction, and I am not sure I like where that might lead civilisation in the future. Certainly not utopia, but quite possibly dystopia.

      Another observation would be that this technology could significantly aggravate the body-image problems suffered by many. It is one thing to feel bad about your body shape compared to static advertising images, but quite another if perfectly-shaped avatars that move and speak are involved. A lot of people would be a lot more unhappy for a lot more of the time, I think.

      All in all this would not empower the human condition much. Rather, it might diminish the nature of the human condition and individuality. It could drive us into an anthropological cul-de-sac.

      It is clever, of course. It is the applications that give me pause for thought.

    • I have the standard reaction: just because we CAN do something, it does not follow that we SHOULD.

      While standard, it is a pretty controversial position. As a reductio ad absurdum, let's boil it down to The Croods animated movie. If we don't do things beyond immediate needs, we could make do with a nice cave. All those discoveries, and yearnings to find out what is beyond that mountain - those are not really needed, right? At least not for an individual or a single family.

      And then as soon as there's a group of people, which might form a society, things immediately become muddled. Who decides what is needed for whom? And how? And what if that kid from that family is dreaming about flying like a bird, isn't that dangerous and should we toss her from the cliff before something bad happens to us all?

      And on another note, I happen to hold yet another not very popular opinion: that we as a society should perhaps put more emphasis on teaching people to be happy from within rather than from without. And that cannot be done by obsessing over not hurting absolutely anyone's feelings. Humanity is but a thin, very fragile (if ultra-opinionated about lots of things, including itself) smudge of biomass on this planet - even insects are, physically, more out there. The planet in question is not the center of the Universe either. Feelings, lives, limbs, whole societies will be hurt by things much more powerful than some sort of body-image projection. I am firmly convinced that we should start steering towards a more realistic context, rather than the wildly popular approach that if, say, someone is not addressed by a specific novelty pronoun invented last year, the world is going to end. If we don't, our lack of focus and petty grievances are bound to bite us hard and fast.

      That doesn't mean we should be mean to each other (pun intended), far from it. Kindness is a gift and a key. But IMNSHO we should stop trying to create an emotionally sterile environment based on unwarranted entitlement.

      I'll toss on a couple quotes, just for entertainment value.

      Far out in the uncharted backwaters of the unfashionable end of the western spiral arm of the Galaxy lies a small unregarded yellow sun. Orbiting this at a distance of roughly ninety-two million miles is an utterly insignificant little blue green planet whose ape-descended life forms are so amazingly primitive that they still think digital watches are a pretty neat idea.

      This planet has—or rather had—a problem, which was this: most of the people living on it were unhappy for pretty much all of the time. Many solutions were suggested for this problem, but most of these were largely concerned with the movement of small green pieces of paper, which was odd because on the whole it wasn't the small green pieces of paper that were unhappy.

      And so the problem remained; lots of the people were mean, and most of them were miserable, even the ones with digital watches.

      Many were increasingly of the opinion that they'd all made a big mistake in coming down from the trees in the first place. And some said that even the trees had been a bad move, and that no one should ever have left the oceans.

      That's the very beginning of The Hitchhiker's Guide to the Galaxy by Douglas Adams.

      The universe is just there; that's the only way a Fedaykin can view it and remain the master of his senses. The universe neither threatens nor promises. It holds things beyond our sway: the fall of a meteor, the eruption of a spiceblow, growing old and dying. These are the realities of this universe and they must be faced regardless of how you feel about them. You cannot fend off such realities with words. They will come at you in their own wordless way and then, then you will understand what is meant by "life and death." Understanding this, you will be filled with joy.
      -Muad'Dib to his Fedaykin

      And this one is from one of the Dune books, by Frank Herbert.

    • It’s not really about doing what’s needed, it’s about recognizing that something might be harmful to society while only benefiting a few. Here, for example, we seem to have a tech reporter congratulating a company for making him one step closer to being obsolete. As @paulduplantis points out, the tech will be of value to those who find human labour too expensive. Why do we, as a society, continue to congratulate and then actually give business to those who devalue us? We always want the cheapest goods, and don’t seem to recognize that there are many hidden costs beyond just the dollar value. And the pundits aren’t really talking about this. It’s why we’ve ended up in this environmental mess, and probably why we’ll end up in a sociological mess. This isn’t really about someone inventing something we didn’t know we needed yet - it’s more about the rest of us fiddling while Rome burns, and even congratulating the arsonists for their fine work.

      Now, perhaps you might think I’m being hasty or alarmist to suggest that the city is burning, but what I’m really trying to point out here is a lack of investigation into whether people are lighting fires or not.

    • I wouldn't argue that your point of view is alarmist. It is a valid concern. But I do think that such points of view, or angles of attack, if you will, are themselves based on a sort of metacognitive bias. I'll try to explain.

      Let's zoom out just a bit and try to see what we are talking about. It is not anything really revolutionary or profound. At root, it's just an iterative improvement (whatever the size of the iteration) of a sort of media-projection technique. It has its uses, glorified or not - I can as easily point to a vain use as to a high-value deployment. Let's see: using an improved digital avatar to sell luxury swimwear, versus using an improved digital avatar of someone to connect with an ill or geriatric close relative over vast distances, perhaps even just one last time.

      Now, I would posit that the real deep-value stuff is all sorts of basic research and applied science that takes a whole lot of time and goes on and on in the background, but is very rarely talked about. And why? Let's single out the two most important reasons. First, and a lamentable one: it has very little wow factor. Thus, on one hand, the media is little interested, if at all, in picking it up and/or taking the time to understand what is going on, and on the other hand, the general public appears to have a shallower focus every year - not enough to enjoy a prime-time deep dive on, I don't know, drip irrigation? Just these two sub-factors create a kind of vicious circle. And second, and just as important: the more profound an applied scientific topic, the higher the probability it will trigger a knee-jerk denial reaction, plus all sorts of uncanny valleys. Human genetic research is plagued by the ghost of eugenics. Space progress is hamstrung by heavy political and budgetary squabbles plus extreme risk-averseness (not just in terms of lives lost, but in terms of daring to start something different). Nuclear energy research is all but stoppered by overzealous pseudo-ecologists. Et cetera ad infinitum.

      I am never in favour of completely unchecked anything. But neither am I in favour of completely turtling just because something might happen. Because something will happen anyway, and if we forsake our possibilities to learn and go beyond, we will just be crushed by the uncaring universe, which will then go on existing without us, not the least bit concerned.

      As a side nitpick, why do you think this technology (just as an example) devalues anyone? I would think this debate has been settled many times since the Industrial Revolution began rolling. Did steam pumps devalue Welsh miners? Does an agricultural combine devalue human pickers? Does the existence of Cake devalue carrier pigeons and their breeders and trainers?

      I would think that it is not the technology but the way it is deployed and portrayed that can value or devalue something, but ain't that a completely different discourse? :)

    • Love the Douglas Adams quote.

      I think our thinking is basically aligned. My comment on body-image frustrations was a little frivolous. The weighty point, where maybe we agree, is that by isolating people more and more - to the extent that physical interactions become rare - we will not be happy from within. We would be ever more slaves to whatever mantra is delivered to us from our screens.

      In fictional terms this would end up looking like the sterile world depicted in James Gunn's novel, "The Dreamers".

      The book is recommended to anyone with a futurist's perspective.

    • I would think that it is not the technology but the way it is deployed and portrayed that can value or devalue something, but ain't that a completely different discourse? :)

      It's the way it's deployed (to replace human actors, who are too expensive) that I specifically called out, and yes, I agree, those are two very different topics.

      But neither is the topic I'm really trying to discuss, which is the uncritical nature of the tech reporting. You speak of AI as one more step in a long process of technological development, and I agree that it is. The point I'm trying to make is that we're reaping the rewards of this 100+ year old process now, and still failing to really think about the full cost of keeping goods and services 'cheap' (I would argue these only look cheap, but were actually really expensive for society as a whole, because we left too much data off the balance sheet when determining the costs).

      The pace of digital development is far faster than the pace of mechanization. So if we NOW don't take the time to ask questions about who is benefiting and who is paying, we will probably never get a second chance.

      So I'm not really interested in debating whether we should develop tech or not. I'd just like to advocate for more thorough analysis of the impacts, and asking that both pundits and consumers be more critical and aware before rushing to praise and adopt.

    • So I'm not really interested in debating whether we should develop tech or not. I'd just like to advocate for more thorough analysis of the impacts, and asking that both pundits and consumers be more critical and aware before rushing to praise and adopt.

      Thus formulated it's a thesis I can 100% get behind.

    • I guess the question is: does technology in and of itself devalue us? Where did the gas station attendants go? Were they able to find employment? What did the automobile do to the horse-and-buggy operators? Do humans lose value because of technology, or because they have difficulty adapting to change? An incredibly important question to answer, but I don't think it falls only on the shoulders of technology.

      Of course I see the downside to all of this, but I also see the upside of humans being pushed to demonstrate more value for themselves. My question is how technology could help with this, not how technology is destroying this. The latter seems to be the easier answer.

    • Humans lose value whenever someone decides they are too expensive to employ.

      In the past, the pace of change has been slow enough that, over a period of time, people have been able to adapt and find new positions in different fields. Farm workers became construction workers or factory workers. You could even argue that the technology freed people to do other things.

      But computers can work so fast that once true AI is developed there will be a rapid cascade effect of people being put out of work, and it’s not at all clear that we’ll be able to adapt quickly enough. Some jurisdictions are experimenting with a guaranteed basic income model to see if this might be the answer. There’s been quite a bit of thought put into post-capitalist models, but at the moment I’m not seeing much institutional planning, even though this change to AI seems inevitable and imminent.

    • To expand a little on my last post, my favourite radio show ran a 3-part series in 2017 discussing the ins and outs of the digital revolution, and it gives a wide range of opinions from various experts. Some think we'll adapt just fine on our own, but others foresee that institutional intervention may be needed.

      Here's the introduction to the series:

      AI and robots seem to be everywhere, handling more and more
      work, freeing humans up — to do what? In this 3-part series, contributor
      Jill Eisen explores the digital revolution happening in our working
      lives. Artificial intelligence is on the verge of replacing our own
      intelligence. It took decades to adjust to machines out-performing human
      and animal labour. What will happen when robots and algorithms surpass
      what our brains can do? Some say digital sweatshops — repetitive, dull,
      poorly paid and insecure jobs — are our destiny. Others believe that
      technology could lead to more fulfilling lives. This episode is Part 1
      of the series. Part 2 airs Tuesday, July 31; Part 3 airs Tuesday, August 7.
      This episode originally aired September 13, 2017.

      The digital age is transforming the way we work. Some would even say that
      artificial intelligence, robots, and automation are destroying it.

      "The unions haven't come to grips with this. They're floundering. They're
      getting hammered. The Left used to think about whether unions could be
      revolutionary or not. Now the question is: can they just defend
      people? And they can't." - Sam Gindin

      No one doubts that AI, along with machine learning, advanced robotics, 3-D
      printing and the internet of things will disrupt the labour market.
      It's just a question of how much. One influential study coming out of
      Oxford University estimated that 47 per cent of jobs in the U.S. could
      be eliminated using existing technologies. Everything from fast-food and
      retail jobs to legal and medical jobs is in the crosshairs.

      The big question is whether new, well-paying jobs will come along to
      replace the old ones. If recent trends are any indication, it doesn't
      look good. Since the mid-1990s, contract, part-time and temporary work
      have accounted for 60 per cent of all new jobs across most developed
      countries, and in 2016 an astonishing 90 per cent of new jobs in Canada
      were part-time.

      If the benefits of the new technologies are to be broadly shared, there will have to be big
      changes ahead. Labour laws and employment standards will have to be
      rewritten, our social safety net will need strengthening, governments
      will have to take a more active role in directing the economy and unions
      will have to find new ways of representing workers.

      The link below points to the introduction quoted above and the three episodes in the series, and lists the 11 contributors interviewed, plus 21 books and 3 web links for further reading.

    • Here's a review that appeared in The Guardian today of A World Without Work by Daniel Susskind - who is not one of the people listed in the reading and contributors list above, but who seems to draw similar conclusions:

      But AI, Susskind argues, has changed everything, starting with the
      definition of “routine”. Time and time again, it has been assumed that a
      task required a human being until a machine has come along and proved
      otherwise, and without needing to mimic human cognition. It used to be
      argued that workers who lost their low-skilled jobs should retrain for
      more challenging roles, but what happens when the robots, or drones, or
      driverless cars, come for those as well? Predictions vary but up to half
      of jobs are at least partially vulnerable to AI, from truck-driving,
      retail and warehouse work to medicine, law and accountancy.

      That’s why the former US treasury secretary Larry Summers confessed
      in 2013 that he used to think “the Luddites were wrong, and the
      believers in technology and technological progress were right. I’m not
      so completely certain now.” That same year, the economist and Keynes
      biographer Robert Skidelsky wrote that fears of technological
      unemployment were not so much wrong as premature: “Sooner or later, we
      will run out of jobs.” Yet Skidelsky, like Keynes, saw this as an
      opportunity. If the doomsayers are to be finally proven right, then why
      not the utopians, too? Committed to neither camp, Susskind leaves it
      late in the day to ask fundamental questions.

    • this change to AI seems inevitable and imminent.

      I would be interested to see some things supporting this choice of description.

      Not being an AI expert in the slightest, I do try to keep an eye on related things, and from what I can observe, plus some even optimistic inferences, I can see neither inevitability nor anything resembling an imminent approach to "true AI" under any decent definition of one. In fact, we don't even seem to have an inkling of how to approach developing one. There's a lot of trying going on, but it's trying to get an idea that would bring us even very slightly closer to having any of that understanding.

      And that's not even speaking about the mundane hardware restrictions that are looming in the face of any such developments - memory and storage capacities and performance, energy requirements, communications capacities and bandwidth, etc.

      I would rather expect some amazing breakthroughs in neurobiology and man-machine interfacing than "true AI" in any shape or form, within the next 20 to 50 years.

    • I also find the usage of the Luddite reference very indicative (even though it seems to be a second-degree quotation from Larry Summers?).

      I would instead point out that

      Despite their modern reputation, the original Luddites were neither opposed to technology nor inept at using it. Many were highly skilled machine operators in the textile industry.

      That is, too, a quote from an excellent article on the topic in the Smithsonian Magazine.

      I do agree that the accelerating pace of technological innovation presents new and unique challenges to humanity, in particular as said humanity struggles to catch up and modernise its societal development. However I am not convinced that it is the technological part of this that is the problem.

      Bridging to the sci-fi-related conversations: in some of his surprisingly lesser-known works, Frank Herbert explored the concept of too-efficient organisations, though he envisioned those organisations as bureaucracy-focused rather than technology-focused - that is, governments. See the Bureau of Sabotage.

    • Sorry - I've been sloppily using AI to mean 'automation' or 'machine learning'. I do think real AI is inevitable, but it isn't necessary to cause the employment upheaval in question. And yes, the digitization of brains may well happen first, and I have no idea how to predict the effects of that!*

      How about this as a 'state of the AI':

      So where do we stand now with artificial intelligence?
      After years of headlines announcing the next big breakthrough (which,
      well, they haven’t quite stopped yet), some experts think we’ve reached
      something of a plateau.
      But that’s not really an impediment to progress. On the research side,
      there are huge numbers of avenues to explore within our existing
      knowledge, and on the product side, we’ve only seen the tip of the
      algorithmic iceberg.

      Kai-Fu Lee, a venture capitalist and former AI researcher, describes
      the current moment as the “age of implementation” — one where the
      technology starts “spilling out of the lab and into the world.” Benedict
      Evans, another VC strategist, compares
      machine learning to relational databases, a type of enterprise software
      that made fortunes in the ‘90s and revolutionized whole industries, but
      that’s so mundane your eyes probably glazed over just reading those two
      words. The point both these people are making is that we’re now at the
      point where AI is going to get normal fast.

      Thanks for the Herbert reco. I really enjoyed reading his book Hellstrom's Hive a few years ago.

      *Though I like the idea that we might live inside a simulation, and will ourselves develop new simulations in which to live (parallel universes?). Perhaps history is the continual process of developing simulations in which people live, until the people inside the simulation develop the tech to create their own simulation and move inside, and so on ad infinitum.

    • And yes, the digitization of brains may well happen first, and I have no idea how to predict the effects of that!*

      On this I can offer two references of note. One, this talk from GDC

      And two, if you haven't read it, a brilliant and very intense book (I can't really spell out why I think it intense, it just hit some buttons for me, I guess) - Synners, by Pat Cadigan.

    • How about this as a 'state of the AI':

      It was a decent (though super-concise, consumer-grade) overview a year ago, and it starts from the very thesis I've been trying to advance - that the "AI" hyped and developed and used everywhere today has no relation to a real, "true" AI as pictured in various imaginations and works of fiction. What we have is a healthily developing art and science of pattern recognition and matching algorithms of increasing complexity and, sometimes, usefulness. But we are very far from the I in that AI - about as far as we are from understanding what our own intelligence, or consciousness, means, and what it takes to make one.

    • In my opinion, humans lose value whenever they lack the ability to share it. Your premise of someone else deciding this keeps them in a corner. Granted, we don't all have the resources and fortitude to make lemonade from lemons. I am not talking about turning coal miners into coders overnight, but there is something about the possibilities inherent in the technology right before our eyes to move people forward.

      On another post you shared a presentation by Paul Mason, who referred to "the massive under-exploitation of the potential of Information Technology," which I agree with wholeheartedly. So yes, if we continue to build technology to replace human labor without building technology to engage human ingenuity and resourcefulness, we could be in trouble.

      I have a comment on your point about Universal Basic Income as well. I have been following the theory of UBI for some time, with Andrew Yang bringing the concept to the forefront of public debate. I don’t have a problem with establishing a baseline to help prime a more egalitarian society, but I struggle with the concept of receiving a wage unconditionally.

      As technology continues to advance, I believe we will reach a place where even those with disabilities can provide value in exchange for a basic income. What could a human do in a connected world in exchange for a basic income provided by the state? Acquire a skill? Provide emotional support to another? Solve a problem? An exchange of capabilities between citizen and state, for the benefit of both. For those who simply cannot provide value - those who are severely mentally, emotionally, or physically incapacitated - that is where the state and philanthropy come in, with dedicated programs to help the afflicted.