    • Thanks for the link. It sounds like my fears may be a bit premature, which is just as well. Still, I think the mere prospect gives us reason to devote more attention to the general problem of making AI more transparent and accountable.

    • At dinner the other night I sat next to someone fascinating who said that last year AI scientists discovered two AI machines had decided existing languages were too inefficient, had developed their own, and were talking back and forth in a way the scientists could not understand, so the scientists unplugged them.

      That sounded like it could very well be true, and my blood ran cold. Just think of how imprecise our words can be. I chased the story down to find out more, and it seems to be one of those things that has a grain of truth but is (so far) exaggerated.

    • I remember reading about that and thinking it was immensely cool, but not especially threatening. It would not be surprising if generative adversarial networks could devise data compression schemes that haven't occurred to humans. But this sort of stuff is easily misinterpreted: the bots weren't talking in code to evade eavesdropping or understanding by humans. They just evolved a different means of exchanging very specific information, which is fascinating in and of itself, but nothing to lose sleep over. In fact, I'm not sure anyone even demonstrated that the encoding was more efficient than standard English. In any event, it's a gross exaggeration to say that they "developed their own language." OTOH, it does exemplify the more general problem of transparency, i.e., understanding why AI does what it does.
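
      To make that concrete, here's a toy sketch (my own illustration in Python, not the actual system from the story, with made-up item names and message format): two agents that share an arbitrary convention can exchange a very specific message, such as an offer over a few item types, just by repeating tokens, much as the reported bot transcripts repeated words. The result is precise between the two agents but unreadable as English.

          # Toy illustration only: encode an "offer" over three item types by
          # repeating each item's token, the way the reported transcripts
          # repeated words. The item names and offer format are hypothetical.
          ITEMS = ["book", "hat", "ball"]

          def encode_offer(offer):
              """Turn {'book': 2, 'hat': 0, 'ball': 3} into 'book book ball ball ball'."""
              return " ".join(item for item in ITEMS for _ in range(offer[item]))

          def decode_offer(message):
              """Recover the offer by counting token repetitions."""
              counts = {item: 0 for item in ITEMS}
              for token in message.split():
                  counts[token] += 1
              return counts

          offer = {"book": 2, "hat": 0, "ball": 3}
          message = encode_offer(offer)
          print(message)                # book book ball ball ball
          print(decode_offer(message))  # {'book': 2, 'hat': 0, 'ball': 3}

      Note that plain English ("two books and three balls") carries the same information in fewer tokens, which is why I'm skeptical the bots' encoding was actually more efficient.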
