I remember reading about that and thinking that it was immensely cool, but not especially threatening. It would not be surprising if generative adversarial networks could devise data compression schemes that haven't occurred to humans. But in any event, this sort of stuff is easily misinterpreted--the bots weren't talking in code to evade eavesdropping or understanding by humans. They simply evolved a different means of exchanging very specific information, which is fascinating in and of itself, but nothing to lose sleep over. In fact, I'm not even sure anyone demonstrated that the encoding was more efficient than standard English. Either way, it's a gross exaggeration to say that they "developed their own language." OTOH, it does exemplify the more general problem of transparency, i.e., understanding why AI does what it does.