    • Richard, have you ever read Flash Boys? It’s by the author of The Big Short and talks about the lengths hedge funds went to in order to gain a competitive advantage in trading.

      Some of the hedge funds actually moved their servers a few feet closer to the exchange server. Consequently, the nanosecond-scale difference in transmission time allowed their computers to see their competitors’ trades and execute a trade, based on that intel, before their competitor’s trade was processed.

      They’ve since put in trading hold controls to eliminate those unfair advantages, but the idea that a hedge fund could buy a stock in a fraction of a second and then turn around and sell it before the second was up still boggles my mind.
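
      For a rough sense of scale, here’s a back-of-the-envelope sketch of that co-location arithmetic (a hypothetical Python snippet; the 3-meter figure is made up for illustration, and real networks add switching overhead on top of raw propagation):

          # Rough latency-advantage arithmetic for server co-location
          # (illustrative numbers, not any real exchange's figures)
          SPEED_OF_LIGHT = 299_792_458.0   # meters per second, in vacuum
          FIBER_FACTOR = 2.0 / 3.0         # signals in fiber travel at roughly 2/3 c

          def one_way_latency_ns(distance_m: float) -> float:
              """One-way propagation delay over fiber, in nanoseconds."""
              return distance_m / (SPEED_OF_LIGHT * FIBER_FACTOR) * 1e9

          # Moving a server 3 meters (~10 feet) closer to the exchange:
          print(f"{one_way_latency_ns(3.0):.1f} ns")   # ~15.0 ns (nanoseconds, not picoseconds)

      Light covers roughly a foot per nanosecond, so "a few feet" buys a few nanoseconds; that matters only because the whole race is run in microseconds.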

    • Yes, I read it some time ago. But as disturbing as it was, at least the basic principle was comprehensible: arbitrage carried to ridiculous extremes. I don't know anything about the newer AI techniques that traders are using, but what worries me is the likelihood that the decision making is opaque, based on patterns that only a neural net can see. I'm not an economist, but I suspect that could cast doubt on the basic assumption of rationality in the market, without which there's no longer any theoretical reason to expect good outcomes globally. If the decision making cannot be codified in human-readable form, it becomes more difficult to impose regulation to constrain the consequences.

      I should add that I don't know that any of this is actually happening--one would think that traders would be reluctant to "just let the computer decide"--but who knows? I do know of some AI-managed funds, but I don't know whether there are enough of them to move markets or whether they have built-in limits on trade size or speed.

    • Thanks for the link. It sounds like my fears may be a bit premature, which is just as well. Still, I think the mere prospect gives us reason to devote more attention to the general problem of making AI more transparent and accountable.

    • At dinner the other night I sat by someone fascinating who said that last year AI scientists discovered two AI machines had found existing languages too inefficient, had developed their own, and were talking back and forth in a way the scientists could not understand, so they unplugged them.

      That sounded like it could very well be true and my blood chilled. Just think of how imprecise our words can be. I chased it down to find out more and it seems to be one of those things that has a grain of truth but is (so far) exaggerated.

    • I remember reading about that and thinking that it was immensely cool, but not especially threatening. It would not be surprising if generative adversarial networks could devise data compression algorithms that haven't occurred to humans. But in any event, this sort of stuff is easily misinterpreted--the bots weren't talking in code to evade eavesdropping or understanding by humans. They just evolved a different means of exchanging very specific information, which is fascinating in and of itself, but nothing to lose sleep over.

      In fact, I'm not even sure anyone demonstrated that the encoding was more efficient than standard English, so it's a gross exaggeration to say that they "developed their own language." OTOH, it does exemplify the more general problem of transparency, i.e., understanding why AI does what it does.
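
      For what it's worth, the reported behavior is easy to reproduce in miniature: the bots reportedly conveyed quantities by repeating a token, so something like "ball ball ball" stood for three balls. Here's a toy sketch of that kind of degenerate encoding (purely hypothetical Python, not the actual system):

          # Toy repeated-token encoding: quantity conveyed by repetition.
          # A hypothetical illustration, not the actual negotiation bots.
          def encode(item: str, count: int) -> str:
              return " ".join([item] * count)

          def decode(message: str) -> tuple[str, int]:
              tokens = message.split()
              return tokens[0], len(tokens)

          msg = encode("ball", 3)
          print(msg)           # ball ball ball
          print(decode(msg))   # ('ball', 3)

      Gibberish to a casual reader, perfectly transparent to the agents; and, as noted above, not obviously more efficient than plain English.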