    • As a research project I think Perspective is really cool. I also think machine learning like this can be valuable when it's used to flag things that might need human review. But on its own, it seems unlikely to be useful.

      I tried pasting various Cake posts into the demo at the website you linked to, and results varied pretty widely. Even though none of the posts I picked were even close to what I would consider toxic, Perspective flagged several of them as either "likely toxic" or "not sure".

      The use of swear words, even in positive or neutral contexts, seems to be a big factor in making it think something is toxic. It also has no ability to distinguish between original content and quoted text, so for instance it rated your own post here (the one I'm replying to) as "not sure" — 53% likely to be toxic.

      Toxic comments can also be disguised somewhat by padding them out with a ton of non-toxic text. For instance it confidently rated the text "fuck you" as 99% toxic. But "fuck you" followed by a copied and pasted Wikipedia article about football was only 67% toxic, putting it into "not sure" territory.

      I think we still have a long way to go.
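      The dilution effect described above is easy to reproduce with a toy model. The sketch below is not how Perspective actually works internally; it is a minimal bag-of-words scorer with a made-up lexicon whose score is averaged over tokens, which is enough to show why padding a toxic phrase with a long neutral article drags the score down.

```python
# Toy illustration of the padding/dilution weakness, NOT Perspective's
# real model. The lexicon and weights are invented for the example.

TOXIC_WORDS = {"fuck": 1.0, "idiot": 0.9}  # hypothetical toxicity lexicon

def toxicity(text: str) -> float:
    """Score a comment as the average per-token toxicity."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    # Averaging over ALL tokens means neutral padding pulls the score down.
    return sum(TOXIC_WORDS.get(t, 0.05) for t in tokens) / len(tokens)

short = "fuck you"
padded = "fuck you " + "football is a team sport played worldwide " * 50

# The short insult scores high; the same insult buried in neutral
# padding scores far lower.
assert toxicity(short) > toxicity(padded)
```

      Any classifier whose evidence is averaged over the whole input, rather than taken over the worst span, is vulnerable to this kind of padding.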

    • AI so far (going by Facebook's automod) does a terrible job with context. Most AI filtering right now cuts like a meat cleaver when it needs to be a scalpel. I've fallen afoul of FB's automod twice, for utterly innocuous content in a private group where no one in the group reported it (I'm an admin, so I would see). After my 24-hour posting ban, I decided to cut way back on what I posted there.

    • I have a feeling (purely from experience; I can't cite any science on this) that the very approach is flawed. Getting AI/ML to better understand broader context is fine, but creating yet another imperfect scale, one that will be (a) arbitrarily used to drive decisions with consequences of poorly understood magnitude and (b) gamed by poorly understood sections of the public, reeks to me of reckless experimentation and an unwise investment of resources.

      I have long believed that what we need, with urgency growing as we speak, is a global reputation system: computer-assisted, but as scientifically rigorous as possible, grounded first in math and statistics. That, coupled with a major initiative to educate people of all ages and at all levels about how information works, would, IMHO, have a drastic positive effect on online conversation.

      Against that background, the plugin in question looks more like a gimmick.

    • Fascinating, mbravo. I sometimes talk to people at the massive sites about what it's like to moderate at scale, and they often mention a trust score. They say most of the trouble comes from about 3% of people, and that beyond their posts being flagged, you can identify them because they are attracted to each other, even across sites.

      A venture capital firm reached out to us because their AI looks for the places early adopters go when their use of a product is predictive of its future success, and somehow their AI pointed to Cake. I don't know how it gathers its data.

      During Zuck's congressional testimony it came to light that Facebook collects information about people on the net whether they are Facebook customers or not, focused on their browsing history and friends.

      One thing I don't know about global trust scores is this: I have joined a few Facebook Groups as an observer, two anti-vaxx groups and one group claiming the moon landing didn't happen, only because I want to understand their views. What does Facebook think of me now? What if I post something and get the boot? Does that wreck my global trust score?

    • The reputation score is definitely super complicated, which is part of the reason we do not have it yet.

      Also, why should I care what "Facebook", or some other corporate service entity, "thinks" of me, except for fraud prevention or for finding ways to serve me better? Actually, I might care, but only because there are reasonable grounds to suspect it is not a service aspect at all, but rather a very adversarial approach to making money off me in ways I never requested or solicited.

      My wish is specifically for a reputation system for individuals. In theory it should not be influenced at all by which groups you subscribe to, or which brand of instant coffee you prefer on the third Thursday of a summer month in a leap year. It would be a character attribute, based on what you actually say or write (mind you, this immediately assumes a reliable method of authenticating you in any online medium, which is just as far from practical reality as the reputation system itself, even though the technical underpinnings are closer). Think of something like a scientific citation index, but vastly more thorough and at the same time much more generic in application.

      And being booted from a group merely because the people in charge didn't like you shouldn't affect much of anything, as long as you engaged in civilized dialogue. Unfortunately, I have a pretty grim outlook on the culture of discussion today, online and offline. Victimhood culture is blooming, while the culture of dignity, which is required for good discussion where it is okay to disagree as long as you debate rather than fling mud or throw public hysterics, is fading. So, again, many, many layers in that cake.
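      To make the citation-index analogy concrete: the toy sketch below scores individuals by who endorses them, weighted by the endorsers' own standing, in the style of PageRank over an endorsement graph. Every name and number here is illustrative; a real reputation system would also need authentication and anti-gaming measures, as noted above.

```python
# Illustrative reputation score over an endorsement graph, modeled
# loosely on citation analysis: your score grows when people who
# themselves have standing endorse you. Not a real system design.

def reputation(endorsements: dict[str, set[str]],
               rounds: int = 20, damping: float = 0.85) -> dict[str, float]:
    """endorsements[a] = the set of users whom `a` endorses."""
    users = set(endorsements) | {u for es in endorsements.values() for u in es}
    score = {u: 1.0 / len(users) for u in users}
    for _ in range(rounds):
        # Everyone keeps a small baseline; the rest flows along endorsements.
        new = {u: (1 - damping) / len(users) for u in users}
        for voter, targets in endorsements.items():
            if targets:
                share = damping * score[voter] / len(targets)
                for t in targets:
                    new[t] += share
        score = new
    return score

# carol is endorsed by both alice and bob, so she ends up with the
# highest score; bob, endorsed by no one, stays at the baseline.
graph = {"alice": {"carol"}, "bob": {"carol"}, "carol": {"alice"}}
scores = reputation(graph)
```

      The key property, and the one that matches the "who endorses you matters" intuition, is that an endorsement from a well-regarded user is worth more than one from an unknown, which also makes the score harder to inflate with throwaway accounts.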