    • Chris MacAskill

      Yesterday Google launched an experimental Chrome plugin called Tune that uses its Perspective AI engine to try to detect the toxicity level of comments. In my case, it's not so much the toxicity as the noise that I care about, but the two seem to go together.

      I thought it would be received positively, but it's getting panned on Product Hunt and in the comment sections of sites like Gizmodo. Some examples (I filtered out the most toxic, noisy ones for this list, but the first one still seems toxic and noisy to me 🤔):

      From Product Hunt:

      Pros: 
      needless shite.

      Cons: 
      Why Google wastes their time on needless rubbish - and why PH sucks Google's dick, this is what we should question. Just ignore the trolls.

      👆 I wish it would filter comments like that, purely for the noise. Comments like that are what I think cause people to avoid comment sections and publications to turn them off.

      From Gizmodo:

      I hope this takes off. Nothing like knowing the assy comment you’re about to leave may never be seen by it’s intended recipient to make you cool your jets. It also gives you plausible deniability if someone asks if you saw their reply.

      👆 One of the few positive comments. I liked it because Tune reminds me of ad blockers on horrible sites and Gmail's spam filters. Also, Gmail is pretty good at highlighting what it thinks may be important in my inbox, which I like.

      From Engadget:

      Imagine being so fragile you have to have an extension to shelter your emotions. This is the exact opposite of what a clinical psychologist would do to help people deal with their phobias and emotional traumas.

      👆 I wonder if we're mostly hearing from people who hate the idea because they feel their comments will get filtered out?

      👇 Rating on Product Hunt: 0.3/5. Ow. Your thoughts?

    • As a research project I think Perspective is really cool. I also think machine learning like this can be valuable when it's used to flag things that might need human review. But on its own, it seems unlikely to be useful.

      I tried pasting various Cake posts into the demo at the website you linked to, and results varied pretty widely. Even though none of the posts I picked were even close to what I would consider toxic, Perspective flagged several of them as either "likely toxic" or "not sure".

      The use of swear words, even in positive or neutral contexts, seems to be a big factor in making it think something is toxic. It also has no ability to distinguish between original content and quoted text, so for instance it rated your own post here (the one I'm replying to) as "not sure" — 53% likely to be toxic.

      Toxic comments can also be disguised somewhat by padding them out with a ton of non-toxic text. For instance it confidently rated the text "fuck you" as 99% toxic. But "fuck you" followed by a copied and pasted Wikipedia article about football was only 67% toxic, putting it into "not sure" territory.
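
      For anyone who wants to reproduce this outside the demo page, here's a minimal sketch of the same experiment against the Perspective API itself, in Python. It assumes you've requested your own API key for Google's Comment Analyzer endpoint; the request and response shapes are my best understanding of their docs, not something confirmed in this thread.

      import requests

      # Assumption: you've been granted Perspective API access and created a key.
      API_KEY = "YOUR_PERSPECTIVE_API_KEY"
      URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

      def toxicity(text):
          # Request a single attribute, TOXICITY, and return its summary
          # score: a probability-like value between 0.0 and 1.0.
          body = {
              "comment": {"text": text},
              "languages": ["en"],
              "requestedAttributes": {"TOXICITY": {}},
          }
          resp = requests.post(URL, params={"key": API_KEY}, json=body)
          resp.raise_for_status()
          return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

      # The dilution effect described above: the same toxic phrase scores
      # much lower once it's buried in a pile of neutral text.
      padding = " ".join(["Football is a team sport played between two sides."] * 40)
      print(toxicity("fuck you"))              # scored ~0.99 in the demo
      print(toxicity("fuck you. " + padding))  # drops into "not sure" territory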

      I think we still have a long way to go.

    • AI so far (going by Facebook's automod) does a terrible job with context. Most AI filtering right now cuts like a meat cleaver when it needs to be a scalpel. I've fallen afoul of FB's automod twice, for utterly innocuous content in a private group where no one in the group reported it (I'm an admin, so I would have seen it). After my 24-hour posting ban, I decided to cut way back on what I posted there.

    • I have a feeling (purely based on experience; I can't quote any science on this) that the very approach is flawed. Getting AI/ML to better understand broader context is fine, but creating yet another imperfect scale, one that will be a) arbitrarily used to make decisions with consequences of poorly understood magnitude and b) gamed by poorly understood sections of the public, reeks to me of reckless experimentation and unwise investment of resources.

      I have long believed that what we need, with urgency growing as we speak, is a global, computer-assisted reputation system that is as scientifically rigorous as possible (starting with math and statistics). That, coupled with a major initiative to educate people of all ages, at all levels, about how information works, would IMHO have a drastic positive effect on online conversation.

      Against that background, the plugin in question looks more like a gimmick.

    • Fascinating, mbravo. I sometimes talk to people at the massive sites about what it's like to moderate at scale, and they often talk about a trust score. They say most of the trouble comes from 3% of people, and beyond their posts being flagged, you can spot them because they are attracted to each other — across sites.

      A venture capital firm reached out to us because its AI looks for early adopters whose use of a product is predictive of its future success, and somehow that AI pointed to Cake. I don't know how it gathers its data.

      During Zuck's congressional testimony it came to light that Facebook collects information about people on the net whether they are Facebook customers or not, focused on their browsing history and friends.

      One thing I wonder about with global trust scores: I have joined a few Facebook Groups as an observer (two anti-vaxx groups and one moon-landing-denial group), only because I want to understand their views. What does Facebook think of me now? What if I post something and get the boot? Does that wreck my global trust score?