    • Some of you know how much I love YouTube. I think it's one of the greatest inventions of man and I probably spend an hour of my life each day watching YT videos.

      How they moderate is fascinating to me because Cake + my forum Adventure Rider face similar challenges, albeit at a micro fraction of their scale. It makes sense to learn all we can from them.

      This morning one of the social media journalists I respect most, Casey Newton, published a report on what it's like to moderate content for YouTube. Oh my God. How is it possible to solve these problems?

      From Casey's report:

      Unlike the contractors who make up most of Google’s workforce, Daisy was a well paid Google employee who had access to the world’s best benefits and gold-plated health care. But that didn’t stop her from developing severe long-term mental health consequences.

      A year into the job, Daisy’s then-boyfriend pointed out to her that her personality had begun to change. You’re very jumpy, he said. You talk in your sleep. Sometimes you’re screaming. Her nightmares were getting worse. And she was always, always tired.

      A roommate came up behind her once and gently poked her, and she instinctively spun around and hit him. “My reflex was This person is here to hurt me,” she says. “I was just associating everything with things that I had seen.”

      One day, Daisy was walking around San Francisco with her friends when she spotted a group of preschool-age children. A caregiver had asked them to hold on to a rope so that they would not stray from the group. “I kind of blinked once, and suddenly I just had a flash of some of the images I had seen,” Daisy says. “Children being tied up, children being raped at that age — three years old. I saw the rope, and I pictured some of the content I saw with children and ropes. And suddenly I stopped, and I was blinking a lot, and my friend had to make sure I was okay. I had to sit down for a second, and I just exploded crying.”

      His full report is terrifying.

      Your thoughts?

    • That story is truly frightening.

      I can't imagine being able to get through the day viewing material like that described in the article.

    • I also stumbled upon this video on my "Recommended" list on YouTube this morning. The most compelling point Casey makes is about the long-term effects of exposure to such material. It doesn't matter that you work at Google with good pay, perks, and benefits; what matters is the length and severity of the exposure.

      Even the moderators, whose job is to screen this material, can't sustain exposure to it on a traditional 9-to-5 schedule. Lots of breaks, counseling, and much shorter overall exposure are necessary (in my opinion) to make this type of work sustainable. These moderators play a critical protective role, and they, too, need to be protected, so we can keep enjoying YouTube for all of its goodness.

      I'm grateful for YouTube content moderators, but I wouldn't want to be one of them.

    • Not all of them do... but depending on their organization, there might at least be strategies to deal with stressful situations in the best-possible way.

      These range from mandatory defusing and debriefing sessions directly after a deployment, to further therapeutic access where necessary even some time afterward, to proper training for those in leadership positions so that they can try to prevent trauma from occurring in the first place, or at least detect it in their subordinates before it is too late.

      The thing with emergency workers is that, while a traumatic experience could happen at any time, it at least doesn't happen all the time.

    • Actually, I think the idea is great. We use Amazon Rekognition's unsafe content detection, which they describe this way:

      Amazon Rekognition’s Unsafe Content Detection is a deep-learning based, easy-to-use API for detection of explicit or suggestive adult content, violent content, weapons, and visually disturbing content in images and videos.

      Beyond flagging an image or video based on the presence of unsafe content, Amazon Rekognition also returns a hierarchical list of labels with confidence scores. These labels indicate specific sub-categories of the type of unsafe content detected (such as violence), thus providing more granular control to developers to filter and manage large volumes of user generated content (UGC). This API can be used in moderation workflows for applications such as social and dating sites, photo sharing platforms, blogs and forums, apps for children, e-commerce sites, entertainment and online advertising services.

      In practice, it's just woefully inaccurate. It flags flowers as unsafe but misses a ton of nudity that any human would discern at a glance.

      Optionally, you can turn on celebrity face detection if your moderators are concerned about illegal use of celebrities' likenesses. That seems like a nightmare to me. Can you imagine all the false positives it must generate when someone posts a selfie taken with Matt Damon?
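
      For anyone curious, a call to this API from Python with boto3 looks roughly like the sketch below. To be clear, this is just an illustrative sketch; the bucket name, object key, and confidence threshold are made-up placeholders, not anything from a real setup.

      import boto3

      # Rekognition client; region and credentials come from your normal AWS configuration.
      rekognition = boto3.client("rekognition", region_name="us-east-1")

      # Ask Rekognition to scan an image stored in S3 for unsafe content.
      # MinConfidence drops labels the model is less than 60% confident about.
      response = rekognition.detect_moderation_labels(
          Image={"S3Object": {"Bucket": "my-ugc-bucket", "Name": "uploads/photo123.jpg"}},
          MinConfidence=60,
      )

      # Each label has a name (e.g. "Explicit Nudity"), an optional parent category,
      # and a confidence score; this is the hierarchical list from the description above.
      for label in response["ModerationLabels"]:
          print(label["ParentName"], "/", label["Name"], round(label["Confidence"], 1))

      # A simple policy: flag the image for human review if any category of concern shows up.
      blocked = {"Explicit Nudity", "Violence", "Visually Disturbing"}
      needs_review = any(
          label["Name"] in blocked or label["ParentName"] in blocked
          for label in response["ModerationLabels"]
      )

      Given how inaccurate the results are in practice, a flag like needs_review really only makes sense for routing an image to a human reviewer, not for removing anything automatically. (Celebrity detection is a separate call, recognize_celebrities, which isn't shown here.)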

    • In practice, it's just woefully inaccurate. It flags flowers as unsafe but misses a ton of nudity that any human would discern at a glance.

      That's the part I find scary if, say, it's used for security purposes rather than content moderation.

      ...oops! I hope you get your money's worth out of their service.

    • Desperation is definitely one of the drivers for people to take on this job, as @Dracula pointed out. In the video, Casey says that for many contractors this is a higher-paying position than they would get elsewhere. But these same contractors aren’t treated as nicely as full-time Google (YouTube) employees. No one really stands up for them, and they are unwilling to speak up for fear of losing their jobs.

      Daisy (the YouTube moderator in Casey's report) took the job because she was sold on the idea that her work would benefit millions of others: “You’ll see it, so others won’t” sounded to her like a worthy mission. Unfortunately, mission alone can’t sustain a person through such constant psychological assault. Understaffed and overworked moderators are left to their own devices to figure out coping strategies.

      In my opinion, to stop the “revolving door” of moderators there should be limits on the number of hours of exposure to specific types of content. No one should be forced to work overtime due to poor management, unless they volunteer to do so during critical times.

    • In my opinion, to stop the “revolving door” of moderators there should be limits on the number of hours of exposure to specific types of content. No one should be forced to work overtime due to poor management, unless they volunteer to do so during critical times.

      I also wonder whether, beyond the sheer duration of exposure, the most traumatic part of the job isn't the act of exposure itself. As has been said over and over, some things seen can't really be unseen, and depending on each individual's level of toughness, they will leave deep psychological marks, which may be irreversible.

      On the other hand, the noble intentions may or may not be there for everyone, but I am sure no one is naive enough these days not to know what they are getting into. It's a bit like people volunteering to be inoculated with research viruses in exchange for money: it's their health, and it's their choice how to "spend" it.

      Frankly, if I know I'm going to stumble upon media with disturbing material, I avoid it, and it is usually easy enough to spot from the start. But then we go down a rabbit hole where each of us has our own level of acceptance... and this is why moderation will never be perfect.