Cake
    • ducstar

      I’ve read that some engineers have requested to switch to the Instagram or WhatsApp teams. 

      One designer on Twitter shared that he broke the golden handcuffs himself and quit without having another job lined up.

      Why someone would leave a good job without any solid prospects boggles my mind, but that’s for another Cake post.

      Not everyone is as fortunate as that designer and able to just quit. What if you have other obligations, such as a spouse and kids who rely on you, and you need the job?

      What would you do?

    • yaypie

      This is something I've thought about a lot in the context of Facebook and other companies like Uber, and in my own life.

      Why someone would leave a good job without any solid prospects boggles my mind, but that’s for another Cake post.

      I left my job at Yahoo in 2012 due to moral concerns, without any solid prospects lined up. I had worked there for five years and loved my job and my team, but the company at the time was making decisions (specifically filing offensive patent lawsuits against Facebook) that I strongly objected to and didn't want to be associated with.

      Luckily I had a good network of contacts and quickly had lots of interviews lined up (including at Facebook, though in the end I decided against pursuing a job there). But I felt so strongly about the wrongness of Yahoo's actions that I was prepared to spend months or even longer living off savings if I had to.

      Not everyone is as privileged as I was in that scenario. Someone supporting a family or just starting out in their career or in a job market without as much demand for their skills would have a harder decision to make. But I think some people fear change and perhaps don't realize how easily they could find a better job if they really tried.

      Corporations tend to inherit the morals of their most self-serving employees. The only way to fight that is for other employees to be willing to take a stand and refuse to support immoral behavior. That could mean fighting for change internally, it could mean whistleblowing, or, if all else fails, it could mean quitting. Passivity in the face of unethical or immoral behavior is complicity.

      That said, I think there's still hope for Facebook and for Facebook employees. Many of them are fighting for change internally. I hope they succeed.

    • Moose408

      I'm not sure I understand what the moral dilemma is. It seems like the employees had to know what they were getting into when they took the job. At Facebook, the users are the product. Everything is about capturing as much information about the users as possible so that FB can sell that information to its customers, the advertisers. Instagram, WhatsApp, and Google are all the same; in fact, so is pretty much every free service that people use on the internet.

      There is nothing really morally wrong about what they are doing.

    • tomstar3000

      I think there may be a moral dilemma for some of the engineers, but I would argue that most of them are just in it for the money.

      We can get as upset as we want at the management of these companies, but at the end of the day, there will always be an engineer willing to do the bidding of some unscrupulous boss. Cambridge Analytica wouldn't be where it is if it didn't have engineers and data analysts doing the heavy lifting.

    • Vilen

      I don't see this as a moral issue, but rather a fulfillment issue. There are many things that Facebook does that enhance people's lives such as keeping connections with friends and family, but like anything else that is free, it comes with a price. That price is selling your information to advertisers.

      People who work at Facebook have to be fundamentally bought into this business model. By taking a job there, they are accepting this reality in exchange for monetary compensation. As long as they find fulfillment in their work, I don't see why jumping ship is a good idea. However, if they are already on the fence, this "Facebook is Evil" moment can really push them over the edge.

      The real problem is that there isn't a paid version of Facebook, where you would pay to not see ads and keep your personal information as private as you want. Facebook is free, so it has to make money by serving ads, which requires selling this information to advertisers. Some of those advertisers are bad actors, while most are just regular businesses we like and deal with on a daily basis.

    • Dave

      I'm not sure I understand what the moral dilemma is

      This is probably because morality is subjective. While some things are generally accepted as “immoral” by everyone (such as committing murder), other things are not as easy. Is it immoral to upsell the premium version of something to someone who doesn’t need it and likely won’t use the premium features to their full extent? What about having more than one sexual partner? People have different views on what’s moral and what isn’t, so there isn’t much point in debating it. We might as well debate what the best color is or which food tastes best.

      Why someone would leave a good job without any solid prospects boggles my mind, but that’s for another Cake post.

      I’ve done it. Twice, actually. Basically, my work was affecting my life and personal relationships to the point of making me miserable and borderline depressed. I moved from San Francisco to Buffalo to live with my mother-in-law, without a job, after leaving a well-paying one whose stress I couldn’t handle. Having no income to support myself and my wife was still a better situation than the toxicity I was in at that job.

      Regarding what I would do in this situation, it would depend on how the decision makers and decision processes worked. Maybe working there I would have learned more of how it worked and as such put myself in my own “moral dilemma.” With the recent developments I would try to find the direction the policy is going, and if there is a way to change the business model to something I can again support. I don’t think this would necessarily be a reason to up and leave without something else lined up, but it would definitely be a reason to start looking.

    • Moose408

      They are in the business of supporting their customers, who are the advertisers. They give tools to the advertisers to allow them to target their audience. Is it Facebook's responsibility to censor and police whom their advertisers target? They already have restrictions in place on the ads themselves.

      Also in the first article you linked to, it said FB removed those tags. So it seems like when they are alerted to the issue they do the "moral" thing. I'm still not seeing the issue.

    • yaypie

      I think there are two aspects to Facebook's morality that should be considered.

      First is the question of whether Facebook itself intentionally engages in immoral behavior. An example of this would be intentionally misleading users into opting into sharing their private data in ways the user doesn't fully understand.

      Second is the question of whether Facebook, whether through ignorance, inaction, incompetence, or otherwise, enables others to use its platform to achieve immoral goals. An example of this would be building ad targeting systems that allow advertisers to ensure that only white people see their ads, or that only anti-semitic people see their ads, or so on.

      On the first question, I would tend to agree that Facebook generally doesn't set out to do immoral things. I don't think Facebook is inherently evil.

      But on the second question, we have a great deal of evidence that Facebook has (probably unintentionally) enabled immoral acts through carelessness, and has failed to act rapidly or thoroughly to address many of these problems when they were informed of them.

      The second case is the one I find most problematic, because it means that even without necessarily trying to, Facebook may be harming not just individual people, but possibly society as a whole. By failing to recognize and act to prevent this damage (or the potential for damage), Facebook is engaging in immoral behavior through inaction.

      I wouldn't want to support that, personally, no matter how good Facebook's actual intentions may be.

    • yaypie

      I linked to multiple articles. The followup articles pointed out that the problem continues to exist despite Facebook's assurances they would address it.

    • Chris

      Several years ago I was questioning my Mormon faith and someone invited me to join a Closed Group of like-minded people who were disturbed about things like Prop 8. At that time we all loved the church and weren't planning to leave it and we certainly didn't want to worry our families about our faith.

      What we didn't understand is that Closed means the membership list of the group is visible to everyone, not just members of the group. When I joined, it unleashed a torrent of people concerned about my spiritual well-being, because they could all see I had joined the group.

      So I started a Secret Group, the next level of privacy, and invited a few people to join. This time I wanted to thoroughly understand what could go wrong. One thing that disturbed me is that as long as I had fewer than 5,000 members, I could, at the press of a button, make my Secret Group a Public one.

      I discussed both of these things with Facebook and explained that this is one way some gay teens got outed and humiliated. Facebook did what it has done for 15 years: thanked me for explaining, apologized, but didn't make any changes.

      I tried doing the same thing with Slack. However, Slack takes a clear stand: they don't let you make a private group public, and they don't have a middle tier with a misleading name that claims to be closed but is partly public. I asked how they chose those policies. They replied: "morals."

      Not only do they keep private groups private in all ways, they go out of their way to explain in simple terms how information you share in a private channel could be exposed.

      I get a very warm feeling when Stewart Butterfield of Slack explains privacy, because he talks in specifics and draws clear moral lines. I feel unsatisfied when Zuck or Sheryl continue the perpetual apology tour Facebook has been on since its inception. They say the same thing every year: "Oh, we were just naive and trusting, so we didn't imagine the ways privacy could be breached."

    • yaypie

      Yep, though mostly just to keep up with my family, who use it almost exclusively.

      I use Facebook open source software more than I use the website itself. Cake uses React, which is a Facebook open source project.

    • tomstar3000

      I agree that FB could do better; I'm just having a hard time believing they will. Those kids who got outed because a moderator opened a private group to the public never saw it coming. That's unfortunate, but that kind of thing will continue to happen, because group moderators don't understand the consequences and FB just doesn't care about them.

    • The biggest thing I don't like is the constant "beg forgiveness" tactic Facebook deploys every time there is an incident. I could see that in a public beta, but for a company as old and stodgy as Facebook has become, there's no excuse. Privacy isn't a new thing and it shouldn't be obfuscated. Be like Slack: plain and simple.

      Oh, and what would you do if an employee begged forgiveness after every mistake (and there have been more than a few)?

    • Chris

      I found the book Chaos Monkeys unbelievably fascinating. Antonio Martinez is a PhD physicist who worked for Goldman Sachs as a quant before joining Facebook to help them get better responses on their ads.

      The thing is, who you follow and what you like on Facebook doesn't correlate well with what you buy. But they had the trump card: your real identity. So they could buy offline data about you and correlate it to your Facebook account. I don't know how many people knew they were doing that.

      On a related note from Bloomberg yesterday about Peter Thiel's company (he sits on Facebook's board):

      Palantir Knows Everything About You

    • I think there are companies the general public doesn't know much about, much the same as LexisNexis, whose specialty is data aggregation. They're the guys who buy census data, DMV data, etc., and turn it into your personal profile, even though what they bought was "anonymized".

    • lidja

      Along this same line... How will cake deal with objectionable (some will say immoral) opinions?

      Will you take up the familiar mantra, “Cake is just a platform”?

      Will you post a credo of community standards and enforce those standards? Or maybe just expect users to abide by the standards?

      What will you do the first time you become aware that cake is being used to bully?

      What will you do when you have a panel of experts and one of the experts shares misinformation?

    • Chris

      Hi lidja,

      I think of these questions all day every day and we have our own moderation channel in Slack where we debate them, sometimes with a lot of passion. We all bring our own biases and sense of morality to the debates.

      We view ourselves as publishers of user-generated content. We're different from someone like The New York Times because we don't hire journalists and we don't edit what our users post. But we do have to make judgements on what is hate speech, bullying, dangerous, etc., and there will be furious debates when people disagree.

      The Internet has an incredible variety of sites that let you say almost anything. Those are easy to build. It's much harder to build a service that's all about great public conversations, but that is our mission.

      We will soon enable panel conversations, which I don't think have ever been done on the Internet, except in video form. The thing is, the Internet is the only venue on earth where there is an expectation that anyone can join the conversation, but it's the worst place to have that expectation because it only takes one person to ruin it for everyone. In real life, the expectation is when you assemble a panel on a stage, it's in everyone's interest to let the panel speak without someone from the audience trying to interject.

      It's in our interest and yours to find really interesting panelists that millions of people would want to hear from, but even then we will have to block some panels from seeing the light of day.

      Make sense?

    • yaypie

      To add to what Chris said, I wanted to answer your individual questions one by one, since they're fantastic questions that deserve direct answers.

      Will you take up the familiar mantra, “Cake is just a platform”?

      No, absolutely not.

      We've discussed this a lot internally because we see Facebook, Twitter, Reddit, and other platforms taking that stance, and that's not what we want Cake to be. We want Cake to be a place where people have great conversations, and that means we're not interested in being a platform for abuse, hate speech, etc., because those things are the opposite of great (and they drive away good people).

      Will you post a credo of community standards and enforce those standards? Or maybe just expect users to abide by the standards?

      Our terms of service include clauses describing both specific and general standards for content on Cake. For example: "You may not threaten, defame, bully, stalk, abuse, harass, impersonate or intimidate people or entities."

      We will enforce this by removing posts and suspending users if necessary. We will also exercise our own judgment in deciding whether to carry out enforcement actions that aren't specifically delineated in the terms of service as new situations arise.

      What will you do the first time you become aware that cake is being used to bully?

      We'll do everything in our power to remove the bully's ability to bully (for example, by suspending the user in question and removing abusive posts) and to ensure the physical and mental safety of the victim(s). We'll also do our best to learn from these situations and try to improve Cake so that they can be prevented in the future.

      What will you do when you have a panel of experts and one of the experts shares misinformation?

      This one's a bit tricky to answer as a hypothetical, since I think it would depend a lot on the circumstances. For example, if we had a panel with the New York Yankees and one of the players posted incorrect information about their batting average, that's probably not a big deal.

      But if we became aware that, for example, Russian trolls were spreading misinformation via Cake conversations with the intent of influencing an election or sowing discord, then we would take action to prevent further damage (such as by deleting the conversations or posts in question and suspending their authors).

      That said, while we do want to be responsible and avoid being an avenue for people to spread misinformation, Cake isn't a journalistic organization and can't fact-check everything, so we do rely to some extent on users flagging problematic content.

    • lidja

      I genuinely hope you guys will be able to find your way through that maze. I would pay money (now there’s an idea) to participate in a civilized, intelligent, creative, constructive online community and not worry about the threats that plague other online platforms. Those platforms seem to value free speech over thoughtful speech, and the result is often very discouraging.

      The orientation of cake — focusing on content instead of profiles, and lacking a clear organizational structure/site map for users to see (and then use to pile on) — may work in your favor. I hope so!

    • flei

      Facebook might have a moral dilemma, but they don't seem to have a financial one. It appears immorality pays well. http://www.chicagotribune.com/business/ct-biz-facebook-profit-hits-all-time-high-20180426-story.html
