I think there are two aspects to Facebook's morality that should be considered.
First is the question of whether Facebook itself intentionally engages in immoral behavior. An example of this would be intentionally misleading users into opting into sharing their private data in ways the user doesn't fully understand.
Second is the question of whether Facebook, whether through ignorance, inaction, incompetence, or otherwise, enables others to use its platform to achieve immoral goals. An example of this would be building ad targeting systems that allow advertisers to ensure that only white people see their ads, or that only antisemitic people see their ads, and so on.
On the first question, I would tend to agree that Facebook generally doesn't set out to do immoral things. I don't think Facebook is inherently evil.
But on the second question, we have a great deal of evidence that Facebook has (probably unintentionally) enabled immoral acts through carelessness, and has failed to act rapidly or thoroughly to address many of these problems when it was informed of them.
The second case is the one I find more problematic, because it means that even without necessarily trying to, Facebook may be harming not just individual people, but possibly society as a whole. By failing to recognize and act to prevent this damage (or the potential for damage), Facebook is engaging in immoral behavior through inaction.
I wouldn't want to support that, personally, no matter how good Facebook's actual intentions may be.