
I just learned that an article I shared yesterday was taken down by Facebook because "it looks like spam to them."
The article? My opinion piece urging people not to politicize the Mollie Tibbetts case and to pray for the family instead.
Here's a screenshot of the message I received:

I want to be clear: I know that the human beings who work at Facebook did not do this. An algorithm does this work for them. So I'm not saying this was discriminatory.
I contacted them to let them know that I did post the article and that it is not spam, so I anticipate that, after a review, a reasonable human being at Facebook will see that and restore the post.
I hope that happens anyway.
Since documented viewpoint discrimination has taken place at Facebook, the optics of this sort of thing look horrible.
Some remedies:
1. Stop trying to be the arbiter of what is fake news.
U.S. Senator Ben Sasse (R-Nebraska) cautioned Facebook to err on the side of free speech during testimony that Facebook founder Mark Zuckerberg gave at a Senate hearing on the subject.
"I wouldn't want you to leave here today and think that there is a unified view in Congress that you should be moving toward policing more and more and more speech. I think violence has no place on your platform. Sex traffickers and human traffickers have no place on your platform, but vigorous debates? Adults need to engage in vigorous debates," Sasse said.
Facebook has made adjustments to its process for purchasing political ads, which, I believe, will nip in the bud much of the "fake news" problem we saw from the Russians on the platform. If a post is egregious and violates their community standards, then let humans report it so humans can review it. There is genuinely fake news posted on Facebook, but much of what is being called "fake," on both the left and the right, isn't necessarily fake; it is an opinion or a news piece someone disagrees with.
Also, we can police ourselves. When friends share a story from an unreliable site, I've commented on their posts to note that the website is notorious for promoting false information. As a result, some have taken the post down; some haven't. People can decide for themselves which news sites they can trust and which ones they can't.
2. Be transparent about what the algorithm looks for.
Algorithms and AI don't have a bias or a particular viewpoint, but those who program them do. If Facebook is going to proactively screen content, it would behoove the company to disclose what it is looking for.
I don't think the keywords or phrases their algorithm is programmed to look for count as proprietary information. And I'm not the only one who has had a post taken down for "spam" or an alleged violation of community standards.
Why did Facebook's algorithm determine my post above to be spam? I would love to know.
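To see why disclosure matters, consider how crude phrase matching can misfire. Here is a hypothetical sketch, in Python, of a naive keyword filter; the phrase list, threshold, and function name are my own inventions for illustration, not anything Facebook has disclosed:

```python
# Hypothetical sketch of a naive keyword-based spam filter.
# The phrase list and threshold are invented for illustration;
# Facebook's actual criteria are not public.
SPAM_PHRASES = {"click here", "act now", "share this", "pray"}

def looks_like_spam(post_text, threshold=2):
    """Flag a post when it contains at least `threshold` listed phrases."""
    text = post_text.lower()
    hits = [p for p in SPAM_PHRASES if p in text]
    return len(hits) >= threshold, hits

# A sincere post can trip a crude filter like this one:
flagged, matched = looks_like_spam(
    "Please pray for the family and share this with friends."
)
```

A filter like this has no sense of context: a heartfelt request to pray and share reads the same as a chain-letter spam blast, which is exactly why disclosure, or at least human review, matters.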
Ultimately, if Facebook would like to build trust with its conservative users, it should either stop the proactive removal of content or share what it is looking for.
I would also suggest the algorithm is far, far too sensitive. In the long run, this does nothing to foster a culture of free speech, which, I would think, Facebook would be interested in.
Update: Facebook finally restored my post, but gave no explanation of why they considered it spam. Frankly, the damage has been done in regard to shutting down the comments and likes that would help a post reach people.
