Mark Zuckerberg’s Problem is Not Just the Russia Thing (Updated)

With Mark Zuckerberg's Congressional testimony coming Tuesday and Wednesday, Facebook's many foibles will be on display. I encourage people to watch closely for what I hope will be pointed questions from committee members, particularly on the House Energy and Commerce Committee, not only about the political influence of the Russians allegedly trying to sow seeds of discontent prior to the 2016 elections, but also about the consistently biased treatment of Groups and Pages that present the conservative side of politics among Facebook's domestic users. Following are some examples of the types of questions I hope to see.

Excerpts from the advance copy of Zuckerberg's testimony are provided as a basis for the questions, so he is the one who pointed these things out to Congress. By doing so, he has opened the door to facing this type of questioning.

HIS TESTIMONY: “It’s not enough to just connect people, we have to make sure those connections are positive,” Zuckerberg plans to say in his testimony. “It’s not enough to just give people a voice, we have to make sure people aren’t using it to hurt people or spread misinformation.”

Who do you propose should have the authority to make the completely subjective determination of whether something posted by an individual or group "hurts people" or "spreads misinformation"?

If these decisions are delegated to individuals, what safeguards will accused users have to ensure their side of the situation receives a fair hearing?

Who do you propose should have the authority to make the completely subjective determination of whether connections made on Facebook among individuals or businesses are "positive"?

Given the readily available examples of instances where Facebook has restricted, suspended, and/or shut down users based on vague claims that "the content did not meet or violated FB community standards," what do you propose to do to ensure that the "FB community standards" are not being used to suppress political or ideological speech or personal opinions that conflict with the opinions or political positions held by those in authority within the Facebook organization, or by another user who files a complaint? This concern extends to the role of algorithms and artificial intelligence in making these decisions without the benefit of human intervention.

In the context of the previous question, what assurances will you provide to your users that, when there is no clear-cut evidence of a violation of the FB Community Standards, they will have a prompt and objective appeal process regarding the reasons for the intervention into their communications, including having the decision explained quickly, clearly, and completely so that they understand the basis of the decision being made against their freedom of speech?

This concern arises from the ongoing dispute between Facebook and the conservative sisters known as Diamond and Silk, who had their group severely curtailed by Facebook and have spent many months trying to obtain a straight answer as to why they were declared "dangerous to the FB community" (see the final question, below).

Given that Facebook does not take responsibility for the content of users' newsfeeds, why should your organization take it upon itself to censor someone's newsfeed or posts about political, ideological, or other content that FB might consider objectionable for some subjective reason outside of the community standards issue?

Given that each user can restrict anyone's access to their page or newsfeed through several levels of restrictions Facebook provides as a means of letting users decide what appears on their page, why is Facebook getting involved with anything other than posts that represent a breach of law (such as child nudity, pornography, or other criminal activity)? Would it not be more appropriate for Facebook to leave the censorship of political or ideological speech wholly in the hands of individual users? If not, why not?

How can you determine what is or is not “objectionable” or “in violation of FB Community Standards” when it would appear that at least some decision-making is motivated more by internal political views within the FB organization rather than any objective standard that is applied to all such viewpoints equally?

A reading of your Community Standards by any objective person would very likely leave them with no clear understanding of what is or is not permitted on a FB page. The standards make no mention of politics or ideology and offer few specifics about how a particular post could be measured objectively as meeting or violating those standards. That being said, how and by whom will the decisions be made that result in any sort of restriction of content on a given page or post?

Will your concerns regarding “spreading misinformation” or “hurting someone” be applied to individual users, paid advertisers, or both?  If applied to individuals, will any safeguards be built in to avoid decisions that are, in fact, being made with a noticeable political or ideological bias that favors one viewpoint over another?  If applied to advertisers, what safeguards will be provided that allows all viewpoints and positions to receive equal consideration in both ad placement and approval of content?

What standard is being used in determining whether a group or page is a “hate group”? Are any outside organizations being used to aid in making such determinations and if so, which organizations?

Please define what represents “Images altered to degrade private individuals.” Would this be limited to photo re-touching or would a simple caption added to a published photo be enough to violate the standards?

Your standards include: “We remove credible threats to public figures, as well as hate speech directed at them – just as we do for private individuals.”  What standard is used to determine what is or is not “hate speech”?  Who makes that determination?  Do you use any outside organizations to aid in making determinations regarding “hate speech” or groups that represent “hate groups”? If so, please name them.

Regarding your standards concerning “hate speech”, you list the following categories of “protected classes” that cannot be “directly attacked”:  Race, Ethnicity, National origin, Religious affiliation, Sexual orientation, Sex, gender, or gender identity, or Serious disabilities or diseases.  Please define “Directly Attacked.”  

How would a post be handled that suggests, based on a user's faith tradition, that sexual orientations other than heterosexual ones are sinful and should be condemned or avoided? Would this apply to gender identity as well? How would conversations be handled that express negative opinions regarding illegal immigration, or regarding refugee resettlement among groups whose claimed religious affiliations directly conflict with the laws of the United States? Would they be suppressed as "hate speech"?

Given the highly subjective nature of your standards in many areas, how do you resolve a disagreement between the posting user and the decision rendered by the representative who reviews a complaint?  What measures can the poster utilize to defend against a complaint that may be motivated by personal or ideological differences with the user?

HIS TESTIMONY:  As Facebook has grown, people everywhere have gotten a powerful new tool to stay connected to the people they love, make their voices heard, and build communities and businesses.

HIS TESTIMONY:  I don’t want anyone to use our tools to undermine democracy. That’s not what we stand for.

HIS TESTIMONY:  And we’ll also keep building tools to help more people make their voices heard in the democratic process.

If Facebook takes it upon itself to make judgments about the character of the voices on its platform, and closes or otherwise restricts voices with which it disagrees, how does that square with Facebook being a "powerful new tool to make…voices heard"? Only the voices you agree with, or all voices?

Does the First Amendment not apply to users of Facebook?

What specifically was "dangerous" about the group administered under the title "Diamond and Silk"?

Many of these suggestions were provided to a member of the House committee. I am hopeful that at least some of these or similar questions will be incorporated into the House hearing (April 11, 2018, at 10:00 a.m.). Questions like these demand answers as we approach a very important election season. If Facebook is truly intended to be a means for people to connect, have their voices heard, and share ideas freely, then Mr. Zuckerberg should be held to his word.

UPDATE April 12, 2018: 

During the final part of the hearings, Mark Zuckerberg acknowledged the Diamond and Silk issue and stated on the record that the decision to bar them was made "erroneously." After following his testimony for about 90 minutes, I have concluded that Facebook's problem lies more with its employees than at the policy-making level.

His testimony indicated that Facebook has over 20,000 employees assigned to security and the evaluation of complaints regarding violations of community standards. He also testified that the Silicon Valley area is overwhelmingly left-wing. Given these two admissions, it is very possible that the problem is a lack of firm policies governing employee conduct in evaluating claims, including provisions that require employees to leave their personal biases at the door when they come to work, or to face repercussions when those biases create difficulties for the company's relations with the public. Many of the questions raised in my article were asked in one form or another, but I truly fear that the overall view of the committee favored some level of regulation. My fear is based in the knowledge that history has shown regulatory efforts by the federal government to be usually overblown and ill-conceived. Time will tell.

Tom Stark

Tom Stark's career began with Air Force service, including a year in Thailand and Vietnam, and progressed through a variety of manufacturing and service positions to Manager of Security, Safety, and Transportation for the Orange County (FL) Convention Center. He graduated from Barry University in 1994 and soon after embarked on a second career building custom furniture as an entrepreneur for the last 20 years. He ran unsuccessfully as a Tea Party candidate in the 2010 Congressional race (WV-01). Tom currently writes and advocates for smaller, more prudent, and less intrusive government; strengthening families; and protecting life, while building free-market principles that make America stronger. He is now 70, retired, and residing with his wife in Weston, West Virginia.
