
An Antic Disposition


Distributed, personalized fact-checking of social network streams

2019/10/01 By Rob

The Dangers of Centralized Control

A report on CNN online today covers criticism of Facebook by the Democratic National Committee, which claims that the social media giant allows President Trump, “to mislead the American people on their platform unimpeded.” The solution desired by the DNC is for Facebook to subject the president’s social media posts to “fact-checking,” something that they are already starting to do with posts by ordinary folks, but have not yet done to politicians.

The dangers of such an approach ought to be clear to all. Quis custodiet ipsos custodes?, as the ancients wrote. Who will watch the watchmen? Do we really want such concentrated power over political speech and political campaigns? I have friends at Facebook and Google. They are very smart people. But they are not so smart that they can act as censors to the planet. No one is that smart.

Of course, there are certainly harmful social media posts out there, dangerous medical advice, hate speech, bullying, frauds, spam, and so on, things that few of us would like to see in our social media streams.

At the same time we need to be mindful that the history of progress has been the history of unpopular ideas gaining acceptance and becoming mainstream, from religious toleration to abolition of slavery, to women’s suffrage to gay marriage.

These ideas, now considered sacrosanct liberties, were once suppressed by centralized control over the mass distribution of ideas, the social networking of its day. For example, early proponents of contraception, getting their ideas out peacefully via newspapers, were prosecuted under laws that made it illegal to send “obscene” materials through the U.S. Postal Service, laws which were construed to include even information about birth control.

There are dangers in doing too much, and in doing too little. If we hand central control over censorship (which is what we’re talking about) to a single company, or to a small number of companies that, because of their great size, are constantly in the crosshairs of antitrust regulators in the U.S. and in Europe, we effectively hand the “kill switch” of the internet to the present ruling class, and likely stall future social progress. On the other hand, if we do nothing, our social media accounts become unsavory places where those with the worst intentions get the most attention.

An Analogy: Microsoft Windows and Internet Explorer

Recall back years ago when the big controversy was over Microsoft’s platform monopoly on Microsoft Windows and how they apparently were extending that monopoly to their Internet Explorer browser, and to related internet technologies like ActiveX controls.

It was reasonable for, say, a hyperlink in a Microsoft Word document to launch a browser when clicked. What else could Microsoft do but launch their own Internet Explorer browser? At the time, this was argued for on the basis of user convenience. A highly-integrated solution, one the user did not need to tinker with, was said to be more friendly for the user. Allowing other browsers to insert themselves would be “messy” and lead to user confusion.

Regulators saw things differently. In order to resolve antitrust complaints, Microsoft eventually agreed to open up their Windows platform and allow internet browsers from other companies equal access for integration into their platform. So, Netscape and Opera, and eventually Firefox and Chrome, could install themselves to be the “default” browser for a user, and respond, say, to hyperlink clicks in a Word document.

This move encouraged much progress in the development of web browsers over the ensuing years, competition that Microsoft ultimately lost. But the users all won, by having more choice.

An Open Approach to Social Media Filtering

The debate seems to be focused on what Facebook ought to do, how strict Facebook ought to be with fact-checking, etc. But ask the wrong question and you’ll never get the right answer. Instead, we ought to be asking ourselves whether we ought to be limited to just Facebook’s filtering algorithm.

The fundamental problem with Facebook’s censorship is the acceptance of artificial scarcity in the provision of filtering (censoring) services for social media streams, caused by the lack of an open interface.

Just as Microsoft was made to open up their browser integrations, it is worth considering the benefits of having Facebook’s monopoly on “fact checking” and similar content filtering, so much of it clearly politically charged, tempered by open competition.

Imagine a user interface by which any user could select and subscribe to one or more web services, run by 3rd parties, that would filter his social media stream, and promote, demote, hide, or annotate posts per the judgement of those services.

This could be done, for example, by associating a URI with each post and sending a list of these to the external web service, which would, consulting its own database and/or algorithms, return an action for each URI: promote, demote, hide, annotate, etc. Facebook would then apply those actions, for that user and that user only.
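The shape of such a service can be sketched in a few lines. This is a minimal, hypothetical illustration, not a real Facebook API: the `Action` values, the blocked-domain rule, and the `filter_posts` name are all assumptions made up for the example.

```python
from enum import Enum

class Action(Enum):
    """Possible dispositions a filtering service can return for a post."""
    PROMOTE = "promote"
    DEMOTE = "demote"
    HIDE = "hide"
    ANNOTATE = "annotate"
    NONE = "none"  # no opinion; leave the post untouched

def filter_posts(post_uris):
    """Toy filtering service: map each post URI to an action.

    A real service would consult its own database and/or algorithms;
    this sketch just hides anything from a hypothetical spam domain.
    """
    blocked_prefixes = ("https://spam.example/",)
    actions = {}
    for uri in post_uris:
        if uri.startswith(blocked_prefixes):
            actions[uri] = Action.HIDE
        else:
            actions[uri] = Action.NONE
    return actions
```

The platform would batch up the URIs in a user’s stream, call each subscribed service, and apply the returned actions only to that user’s view.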

Web services could be run by non-profits concerned with hate groups, like the Southern Poverty Law Center, with political groups, even political parties, with churches, with schools, etc. Some would be liberal, some conservative, some associated with governments, some from newspapers, some from private associations. It would be open to any and all.

A user could select more than one filtering service and arrange them in a priority order. Since the interface would be open, a service could aggregate other services. So, just as some would specialize in filtering, others might specialize in identifying unique and useful groups doing filtering.
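One simple way to combine services in priority order is to ask each in turn and let the first service with an opinion win. This is a sketch under assumptions (the action strings and the first-opinion-wins policy are inventions for illustration; other merge policies are equally possible):

```python
def resolve(post_uri, services):
    """Resolve one post against subscribed services in priority order.

    `services` is a list of callables, each mapping a list of post URIs
    to a dict of {uri: action}, where action is one of "promote",
    "demote", "hide", "annotate", or "none" (no opinion). The first
    service expressing an opinion wins; later services are skipped.
    """
    for service in services:
        action = service([post_uri]).get(post_uri, "none")
        if action != "none":
            return action
    return "none"

# Hypothetical services for demonstration only.
hide_all = lambda uris: {u: "hide" for u in uris}      # a very strict filter
no_opinion = lambda uris: {u: "none" for u in uris}    # a filter with no view
```

An aggregator service would itself just be another callable in this list, internally fanning out to the services it curates.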

Some services might charge for their use. Others might pay users for being used.

Some filtering might be crowd-sourced, hooked into a browser plugin that allows users to “report” objectionable material to one or more filtering services.
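A crowd-sourced service could be as simple as counting reports and hiding a post once it crosses a threshold. The class name, the `report` method, and the threshold of 3 below are all hypothetical, chosen only to make the idea concrete:

```python
from collections import Counter

class CrowdFilter:
    """Toy crowd-sourced filter. A browser plugin would call report()
    when a user flags a post; the service hides a post once it has
    been reported `threshold` or more times."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.reports = Counter()  # post URI -> number of reports

    def report(self, post_uri):
        self.reports[post_uri] += 1

    def filter_posts(self, post_uris):
        return {u: ("hide" if self.reports[u] >= self.threshold else "none")
                for u in post_uris}
```

A real deployment would of course need abuse protection (e.g., deduplicating reports per user), which this sketch omits.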

None of the services would be obligatory.

And, of course, Facebook would not be locked-out of providing its own filtering services. But it would be just one of many, and could easily be disabled by a user if he disagreed with Facebook’s judgement.

In Closing

An absolute technological monopoly on filtering content in the world’s largest social network must not become a politically-exploitable monopoly on free speech. To allow that is to invite absolutism. The technological means of getting around this, by an open interface for user-pluggable filtering services, is in our hands. We’ve done stuff like this before. It is not all that hard. And aside from its healthy, pro-choice, pro-competition, pro-consumer aspects, I suspect Facebook itself would welcome relief from perpetually being in the hot seat with respect to content filtering. This is a good opportunity for them to wash their hands of it, end their unasked-for and undeserved role as global censor of social media, and return to being a common carrier, freed from responsibility and liability for censoring content.

Filed Under: Cyber Freedom, Facebook
Copyright © 2006-2023 Rob Weir · Site Policies