Problems of using fact-checking sites as inputs for social media algorithms

This is a brief post describing a key problem in using fact-checking sites as inputs to filter undesirable content (e.g., fake news) from social media newsfeeds (e.g., Facebook).

The premise sounds good, right? We use human raters to verify the truthfulness of an article, and feed that verdict into the decision-making algorithm.
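That premise can be sketched in a few lines of Python. Everything here is illustrative: the function names, the rating data, and the 0-to-1 verdict scale are invented for the sketch, not any platform's actual API.

```python
# Hypothetical sketch: a fact-checker's verdict used as one input
# to a newsfeed filtering decision. All names and data are invented.

def fact_check_score(article_id, ratings):
    """Average the human raters' verdicts (1.0 = true, 0.0 = false)."""
    verdicts = ratings.get(article_id, [])
    if not verdicts:
        return None  # unverified articles pass through unchanged
    return sum(verdicts) / len(verdicts)

def should_show(article_id, ratings, threshold=0.5):
    """Hide an article only when the raters, on average, call it false."""
    score = fact_check_score(article_id, ratings)
    return score is None or score >= threshold

ratings = {"a1": [1.0, 1.0, 0.0], "a2": [0.0, 0.0]}
print(should_show("a1", ratings))  # True  (mean ~0.67 >= 0.5)
print(should_show("a2", ratings))  # False (mean 0.0 < 0.5)
```

Note that everything downstream of `should_show` depends entirely on the quality of the `ratings` input, which is exactly where the two problems below come in.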

However, there are two problems:

  1. Human raters may be biased
  2. Not all statements are in fact verifiable

First, it has become obvious that many journalists are not objective but blatantly biased, even to the point of being proud of it. Consequently, their credibility is lost. If fact-checkers are biased journalists, they will interpret statements according to their own beliefs and attitudes, while seeing nothing wrong in doing so. This, of course, is a major issue for using fact-checking services as inputs in machine decision-making: if the inputs are corrupt, so will be the outputs ("garbage in, garbage out").
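The "garbage in, garbage out" point can be made concrete with a toy simulation. The bias model and all numbers below are invented for illustration: each rater reports the true probability shifted by a personal bias, and a shared (systematic) bias shifts the published aggregate enough to flip a filtering decision, even though the article itself is unchanged.

```python
# Illustrative sketch of "garbage in, garbage out": systematic rater
# bias shifts the aggregate score, which flips the downstream
# show/hide decision at a fixed cutoff. Numbers are invented.

def aggregate(true_prob, biases):
    """Each rater reports the true probability shifted by a personal
    bias, clamped to [0, 1]; the site publishes the mean report."""
    reports = [min(1.0, max(0.0, true_prob + b)) for b in biases]
    return sum(reports) / len(reports)

neutral = aggregate(0.6, [0.0, 0.0, 0.0])     # ~0.6: shown at a 0.5 cutoff
hostile = aggregate(0.6, [-0.3, -0.3, -0.3])  # ~0.3: hidden at a 0.5 cutoff
print(neutral, hostile)
```

The article's true probability never changed; only the raters did, and with it the algorithm's verdict.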

Second, there are several cases where the "facts" being checked are not truth statements. According to Oxford Index,

An important difference between the truth of a statement and the validity of a norm is that the truth of a statement is verifiable — i.e. it must be possible to prove it to be true or false — while the validity of a norm is not.

For example, a Finnish fact-checking site checks claims such as "The EU is oppressing nation states into following its will." (Link, in Finnish.) Clearly, this is not a fact statement, because a statement of that sort cannot be unambiguously verified. Yet the site unambiguously labels the statement "wrong", thereby exposing its political bias rather than the truth value of the statement.

Another example: the popular fact-checking site Snopes.com checks claims like "Are Non-Citizens Being Registered to Vote Without Their Knowledge?" In that case, the verdict was "Mostly false" because, according to the site's second-hand information, only five such errors have taken place. However, those cases are in fact proof that the claim is possible. To be a correct truth statement, it should read "Have non-citizens been registered to vote without their knowledge?" (i.e., are there known cases), and the answer, in light of the evidence, should be "True", not "False". In such cases, the raters' interpretation clearly shows in how the verified statements are formulated and, accordingly, judged.

The particular problems of fact-checking sites like Snopes.com are that 1) they use selective referencing, seemingly favoring citations of "liberal media" such as CNN (thus breaking the fact-checkers' code of principles), and 2) they use an ambiguous definition of truth, including classifications such as "Mostly True" and "Mostly False". But a truth statement (fact) is either true or false, not somewhere in between. Anything else is interpretation, and therefore susceptible to human bias.
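The problem with graded verdicts also shows up when an algorithm has to consume them: translating a label like "Mostly True" into a number requires arbitrary choices, and each choice is itself an interpretation. The label set and scores below are invented for illustration, not any site's actual scheme.

```python
# Sketch: mapping graded fact-check labels to numeric scores for an
# algorithm. The specific values are arbitrary judgment calls -- the
# mapping itself encodes someone's interpretation, not a fact.

LABEL_SCORES = {
    "True": 1.0,
    "Mostly True": 0.75,   # why 0.75 and not 0.9? a pure judgment call
    "Mostly False": 0.25,  # likewise arbitrary
    "False": 0.0,
}

def binary_verdict(label, cutoff=0.5):
    """Collapse a graded label into the binary truth value the
    filtering algorithm ultimately acts on."""
    return LABEL_SCORES[label] >= cutoff

print(binary_verdict("Mostly False"))  # False under this cutoff
print(binary_verdict("Mostly True"))   # True under this cutoff
```

Move the cutoff or the scores, and the same graded label produces a different binary outcome, which is the interpretation problem restated in code.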


For a fact to be verifiable, it needs to be a truth statement. In other words, we should be able to state unambiguously whether it is true or false. However, some of the "facts" that so-called fact-checking sites (e.g., Snopes) verify are not verifiable, i.e., they are not truth statements.

If the facts are not in fact truth statements but something else (jokes, sarcasm, or forms of exaggeration), then using fact-checking services as inputs in social media algorithms to fight "fake news" becomes highly problematic.



Author: jonisalminen777

Researcher of marketing, human-computer interaction, startups, and personas.

