Tagged: human bias

The black sheep problem in machine learning

Introduction. Hal Daumé wrote an interesting blog post about language bias and the black sheep problem. In the post, he defines the problem as follows:

The “black sheep problem” is that if you were to try to guess what color most sheep were by looking at language data, it would be very difficult for you to conclude that they weren’t almost all black. In English, “black sheep” outnumbers “white sheep” about 25:1 (many “black sheep”s are movie references); in French it’s 3:1; in German it’s 12:1. Some languages get it right; in Korean it’s 1:1.5 in favor of white sheep. This happens with other pairs, too; for example “white cloud” versus “red cloud.” In English, red cloud wins 1.1:1 (there’s a famous Sioux named “Red Cloud”); in Korean, white cloud wins 1.2:1, but four-leaf clover wins 2:1 over three-leaf clover.

Thereafter, Hal accurately points out:

“co-occurrence frequencies of words definitely do not reflect co-occurrence frequencies of things in the real world.”

But Hal’s mistake is to assume that language describes objective reality (“the real world”). I would argue instead that it describes social reality (“the social world”).
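To make concrete what these co-occurrence frequencies are, here is a minimal sketch of the kind of counting behind ratios such as 25:1; the toy corpus is my own illustration, not data from Hal’s post:

    from collections import Counter

    # Minimal sketch: count how often color words directly precede "sheep"
    # in a toy, hypothetical corpus. Ratios like 25:1 come from the same kind
    # of counting over web-scale text.
    corpus = [
        "she is the black sheep of the family",
        "he felt like the black sheep at work",
        "the field was full of white sheep",
    ]

    bigrams = Counter()
    for sentence in corpus:
        tokens = sentence.lower().split()
        for first, second in zip(tokens, tokens[1:]):
            if second == "sheep" and first in {"black", "white"}:
                bigrams[f"{first} sheep"] += 1

    print(bigrams)  # Counter({'black sheep': 2, 'white sheep': 1})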

Black sheep in social reality. The higher frequency of “black sheep” tells us that in social reality there is a concept called “black sheep”, and that it is more common than the concept of a white (or any other color) sheep. People use that concept not to describe sheep, but as an abstract concept that in fact describes other people (“she is the black sheep of the family”). We can then ask: why is that? In what contexts is the concept used? And we can try to teach the machine its proper use through associations of that concept with other contexts (much as we teach children when saying something is appropriate and when it is not). As a result, the machine may build a semantic web of abstract concepts which, even if it does not lead to understanding them, at least helps guide its usage of them.
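As a minimal sketch of what building such associations could look like (the toy corpus, the window size, and the list of “social” cue words are hypothetical choices of mine, not a description of any existing system), one can profile the words that surround “black sheep” and check whether they point to people and families rather than farms:

    from collections import Counter

    # Hypothetical sketch: profile the contexts in which "black sheep" occurs,
    # so the phrase can be associated with social rather than agricultural usage.
    def context_profile(corpus, phrase="black sheep", window=3):
        """Count the words appearing within `window` tokens of each occurrence of `phrase`."""
        target = phrase.split()
        counts = Counter()
        for sentence in corpus:
            tokens = sentence.lower().split()
            for i in range(len(tokens) - len(target) + 1):
                if tokens[i:i + len(target)] == target:
                    left = tokens[max(0, i - window):i]
                    right = tokens[i + len(target):i + len(target) + window]
                    counts.update(left + right)
        return counts

    corpus = [
        "she is the black sheep of the family",
        "he was treated as the black sheep among his colleagues",
        "the farmer sheared one black sheep and twenty white sheep",
    ]
    profile = context_profile(corpus)
    social_words = {"she", "he", "family", "colleagues"}  # hypothetical "social" cues
    print(profile.most_common(5))
    print("social-context hits:", sum(profile[w] for w in social_words))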

We, the humans. That’s assuming we want the machine to get closer to the meaning of the word in social reality. But we don’t necessarily want to focus on that, at least as a short-term goal. In the short term, it might be more useful to understand that language is a reflection of social reality. This means we, the humans, can understand human societies better by analyzing it. Rather than trying to teach machines to impute data so as to avoid what we label an undesired state of social reality, we should use the outputs provided by the machine to understand where and why those biases take place, and then focus on fixing them. Most likely, technology plays only a minor role in that, although it could be used to encourage a balanced view through a recommendation system, for example.

Conclusion. The “correction of biases” is equivalent to burying your head in the sand: even if the biases magically disappeared from our models, they would still remain in social reality and, through the connection of social reality and objective reality, echo in the everyday lives of people.

Problems of using fact-checking sites as inputs for social media algorithms

This is a brief post describing key problems in using fact-checking sites as inputs for filtering undesirable content (e.g., fake news) from social media newsfeeds (e.g., Facebook).

The premise sounds good, right? We use human raters to verify the truthfulness of an article and then use that information as part of the decision-making algorithm.
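As a minimal sketch of that premise (the verdict-to-score mapping, the weight, and the threshold are my own illustrative assumptions, not anything Facebook or a fact-checking site has published), a newsfeed filter could downweight or drop posts whose claims have been rated false:

    # Hypothetical sketch: fact-checker verdicts as one input to a newsfeed filter.
    # The verdict-to-score mapping, weight, and threshold are illustrative assumptions.
    VERDICT_SCORE = {
        "true": 1.0,
        "mostly true": 0.75,
        "mostly false": 0.25,
        "false": 0.0,
    }

    def feed_score(relevance, verdict=None, weight=0.5, neutral=0.5):
        """Blend a post's relevance with a credibility score; unchecked posts stay neutral."""
        credibility = VERDICT_SCORE.get(verdict, neutral)
        return (1 - weight) * relevance + weight * credibility

    def filter_feed(posts, threshold=0.5):
        """Keep only posts whose blended score reaches the threshold."""
        return [p for p in posts if feed_score(p["relevance"], p.get("verdict")) >= threshold]

    posts = [
        {"id": 1, "relevance": 0.9, "verdict": "false"},   # highly relevant but rated false
        {"id": 2, "relevance": 0.6},                        # never fact-checked
        {"id": 3, "relevance": 0.7, "verdict": "mostly true"},
    ]
    print([p["id"] for p in filter_feed(posts)])  # -> [2, 3]; post 1 is filtered out

Even in this toy version, the verdicts determine which borderline posts survive, which is exactly why the two problems below matter.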

However, there are two problems:

  1. Human raters may be biased
  2. Not all statements are in fact verifiable

First, it has become obvious that many journalists are not objective but blatantly biased, even to the point of being proud of it. Consequently, their credibility is lost. If fact-checkers are biased journalists, they will interpret statements based on their own beliefs and attitudes while not seeing anything wrong in doing so. This, of course, is a major issue for using fact-checking services as inputs in machine decision-making: if the inputs are corrupt, so will be the outcomes (“garbage in, garbage out”).

Second, in several cases the “facts” being checked are not truth statements at all. According to Oxford Index,

An important difference between the truth of a statement and the validity of a norm is that the truth of a statement is verifiable — i.e. it must be possible to prove it to be true or false — while the validity of a norm is not.

For example, a Finnish fact-checking site is checking “facts” such as “The EU is oppressing nation states into following its will.” (Link, in Finnish.) Clearly, this is not a fact statement, because a statement of that sort cannot be unambiguously verified. Yet they unambiguously label the statement “wrong”, thereby exposing their political bias rather than the truth value of the statement.

Another example: the popular fact-checking site Snopes.com checks claims like “Are Non-Citizens Being Registered to Vote Without Their Knowledge?” In that case, the verdict was “Mostly false” because, according to their second-hand information, only five such errors have taken place. However, that is in fact proof that the claim is possible. To be a correct truth statement, it should read “Have non-citizens been registered to vote without their knowledge?” (i.e., are there known cases), and the answer, in the light of the evidence, should be “True”, not “False”. In such cases, the interpretation of the raters clearly shows in how the verified statements are formulated and accordingly interpreted.

The particular problems of fact-checking sites like Snopes.com are that 1) they use selective referencing, seemingly focusing on citing “liberal media” such as CNN (thus breaking the fact-checkers’ code of principles), and 2) they use an ambiguous definition of truth, including classifications such as “Mostly True” and “Mostly False”. But a truth statement (fact) is either true or false, not somewhere in between. Anything else is interpretation, and therefore susceptible to human bias.

Conclusion

For a fact to be verifiable, it needs to be a truth statement. In other words, we should be able to state unambiguously whether it is true or false. However, some of the “facts” that so-called fact-checking sites (e.g., Snopes) are verifying are not verifiable, i.e., they are not truth statements.

If the “facts” are not in fact truth statements but something else, such as jokes, sarcasm, or forms of exaggeration, then using fact-checking services as inputs in social media algorithms to fight “fake news” becomes highly problematic.
