Introduction. Hal Daumé wrote an interesting blog post about language bias and the black sheep problem. In the post, he defines the problem as follows:
The ”black sheep problem” is that if you were to try to guess what color most sheep were by looking at language data, it would be very difficult for you to conclude that they weren’t almost all black. In English, ”black sheep” outnumbers ”white sheep” about 25:1 (many ”black sheep”s are movie references); in French it’s 3:1; in German it’s 12:1. Some languages get it right; in Korean it’s 1:1.5 in favor of white sheep. This happens with other pairs, too; for example ”white cloud” versus ”red cloud.” In English, red cloud wins 1.1:1 (there’s a famous Sioux named ”Red Cloud”); in Korean, white cloud wins 1.2:1, but four-leaf clover wins 2:1 over three-leaf clover.
Hal then accurately points out:
”co-occurrence frequencies of words definitely do not reflect co-occurrence frequencies of things in the real world.”
But Hal’s mistake is to assume that language describes objective reality (”the real world”). Instead, I would argue that it describes social reality (”the social world”).
Black sheep in social reality. The higher occurrence of ’black sheep’ tells us that in social reality, there is a concept called ’black sheep’ which is more common than the concept of white (or any other color) sheep. People use that concept not to describe sheep, but as an abstract concept that in fact describes other people (”she is the black sheep of the family”). Then we can ask: Why is that? In what contexts is the concept used? We can then try to teach the machine its proper use through associations of that concept with other contexts (much as we teach children when saying something is appropriate and when it is not). As a result, the machine may create a semantic web of abstract concepts which, even if it does not lead to understanding them, at least helps guide its usage of them.
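This social usage is visible directly in co-occurrence counts. As a minimal sketch, the kind of ratio Hal cites can be computed from simple bigram counts; the toy corpus below is hypothetical and deliberately uses ’black sheep’ only in its social sense, which a raw count cannot distinguish:

```python
from collections import Counter

def color_counts(corpus, noun, colors):
    """Count color+noun bigrams in a whitespace-tokenized corpus."""
    tokens = corpus.lower().split()
    bigrams = Counter(zip(tokens, tokens[1:]))
    return {color: bigrams[(color, noun)] for color in colors}

# Toy corpus: "black sheep" occurs only as a social concept,
# yet the bigram count alone cannot tell that apart.
corpus = ("she is the black sheep of the family . "
          "he was called the black sheep of his class . "
          "a white sheep grazed in the field .")
print(color_counts(corpus, "sheep", ["black", "white"]))
```

Counted this way, black sheep ”outnumber” white sheep even though no sheep of any color are being described.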
We, the human. That is assuming we want the machine to get closer to the meaning of the word in social reality. But we do not necessarily want to focus on that, at least as a short-term goal. In the short term, it might be more purposeful to understand that language is a reflection of social reality. This means we, the humans, can understand human societies better through its analysis. Rather than trying to teach machines to impute data to avoid what we label an undesired state of social reality, we should use the outputs provided by the machine to understand where and why those biases take place. And then we should focus on fixing them. Most likely, technology plays only a minor role in that, although it could be used to encourage a balanced view through a recommendation system, for example.
Conclusion. The ”correction of biases” is equivalent to burying your head in the sand: even if they magically disappeared from our models, they would still remain in the social reality, and through the connection of social reality and objective reality, echo in the everyday lives of people.
This report was created by Joni Salminen and Catherine R. Sloan. Publication date: December 10, 2017.
Artificial intelligence (AI) and machine learning are becoming more influential in society, as more decision-making power is being shifted to algorithms either directly or indirectly. Because of this, several research organizations and initiatives studying fairness of AI and machine learning have been started. We decided to conduct a review of these organizations and initiatives.
This is how we went about it. First, we used our prior knowledge of initiatives we were already familiar with. We used this information to draft an initial list and supplemented it by conducting Google and Bing searches with key phrases relating to machine learning or artificial intelligence and fairness. Overall, we found 25 organizations or initiatives, which we then analyzed in greater detail. For each organization / initiative, we aimed to retrieve at least the following information:
- Name of the organization / initiative
- URL of the organization / initiative
- Founded in (year)
- Short description of the organization / initiative
- Purpose of the organization / initiative
- University or funding partner
Based on the above information, we wrote this brief report. Its purpose is to chart current initiatives around the world relating to fairness, accountability and transparency of machine learning and AI. At the moment, several stakeholders are engaged in research in this topic area, but it is uncertain how well they are aware of each other and whether there is a sufficient degree of collaboration among them. We hope this list increases awareness of and interaction among the initiatives.
In the following, the initiatives are presented in alphabetical order.
AI100: Stanford University’s One Hundred Year Study on Artificial Intelligence
Founded in 2016, this is an initiative launched by computer scientist Eric Horvitz and driven by seven diverse academicians focused on the influences of artificial intelligence on people and society. The goal is to anticipate how AI will impact every aspect of how people work, live and play, including automation, national security, psychology, ethics, law, privacy and democracy. AI100 is funded by a gift from Eric and Mary Horvitz.
AI for Good Global Summit
AI for Good Global Summit was held in Geneva, 7-9 June, 2017 in partnership with a number of United Nations (UN) sister agencies. The Summit aimed to accelerate and advance the development and democratization of AI solutions that can address specific global challenges related to poverty, hunger, health, education, the environment, and other social purposes.
AI Forum New Zealand
The AI Forum was launched in 2017 as a membership funded association for those with a passion for the opportunities AI can provide. The Forum connects AI tech innovators, investor groups, regulators, researchers, educators, entrepreneurs and the public. Its executive council includes representatives of Microsoft and IBM as well as start-ups and higher education. Currently the Forum is involved with a large-scale research project on the impact of AI on New Zealand’s economy and society.
AI Now Institute
The AI Now Institute at New York University (NYU) was founded by Kate Crawford and Meredith Whittaker in 2017. It’s an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence. Its work focuses on four core domains: 1) Rights & Liberties, 2) Labor & Automation, 3) Bias & Inclusion and 4) Safety & Critical Infrastructure. The Institute’s partners include NYU’s schools of Engineering (Tandon), Business (Stern) and Law, the American Civil Liberties Union (ACLU) and the Partnership on AI.
Algorithms, Automation, and News
AAWS is an international conference focusing on the impact of algorithms on news. Among the studied topics, the call for papers lists 1) concerns around news quality, transparency, and accountability in general; 2) hidden biases built into algorithms deciding what’s newsworthy; 3) the outcomes of information filtering, such as ‘popularism’ (some content is favored over other content) and the transparency and accountability of the decisions made about what the public sees; 4) the privacy of data collected on individuals for the purposes of newsgathering and distribution; 5) the legal issues of libel by algorithm; 6) private information worlds and filter bubbles; and 7) the relationship between algorithms and ‘fake news’. The acceptance rate for the 2018 conference was about 12%. The conference is organized by the Center for Advanced Studies at Ludwig-Maximilians-Universität München (LMU) and supported by the Volkswagen Foundation and the University of Oregon’s School of Journalism and Communication. The organizers aim to release a special issue of Digital Journalism and a book, and one of them (Neil Thurman) is engaged in a research project on ’Algorithmic News’.
This research project was founded in early 2017 at the University of Turku in Finland as a collaboration between its School of Economics and the university’s BioNLP unit. There are currently three researchers involved, one from a social science background and two from computer science. The project studies the societal impact and risks of machine decision-making. It has been funded by the Kone Foundation and the Kaute Foundation.
Center for Democracy and Technology (CDT)
CDT is a non-profit organization headquartered in Washington, D.C. They describe themselves as “a team of experts with deep knowledge of issues pertaining to the internet, privacy, security, technology, and intellectual property. We come from academia, private enterprise, government, and the non-profit worlds to translate complex policy into action.” The organization is currently focused on the following issues: 1) Privacy and data, 2) Free expression, 3) Security and surveillance, 4) European Union, and 5) Internet architecture. In August 2017, CDT launched a digital decisions tool to help engineers and product managers mitigate algorithmic bias in machine decision-making. The tool translates principles for fair and ethical decision-making into a series of questions that can be addressed while designing and deploying an algorithm. The questions address developers’ choices: what data to use to train the algorithm, what features to consider, and how to test the algorithm’s potential bias.
Data & Society’s Intelligence and Autonomy Initiative
This initiative was founded in 2015 and is based in New York City. It develops grounded qualitative empirical research to provide nuanced understandings of emerging technologies to inform the design, evaluation and regulation of AI-driven systems, while avoiding both utopian and dystopian scenarios. The goal is to engage diverse stakeholders in interdisciplinary discussions to inform structures of AI accountability and governance from the bottom up. I&A is funded by a research grant from the Knight Foundation’s Ethics and Governance of Artificial Intelligence Fund, and was previously supported by grants from the John D. and Catherine T. MacArthur Foundation and Microsoft Research.
Facebook AI Research (FAIR)
Facebook’s research program engages with academics, publications, open source software, and technical conferences and workshops. Its researchers are based in Menlo Park, CA, New York City and Paris, France. Its CommAI project aims to develop new data sets and algorithms to develop and evaluate general purpose artificial agents that rely on a linguistic interface and can quickly adapt to a stream of tasks.
This internal Microsoft group focuses on Fairness, Accountability, Transparency and Ethics in AI and was launched in 2014. Its goal is to develop, via collaborative research projects, computational techniques that are both innovative and ethical, while drawing on the deeper context surrounding these issues from sociology, history and science.
Good AI was founded in 2014 as an international group based in Prague, Czech Republic, dedicated to developing AI quickly to help humanity and to understand the universe. Its founding CEO Marek Rosa funded the project with $10M. Good AI’s R&D company went public in 2015 and comprises a team of 20 research scientists. In 2017 Good AI participated in global AI conferences in Amsterdam, London and Tokyo and hosted data science competitions.
Jigsaw is a technology incubator focusing on geopolitical challenges; it originated from Google Ideas as a ”think/do tank” for issues at the interface of technology and geopolitics. One of Jigsaw’s projects is the Perspective API, which uses machine learning to identify abuse and harassment online. Perspective rates comments based on the perceived impact a comment might have on the conversation. Perspective can be used to give real-time feedback to commenters, help moderators sort comments more effectively, or allow readers to find relevant information. The first model of the Perspective API identifies whether a comment is perceived as “toxic” in a discussion.
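As an illustration of how such toxicity scores might help moderators sort comments, here is a minimal sketch; the comments, scores, and threshold are hypothetical and do not come from the actual Perspective API:

```python
def triage_comments(scored_comments, threshold=0.8):
    """Split (comment, toxicity) pairs into a review queue,
    most toxic first, and a pass-through list."""
    flagged = sorted((p for p in scored_comments if p[1] >= threshold),
                     key=lambda p: p[1], reverse=True)
    passed = [c for c, s in scored_comments if s < threshold]
    return [c for c, _ in flagged], passed

# Hypothetical scores in [0, 1]; higher = more likely perceived as toxic.
scored = [("thanks for sharing!", 0.02),
          ("you are an idiot", 0.95),
          ("this is nonsense", 0.83)]
flagged, passed = triage_comments(scored)
print(flagged)  # review queue, most toxic first
```

A real deployment would obtain the scores from a model or API rather than hard-coding them, but the sorting and thresholding logic is the same.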
IEEE Global Initiative for Ethical Considerations in AI and Autonomous Systems
In 2016, the Institute of Electrical and Electronics Engineers (IEEE) launched a project seeking public input on ethically designed AI. In April 2017, the IEEE hosted a related dinner for the European Parliament in Brussels. In July 2017, it issued a preliminary report entitled Prioritizing Human Well Being in the Age of Artificial Intelligence. IEEE is conducting a consensus-driven standards project for “soft governance” of AI that may produce a “bill of rights” regarding what personal data is “off limits” without the need for regulation. It set up 11 different active standards groups for interested collaborators to join in 2017 and was projecting new reports by the end of the year. IEEE has also released a report on Ethically Aligned Design in artificial intelligence, part of an initiative to ensure ethical principles are considered in systems design.
Internet Society (ISOC)
The ISOC is a non-profit organization founded in 1992 to provide leadership in Internet-related standards, education, access, and policy. It is headquartered in Virginia, USA. The organization published a paper in April 2017 that explains commercial uses of AI technology and provides recommendations for dealing with its management challenges, including 1) transparency, bias and accountability, 2) security and safety, 3) socio-economic impacts and ethics, and 4) new data uses and ecosystems. The recommendations include, among others, adopting ethical standards in the design of AI products and innovation policies, providing explanations to end users about why a specific decision was made, making it simpler to understand how algorithmic decision-making works, and introducing “algorithmic literacy” as a basic skill obtained through education.
Knight Foundation’s Ethics and Governance of Artificial Intelligence Fund https://www.knightfoundation.org/aifund-faq
The AI Fund was founded in January 2017 by the Massachusetts Institute of Technology (MIT) Media Lab, Harvard University’s Berkman Klein Center, the Knight Foundation, Omidyar Network and Reid Hoffman of LinkedIn. It is currently housed at the Miami Foundation in Miami, Florida.
The goal of the AI Fund is to ensure that the development of AI becomes a joint multidisciplinary human endeavor that bridges computer scientists, engineers, social scientists, philosophers, faith leaders, economists, lawyers and policymakers. The aim is to accomplish this by supporting work around the world that advances the development of ethical AI in the public interest, with an emphasis on research and education.
In May 2017, the Berkman Klein Center at Harvard kicked off its collaboration with the MIT Media Lab on their Ethics and Governance of Artificial Intelligence Initiative focused on strategic research, learning and experimentation. Possible avenues of empirical research were discussed, and the outlines of a taxonomy emerged. Topics of this initiative include: use of AI-powered personal assistants, attitudes of youth, impact on news generation, and moderating online hate speech.
Moreover, Harvard’s Ethics and Governance of AI Fund has committed an initial $7.6M in grants to support nine organizations to strengthen the voice of civil society in the development of AI. An excerpt from their post: “Additional projects and activities will address common challenges across these core areas such as the global governance of AI and the ways in which the use of AI may reinforce existing biases, particularly against underserved and underrepresented populations.” Finally, a report of a December 2017 BKC presentation on building AI for an inclusive society has been published and can be accessed from the above link.
MIT-IBM Watson Lab
Founded in September 2017, MIT’s new $240 million center, created in collaboration with IBM, is intended to help advance the field of AI by “developing novel devices and materials to power the latest machine-learning algorithms.” This project overlaps with the Partnership on AI. IBM hopes it will help the company reclaim its reputation in the AI space. In another industry sector, Toyota made a billion-dollar investment in funding for its own AI center, plus research at both MIT and Stanford. The MIT-IBM Lab will be one of the “largest long-term university-industry AI collaborations to date,” mobilizing the talent of more than 100 AI scientists, professors, and students to pursue joint research at IBM’s Research Lab. The lab is co-located with the IBM Watson Health and IBM Security headquarters in Cambridge, MA. The stated goal is to push the boundaries of AI technology in several areas: 1) AI algorithms, 2) the physics of AI, 3) the application of AI to industries, and 4) advancing shared prosperity through AI.
In addition to this collaboration, IBM argues its Watson platform has been designed to be transparent. David Kenny, who heads Watson, said the following in a press conference: “I believe industry has a responsibility to step up. We all have a right to know how that decision was made [by AI],” Kenny said. “It cannot be a blackbox. We’ve constructed Watson to always be able to show how it came to the inference it came to. That way a human can always make a judgment and make sure there isn’t an inherent bias.”
New Zealand Law Foundation Centre for Law & Policy in Emerging Technologies
Professor Colin Gavaghan of the University of Otago heads a research centre examining the legal, ethical and policy issues around new technologies including artificial intelligence. In 2011, it hosted a forum on the Future of Fairness. The Law Foundation provided an endowment of $1.5M to fund the NZLF Centre and Chair in Emerging Technologies.
Obama White House Report: Preparing for the Future of Artificial Intelligence
The Obama Administration’s report on the future of AI was issued on October 16, 2016 in conjunction with a “White House Frontiers” conference focused on data science, machine learning, automation and robotics in Pittsburgh, PA. It followed a series of initiatives conducted by the WH Office of Science & Technology Policy (OSTP) in 2016. The report contains a snapshot of the state of AI technology and identifies questions that the evolution of AI raises for society and public policy. The topics include improving government operations, adapting regulations for safe automated vehicles, and making sure AI applications are “fair, safe, and governable.” AI’s impact on jobs and the economy was another major focus. A companion paper laid out a strategic plan for Federally funded research and development in AI. President Trump has not named a Director for OSTP, so this plan is not currently being implemented. However, lawmakers in the US are showing further interest in legislation. Rep. John Delaney (D-Md.) said in a press conference in June 2017: “I think transparency [of machine decision making] is obviously really important. I think if the industry doesn’t do enough of it, I think we’ll [need to consider legislation] because I think it really matters to the American people.” These efforts are part of the Congressional AI Caucus launched in May 2017, focused on the implications of AI for the tech industry, economy and society overall.
OpenAI is a non-profit artificial intelligence research company in California that aims to develop general AI in such a way as to benefit humanity as a whole. It has received more than 1 billion USD in commitments to promote research and other activities aimed at supporting the safe development of AI. The company focuses on long-term research. Founders of OpenAI include Elon Musk and Sam Altman. The sponsors include, in addition to individuals, YC Research, Infosys, Microsoft, Amazon, and Open Philanthropy Project. The open source contributions can be found at https://github.com/openai.
PAIR: People + AI Research Initiative
This is a Google initiative that was launched in 2017 to focus on discovering how AI can augment the expert intelligence of professionals such as doctors, technicians, designers, farmers, musicians and others. It also aims to make AI more inclusive and accessible to everyone. Visiting faculty members are Hal Abelson and Brendan Meade. Current projects involve drawing and diversity in machine learning, an open library for training neural nets, training data for models, and design via machine learning.
Partnership on AI
The Partnership was founded in September 2016 by Eric Horvitz and Mustafa Suleyman to study and formulate best practices for AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society. The Partnership on AI is funded financially and supported in-kind with research by its members, including founding members Amazon, Google/DeepMind, Facebook, IBM and Microsoft. In 2017, it expanded corporate and NGO membership, adding members such as eBay, Intel, Salesforce and the Center for Democracy & Technology (CDT). It hired an Executive Director, Terah Lyons, and boasts independent Board members from UC Berkeley and the ACLU. The group has had affiliation discussions with the Association for the Advancement of Artificial Intelligence (AAAI) and the Allen Institute for Artificial Intelligence. In 2016 the Partnership expressed its support for the Obama White House Report.
Rajapinta is a scientific association founded in January 2017 that advocates the social scientific study of ICT and ICT applications to social research in Finland. Its short-term goal is to improve collaboration and provide opportunities for meetings and networking in the hopes of establishing a seat at the table in the global scientific community in the longer term. Funding sources are not readily available.
Royal Society of the UK’s Machine Learning Project
The Royal Society is a fellowship of many of the world’s most eminent scientists and is currently conducting a project on machine learning (as a branch of AI), which in April 2017 produced a very comprehensive report titled Machine learning: the power and promise of computers that learn by example. It explores everyday ways in which people interact with machine learning systems, such as in social media image recognition, voice recognition systems, virtual personal assistants and recommendation systems used by online retailers. The grant funding for this particular project within the much larger Royal Society is unclear.
Workshop on Fairness, Accountability, and Transparency in Machine Learning (FatML)
Founded in 2014, FatML is an annual two-day conference that brings together researchers and practitioners concerned with fairness, accountability and transparency in machine learning, recognizing that ML raises novel challenges around ensuring non-discrimination, due process and explainability of institutional decision-making. According to the initiative, corporations and governments must be supervised in their use of algorithmic decision-making. FatML makes current scholarly resources on related subjects publicly available. The conference is funded in part by registration fees and possibly subsidized by corporate organizers such as Google, Microsoft and Cloudflare. Their August 2017 event was held in Halifax, Nova Scotia, Canada.
World Economic Forum (WEF)
WEF released a blog post in July 2017 on the risks of algorithmic decision-making to civil rights, mentioning US law enforcement’s use of facial recognition technology, among other examples. The post argues humans are facing “algorithmic regulation”, for example in public entitlements or benefits. It cites self-reinforcing bias as one of the five biggest problems with allowing AI into the government policy arena. In September 2017, the WEF released another post suggesting that a Magna Carta (“charter of rights”) for AI is needed; this essentially refers to commonly agreed-upon rules and rights for both individuals and wielders of algorithm-based decision-making authority. According to the post, the foundational elements of such an agreement include making sure AI creates jobs for all, rules dealing with machine-curated news feeds and polarization, rules avoiding discrimination and bias in machine decision-making, and safeguards for ensuring personal choice without sacrificing privacy for commercial efficiency.
From the above, we can conclude three points. First, stakeholders at different levels around the world have been mobilized to study the impact of machine decision-making, as shown by the multitude of projects. On the research side, there are several recently founded research projects and conferences (e.g., AAWS, FatML). In a similar vein, industry players such as IBM, Microsoft and Facebook show commitment to solving the associated challenges in their platforms. Moreover, policy makers are investigating the issues as well, as shown by the Obama administration’s report and the new Congressional AI Caucus.
Second, in addition to being of interest to different stakeholders, the topic involves a considerable number of perspectives, including but not limited to computer science, ethics, law, politics, journalism and economics. Such a degree of cross-sector, multidisciplinary effort is uncommon for research projects, which often focus on a narrower field of expertise; thus, it may be more challenging to produce solutions that are both theoretically sound and practically functional.
Third, there seems to be much overlap between the initiatives mentioned here; many seem to focus on solving the same problems, but it is unclear how well they are aware of each other and whether a centralized research agenda, resource sharing or joint allocation might help achieve results faster.
Notice an initiative or organization missing from this report? Please send information to Dr. Joni Salminen: firstname.lastname@example.org.
Feature analysis could be employed for bias detection when evaluating the procedural fairness of algorithms. (This is an alternative to the ”Google approach”, which emphasizes evaluation of outcome fairness.)
In brief, feature analysis reveals how much each feature (i.e., variable) influenced the model’s decision. For example, see the following quote from Huang et al. (2014, p. 240):
”All features do not contribute equally to the classification model. In many cases, the majority of the features contribute little to the classifier and only a small set of discriminative features end up being used. (…) The relative depth of a feature used as a decision node in a tree can be used to assess the importance of the feature. Here, we use the expected fraction of samples each feature contributes to as an estimate of the importance of the feature. By averaging all expected fraction rates over all trees in our trained model, we could estimate the importance for each feature. It is important to note that feature spaces among our selected features are very diverse. The impact of the individual features from a small feature space might not beat the impact of all the aggregate features from a large feature space. So apart from simply summing up all feature spaces within a feature (i.e. sum of all 7,057 importance scores in hashtag feature), which is referred to as un-normalized in Figure 4, we also plot the normalized relative importance of each feature, where each feature’s importance score is normalized by the size of the feature space.”
They go on to visualize the impact of each feature (see Figure 1).
Figure 1 Feature analysis example (Huang et al., 2014)
As you can see, this approach seems excellent for probing the impact of each feature on the model’s decision making. The impact of sensitive features, such as ethnicity, can be detected. Although this approach may be useful for supervised machine learning, where the data is clearly labelled, the applicability to unsupervised learning might be a different story.
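The normalization step that Huang et al. describe can be sketched in a few lines. The scores below are hypothetical, chosen only to illustrate why a large feature space (e.g., thousands of hashtag dimensions) can dominate un-normalized sums while a single sensitive attribute stands out once normalized:

```python
def aggregate_importance(scores_by_feature):
    """For each high-level feature, sum its per-dimension
    importance scores (un-normalized) and divide by the size
    of its feature space (normalized)."""
    return {
        feature: {"unnormalized": sum(scores),
                  "normalized": sum(scores) / len(scores)}
        for feature, scores in scores_by_feature.items()
    }

# Hypothetical scores: a huge sparse space (hashtags) vs. one
# sensitive attribute (ethnicity).
scores = {"hashtags": [1] * 500, "ethnicity": [200]}
result = aggregate_importance(scores)
print(result)
```

In this toy example the hashtag feature wins on the un-normalized sum, but the sensitive attribute dominates once each score is normalized by feature-space size, which is exactly the kind of signal a procedural-fairness audit would look for.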
Media bias is under heavy discussion at the moment, especially relating to the past presidential election in the US. However, the discussion is not of the quality it should be: instead of objective analysis of the role of the media, most comments are politically motivated accusations or denials. This article aims to be objective, discussing the measurement of media bias; that is, how could we identify whether a particular media outlet is biased or not? The author feels there are no generally acknowledged measures for this, so it is easy to claim or deny bias without factual validation. Essentially, this erodes the quality of the discussion, leading only to a war of opinions. Moreover, without such measures, both the media and the general public are unable to monitor the fairness of coverage.
Why is media fairness important?
Fairness of the media is important for one main reason: the media have a strong influence on public opinion. In other words, journalists have great power, and with great power comes great responsibility. The existence of bias leads to different standards of coverage depending on the topic being reported. That is, information is being used to portray a selective view of the world. This is analogous to confirmation bias: a person wants to prove a certain point, so he or she acknowledges only evidence supporting that point. Such behavior comes very easily to human beings, for which reason journalists should be extra cautious about letting their own opinions influence the content of their reportage.
In addition to being an individual problem, media bias can also be understood as a systemic problem. This arises through 1) official guidelines and 2) informal groupthink. First, official guidelines mean that the opinions, beliefs or worldviews of a particular media outlet are diffused down the organization: the editorial board communicates its official stance (”we, as a media outlet, support political candidate X”), which is then adopted by the individual reporters as their ethos. When the media outlet itself, or the surrounding ”media industry” as a whole, absorbs a view, there is a tendency to silence dissidents. This, again, can be reduced to elementary human psychology, known as conformity bias or groupthink: because others in your reference group accept a certain viewpoint, you are more likely to accept it as well due to social pressure. The informal dynamics are even more dangerous to objective reporting than the official guidelines because they are subtle and implicit by nature. In other words, journalists may not be aware of their bias and simply consider their worldview ”normal”, while arguments opposing it are classified as wrong and harmful.
Finally, media fairness is important due to its larger implications for information sources and the actions citizens take based on the information they are exposed to. It is in society’s best interest that people resort to legitimate and trustworthy sources of information, as opposed to unofficial, rogue sources that can spread misinformation or disinformation. However, when the media become biased, they lose their legitimacy and become discredited; as a form of reactance to the biased stories, citizens turn to alternative sources of information. The problem is that these sources may not be trustworthy at all. Therefore, by waiving their journalistic ethics, the mass media put themselves on par with all other information sources; in a word, they lose their credibility. The lack of credible sources of information leads to a myriad of problems for society, such as distrust in the government, civil unrest, or other actions people take based on the information they receive. Under such circumstances, the ”echo chamber” problem is fortified: individuals feel free to select their sources according to their own beliefs instead of facts. After all, if all information is biased, what does it matter which source you choose to believe?
How to measure media bias?
While it may not be difficult to define media bias at a general level, it may be difficult to observe an instance of bias in a unanimously acceptable way. That is where commonly accepted measures could be of some help. To come up with such measures, we can start by defining the information elements that can be retrieved for objectivity analysis. Then, we should consider how they can best be analyzed to determine whether a particular media outlet is biased.
In other words, what information do we have? Well, we can observe two sources: 1) the media itself, and 2) all other empirical observations (e.g., events taking place). Notice that observing the world only through the media would give an inaccurate testimony of human behavior; we draw a lot from our own experiences and surroundings. By observing the stories created by the media, we know what is being reported and what is not. By observing things around us (apart from the media), we know what is happening and what is not. By combining these dimensions, we can derive:
- what is being reported (and happens)
- what is being reported (but does not happen)
- what is not being reported (but happens), and
- what is not being reported (but does not happen).
Numbers 2 and 4 are not deemed relevant for this inquiry, but 1 and 3 are: namely, the choice of information, i.e., what is being reported and what is being left out of reporting. Hence, this is the first dimension of our measurement framework.
1. Choice of information
- topic inclusion — what topics are reported (themes –> identify, classify, count)
- topic exclusion — what topics are not reported (reference –> define, classify, count)
- story inclusion — what is included in the reportage (themes –> identify, classify, count)
- story exclusion — what is left out of the reportage (reference –> define, classify, count)
- story frequency — how many times a story is repeated (count)
This dimension measures what is being talked about in the media. It measures inclusion, exclusion, and frequency to determine what information the media disseminates. The two levels are topics and stories — both have themes that can be identified; material is then classified into them and counted to get an understanding of the coverage. Measuring exclusion works in the same way, except the analyst needs a frame of reference against which the found themes can be compared. For example, if the frame of reference contains ”Education” and the topics found in the material do not include education, then it can be concluded that the media at the period of sampling did not cover education. Besides themes, the reference can include polarity, so one can examine whether opposing views are given equal coverage. Finally, the frequency of stories measures the media’s emphasis, reflecting the choice of information.
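The identify–classify–count procedure for topic inclusion and exclusion can be sketched in a few lines of code. This is a minimal illustration, not a full content-analysis tool; the topic labels and the reference frame below are hypothetical examples.

```python
from collections import Counter

def coverage_report(story_topics, reference_frame):
    """Compare observed topic coverage against an analyst-defined reference frame.

    story_topics: one topic label per classified story (identify + classify step).
    reference_frame: set of topics the analyst expects coverage of.
    Returns (inclusion counts, excluded topics) -- the count step.
    """
    counts = Counter(story_topics)
    included = {t: counts[t] for t in reference_frame if counts[t] > 0}
    excluded = sorted(t for t in reference_frame if counts[t] == 0)
    return included, excluded

# Hypothetical sample: topics assigned to stories during the sampling period
stories = ["Economy", "Crime", "Economy", "Elections", "Crime", "Economy"]
frame = {"Economy", "Crime", "Elections", "Education"}

included, excluded = coverage_report(stories, frame)
print(included)   # frequencies reveal emphasis (choice of information)
print(excluded)   # ["Education"] -- a topic in the frame that went unreported
```

The same pattern applies at the story level; only the unit of classification changes.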
Because all information is selected from a close-to-infinite pool of potential stories, one could argue that all reportage is inherently biased. Indeed, there may not be universal criteria that would justify reporting Topic A over Topic B. However, measurement helps form a clearer picture of a) what the media as a whole is reporting, and b) what each individual media outlet reports in comparison to others. A member of the audience is then better informed about which themes the media has chosen to report. This type of helicopter view can enhance the ability to detect a biased information choice, either by a particular media outlet or by the media as a whole.
The question of information choice is pertinent to media bias, especially relating to the exclusion of information. A biased reporter can defend himself by arguing, ”If I’m biased, show me where!” But bias is not the same as inaccuracy: a biased story can still be accurate; it may merely leave some critical information out. The emphasis of a certain piece of information at the expense of others is a clear form of bias. Because not every piece of information can be included in a story, something is necessarily left out. Therefore, there is a temptation to favor a certain storyline. However, this concern can be neutralized by introducing balance: for a given topic, let there be an equal effort to exhibit positive and negative evidence; and in terms of exclusion, discard an equal amount of information from both extremes, if need be.
In addition to measuring what is being reported, we also need to consider how it is being reported. This is the second dimension of the measurement framework, dealing with the formulation of information.
2. Formulation of information
- IN INTERVIEWS: question formulation — are the questions reporters are asking neutral or biased in terms of substance (identify, classify, count)
- IN REPORTS: message formulation — are the paragraphs/sentences in reportage neutral or biased in terms of substance (classify, count)
- IN INTERVIEWS: tone — is the tone in which reporters ask their questions neutral or biased (classify, count)
- IN REPORTS: tone — are the paragraphs/sentences in reportage neutral or biased in terms of tone (classify, count)
- loaded headlines (identify, count)
- loaded vocabulary (identify, count)
- general sentiment towards key objects (identify, classify: pos/neg/neutral)
This dimension measures how the media reports on the topics it has chosen. It is a form of content analysis, involving qualitative and quantitative features. The measures cover interview-type settings as well as various reportages, such as newspaper articles and television coverage. The content can be broken down into pieces (questions, paragraphs, sentences) and their objectivity evaluated based on both substance and tone. An example of bias in substance would be presenting an opinion as a fact, or taking a piece of information out of context. An example of biased tone would be using negative or positive adjectives in relation to select objects (e.g., presidential candidates).
Presenting loaded headlines and text as percentage of total observations gives an indication of how biased the content is. In addition, the analyst can evaluate the general sentiment the reportage portrays of key objects — this includes first identifying the key objects of the story, and then classifying their treatment on a three-fold scale (positive, negative, neutral).
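The two quantitative steps just described — the share of loaded units and the per-object sentiment tally — are simple to compute once a human coder has labeled the material. A minimal sketch, with hypothetical labels and object names:

```python
def bias_share(labels):
    """Share of content units (headlines, sentences) coded as loaded/biased."""
    loaded = sum(1 for label in labels if label == "biased")
    return loaded / len(labels)

def sentiment_tally(annotations):
    """Tally pos/neg/neutral treatment per key object of the story.

    annotations: list of (key_object, sentiment) pairs from the coder.
    """
    tally = {}
    for obj, sentiment in annotations:
        tally.setdefault(obj, {"positive": 0, "negative": 0, "neutral": 0})
        tally[obj][sentiment] += 1
    return tally

# Hypothetical codings produced by a human analyst
headline_labels = ["neutral", "biased", "neutral", "neutral"]
annots = [("Candidate A", "negative"), ("Candidate A", "negative"),
          ("Candidate B", "positive")]

print(bias_share(headline_labels))   # 0.25 -- one loaded headline out of four
print(sentiment_tally(annots))       # asymmetric treatment of the two candidates
```

The hard part, of course, is the labeling itself; the arithmetic only summarizes it.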
I mentioned earlier that agreeing on the observation of bias is an issue. This is due to the interpretative nature of these measures; i.e., they involve a degree of subjectivity, which is generally not considered a good characteristic for a measure. Counting frequencies (e.g., how often a word was mentioned) is not susceptible to interpretation, but judging the tone of the reporter is. Yet those are the kinds of cues that reveal a bias, so they should be incorporated in the measurement framework. Perhaps we can draw an analogy to any form of research here: it is always up to the integrity of the analyst to draw conclusions.
Even studies that are said to have high reliability by design can be reported in a biased way, e.g., by reframing the original hypotheses. Ultimately, the application of measurement in social sciences rests on the shoulders of the researcher. Any well-trained, committed researcher is more likely to follow the guideline of objectivity than not, but of course this cannot be guaranteed. The explication of method application should reveal to an outsider the degree of trustworthiness of the study, although the evaluation requires a degree of sophistication. Finally, using several analysts reduces individual bias in interpreting content; inter-rater agreement can then be calculated with Cohen’s kappa or similar metrics.
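For completeness, Cohen’s kappa corrects the raw agreement rate between two coders for the agreement expected by chance. A self-contained sketch, with ten hypothetical sentence codings (N = neutral, B = biased):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items with nominal codes."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: share of items both raters coded identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label probabilities
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum((ca[label] / n) * (cb[label] / n) for label in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)

# Hypothetical codings of ten sentences by two analysts
a = list("NNBBNNBNNB")
b = list("NNBBNBBNNN")
print(round(cohens_kappa(a, b), 2))  # 0.58 -- moderate agreement beyond chance
```

Values near 1 indicate strong agreement; values near 0 mean the coders agree no more than chance would predict, suggesting the coding scheme needs sharpening.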
After assessing the objectivity of the content, we turn to the source. Measurement of source credibility is important in both validating prior findings as well as understanding why the (potential) bias takes place.
3. Source credibility
- individual political views (identify)
- organizational political affiliation (identify)
- reputation (sample)
This dimension measures why the media outlet reports the way it does. If individual and organizational affiliations are not made clear in the reportage, the analyst needs to work to discover them. In addition, the audience has formed a perception of bias based on historical exposure to the media outlet; running a properly sampled survey can provide supporting information for the conclusions of the objectivity study.
How to prevent media bias?
The work of journalists is sometimes compared to that of scientists: in both professions, one needs curiosity, criticality, the ability to observe, and objectivity. However, whereas scientists mostly report dull findings, reporters are much more pressured to write sexy, entertaining stories. This leads to the problem of sense-making: reporters create a coherent story with a clear message instead of showing the messy reality. The sense-making bias in itself favors media bias, because creating a narrative forces one to be selective about what to include and what to exclude. As long as there is this desire for simple narratives, coverage of complex topics cannot be entirely objective. We may, however, mitigate this effect by upholding certain principles.
I suggest three principles for the media to uphold in their coverage of topics.
First, the media should have a critical stance toward its object of reportage. Instead of accepting the pieces of information they receive as truth, journalists should push to ask hard questions. But that should be done in a balanced way – for example, in a presidential race, both candidates should get an equal amount of ”tough” questions. Furthermore, journalists should not absorb any ”truths”, beliefs, or presumptions that affect their treatment of a topic. Since every journalist is a human being, this requirement is quite an idealistic one; but the effect of personal preferences, or those imposed by the social environment, should in any case be mitigated. The goal of objectivity should be cherished, even if the outcome is in conflict with one’s personal beliefs. Finally, the media should be independent: both in that it is not being dictated to by any interest group, public or private, on what to report, and in that it is not expressing or committing to a political affiliation. Much like church and state are kept separate according to Locke’s social contract as well as Jefferson’s constitutional ideas, the press and the state should be separated. This rule should apply to both publicly and privately funded media outlets.
The status of the media is precious. They have enormous power over the opinions of citizens. However, this is conditional power; should they lose objectivity, they would also lose their influence, as people turn to alternative sources of information. I have argued that a major root cause of the problem is the media’s inability to detect its own bias. Through better detection and measurement of bias, corrective action can be taken. But since those corrective actions are conditional on a willingness to be objective — a willingness many media outlets are not signalling — the measurement in itself is not adequate to solve the larger problem. At a larger scale, I have proposed a separation of media and politics, which would prevent by law any media outlet from taking a political side. Such legislation is likely to increase objectivity and decrease the harmful polarization that the current partisan-based media environment constantly feeds into.
Overall, there should be some serious discussion about what the role of the media in society should be. Attention should also be paid to journalistic education and the upholding of journalistic ethics. If the industry is not able to monitor itself, it is upon society to introduce such regulation that the media will not abuse its power but remains objective. I have suggested that the media and related stakeholders provide information on potential bias. I have also suggested new measures for bias that consider both the inclusion and exclusion of information. The measurement of inclusion can be done by analyzing news stories for common keywords and themes. If the analyst has an a priori framework of topics/themes/stories he or she considers as a reference, it can then be concluded how well the media covers those themes by classifying the material accordingly. Such analysis would also reveal what is not being reported, an important distinction that is often not taken into account.
Introduction. Here’s an argument: Most online disputes can be traced back to differences of premises. I’m observing this time and time again: two people disagree but fail to see why that is, although to an outsider it seems evident. Each party believes they are right, and so they keep on debating; it’s a never-ending cycle, and that’s why online debates go on and on to ridiculous proportions. The media does its best to aggravate the problem, not to solve it. In a similar vein, newsfeed algorithms are likely to feed into polarization. I propose here that identifying the fundamental difference in their premises could end any debate sooner rather than later, and therefore save the participants valuable time and energy, and save society and individuals from much chagrin.
Why does it matter? Due to the commonness of this phenomenon, its solution is actually a societal priority: we, as a society, need to teach especially young people how to debate meaningfully, so that they can efficiently reach a mutual agreement — either by one of the parties adopting the other one’s argument (the ”Gandhi principle”) or by quickly identifying the fundamental disagreement in premises — so that the debate does not go on for an unnecessarily long period, wasting time and causing socially adverse collateral damage (e.g., group polarization, increasing hate and bad feelings).
In practice, these fine principles seem to materialize rarely. For example, changing one’s point of view and ”admitting defeat” is extremely rare if you look at online debates. People almost invariably stick to their original point of view rather than ”caving in,” as it is falsely perceived. A good debater should give credit where it is due: the point is to reach a common agreement, or an agreement to disagree, not to win. The false premise of winning is the root cause of most adverse debates we see taking place online.
While there may be several reasons for not seeing the other party’s points of view, including stubbornness, one authentic source of disagreement is a fundamental difference in premises. Simply put, people just have different values and worldviews, and they fail either to recognize that or to respect this state of reality.
What does that mean? Simply put, people have different premises, emerging from different worldviews and experiences. Given this assumption, every skilled debater should recognize the existence of fundamental difference when in disagreement – they should consider, ”okay, where is the other guy coming from?”, i.e. what are his premises? And through that process, present the fundamental difference and thus close the debate.
My point is simple: When tracing the argument back to the premises, for each conflict we can reveal a fundamental disagreement at the premise level.
The good news is that this gives us a route to reconciliation (and food for thought for each party, possibly leading to the Gandhi outcome of adopting the opposing view when it is judged more credible). When we know there is a fundamental disagreement, we can work together to find it, and consider finding it as the end point of the debate. Debating therefore becomes collaboration, not competition: a task not of proving yourself right, but of discovering the root cause of the disagreement. I believe this is a more effective method for ending debates than the current ones, which result in a lot of unnecessarily wasted time and effort. In addition, the recognition of a fundamental difference is immune to loss of face, stubbornness, and other socio-psychological conditions that prevent reconciliation (because it does not require an admission of defeat, but is based on an agreement to disagree). Finally, after recognizing the source of the fundamental disagreement, we can use facts to evaluate which premise is likely to be more correct (although facts and statistics have their own problems, too, e.g., cherry-picking).
The bad news is that oftentimes the premises are either 1) very difficult to change, because they are so fundamentally part of one’s beliefs that the individual refuses to alter them even after being shown wrong by evidence, or 2) we don’t know how we should change them, because there might not be ”better” premises at all, just different ones. For example, is it more wrong or right to say ”We should help people in Africa” or ”We should take care of our own citizens first”? There seems to be no factual ground for prioritizing such statements. Now, of course this argument is itself based on a premise, that of relativity. Alternatively, we could say that some premises are better than others, e.g., given a desirable outcome — but that would be a debate of value subjectivity vs. universality, and as such leads to a circular debate (which is precisely what we do not want), because both fundamental premises coexist.
In many practical political decision-making issues the same applies – nobody, not even the so-called experts, can argue with certainty for the best scenario or predict the outcomes with a high degree of confidence (for example, economists have been known to be wrong many times and to contradict one another according to their different premises about how the economy works). This leads to the problem of ”many truths,” which can be crippling for decision-making and for the perception of togetherness in a society. But in a situation like that, it is ever more critical to identify the fundamental differences in premises; that kind of transparency enables dispassionate evaluation of their merits and weaknesses, and at the same time of the other party’s thinking process. In a word, it is important for understanding your own thinking (following the old Socratic thought of ’knowing thyself’) and for understanding the thinking of others.
The hazard of identifying fundamental premise differences is, of course, that it leads to a ”null result” (nobody wins). Simply put, we admit that there is a difference and perhaps logically draw the conclusion that neither is right, or that each retains the belief of being right (while understanding the logic of the other party). In an otherwise non-reconcilable scenario, this would seem like a decent compromise, but it is also prohibitive if and when participants perceive the debate as competition. Instead, it should be perceived as co-creation of sorts: working together in a systematic way to exhaust each other’s arguments and thus derive the fundamental difference in premises.
Conclusion. In this post-modern era where 1) values and worldviews are more fragmented than ever, and 2) online discussions are commonplace thanks to social media, the number of argumentation conflicts is inherently very high. In fact, conflict is more likely than agreement due to the high degree of diversity, a post-modern trait of society. People naturally have different premises, emerging from idiosyncratic worldviews, values, and personal experiences, and therefore the emergence of conflicting arguments can be seen as the new norm in high-frequency communication environments such as social networks. People alleviate this effect by grouping with like-minded individuals to get psychological rewards and certainty in the era of uncertainty, which may lead them to assume more extreme positions than they would otherwise (i.e., the polarization effect).
Education in argumentation theory, logic (both in theory and practice), and empathy is crucial to start solving this condition of disagreement, which I think is of a permanent nature. Earlier I used the term ”skilled debater.” Indeed, debating is a skill. It’s a crucial skill for every citizen. Societies (and social networks) do wrong by giving people a voice but not teaching them how to use it. Debating skills are not inherent traits people are born with – they are learned skills. While some people are self-taught, it cannot rationally be assumed that the majority of people would learn these skills by themselves. Rather, they need to be educated, in schools at all levels. For example, most university programs in Finland do not teach debating skills in the sense I’m describing here – yet they proclaim to instill critical thinking in their students. While algorithms can help in feeding people balanced content, the issue of critical thinking is not a technological but a social problem. The effort to solve it with social solutions is currently inadequate – the schooling system needs to step up and make the issue a priority. Otherwise we face another decade of more and more ignorance taking over online discussions, making everybody’s life miserable in the process.
More writings in English.
Earlier, I had a brief exchange of tweets with @jonathanstray about algorithms. It started from his tweet:
Perhaps the biggest technical problem in making fair algorithms is this: if they are designed to learn what humans do, they will.
To which I replied:
Yes, and that’s why learning is not the way to go. ”Fair” should not be goal, is inherently subjective. ”Objective” is better
Then he wrote:
lots of things that are really important to society are in no way objective, though. Really the only exception is prediction.
And I wrote:
True, but I think algorithms should be as neutral (objective) as possible. They should be decision aids for humans.
And he answered:
what does ”neutral” mean though?
After which I decided to write a post about it, since the idea is challenging to explain in 140 characters.
So, what is a neutral algorithm? I would define it like this:
A neutral algorithm is a decision-making program whose operating principles are minimally influenced by the values or opinions of its creators.
An example of a neutral algorithm is a standard ad optimization algorithm: it gets to decide whether to show Ad1, Ad2, or Ad3. As opposed to asking designers or corporate management which ad to display, it makes the decision based on objective measures, such as click-through rate (CTR).
The treatment that all ads (read: content, users) get is fair – they are diffused based on their merits (measured objectively by an unambiguous metric), not based on favoritism of any sort.
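To make this concrete, here is a minimal sketch of such merit-based selection, assuming CTR is the sole metric. A small exploration step is included so that new ads still get exposure; the ad names and numbers are illustrative, not from any real system.

```python
import random

def pick_ad(stats, epsilon=0.1, rng=random):
    """Select an ad by observed CTR (clicks / impressions).

    stats: dict mapping ad_id -> (clicks, impressions).
    With probability epsilon, explore a random ad; otherwise exploit the
    best-performing one. No editorial opinion enters the decision.
    """
    if rng.random() < epsilon:
        return rng.choice(list(stats))          # exploration: every ad gets a chance

    def ctr(ad):
        clicks, impressions = stats[ad]
        return clicks / impressions if impressions else 0.0

    return max(stats, key=ctr)                  # exploitation: highest observed CTR wins

# Illustrative performance data
stats = {"Ad1": (30, 1000), "Ad2": (55, 1000), "Ad3": (12, 1000)}
print(pick_ad(stats, epsilon=0.0))  # with no exploration: "Ad2" (CTR 5.5%)
```

The point of the sketch is that the only inputs are behavioral measurements; swapping CTR for engagement rate or another unambiguous metric changes nothing structurally.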
The roots of algorithm neutrality stem from freedom of speech and net neutrality. No outsiders can impose their values and opinions (e.g., by censoring politically sensitive content) or interfere with the operating principles of the algorithm. Instead of being influenced by external manipulation, the decision making of the algorithm is as value-free (neutral) as possible. For example, in the case of social media, it chooses to display information that accurately reflects the sentiment and opinions of the people at a particular point in time.
Now, I grant there are issues with ”freedom”, some of which are considerable. For example, 1) for media, CTR incentives lead to clickbaiting (alternative goal metrics should be considered); 2) for politicians and the electorate, facts can be overshadowed by misinformation and short videos taken out of context to give a false impression of individuals; and 3) for regular users, harmful misinformation can spread as a consequence of neutrality (e.g., anti-vaccination propaganda). But these are ”true” social issues that the algorithm is accurately reflecting. If we want more ”just” outcomes, we will actually need to make neutral algorithms biased. Among other questions, this leads to the problem space of positive discrimination. It is also valid to ask: Who determines what is just?
A natural limitation to machine decisions, and an answer to the previous question, is legislation – illegal content should be kept out by the algorithm. In this sense, the neutral algorithm needs to adhere to a larger institutional and regulatory context, but given that the laws themselves are ”fair” this should impose no fundamental threat to the objective of neutral algorithms: free decision-making and, consequently, freedom of speech. I wrote a separate post about the neutrality dilemma.
In spite of the aforementioned issues, with a neutral algorithm each media outlet/candidate/user has a level playing field. In time, they must learn to use it to argue in a way that merits the diffusion of their message.
The rest is up to humans – educated people respond to smart content, whereas ignorant people respond to and spread nonsense. A neutral algorithm cannot influence this; it can only honestly display the state of ignorance/sophistication in a society. A good example is Microsoft’s infamous bot Tay, a machine learning experiment gone bad. The alarming thing about the bot is not that ”machines are evil”, but that *humans are evil*; the machine merely reflects that. Hence my original point about curbing human evil by keeping algorithms as free of human values as possible.
Perhaps in the future an algorithm could, figuratively speaking, save us from ourselves, but at the moment that act requires conscious effort from us humans. We need to make critical decisions based on our own judgment, instead of outsourcing ethically difficult choices to algorithms. Just as there is separation of church and state, there should be separation of humans and algorithms to the greatest possible extent.
 Initially, I considered a definition that would say ”not influenced”, but it is not safe to assume that the subjectivity of its creators would not in some way be reflected in the algorithm. ”Minimally” instead leads to the normative argument that this subjectivity should be mitigated.
 Wikipedia (2016): ”Net neutrality (…) is the principle that Internet service providers and governments should treat all data on the Internet the same, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication.”
 A part of the story is that Tay was trolled heavily and therefore assumed a derogatory way of speech.
Human-machine interaction on social platforms relies on human input. That input is driven by human bias. One form of bias is based on what I call ”belief systems”. Understanding different belief systems, e.g., through more advanced linguistic analysis, can yield solutions to various social problems on the Internet, such as online disputes and the formation of echo chambers.
1. Assumptions. People are driven by beliefs and assumptions. We all make assumptions and use simplified thinking to cope with the complexities of daily life. These include stereotypes, heuristic decision-making, and the many forms of cognitive bias we’re all subject to. Because the information individuals have is inherently limited, as are their cognitive capabilities, our rational thinking is naturally bounded (Simon, 1956).
2. Definitions. I want to talk about what I call ”belief systems”. They can be defined as a form of shared thinking by a community or a niche of people. Some general characterizations follow. First, belief systems are characterized by common language (vocabulary) and shared way of thinking. Sociologists could define them as communities or sub-cultures, but I’m not using that term because it is usually associated with shared norms and values which do not matter in the context I refer to in this post.
3. Pros and cons. The main advantage of belief systems is efficient communication: because all members share the belief system, they are privy to the meaning of specific terms and concepts. The main disadvantage of belief systems is so-called tunnel vision, which restricts members who adopt a belief system from seeking or accepting alternative ways of thinking. Both the main advantage and the main disadvantage result from the same principle: the necessity of simplicity. What I mean is that if a belief system is not parsimonious enough, it is not effective for communication but might escape tunnel vision (and vice versa).
4. Diffusion of beliefs. For a belief system to spread, it is subject to the laws of network diffusion (Katz & Shapiro, 1985). The more people have adopted a belief system, the more valuable it becomes for an individual user. This encourages further adoption in a form of virtuous cycle. Simplicity enhances diffusion – a complex system is most likely not adopted by a critical mass of people. ”Critical mass” refers here to the number of people sharing the belief system needed for additional members to adopt it. This may not be any single number, since the utility functions governing adoption are not uniformly distributed among individuals; still, there is an underlying assumption that belief systems are social by nature. If not enough people adopt a belief system, it is not remarkable enough to drive human action at a meaningful scale.
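The critical-mass dynamic can be illustrated with a threshold model of diffusion (in the spirit of Granovetter’s classic formulation): each person adopts the belief system once the current share of adopters reaches their personal threshold. A toy sketch with hypothetical thresholds:

```python
def simulate_adoption(thresholds, seed_adopters):
    """Threshold diffusion: person i adopts once the share of adopters
    reaches thresholds[i]. Returns the final share of adopters."""
    n = len(thresholds)
    adopted = set(seed_adopters)
    changed = True
    while changed:
        changed = False
        for i, t in enumerate(thresholds):
            if i not in adopted and len(adopted) / n >= t:
                adopted.add(i)
                changed = True
    return len(adopted) / n

# A smooth ladder of thresholds lets each adoption trigger the next: full cascade
ladder = [i / 10 for i in range(10)]
print(simulate_adoption(ladder, seed_adopters=[]))   # 1.0

# A gap in the thresholds stalls the cascade below critical mass
gapped = [0.0, 0.3, 0.3, 0.3]
print(simulate_adoption(gapped, seed_adopters=[]))   # 0.25
```

The second run shows the point made above: if early adoption never reaches the threshold of the next group, the belief system fails to become remarkable enough to drive action at scale.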
5. Understanding. Belief systems are intangible and unobservable by any direct means, but they are ”real” in the social sense of the word. They are social objects or constructs that can be scrutinized by using proxies that reflect their existence. The best proxy for this purpose is language. Thus, belief systems can be understood by analyzing language. Language reveals how people think. The use of language (e.g., professional slang) reveals the underlying shared assumptions of members adhering to a belief system. An objective examiner would be able to observe and record the members’ use of language, and construct a map of the key concepts and vocabulary, along with their interrelations and underlying assumptions. Through this procedure, any belief system could be dissected into its fundamental constituents, after which its merits and potential discords (e.g., biases) could be objectively discussed.
For example, startup enthusiasts talk about ”customer development” and ”getting out of the building” as a new, revolutionary way of replacing market research, whereas marketing researchers might see little novelty in these concepts and actually be able to list those and many more market research techniques that would potentially yield a better outcome.
6. Performance. By objective means, a certain belief system might not be superior to another, either in being adopted or in performing better. In practice, a belief system can yield high performance rewards due to 1) additional efficiency in communication, 2) the randomness of it working better than other competing solutions, or 3) its heuristic properties that, e.g., enhance decision-making speed and/or accuracy. Therefore, belief systems need not be theoretically optimal solutions to yield a practically useful outcome.
7. Changing a belief system. Moreover, belief systems are often unconscious. Consider the capitalist belief system, or the socialist belief system; both drive the thinking of individuals to an enormous extent. Once a belief system is adopted, it is difficult to unlearn. Getting rid of a belief system requires considerable cognitive effort, a sort of re-programming. An individual needs to be aware of the properties and assumptions of his belief system, and then want to change them, e.g., by looking for counter-evidence. It is a psychological process equivalent to learning or ”unlearning”.
8. Conclusion. People operate based on belief systems. Belief systems can be understood by analyzing language. Language reveals how people think. The use of language (e.g., professional slang) reveals the underlying shared assumptions of a belief system. Belief systems produce efficiency gains for communication but simultaneously hinder consideration of possibly better alternatives. Because a belief system needs to be simple enough to be useful, people readily absorb it and do not question its assumptions thereafter. Changing belief systems is possible but requires active effort over a period of time.
So, I read this article: Facebook is prioritizing my family and friends – but am I?
The point of the article — that you should focus on your friends & family in real life instead of Facebook — is poignant and topical. So much of our lives is spent on social media without the ”social” part, and even when it is there, something is missing in comparison to physical presence (without smartphones!).
Anyway, this post is not about that. It got me thinking about the article from the algorithm neutrality perspective. So what does that mean?
Algorithm neutrality takes place when social networks allow content to spread freely based on its merits (e.g., CTR, engagement rate), so that the most popular content gets the most dissemination. In other words, the network imposes no media bias. Although the content being spread might itself carry a media bias, the social network stays objective and accounts only for the content’s quantifiable merits.
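To make the idea concrete, here is a minimal sketch of such ”neutral” ranking: content ordered purely by a quantifiable merit (here, engagement rate), with no editorial weighting. The post data, field names and metric are invented for illustration; this is not any network’s actual algorithm.

```python
# Hypothetical sketch of a "neutral" feed: rank purely by quantifiable merit.

def engagement_rate(post):
    """Engagements (likes + shares + comments) per impression."""
    interactions = post["likes"] + post["shares"] + post["comments"]
    return interactions / post["impressions"]

def neutral_feed(posts):
    """Order posts by merit alone; the network adds no bias of its own."""
    return sorted(posts, key=engagement_rate, reverse=True)

posts = [
    {"id": "a", "likes": 10, "shares": 2,  "comments": 3,  "impressions": 100},
    {"id": "b", "likes": 50, "shares": 20, "comments": 10, "impressions": 400},
    {"id": "c", "likes": 5,  "shares": 0,  "comments": 1,  "impressions": 300},
]

print([p["id"] for p in neutral_feed(posts)])  # most "meritorious" first
```

Note that the ranking is indifferent to what the content says: a popular falsehood and a popular fact score identically, which is exactly the dilemma discussed below.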
Why does this matter? Well, a neutral algorithm guarantees manipulation-free dissemination of information. As soon as human judgment intervenes, there is a bias. That bias may lead to censorship and the favoring of a certain political party, for example. The effect can be clearly seen in so-called media bias. Anyone following either the political coverage of the US elections or the Brexit coverage has noticed the immense media bias that is omnipresent even in esteemed publications like the Economist and the Washington Post. Indeed, they take a stance and report based on that stance, instead of covering events objectively. A politically biased media like the one in the US is not much better than the politically biased media in Russia.
It is clear that free channels of expression enable the proliferation of alternative views, whereupon an individual is (theoretically) better off, since there are more data points to base his/her opinion on. Thus, social networks (again, theoretically) mitigate media bias.
There are many issues though. First is the one that I call neutrality dilemma.
The neutrality dilemma arises from what I already mentioned: the information bias can be embedded in the content people share. If the network restricts the information dissemination, it moves from neutrality to control. If it doesn’t restrict information dissemination, there is a risk of propagation of harmful misinformation, or propaganda. Therefore, in this continuum of control and freedom there is a trade-off that the social networks constantly need to address in their algorithms and community policies. For example, Facebook is banning some content, such as violent extremism. They are also collaborating with local governments which can ask for removal of certain content. This can be viewed in their transparency report.
The dilemma has multiple dimensions.
First of all, there are ethical issues. From the perspective of ”what is right”, shouldn’t the network prohibit the diffusion of information when it is counter-factual? Otherwise, people can be misled by false stories. But also from the perspective of what is right, shouldn’t there be free expression, even if a piece of information is not validated?
Second, there are some technical challenges:
A. How to identify the ”truthfulness” of content? In many cases this is seemingly impossible, because the issues are complex and not factual to begin with. Consider e.g. Brexit: it is not a fact that the leave vote would lead to a worse situation than the stay vote, or vice versa. In a similar vein, it is not a fact that the EU should be kept together. These are questions of assumptions, which makes them hard: people freely choose the assumptions they want to believe, and there can be no objective validation of this sort of complex social problem.
B. How to classify political/argumentative views and relate them to one another? There are different points of view, like ”pro-Brexit” and ”anti-Brexit”. The social network algorithm should detect an individual’s membership in a given group based on their behavior: the messages they post, and the content they like, share and comment on. It should be fairly easy to form a view of a person’s stance on a given topic with the help of these parameters. Then, it is crucial to map the stances in relation to one another, so that the extremes can be identified.
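The behavioral signals above could be combined into a stance estimate roughly as follows. This is only a sketch under strong assumptions: the content stance labels, the action weights, and the single pro/anti axis are all invented for illustration; a real system would have to learn them from data.

```python
# Toy stance inference from engagement behavior (likes, comments, shares).

# Assume each piece of content is pre-labeled on a -1..+1 axis,
# e.g. -1 = "anti-Brexit", +1 = "pro-Brexit" (hypothetical labels).
CONTENT_STANCE = {"post1": 0.8, "post2": -0.5, "post3": 0.9, "post4": -0.2}

# Assumption: heavier actions signal stronger endorsement.
ACTION_WEIGHT = {"like": 1.0, "comment": 1.5, "share": 2.0}

def user_stance(actions):
    """Weighted average stance of the content a user engaged with."""
    total, weight = 0.0, 0.0
    for content_id, action in actions:
        w = ACTION_WEIGHT[action]
        total += w * CONTENT_STANCE[content_id]
        weight += w
    return total / weight if weight else 0.0

actions = [("post1", "share"), ("post3", "like"), ("post2", "comment")]
print(round(user_stance(actions), 2))  # positive = leans "pro" on this axis
```

With stances expressed on a common axis like this, mapping them in relation to one another, and spotting the extremes, reduces to comparing scores.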
As it currently stands, one is shown the content he/she prefers, which confirms the already established opinion. This does not support learning or getting an objective view of the matter: instead, it reinforces a biased worldview and indeed exacerbates the problem. It is crucial to remember that opinions do not remain mere opinions but translate into behavior: what is socially established becomes physically established through people’s actions in the real world. Therefore, the power of social networks needs to be treated with caution.
C. How to identify the quality of argumentation? Quality of argumentation is important when applying the rotation of alternative views intended to mitigate reinforcement of bias. This is because the counter-arguments need to be solid: in fact, when making a decision, the pro and contra sides both need to be well argued for an objective decision to emerge. Machine learning could be the solution — assuming we have training data on the ”proper” structure of solid argumentation, we can compare this archetype to any text and assign it a score based on how good the argumentation is. Such a method does not consider the content of the argument, only its logical value. It would include a way to detect known argumentation errors based on the syntax used. In fact, such a system is not unimaginably hard to achieve, since common argumentation errors, or logical fallacies, are well documented.
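As a crude illustration of the syntax-based idea, one could pattern-match known fallacy cues and discount an argument’s score per hit. The cue patterns, base score and penalty below are invented assumptions, not a validated method; a real classifier would be trained rather than hand-written.

```python
# Toy fallacy detector: penalize an argument for each matched fallacy cue.
import re

# A few crude cue patterns for well-documented fallacies (illustrative only).
FALLACY_PATTERNS = {
    "ad hominem": r"\byou are (an idiot|stupid|a liar)\b",
    "bandwagon":  r"\beveryone (knows|agrees|believes)\b",
}

def argument_score(text, base=1.0, penalty=0.25):
    """Start from a base score; subtract a penalty per detected fallacy."""
    found = [name for name, pattern in FALLACY_PATTERNS.items()
             if re.search(pattern, text.lower())]
    return max(0.0, base - penalty * len(found)), found

score, found = argument_score(
    "Everyone knows the vote was wrong, and you are an idiot for backing it.")
print(score, found)
```

Note how the scoring matches the text’s intent: it ignores what the argument claims and reacts only to how it argues.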
Another way of detecting the quality of argumentation is user-based reporting: individuals report the posts they don’t like, and these get discounted by the algorithm. However, even when allowing users to report ”low-quality” content, there is a risk that they report content they disagree with rather than content that is poorly argued. In reporting, there is a relativism or subjectivism that cannot be avoided.
Perhaps the most problematic of all are the socio-psychological challenges associated with human nature. A neutral algorithm enforces group polarization by connecting people who agree on a topic. This is a natural outcome of a neutral algorithm, since people by their behavior confirm their liking of content they agree with. This leads to reinforcement, whereupon they are shown more of that type of content. The social effect is known as group polarization – an individual’s original opinion is reinforced by observing other individuals sharing that opinion. That is why so much discussion in social media is polarized: there is a well-known tendency of human nature not to remain objective but to take a stance with one group against another.
How can we curb this effect? A couple of solutions readily come to mind.
1. Rotating opposing views. If in a neutral system you are shown 90% content that confirms your beliefs, rotation should force you to see more than the remaining 10% of alternative content (say, 25%). Technically, this would require that ”opinion archetypes” can be classified and contrasted with one another. Machine learning to the rescue?
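Mechanically, the rotation itself is simple once content is classified as confirming or opposing a user’s inferred stance. Below is a sketch of that mixing step; the 25% share, feed size and item names are illustrative assumptions, and the hard part (the classification) is taken as given.

```python
# Sketch of "rotation": force a fixed share of opposing content into the feed.
import random

def rotated_feed(confirming, opposing, size=8, opposing_share=0.25):
    """Mix two ranked pools so at least `opposing_share` of the feed
    opposes the user's inferred stance."""
    n_opposing = max(1, int(size * opposing_share))
    n_confirming = size - n_opposing
    feed = confirming[:n_confirming] + opposing[:n_opposing]
    random.shuffle(feed)  # interleave, so opposing items aren't buried last
    return feed

confirming = [f"pro_{i}" for i in range(10)]   # items matching the user's view
opposing = [f"contra_{i}" for i in range(10)]  # best-argued opposing items
feed = rotated_feed(confirming, opposing)
print(sum(item.startswith("contra") for item in feed), "of", len(feed), "oppose")
```

The `opposing` pool should be drawn from the best-argued counter-content available, for the reasons discussed next.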
The power of rotation comes from the idea that it simulates social behavior: the more a person is exposed to things that initially seem strange and unlikeable (the dynamic underlying xenophobia), the more likely those things are to be understood. A greater degree of awareness and understanding leads to higher acceptance. In the real world, people who frequently meet people from other cultures are more likely to accept other cultures in general.
Therefore, the same logic could be applied by Facebook in showing us well-argued counter-evidence to our beliefs. It is crucial that the counter-evidence is well argued, or else there is a strong risk of reactance — people rejecting the opposing view even more strongly. Unfortunately, it is a feature of the uneducated mind to remain fixated on one’s beliefs rather than be able to change one’s opinions. So the method is not foolproof, but it is better than what we now have.
2. Automatic fact-checking. Imagine a social network telling you, ”This content might contain false information”. Caution signals may curb the willingness to accept just any information. In fact, it may be more efficient to show misinformation tagged as unreliable rather than hide it — that way, individuals have the possibility to correct their false beliefs. Current approaches, however, rely on expert feedback, which is fallible.
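The tag-don’t-hide idea amounts to a small change in how a post is rendered. In this sketch, the fact-check source is a stand-in lookup table with invented claim IDs and verdicts; producing those verdicts, whether by experts or automation, is of course the genuinely hard part.

```python
# Sketch: attach a caution label to disputed content instead of suppressing it.

# Hypothetical verdicts from some fact-checking source.
FACT_CHECK = {"claim_123": "disputed", "claim_456": "verified"}

def render_post(post):
    """Show the post either way; prepend a warning if its claim is disputed."""
    verdict = FACT_CHECK.get(post["claim_id"])
    if verdict == "disputed":
        return "This content might contain false information.\n" + post["text"]
    return post["text"]

print(render_post({"claim_id": "claim_123", "text": "Sheep are mostly black."}))
```

Because the content still reaches the reader, the warning informs rather than censors, keeping the network closer to the neutral end of the control–freedom continuum.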
3. Research in sociology. I am not educated enough to know the general solutions to group polarization, groupthink and other associated social problems. But I know sociologists have worked on them – this research should be put to use in collaboration with the engineers who design the algorithms.
However, the root causes of the dissemination of misinformation, whether purposefully harmful or due to ignorance, lie not in technology. They are human-based problems and must have human-based solutions.
What are these root causes? Lack of education. Poor quality of educational system. Lack of willingness to study a topic before forming an opinion (i.e., lazy mind). Lack of source/media criticism. Confirmation bias. Groupthink. Group polarization.
Ultimately, these are the root causes of why some content that should not spread, spreads. They are social and psychological traits of human beings, which cannot be altered via algorithmic solutions. However, algorithms can direct behavior into more positive outcomes, or at least avoid the most harmful extremes – if the aforementioned classification problems can be solved.
The other part of the equation is education — kids need to be taught from early on about media and source criticism, logical argumentation, argumentation skills, and respect for the other party in a debate. Indeed, respect and sympathy go a long way — in the current atmosphere of online debating, it seems many have forgotten basic manners.
In the online environment, provocations are easy and escalate more quickly than in face-to-face encounters. It is no more right to make ”fun” of ignorant people – a habit of the so-called intellectuals – than it is to ignore science and facts – a habit of the so-called ignorant.
It is also unfortunate that many of the topics people debate can be traced back to values and worldviews rather than more objective matters. When the values and worldviews of the participants are fundamentally different, it is truly hard to find a middle way. It takes a lot of effort and character to put yourself in the opposing party’s shoes, much more so than rejecting their view point-blank. It takes even more strength to change your opinion once you discover it was the wrong one.
Conclusion and discussion. Avoiding media bias is an essential advantage of social networks in information dissemination. I repeat: it’s a tremendous advantage. People are able to disseminate information and opinions without being controlled by mass-media outlets. At the same time, neutrality imposes new challenges. The most prominent question is to what extent the network should govern its content.
On one hand, user behavior is driving social networks like Facebook towards becoming information-sharing networks – people are seemingly sharing more and more news content and less about their own lives – but Facebook wants to remain a social network, and therefore reduces neutrality in favor of personal content. What are the strategic implications? Will users be happier? Is it right to deviate from algorithm neutrality when you have dominant power over information flow?
Facebook is approaching a sort of information monopoly when it comes to discovery (Google holds the monopoly in information search), and I’d say it is the most powerful global information dissemination medium today. That power comes with responsibility and ethical questions, and hence the algorithm neutrality discussion. The strategic question for Facebook is whether it makes sense to manipulate the natural information flow that user behavior produces in a neutral system. The question for society is whether Facebook news feeds should be regulated.
I am not advocating more regulation, since regulation is never a creative solution to any problem, nor does it tend to be informed by science. Instead, I advocate collaboration between sociologists and social networks in order to identify the best means of filtering harmful misinformation and curbing the generally known negative social tendencies that we humans possess. For sure, this can be done without endangering the free flow of information – the best part of social networks.