This report was created by Joni Salminen and Catherine R. Sloan. Publication date: December 10, 2017.
Artificial intelligence (AI) and machine learning are becoming more influential in society as more decision-making power shifts to algorithms, either directly or indirectly. Because of this, several research organizations and initiatives studying the fairness of AI and machine learning have been started. We decided to conduct a review of these organizations and initiatives.
We proceeded as follows. First, we drafted an initial list from initiatives we were already familiar with, and supplemented it by conducting Google and Bing searches with key phrases relating to machine learning or artificial intelligence and fairness. Overall, we found 25 organizations or initiatives, which we then analyzed in greater detail. For each organization / initiative, we aimed to retrieve at least the following information:
- Name of the organization / initiative
- URL of the organization / initiative
- Founded in (year)
- Short description of the organization / initiative
- Purpose of the organization / initiative
- University or funding partner
Based on the above information, we wrote this brief report. Its purpose is to chart current initiatives around the world relating to the fairness, accountability and transparency of machine learning and AI. At the moment, several stakeholders are engaged in research in this topic area, but it is uncertain how well they are aware of each other and whether there is a sufficient degree of collaboration among them. We hope this list increases mutual awareness and encourages encounters among the initiatives.
In the following, the initiatives are presented in alphabetical order.
AI100: Stanford University’s One Hundred Year Study on Artificial Intelligence
Founded in 2016, this initiative was launched by computer scientist Eric Horvitz and is driven by seven diverse academics focused on the influences of artificial intelligence on people and society. The goal is to anticipate how AI will impact every aspect of how people work, live and play, including automation, national security, psychology, ethics, law, privacy and democracy. AI100 is funded by a gift from Eric and Mary Horvitz.
AI for Good Global Summit
The AI for Good Global Summit was held in Geneva on 7-9 June, 2017 in partnership with a number of United Nations (UN) sister agencies. The Summit aimed to accelerate and advance the development and democratization of AI solutions that can address specific global challenges related to poverty, hunger, health, education, the environment, and other social purposes.
AI Forum New Zealand
The AI Forum was launched in 2017 as a membership-funded association for those with a passion for the opportunities AI can provide. The Forum connects AI tech innovators, investor groups, regulators, researchers, educators, entrepreneurs and the public. Its executive council includes representatives of Microsoft and IBM as well as start-ups and higher education. Currently the Forum is involved with a large-scale research project on the impact of AI on New Zealand’s economy and society.
AI Now Institute
The AI Now Institute at New York University (NYU) was founded by Kate Crawford and Meredith Whittaker in 2017. It’s an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence. Its work focuses on four core domains: 1) Rights & Liberties, 2) Labor & Automation, 3) Bias & Inclusion and 4) Safety & Critical Infrastructure. The Institute’s partners include NYU’s schools of Engineering (Tandon), Business (Stern) and Law, the American Civil Liberties Union (ACLU) and the Partnership on AI.
Algorithms, Automation, and News
AAWS is an international conference focusing on the impact of algorithms on news. Among the studied topics, the call for papers lists 1) concerns around news quality, transparency, and accountability in general; 2) hidden biases built into algorithms deciding what’s newsworthy; 3) the outcomes of information filtering, such as ‘popularism’ (some content is favored over other content), and the transparency and accountability of the decisions made about what the public sees; 4) the privacy of data collected on individuals for the purposes of newsgathering and distribution; 5) the legal issues of libel by algorithm; 6) private information worlds and filter bubbles; and 7) the relationship between algorithms and ‘fake news’. The acceptance rate for the 2018 conference was about 12%. The conference is organized by the Center for Advanced Studies at Ludwig-Maximilians-Universität München (LMU) and supported by the Volkswagen Foundation and the University of Oregon’s School of Journalism and Communication. The organizers aim to release a special issue of Digital Journalism and a book, and one of them (Neil Thurman) is engaged in a research project on ‘Algorithmic News’.
This research project was founded in early 2017 at the University of Turku in Finland as a collaboration between its School of Economics and the university’s BioNLP unit. Three researchers are currently involved: one with a social science background and two with computer science backgrounds. The project studies the societal impact and risks of machine decision-making. It has been funded by the Kone Foundation and the Kaute Foundation.
Center for Democracy and Technology (CDT)
CDT is a non-profit organization headquartered in Washington, D.C. They describe themselves as “a team of experts with deep knowledge of issues pertaining to the internet, privacy, security, technology, and intellectual property. We come from academia, private enterprise, government, and the non-profit worlds to translate complex policy into action.” The organization is currently focused on the following issues: 1) Privacy and data, 2) Free expression, 3) Security and surveillance, 4) European Union, and 5) Internet architecture. In August 2017, CDT launched a digital decisions tool to help engineers and product managers mitigate algorithmic bias in machine decision-making. The tool translates principles for fair and ethical decision-making into a series of questions that can be addressed while designing and deploying an algorithm. The questions address developers’ choices: what data to use to train the algorithm, which features to consider, and how to test the algorithm for potential bias.
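The tool itself is question-based rather than code, but as a rough illustration of the kind of bias check it prompts developers to run, the sketch below compares positive-decision rates across groups (a simple demographic-parity test). The loan decisions and group labels are fabricated for the example.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Positive-decision rate per group (a basic demographic-parity check)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

# Toy data: 1 = loan approved, 0 = denied, tagged by a hypothetical group label.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
print(rates)  # group A approved 75% of the time, group B only 25%
```

A large gap between groups does not prove discrimination, but it is exactly the kind of signal that should trigger the follow-up questions the tool poses about training data and feature choices.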
Data & Society’s Intelligence and Autonomy Initiative
This initiative was founded in 2015 and is based in New York City. It develops grounded qualitative empirical research to provide nuanced understandings of emerging technologies to inform the design, evaluation and regulation of AI-driven systems, while avoiding both utopian and dystopian scenarios. The goal is to engage diverse stakeholders in interdisciplinary discussions to inform structures of AI accountability and governance from the bottom up. I&A is funded by a research grant from the Knight Foundation’s Ethics and Governance of Artificial Intelligence Fund, and was previously supported by grants from the John D. and Catherine T. MacArthur Foundation and Microsoft Research.
Facebook AI Research (FAIR)
Facebook’s research program engages with academics, publications, open source software, and technical conferences and workshops. Its researchers are based in Menlo Park, CA, New York City and Paris, France. Its CommAI project aims to develop new data sets and algorithms to develop and evaluate general purpose artificial agents that rely on a linguistic interface and can quickly adapt to a stream of tasks.
FATE: Fairness, Accountability, Transparency and Ethics in AI (Microsoft)
This internal Microsoft group, launched in 2014, focuses on fairness, accountability, transparency and ethics in AI. Its goal is to develop, via collaborative research projects, computational techniques that are both innovative and ethical, while drawing on the deeper context surrounding these issues from sociology, history and science.
Good AI
Good AI was founded in 2014 as an international group based in Prague, Czech Republic, dedicated to developing AI quickly to help humanity and to understand the universe. Its founding CEO, Marek Rosa, funded the project with $10M. Good AI’s R&D company went public in 2015 and comprises a team of 20 research scientists. In 2017, Good AI participated in global AI conferences in Amsterdam, London and Tokyo, and hosted data science competitions.
Jigsaw
Jigsaw is a technology incubator focusing on geopolitical challenges. It originated from Google Ideas as a “think/do tank” for issues at the interface of technology and geopolitics. One of Jigsaw’s projects is the Perspective API, which uses machine learning to identify abuse and harassment online. Perspective rates comments based on the perceived impact a comment might have on the conversation. Perspective can be used to give real-time feedback to commenters, help moderators sort comments more effectively, or allow readers to find relevant information. The first model of the Perspective API identifies whether a comment is perceived as “toxic” in a discussion.
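As a hedged illustration of how such a service is typically called, the sketch below builds a request body for Perspective's comment-analysis endpoint. The endpoint path and field names follow the public v1alpha1 interface as documented at the time of writing, and YOUR_API_KEY is a placeholder; treat the details as assumptions to be checked against the current documentation.

```python
import json

# Perspective API comment-analysis endpoint (v1alpha1 at the time of writing).
# YOUR_API_KEY is a placeholder for a real Google API key.
API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=YOUR_API_KEY")

def build_request(comment_text):
    """Build the JSON body asking Perspective to score a comment for TOXICITY."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

payload = build_request("You are a wonderful person.")
print(json.dumps(payload))
# POSTing this body to API_URL returns a probability-like toxicity score
# under attributeScores.TOXICITY.summaryScore.value in the response.
```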
IEEE Global Initiative for Ethical Considerations in AI and Autonomous Systems
In 2016, the Institute of Electrical and Electronics Engineers (IEEE) launched a project seeking public input on ethically designed AI. In April 2017, the IEEE hosted a related dinner for the European Parliament in Brussels. In July 2017, it issued a preliminary report entitled Prioritizing Human Well Being in the Age of Artificial Intelligence. IEEE is conducting a consensus-driven standards project for “soft governance” of AI that may produce a “bill of rights” regarding what personal data is “off limits” without the need for regulation. In 2017, it set up 11 active standards groups for interested collaborators to join and was projecting new reports by the end of the year. IEEE has also released a report on Ethically Aligned Design in artificial intelligence, part of an initiative to ensure that ethical principles are considered in systems design.
Internet Society (ISOC)
The ISOC is a non-profit organization founded in 1992 to provide leadership in Internet-related standards, education, access, and policy. It is headquartered in Virginia, USA. The organization published a paper in April 2017 that explains commercial uses of AI technology and provides recommendations for dealing with its management challenges, including 1) transparency, bias and accountability, 2) security and safety, 3) socio-economic impacts and ethics, and 4) new data uses and ecosystems. The recommendations include, among others, adopting ethical standards in the design of AI products and innovation policies, providing explanations to end users about why a specific decision was made, making it simpler to understand how algorithmic decision-making works, and introducing “algorithmic literacy” as a basic skill obtained through education.
Knight Foundation’s Ethics and Governance of Artificial Intelligence Fund https://www.knightfoundation.org/aifund-faq
The AI Fund was founded in January 2017 by the Massachusetts Institute of Technology (MIT) Media Lab, Harvard University’s Berkman Klein Center, the Knight Foundation, Omidyar Network and Reid Hoffman of LinkedIn. It is currently housed at the Miami Foundation in Miami, Florida.
The goal of the AI Fund is to ensure that the development of AI becomes a joint multidisciplinary human endeavor that bridges computer scientists, engineers, social scientists, philosophers, faith leaders, economists, lawyers and policymakers. The aim is to accomplish this by supporting work around the world that advances the development of ethical AI in the public interest, with an emphasis on research and education.
In May 2017, the Berkman Klein Center at Harvard kicked off its collaboration with the MIT Media Lab on their Ethics and Governance of Artificial Intelligence Initiative focused on strategic research, learning and experimentation. Possible avenues of empirical research were discussed, and the outlines of a taxonomy emerged. Topics of this initiative include: use of AI-powered personal assistants, attitudes of youth, impact on news generation, and moderating online hate speech.
Moreover, Harvard’s Ethics and Governance of AI Fund has committed an initial $7.6M in grants to support nine organizations to strengthen the voice of civil society in the development of AI. An excerpt from their post: “Additional projects and activities will address common challenges across these core areas such as the global governance of AI and the ways in which the use of AI may reinforce existing biases, particularly against underserved and underrepresented populations.” Finally, a report of a December 2017 BKC presentation on building AI for an inclusive society has been published and can be accessed from the above link.
MIT-IBM Watson Lab
Founded in September 2017, MIT’s new $240 million center, established in collaboration with IBM, is intended to help advance the field of AI by “developing novel devices and materials to power the latest machine-learning algorithms.” This project overlaps with the Partnership on AI. IBM hopes it will help the company reclaim its reputation in the AI space. In another industry sector, Toyota made a billion-dollar investment in funding for its own AI center, plus research at both MIT and Stanford. The MIT-IBM Lab will be one of the “largest long-term university-industry AI collaborations to date,” mobilizing the talent of more than 100 AI scientists, professors, and students to pursue joint research at IBM’s Research Lab. The lab is co-located with the IBM Watson Health and IBM Security headquarters in Cambridge, MA. The stated goal is to push the boundaries of AI technology in several areas: 1) AI algorithms, 2) the physics of AI, 3) the application of AI to industries, and 4) advancing shared prosperity through AI.
In addition to this collaboration, IBM argues that its Watson platform has been designed to be transparent. David Kenny, who heads Watson, said the following at a press conference: “I believe industry has a responsibility to step up. We all have a right to know how that decision was made [by AI]. It cannot be a black box. We’ve constructed Watson to always be able to show how it came to the inference it came to. That way a human can always make a judgment and make sure there isn’t an inherent bias.”
New Zealand Law Foundation Centre for Law & Policy in Emerging Technologies
Professor Colin Gavaghan of the University of Otago heads a research centre examining the legal, ethical and policy issues around new technologies including artificial intelligence. In 2011, it hosted a forum on the Future of Fairness. The Law Foundation provided an endowment of $1.5M to fund the NZLF Centre and Chair in Emerging Technologies.
Obama White House Report: Preparing for the Future of Artificial Intelligence
The Obama Administration’s report on the future of AI was issued on October 16, 2016 in conjunction with a “White House Frontiers” conference focused on data science, machine learning, automation and robotics in Pittsburgh, PA. It followed a series of initiatives conducted by the White House Office of Science & Technology Policy (OSTP) in 2016. The report contains a snapshot of the state of AI technology and identifies questions that the evolution of AI raises for society and public policy. The topics include improving government operations, adapting regulations for safe automated vehicles, and making sure AI applications are “fair, safe, and governable.” AI’s impact on jobs and the economy was another major focus. A companion paper laid out a strategic plan for federally funded research and development in AI. President Trump has not named a Director for the OSTP, so this plan is not currently being implemented. However, lawmakers in the US are showing further interest in legislation. Rep. John Delaney (D-Md.) said in a press conference in June 2017: “I think transparency [of machine decision making] is obviously really important. I think if the industry doesn’t do enough of it, I think we’ll [need to consider legislation] because I think it really matters to the American people.” These efforts are part of the Congressional AI Caucus launched in May 2017, which is focused on the implications of AI for the tech industry, economy and society overall.
OpenAI
OpenAI is a non-profit artificial intelligence research company in California that aims to develop general AI in such a way as to benefit humanity as a whole. It has received more than 1 billion USD in commitments to promote research and other activities aimed at supporting the safe development of AI. The company focuses on long-term research. The founders of OpenAI include Elon Musk and Sam Altman. Its sponsors include, in addition to individuals, YC Research, Infosys, Microsoft, Amazon, and the Open Philanthropy Project. Its open source contributions can be found at https://github.com/openai.
PAIR: People + AI Research Initiative
This is a Google initiative that was launched in 2017 to focus on discovering how AI can augment the expert intelligence of professionals such as doctors, technicians, designers, farmers, musicians and others. It also aims to make AI more inclusive and accessible to everyone. Visiting faculty members are Hal Abelson and Brendan Meade. Current projects involve drawing and diversity in machine learning, an open library for training neural nets, training data for models, and design via machine learning.
Partnership on AI
The Partnership was founded in September 2016 by Eric Horvitz and Mustafa Suleyman to study and formulate best practices for AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society. The Partnership on AI is funded financially and supported in-kind with research by its members, including founding members Amazon, Google/DeepMind, Facebook, IBM and Microsoft. In 2017, it expanded its corporate and NGO membership, adding members such as eBay, Intel, Salesforce and the Center for Democracy & Technology (CDT). It hired an Executive Director, Terah Lyons, and boasts independent Board members from UC Berkeley and the ACLU. The group has had affiliation discussions with the Association for the Advancement of Artificial Intelligence (AAAI) and the Allen Institute for Artificial Intelligence. In 2016, the Partnership expressed its support for the Obama White House report.
Rajapinta
Rajapinta is a scientific association founded in January 2017 that advocates the social scientific study of ICT and ICT applications to social research in Finland. Its short-term goal is to improve collaboration and provide opportunities for meetings and networking, in the hope of establishing a seat at the table in the global scientific community in the longer term. Information about its funding sources is not readily available.
Royal Society of UK’s Machine Learning Project
The Royal Society is a fellowship of many of the world’s most eminent scientists and is currently conducting a project on machine learning (as a branch of AI), which in April 2017 produced a very comprehensive report titled Machine learning: the power and promise of computers that learn by example. It explores everyday ways in which people interact with machine learning systems, such as in social media image recognition, voice recognition systems, virtual personal assistants and recommendation systems used by online retailers. The grant funding for this particular project within the much larger Royal Society is unclear.
Workshop on Fairness, Accountability, and Transparency in Machine Learning (FatML)
Founded in 2014, FatML is an annual two-day conference that brings together researchers and practitioners concerned with fairness, accountability and transparency in machine learning, in recognition that ML raises novel challenges around ensuring non-discrimination, due process and explainability of institutional decision-making. According to the initiative, corporations and governments must be supervised in their use of algorithmic decision making. FatML makes current scholarly resources on related subjects publicly available. The conference is funded in part by registration fees and possibly subsidized by corporate organizers such as Google, Microsoft and Cloudflare. Its August 2017 event was held in Halifax, Nova Scotia, Canada.
World Economic Forum (WEF)
WEF released a blog post in July 2017 on the risks of algorithmic decision making to civil rights, mentioning US law enforcement’s use of facial recognition technology, among other examples. The post argues humans are facing “algorithmic regulation,” for example in public entitlements or benefits. It cites self-reinforcing bias as one of the five biggest problems with allowing AI into the government policy arena. In September 2017, the WEF released another post suggesting that a Magna Carta (“charter of rights”) for AI is needed; this essentially refers to commonly agreed upon rules and rights for both individuals and wielders of algorithm-based decision-making authority. According to the post, the foundational elements of such an agreement include making sure AI creates jobs for all, rules dealing with machine-curated news feeds and polarization, rules avoiding discrimination and bias in machine decision making, and safeguards for ensuring personal choice without sacrificing privacy for commercial efficiency.
From the above, we can conclude three points. First, stakeholders at different levels around the world have been activated to study the impact of machine decision-making, as shown by the multitude of projects. On the research side, there are several recently founded research projects and conferences (e.g., AAWS, FatML). In a similar vein, industry players such as IBM, Microsoft and Facebook show commitment to solving the associated challenges in their platforms. Moreover, policy makers are investigating the issues as well, as shown by the Obama administration’s report and the new Congressional AI Caucus.
Second, in addition to the topic being of interest to different stakeholders, it also involves a considerable number of different perspectives, including but not limited to aspects of computer science, ethics, law, politics, journalism and economics. Such a degree of multidisciplinary effort is uncommon for research projects, which tend to focus on a narrower field of expertise; this makes it more challenging to produce solutions that are both theoretically sound and practically functional.
Third, there seems to be much overlap between the initiatives mentioned here; many of them focus on solving the same problems, yet it is unclear how well they are aware of one another and whether a shared research agenda, resource sharing, or joint allocation of effort might help achieve results faster.
Notice an initiative or organization missing from this report? Please send information to Dr. Joni Salminen: firstname.lastname@example.org.
Feature analysis could be employed for bias detection when evaluating the procedural fairness of algorithms. (This is an alternative to the “Google approach,” which emphasizes evaluation of outcome fairness.)
In brief, feature analysis reveals how much each feature (i.e., variable) influenced the model’s decision. For example, see the following quote from Huang et al. (2014, p. 240):
“All features do not contribute equally to the classification model. In many cases, the majority of the features contribute little to the classifier and only a small set of discriminative features end up being used. (…) The relative depth of a feature used as a decision node in a tree can be used to assess the importance of the feature. Here, we use the expected fraction of samples each feature contributes to as an estimate of the importance of the feature. By averaging all expected fraction rates over all trees in our trained model, we could estimate the importance for each feature. It is important to note that feature spaces among our selected features are very diverse. The impact of the individual features from a small feature space might not beat the impact of all the aggregate features from a large feature space. So apart from simply summing up all feature spaces within a feature (i.e. sum of all 7,057 importance scores in hashtag feature), which is referred to as un-normalized in Figure 4, we also plot the normalized relative importance of each features, where each feature’s importance score is normalized by the size of the feature space.”
They go on to visualize the impact of each feature (see Figure 1).
Figure 1 Feature analysis example (Huang et al., 2014)
As you can see, this approach seems excellent for probing the impact of each feature on the model’s decision making. The impact of sensitive features, such as ethnicity, can be detected. Although this approach may be useful for supervised machine learning, where the data is clearly labelled, the applicability to unsupervised learning might be a different story.
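To make the quoted procedure concrete, here is a minimal sketch with fabricated data and feature names: scikit-learn's random forest exposes a feature_importances_ attribute that averages, over all trees, the expected fraction of samples each feature's decision nodes touch, essentially the estimate Huang et al. describe. A suspiciously high score on a sensitive feature (or a proxy for one) would flag the model for closer inspection.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Fabricated data: the label leaks almost entirely from column 2,
# which stands in for a sensitive attribute (or a proxy for one).
n = 1000
X = rng.normal(size=(n, 3))
y = (X[:, 2] + 0.1 * rng.normal(size=n) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# feature_importances_ averages the per-tree importance estimates and
# normalizes them to sum to 1 across features.
for name, imp in zip(["age", "income", "sensitive"], model.feature_importances_):
    print(f"{name:10s} {imp:.3f}")  # the 'sensitive' column dominates
```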
In the “Studying User Perceptions and Experiences with Algorithms” workshop, many interesting questions popped up. Here are some of them:
- Will increased awareness of algorithm functionality change user behavior?
- How can we build better algorithms to diversify information users are exposed to?
- Do most people care about knowing how Google works?
- What’s the “count to 10” equivalent for online discussions? How to avoid snap judgments?
- How to defuse revenge seeking in online discussions?
- What are individuals’ affective relationships with algorithms like?
These make for great research questions.
In one of the workshops of the first conference day, “Studying User Perceptions and Experiences with Algorithms”, the participants recommended papers to each other. Here are most, if not all, of them, along with their abstracts.
Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132.
Exposure to news, opinion, and civic information increasingly occurs through social media. How do these online networks influence exposure to perspectives that cut across ideological lines? Using deidentified data, we examined how 10.1 million U.S. Facebook users interact with socially shared news. We directly measured ideological homophily in friend networks and examined the extent to which heterogeneous friends could potentially expose individuals to cross-cutting content. We then quantified the extent to which individuals encounter comparatively more or less diverse content while interacting via Facebook’s algorithmically ranked News Feed and further studied users’ choices to click through to ideologically discordant content. Compared with algorithmic ranking, individuals’ choices played a stronger role in limiting exposure to cross-cutting content.
Bucher, T. (2017). The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20(1), 30–44.
This article reflects the kinds of situations and spaces where people and algorithms meet. In what situations do people become aware of algorithms? How do they experience and make sense of these algorithms, given their often hidden and invisible nature? To what extent does an awareness of algorithms affect people’s use of these platforms, if at all? To help answer these questions, this article examines people’s personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself. Examining how algorithms make people feel, then, seems crucial if we want to understand their social power.
Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K., … Sandvig, C. (2015). “I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in news feeds. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 153–162). New York, NY, USA: ACM.
Our daily digital life is full of algorithmically selected content such as social media feeds, recommendations and personalized search results. These algorithms have great power to shape users’ experiences, yet users are often unaware of their presence. Whether it is useful to give users insight into these algorithms’ existence or functionality and how such insight might affect their experience are open questions. To address them, we conducted a user study with 40 Facebook users to examine their perceptions of the Facebook News Feed curation algorithm. Surprisingly, more than half of the participants (62.5%) were not aware of the News Feed curation algorithm’s existence at all. Initial reactions for these previously unaware participants were surprise and anger. We developed a system, FeedVis, to reveal the difference between the algorithmically curated and an unadulterated News Feed to users, and used it to study how users perceive this difference. Participants were most upset when close friends and family were not shown in their feeds. We also found participants often attributed missing stories to their friends’ decisions to exclude them rather than to Facebook News Feed algorithm. By the end of the study, however, participants were mostly satisfied with the content on their feeds. Following up with participants two to six months after the study, we found that for most, satisfaction levels remained similar before and after becoming aware of the algorithm’s presence, however, algorithmic awareness led to more active engagement with Facebook and bolstered overall feelings of control on the site.
Epstein, R., & Robertson, R. E. (2015). The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections. Proceedings of the National Academy of Sciences, 112(33), E4512–E4521.
Internet search rankings have a significant impact on consumer choices, mainly because users trust and choose higher-ranked results more than lower-ranked results. Given the apparent power of search rankings, we asked whether they could be manipulated to alter the preferences of undecided voters in democratic elections. Here we report the results of five relevant double-blind, randomized controlled experiments, using a total of 4,556 undecided voters representing diverse demographic characteristics of the voting populations of the United States and India. The fifth experiment is especially notable in that it was conducted with eligible voters throughout India in the midst of India’s 2014 Lok Sabha elections just before the final votes were cast. The results of these experiments demonstrate that (i) biased search rankings can shift the voting preferences of undecided voters by 20% or more, (ii) the shift can be much higher in some demographic groups, and (iii) search ranking bias can be masked so that people show no awareness of the manipulation. We call this type of influence, which might be applicable to a variety of attitudes and beliefs, the search engine manipulation effect. Given that many elections are won by small margins, our results suggest that a search engine company has the power to influence the results of a substantial number of elections with impunity. The impact of such manipulations would be especially large in countries dominated by a single search engine company.
Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media Technologies: Essays on Communication, Materiality, and Society. Cambridge, MA: MIT Press.
Algorithms (particularly those embedded in search engines, social media platforms, recommendation systems, and information databases) play an increasingly important role in selecting what information is considered most relevant to us, a crucial feature of our participation in public life. As we have embraced computational tools as our primary media of expression, we are subjecting human discourse and knowledge to the procedural logics that undergird computation. What we need is an interrogation of algorithms as a key feature of our information ecosystem, and of the cultural forms emerging in their shadows, with a close attention to where and in what ways the introduction of algorithms into human knowledge practices may have political ramifications. This essay is a conceptual map to do just that. It proposes a sociological analysis that does not conceive of algorithms as abstract, technical achievements, but suggests how to unpack the warm human and institutional choices that lie behind them, to see how algorithms are called into being by, enlisted as part of, and negotiated around collective efforts to know and be known.
In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
Every day, corporations are connecting the dots about our personal behavior, silently scrutinizing clues left behind by our work habits and Internet use. The data compiled and portraits created are incredibly detailed, to the point of being invasive. But who connects the dots about what firms are doing with this information? The Black Box Society argues that we all need to be able to do so, and to set limits on how big data affects our lives. Hidden algorithms can make (or ruin) reputations, decide the destiny of entrepreneurs, or even devastate an entire economy. Shrouded in secrecy and complexity, decisions at major Silicon Valley and Wall Street firms were long assumed to be neutral and technical. But leaks, whistleblowers, and legal disputes have shed new light on automated judgment. Self-serving and reckless behavior is surprisingly common, and easy to hide in code protected by legal and real secrecy. Even after billions of dollars of fines have been levied, underfunded regulators may have only scratched the surface of this troubling behavior. Frank Pasquale exposes how powerful interests abuse secrecy for profit and explains ways to rein them in. Demanding transparency is only the first step. An intelligible society would assure that key decisions of its most important firms are fair, nondiscriminatory, and open to criticism. Silicon Valley and Wall Street need to accept as much accountability as they impose on others.
There is a dearth of research on the public’s beliefs about how social media technologies work. To help address this gap, this article presents the results of an exploratory survey that probes user and non-user beliefs about the techno-cultural and socioeconomic facets of Twitter. While many users are well-versed in producing and consuming information on Twitter, and understand Twitter makes money through advertising, the analysis reveals gaps in users’ understandings of the following: what other Twitter users can see or send, the kinds of user data Twitter collects through third parties, Twitter and Twitter partners’ commodification of user-generated content, and what happens to Tweets in the long term. This article suggests the concept of “information flow solipsism” as a way of describing the resulting subjective belief structure. The article discusses implications information flow solipsism has for users’ abilities to make purposeful and meaningful choices about the use and governance of social media spaces, to evaluate the information contained in these spaces, to understand how content users create is utilized by others in the short and long term, and to conceptualize what information other users experience.
Woolley, S. C., & Howard, P. N. (2016). Automation, Algorithms, and Politics| Political Communication, Computational Propaganda, and Autonomous Agents — Introduction. International Journal of Communication, 10(0), 9.
The Internet certainly disrupted our understanding of what communication can be, who does it, how, and to what effect. What constitutes the Internet has always been an evolving suite of technologies and a dynamic set of social norms, rules, and patterns of use. But the shape and character of digital communications are shifting again—the browser is no longer the primary means by which most people encounter information infrastructure. The bulk of digital communications are no longer between people but between devices, about people, over the Internet of things. Political actors make use of technological proxies in the form of proprietary algorithms and semiautomated social actors—political bots—in subtle attempts to manipulate public opinion. These tools are scaffolding for human control, but the way they work to afford such control over interaction and organization can be unpredictable, even to those who build them. So to understand contemporary political communication—and modern communication broadly—we must now investigate the politics of algorithms and automation.
The relationship between users and algorithms is always a mediated one, meaning that there is always a proxy between the algorithm and the user. The proxy can be understood differently based on the particular level we’re interested in. For example, it can be a social media platform (e.g., Facebook, Twitter) where people retrieve their news content (Nielsen & Schrøder, 2014). Or, at a closer level of interaction, it can be understood as user interface (UI). The following picture illustrates this thinking.
Figure 1. Mediated relationship between users and algorithms
In both cases, however, the interaction – and therefore the experience of the user – is mediated by a proxy entity. This is a critical notion when examining the interaction between algorithms and users, because such interaction cannot exist in pure form. Essentially, the study of algorithms deals with how algorithms transform into user experience. Through this mediating nature we can build phenomenological bridges to technology adoption models, such as TAM2 and UTAUT (Venkatesh & Davis, 2000; Venkatesh et al., 2003, respectively), or more generally to the experience of technology use, examined e.g. in the human-computer interaction (HCI) literature (see Card et al., 1983; Dix et al., 2003).
I recently participated in a meeting of computer scientists where the topic was ”fake news”. The implicit assumption was: ”we will build a tool x that shows people what is false information, and they will become informed.”
However, after the meeting I realized this might not be enough, and may in fact be naïve thinking. It may not matter that algorithms and social media platforms show people ”this is false information”. People might choose to believe the conspiracy theory anyway, for various reasons. In those cases, the problem is not a lack of information; it is something else.
And the real question is: Can technology fix that something else? Or at least be part of the solution?
The balanced view algorithm
Because, technically, the algorithm is simple:
- Take a topic
- Define the polarities of the topic
- Show each user an equal amount of content of each polarity
=> Results in a balanced and informed citizen!
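The three steps above can be sketched in code. This is a minimal, hypothetical illustration: `items` is an assumed list of (content, polarity) pairs, and a real feed would also rank content within each polarity.

```python
import random
from collections import defaultdict

def balanced_feed(items, per_polarity):
    """Sample an equal number of items from each polarity of a topic.

    `items` is a list of (content, polarity) pairs; `per_polarity` is how
    many pieces of content to show from each side. Hypothetical sketch of
    the balanced-view idea, not a production ranking system.
    """
    by_polarity = defaultdict(list)
    for content, polarity in items:
        by_polarity[polarity].append(content)
    feed = []
    for polarity, contents in by_polarity.items():
        feed.extend(random.sample(contents, min(per_polarity, len(contents))))
    random.shuffle(feed)  # avoid systematically ordering one polarity first
    return feed
```

Note that even this trivial sketch embeds value choices: someone must still define the topic and decide what counts as a ”polarity”, which is exactly where neutrality becomes hard.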
But, as said, if the opposing content goes against what you want to believe, then the problem is not that you do not ”see” enough of that content.
These are tough questions that reside at the intersection of sociology and algorithms. On one hand, some of the solutions may approach manipulation, but, as any propagandist could tell you, manipulation has to be subtle to be effective. The major risk is that people might rebel against a balanced worldview. It is good to remember that ’what you need to see’ is not the same as ’what you want to see’. There is little that algorithms can do if people want to live in a bubble.
Earlier, I had a brief exchange of tweets with @jonathanstray about algorithms. It started from his tweet:
Perhaps the biggest technical problem in making fair algorithms is this: if they are designed to learn what humans do, they will.
To which I replied:
Yes, and that’s why learning is not the way to go. ”Fair” should not be goal, is inherently subjective. ”Objective” is better
Then he wrote:
lots of things that are really important to society are in no way objective, though. Really the only exception is prediction.
And I wrote:
True, but I think algorithms should be as neutral (objective) as possible. They should be decision aids for humans.
And he answered:
what does ”neutral” mean though?
After which I decided to write a post about it, since the idea is challenging to explain in 140 characters.
So, what is a neutral algorithm? I would define it like this:
A neutral algorithm is a decision-making program whose operating principles are minimally influenced by the values or opinions of its creators.
An example of a neutral algorithm is a standard ad optimization algorithm: it decides whether to show Ad1, Ad2, or Ad3. Instead of asking designers or corporate management which ad to display, it makes the decision based on objective measures, such as click-through rate (CTR).
The treatment that all ads (read: content, users) receive is fair – they are diffused based on their merits (measured objectively by an unambiguous metric), not based on favoritism of any sort.
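As a minimal sketch of such CTR-based selection, consider the following. The ad ids and click/impression counters are hypothetical; real optimizers would also balance exploration and exploitation, e.g. with bandit methods, rather than always picking the current best.

```python
def pick_ad(stats):
    """Pick the ad with the highest observed click-through rate.

    `stats` maps an ad id to a (clicks, impressions) pair. This is an
    illustrative sketch of merit-based (CTR) selection, not a full
    ad-optimization system.
    """
    def ctr(counts):
        clicks, impressions = counts
        return clicks / impressions if impressions else 0.0
    return max(stats, key=lambda ad: ctr(stats[ad]))

# Hypothetical counters: Ad2 has the best CTR (0.10 vs 0.05 and 0.02)
stats = {"Ad1": (5, 100), "Ad2": (10, 100), "Ad3": (2, 100)}
```

The point of the sketch is that no human opinion enters the choice: the winner is determined entirely by an unambiguous metric.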
The roots of algorithm neutrality stem from freedom of speech and net neutrality. No outsiders can impose their values and opinions (e.g., censoring politically sensitive content) and interfere with the operating principles of the algorithm. Instead of being influenced by external manipulation, the decision making of the algorithm is as value-free (neutral) as possible. For example, in the case of social media, it chooses to display information which accurately reflects the sentiment and opinions of the people at a particular point in time.
Now, I grant there are issues with ”freedom”, some of which are considerable. For example: 1) for media, CTR incentives lead to clickbaiting (alternative goal metrics should be considered); 2) for politicians and the electorate, facts can be overshadowed by misinformation and by short videos taken out of context to give a false impression of individuals; and 3) for regular users, harmful misinformation can spread as a consequence of neutrality (e.g., anti-vaccination propaganda). But these are ”true” social issues that the algorithm is accurately reflecting. If we want more ”just” outcomes, we will actually need to make neutral algorithms biased. Among other questions, this leads into the problem space of positive discrimination. It is also valid to ask: Who determines what is just?
A natural limitation to machine decisions, and an answer to the previous question, is legislation – illegal content should be kept out by the algorithm. In this sense, the neutral algorithm needs to adhere to a larger institutional and regulatory context, but given that the laws themselves are ”fair”, this should pose no fundamental threat to the objective of neutral algorithms: free decision-making and, consequently, freedom of speech. I wrote a separate post about the neutrality dilemma.
Despite the aforementioned issues, a neutral algorithm gives each media outlet, candidate, and user a level playing field. In time, they must learn to use it to argue in a way that merits the diffusion of their message.
The rest is up to humans – educated people respond to smart content, whereas ignorant people respond to and spread nonsense. A neutral algorithm cannot influence this; it can only honestly display the state of ignorance or sophistication in a society. A good example is Microsoft’s infamous bot Tay, a machine learning experiment turned bad. The alarming thing about the bot is not that ”machines are evil”, but that *humans are evil*; the machine merely reflects that. Hence my original point of curbing human evilness by keeping algorithms free of human values as much as possible.
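The earlier tweet’s point – ”if they are designed to learn what humans do, they will” – can be made concrete with a toy learner. The data below is synthetic and purely illustrative: a trivial model fit on biased human labels simply echoes the bias, with no notion of fairness beyond its training data.

```python
from collections import Counter, defaultdict

def fit_majority(labeled):
    """Learn the majority label per input -- the simplest possible 'learner'.

    `labeled` is a list of (input, label) pairs. If the human labels are
    biased, the learned mapping reproduces that bias exactly.
    """
    votes = defaultdict(Counter)
    for x, y in labeled:
        votes[x][y] += 1
    return {x: counter.most_common(1)[0][0] for x, counter in votes.items()}

# Synthetic, biased labels: group "a" is mostly approved, group "b" mostly rejected
data = [("a", "approve")] * 9 + [("a", "reject")] + \
       [("b", "reject")] * 9 + [("b", "approve")]
model = fit_majority(data)
```

Like Tay, the model here is not ”evil”; it is an honest mirror of the labels it was given.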
Perhaps in the future an algorithm could, figuratively speaking, save us from ourselves, but at the moment that act requires conscious effort from us humans. We need to make critical decisions based on our own judgment, instead of outsourcing ethically difficult choices to algorithms. Just as there is separation of church and state, there should be separation of humans and algorithms to the greatest possible extent.
 Initially, I considered a definition that would say ”not influenced”, but it is not safe to assume that the subjectivity of the creators would not in some way be reflected in the algorithm. ”Minimally”, in turn, leads to the normative argument that this subjectivity should be mitigated.
 Wikipedia (2016): ”Net neutrality (…) is the principle that Internet service providers and governments should treat all data on the Internet the same, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication.”
 Part of the story is that Tay was trolled heavily and therefore adopted a derogatory way of speaking.