Introduction. Hal Daumé wrote an interesting blog post about language bias and the black sheep problem. In the post, he defines the problem as follows: The "black sheep problem" is that if you were to try to guess what color most sheep were by looking at language data, it would be very difficult for you to conclude that […]
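As a rough illustration of the reporting bias Daumé describes, here is a minimal Python sketch (not from the original post; the corpus file name and the colour list are assumptions made for this example) that counts how often each colour word precedes "sheep" in a plain-text corpus. On typical English text, "black sheep" tends to dominate even though most real sheep are white.

```python
# Illustrative sketch: count "<colour> sheep" bigrams in a plain-text corpus.
# The corpus file name and the colour list are assumed for this example.
import re
from collections import Counter

def sheep_colour_counts(text, colours=("black", "white", "brown", "grey")):
    counts = Counter()
    for colour in colours:
        # Count occurrences of e.g. "black sheep" (case-insensitive).
        counts[colour] = len(re.findall(rf"\b{colour}\s+sheep\b", text.lower()))
    return counts

if __name__ == "__main__":
    with open("corpus.txt", encoding="utf-8") as f:  # hypothetical corpus file
        text = f.read()
    print(sheep_colour_counts(text))
    # Language data over-reports the unusual, so "black sheep" often wins the count
    # even though most actual sheep are white, which is the point of the problem.
```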
Category: English
This report was created by Joni Salminen and Catherine R. Sloan. Publication date: December 10, 2017. Artificial intelligence (AI) and machine learning are becoming more influential in society, as more decision-making power is being shifted to algorithms either directly or indirectly. Because of this, several research organizations and initiatives studying fairness of AI and machine […]
What jobs are safe from AI?
There is enormous concern about machine learning and AI replacing human workers. However, according to several economists, and to historical experience reaching all the way back to the Industrial Revolution of the 18th century (which caused major distress at the time), the replacement of human workers is not permanent; rather, there will be […]
The ethics of machine learning algorithms has recently been raised as a major research concern. Earlier this year (2017), a $27 million fund was established to support research on the societal challenges of AI. The group behind the fund includes, among others, the Knight Foundation, the Omidyar Network, and the startup founder and investor Reid Hoffman. As […]
I read about this amazing initiative at Harvard's website and thought of sharing it: About the Ethics and Governance of Artificial Intelligence Initiative. Artificial intelligence and complex algorithms, fueled by the collection of big data and deep learning systems, are quickly changing how we live and work, from the news stories we see to the loans […]
Feature analysis could be employed for bias detection when evaluating the procedural fairness of algorithms. (This is an alternative to the "Google approach," which emphasizes evaluation of outcome fairness.) In brief, feature analysis reveals how strongly each feature (i.e., variable) influenced the model's decision. For example, see the following quote from Huang et al. (2014, p. 240): […]
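To make the idea concrete, here is a minimal sketch (an illustration under stated assumptions, not the method of Huang et al. 2014): it uses scikit-learn's permutation importance to score how strongly each feature influences a trained classifier's decisions, so that a large score on a protected attribute would flag a potential procedural-fairness issue. The file decisions.csv, the "label" column, and the "gender" column are hypothetical.

```python
# Illustrative sketch: per-feature influence via permutation importance.
# Assumes a hypothetical decisions.csv with numeric feature columns,
# a binary "label" column, and a sensitive attribute such as "gender".
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("decisions.csv")
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Rank features by mean importance; a high score for a protected attribute
# (e.g., "gender") suggests the decisions depend on it and warrants scrutiny.
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name:20s} {score:.3f}")
```

Permutation importance is only one way to do feature analysis; model-specific measures such as regression coefficients or tree-based importances serve the same diagnostic purpose.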
Questions from ICWSM17
In the "Studying User Perceptions and Experiences with Algorithms" workshop, many interesting questions came up. Here are some of them: Will increased awareness of algorithm functionality change user behavior? How can we build better algorithms to diversify the information users are exposed to? Do most people care about knowing how Google works? What's […]
Reading list from ICWSM17
In one of the workshops of the first conference day, "Studying User Perceptions and Experiences with Algorithms", the participants recommended papers to each other. Here are most, if not all, of them, along with their abstracts. Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, […]
The relationship between users and algorithms is always a mediated one, meaning that there is always a proxy between the algorithm and the user. The proxy can be understood differently depending on the level of analysis we are interested in. For example, it can be a social media platform (e.g., Facebook, Twitter) where people retrieve their news content […]
The balanced view algorithm
I recently participated in a meeting of computer scientists where the topic was "fake news". The implicit assumption was that "we will build this tool x that will show people what is false information, and they will become informed." However, after the meeting I realized this might not be enough and might in fact be naïve […]