From polarity to diversity of opinions

The problem with online discussions and communities is that the extreme poles draw people effectively, causing group polarization, in which a person’s original opinion becomes more radical due to the influence of the group. In Finnish, we have a saying: ”In a group, stupidity concentrates” (joukossa tyhmyys tiivistyy).

Here, I’m exploring the idea that this effect, namely the growth of polar extremes (for example, being for or against immigration, as many European citizens currently are), arises simply because people lack options to identify with. There are only the extremes, but no neutral or moderate group, even though, as I argue here, most people are in fact moderate and understand that extremes and absolutes are misleading simplifications either way.

In other words, when there are only two ”camps” of opinion, people are more easily split between them. My argument, however, is that people’s preferences correspond to being in the middle, not at the extremes.

These preferences remain hidden because there are only two camps to subscribe to: one cannot be moderate when there is no moderate group.

For example, there are liberals and conservatives, but what about the people in the middle? What about those who share some ideas with liberals and others with conservatives? With only these two groups, other combinations become socially impossible, because people are, again socially, pressured to adopt all the opinions of the group they subscribe to, even when they disagree with a particular view. This effect has been studied in relation to the concept of groupthink, but no permanent remedy has been found.

How to solve the problem of extremes?

My idea is simple: we should start more camps, more views to subscribe to, especially ones representing moderate views.

The argument is that with a greater supply of camps, people will distribute more evenly among them, and as a consequence we will have less polarization.

This is illustrated in the picture (sketched quickly in Paint when inspiration struck).

Figure: opinion distributions in cases (A) and (B).

In (A), public discourse is dominated by the extremes (the distribution of attention is skewed toward the ends of a given opinion spectrum). In (B), the distribution is concentrated at the center of the opinion spectrum (= moderate views) while the extremes are marginalized (as they should be, under the assumption of a moderate majority).

An example: having several political parties results in more diverse views being represented. In the US, you are either a Democrat or a Republican (although, it must be noted, there are also the marginal Green Party and the progressives), but in Finland you can be many other things: a member of the Center Party, the National Coalition Party, or the Green Party, for example. The same applies to most countries in Europe. Although I don’t have hard evidence for this, public discourse in the US seems exceptionally polarized compared to many other countries [1].

Giving the moderate ”silent majority” more choices to identify with would reveal the ”true” opinions of citizens and ideally marginalize both extremes, avoiding the tyranny of the minority [2] that currently dominates public discourse.

Finally, all this could be formalized in game theory by assuming heterogeneous preferences over the opinion spectrum and parameters such as gravity (a ”pull factor” exerted by the extremes), justifiable e.g. by the media attention given to extreme views over moderate ones. But the implication remains the same: under these assumptions, a diversity of camps reduces polarization.
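This formalization can be sketched as a toy simulation. Everything here is an assumption for illustration only: the triangular (moderate-peaked) preference distribution, the evenly spaced camp positions, and the way ”gravity” is modeled as a probability of being pulled to the nearest extreme.

```python
import random

def polarization(camps, n_agents=10000, gravity=0.2, seed=0):
    """Mean distance from the spectrum's center after agents join camps.

    Agents hold moderate-peaked private opinions on [0, 1] (triangular
    distribution centered at 0.5). Each joins the nearest camp, but with
    probability `gravity` is pulled to the nearest extreme camp instead
    (a stand-in for the media attention extremes receive).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_agents):
        opinion = rng.triangular(0.0, 1.0, 0.5)
        if rng.random() < gravity:
            chosen = min(camps) if opinion < 0.5 else max(camps)
        else:
            chosen = min(camps, key=lambda c: abs(c - opinion))
        total += abs(chosen - 0.5)
    return total / n_agents

two_camps = [0.0, 1.0]
five_camps = [0.0, 0.25, 0.5, 0.75, 1.0]
print(polarization(two_camps))   # every agent ends at an extreme
print(polarization(five_camps))  # most mass settles near the center
```

With only two camps, every agent necessarily lands at an extreme; adding moderate camps lets most of the (moderate) mass settle near the center even under the same gravity.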


[1] Of course there are other reasons, such as media taking political sides.

[2] This means extreme views are not representative of the whole population (which is more moderate than either extreme), but they get disproportionate attention in the media and public discourse. This is because the majority’s views are hidden; they would need to be revealed.

How to teach machines common sense? Solutions for the ambiguity problem of AI


The ambiguity problem illustrated:

User: ”Siri, call me an ambulance!”

Siri: ”Okay, I will call you ’an ambulance’.”

You’ll never reach the hospital, and end up bleeding to death.


Two potential solutions come to mind:

A. The machine builds general knowledge (”common sense”)

B. The machine identifies ambiguity and asks humans for clarification (reinforcement learning)
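Solution B can be illustrated with a deliberately naive sketch. The pattern table is hypothetical, standing in for whatever ambiguity detector a real assistant would use:

```python
# Sketch of solution B: detect a known ambiguous pattern and ask for
# clarification instead of guessing. The pattern list is illustrative,
# not a real assistant's grammar.
AMBIGUOUS_PATTERNS = {
    "call me a": "Do you want me to dial one for you, "
                 "or address you by that name?",
}

def respond(utterance: str) -> str:
    lowered = utterance.lower()
    for pattern, question in AMBIGUOUS_PATTERNS.items():
        if pattern in lowered:
            return question  # ask instead of acting
    return "OK, doing it."

print(respond("Siri, call me an ambulance!"))
```

The point is not the string matching, which is trivial here, but the policy: when ambiguity is detected, the machine defers to the human instead of committing to one reading.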

The whole ”common sense” problem can be solved by introducing human feedback into the system. We really need to tell the machine what is what, just as we would a child. This is iterative learning, in which trials and errors take place. However, it is better than trying to fit an inescapably finite dataset to a practically infinite space of meanings.

But, in fact, A and B converge in doing so, which is fine, and ultimately needed.

Contextual awareness

To determine the proper resolution of an ambiguous situation, the machine needs contextual awareness; this can be achieved by storing contextual information from each ambiguous situation and being told ”why” a particular piece of information resolves the ambiguity. It’s not enough to say ”you’re wrong”; there needs to be an explicit association to a reason (a concept, a variable). Equally, it’s not enough to say ”you’re right”; again, the same association is needed.

The process:

1) try something

2) get told it’s not right, and why (linking to contextual information)

3) try something else, guided by that ”why”

4) get rewarded, if it’s right.
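The four steps above can be sketched as follows. Every name here (the teacher function, the concept labels, the reward marker) is illustrative, not a real API:

```python
# A minimal sketch of the four-step loop: the teacher's feedback carries
# a reason (a contextual concept), not just right/wrong, and the learner
# stores the association for reuse. All names are invented.
def train(learner, interpretations, teacher, context):
    for guess in interpretations:                        # 1) try something
        ok, reason = teacher(guess, context)
        if not ok:
            learner.setdefault(context, {})[guess] = reason  # 2) told why not
            continue                                     # 3) try something else
        learner.setdefault(context, {})[guess] = "REWARD"    # 4) rewarded
        return guess
    return None

def teacher(guess, context):
    # Hypothetical human (or human-authored) feedback for one situation.
    if context == "emergency" and guess == "dial_emergency_number":
        return True, None
    return False, "urgency implies a request for action, not naming"

learner = {}
choice = train(learner,
               ["address_user_as_ambulance", "dial_emergency_number"],
               teacher, "emergency")
print(choice)  # dial_emergency_number
```

The key design choice is that the learner’s memory maps context to guesses *and their reasons*, so the next ambiguous situation with a similar context can be resolved without repeating the failed trial.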

The problem is that machines are currently trained on data, not by human feedback.

New thinking: Training AI pets

So we would need to build machine-training systems that enable training by direct human feedback, i.e., a new way to teach and communicate with the machine. This is not trivial, since the whole machine-learning paradigm is based on data, not meanings. From data and probabilities, we would need to move to associations and concepts that capture social reality. A new methodology is needed. Potentially, individuals could train their own AIs like pets (think of having your own ”AI pet”, like a Tamagotchi), or we could use large numbers of crowd workers to explain to the machine why things are how they are (i.e., to create associations). A specific type of markup (= communication with the machine) would probably also be needed, although conversational UIs would most likely be the best solution.

By mimicking human learning, we can teach the machine common sense. This is probably the only way; since common sense does not exist beyond human cognition, it can only be learnt from humans. An argument can be made that this is like going back in time to an era when machines followed rule-based programming (as opposed to being data-driven). However, I would argue that rule-based learning is much closer to human learning than the current probability-based approach, and if we want to teach common sense, we therefore need to adopt the human way.


Machine learning may be on par with human learning in some respects, but machine training certainly is not. The current machine-learning paradigm is data-driven, whereas we could look into concept-driven approaches to AI training. Essentially, this would be something like reinforcement learning for concept maps.

The black sheep problem in machine learning

Introduction. Hal Daumé wrote an interesting blog post about language bias and the black sheep problem. In the post, he defines the problem as follows:

The ”black sheep problem” is that if you were to try to guess what color most sheep were by looking at language data, it would be very difficult for you to conclude that they weren’t almost all black. In English, ”black sheep” outnumbers ”white sheep” about 25:1 (many ”black sheep”s are movie references); in French it’s 3:1; in German it’s 12:1. Some languages get it right; in Korean it’s 1:1.5 in favor of white sheep. This happens with other pairs, too; for example ”white cloud” versus ”red cloud.” In English, red cloud wins 1.1:1 (there’s a famous Sioux named ”Red Cloud”); in Korean, white cloud wins 1.2:1, but four-leaf clover wins 2:1 over three-leaf clover.

Thereafter, Hal accurately points out:

”co-occurrence frequencies of words definitely do not reflect co-occurrence frequencies of things in the real world.”

But the mistake made by Hal is to assume language describes objective reality (”the real world”). Instead, I would argue that it describes social reality (”the social world”).

Black sheep in social reality. The higher occurrence of ’black sheep’ tells us that in social reality, there is a concept called ’black sheep’ which is more common than the concept of white (or any other color of) sheep. People use that concept not to describe sheep, but as an abstract concept that in fact describes other people (”she is the black sheep of the family”). Then we can ask: Why is that? In what contexts is the concept used? And we can try to teach the machine its proper use through associations of that concept with other contexts (much as we teach kids when saying something is appropriate and when it is not). As a result, the machine may build a semantic web of abstract concepts which, if it does not lead to the machine understanding them, at least helps guide its usage of them.
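A minimal sketch of this idea: record the contexts in which a concept is used, so its dominant (social) sense can be inferred. The mini-corpus and context labels are invented for illustration:

```python
from collections import Counter

# Toy sketch: tally the contexts in which each concept appears, so the
# dominant (social) sense can be read off. The usage data are invented.
usages = [
    ("black sheep", "describing a person"),
    ("black sheep", "describing a person"),
    ("black sheep", "describing an animal"),
    ("white sheep", "describing an animal"),
]

contexts = {}
for concept, context in usages:
    contexts.setdefault(concept, Counter())[context] += 1

dominant = contexts["black sheep"].most_common(1)[0][0]
print(dominant)  # describing a person
```

Even this toy tally shows the point: the most frequent context for ”black sheep” is figurative, so a machine guided by such associations would not conclude anything about the color of actual sheep.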

We, the human. That is assuming we want the machine to get closer to the meaning of the word in social reality. But we don’t necessarily want to focus on that, at least as a short-term goal. In the short term, it might be more purposeful to understand that language is a reflection of social reality. This means that we, the humans, can understand human societies better through the analysis of language. Rather than trying to teach machines to impute data so as to avoid what we label an undesired state of social reality, we should use the outputs provided by the machine to understand where and why those biases occur. And then we should focus on fixing them. Most likely, technology plays only a minor role in that, although it could be used to encourage balanced views through a recommendation system, for example.

Conclusion. The ”correction of biases” is equivalent to burying your head in the sand: even if they magically disappeared from our models, they would still remain in social reality and, through the connection between social reality and objective reality, echo in the everyday lives of people.

10 recommendations for making use of AI

AI-based digital assistants, chatbots, various algorithms, and machine decision-making touch people’s everyday lives more than ever, whether we are aware of it or not. The pace of development is fast, but standardization and impact assessment lag behind. It is therefore worth reflecting on, and sparking discussion about, the direction in which AI development should be steered in the future. One important question is:

What recommendations should be given to those working with AI?

This article presents 10 recommendations on the use of AI from the AI Now Institute, a research organization specializing in the study of artificial intelligence.

1. Black boxes should be avoided: Public-sector organizations, such as those responsible for law enforcement, healthcare, or education, should not use so-called black-box AI or algorithmic systems whose working principles cannot be evaluated by outsiders. This covers both AI systems created within the organization and those licensed from third parties whose operating logic is not public knowledge. The use of closed machine-learning systems raises concerns, and it has been proposed that they should be subject to public auditing and testing. In the United States, problems have been caused, for example, by an algorithm that evaluates teachers by comparing their students’ performance against the state average. The system has been deemed to violate teachers’ civil rights, since they cannot verify the correctness of its results. At the same time, increased awareness of machine decision-making has raised hopes for transparency. Open data projects are indeed under way around the world, such as that of the city of Turku in Finland.

2. Transparent testing: Before an AI system is released, developers should carry out rigorous testing phases to ensure that bias and errors are minimized. The testing phase should be as open and transparent as possible and emphasize the developer’s responsibility in particular. For example, one can aim to balance the content shown to users, or otherwise seek to ensure in advance that the downsides of machine decision-making are minimized.

3. Continuous monitoring and evaluation of machine decisions: After the release of an AI system, the developer should arrange continuous monitoring of the system, adapted to the needs of different users and communities. The views and experiences of minority groups should be made a high priority in ongoing development work to guarantee the system’s inclusiveness. Nor should the developer’s responsibility end at the system’s release and deployment. There are, for example, methods for assessing which data variables influenced a machine-learning algorithm’s behavior and for interpreting whether a decision may have been biased.
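As an illustration of recommendation 3, one simple post-deployment check is to compare a system’s positive-decision rates across groups. The ”80 % rule” mentioned below is one common convention, not the only possible standard, and the decision data are invented:

```python
# Sketch of one continuous-monitoring check: compare a model's positive-
# decision rates across groups. The "80% rule" is a common convention,
# used here only as an illustrative threshold; decisions are invented.
def approval_rates(decisions):
    return {group: sum(outcomes) / len(outcomes)
            for group, outcomes in decisions.items()}

def disparate_impact(decisions):
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values())

decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],   # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 37.5% approved
}
ratio = disparate_impact(decisions)
print(round(ratio, 2))  # 0.5, below the 0.8 convention
```

A ratio below the chosen threshold would flag the system for closer review of which input variables drove the difference.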

4. Sector-specific research: More research is needed on the use of AI systems in workplaces, covering for example recruitment and HR services. Such research would also aid in assessing the effects of automation in different contexts. All in all, the impact of AI touches many fields, so the power of machine decision-making should be evaluated from several angles. Particular attention should be paid to employees’ legal protection. For example, automation has begun to be used at the job-interview stage to analyze an applicant’s speech and body language and compare them to the best performers in the workplace. One challenge, however, is that this process can reinforce prejudices and homogenize the workforce.

5. Standards for AI development: Standards should be developed for examining the provenance and development phases of the data used by AI. AI is based on large amounts of data, from which models and predictions are formed. These large datasets, in turn, represent human history, which inevitably also contains biased and prejudiced attitudes. That is why the provenance of data deserves more study. Machine learning can certainly detect many kinds of statistical patterns, although the goal of forming generalizations may mean ignoring particular exceptions.

6. Considering the deployment environment: Research on AI bias should be extended beyond the at-times narrow technical approach. The basic challenge is that while the technical perspective focuses, as intended, on optimizing systems, it is also essential to consider the environment in which machine decision-making operates. Understanding that environment, however, requires expertise in, for example, law, medicine, or sociology. For this reason, AI calls for interdisciplinary research.

7. Standards for AI evaluation: Strong auditing standards and an understanding of how AI operates in everyday life are needed. At present, however, there are no established practices for assessing the impact of AI. Research projects focusing on the societal effects of AI are quite fragmented both geographically and across sectors. There are both publicly and privately funded projects; the power of AI is currently studied most in the United States, but several projects are also under way in Europe. Coordination among them should nevertheless be increased so that the creation of standards becomes possible.

8. Increasing diversity: Companies, educational institutions, associations, conferences, and other stakeholders should publish data on how, for example, women, minorities, and other marginalized groups participate in AI research and development. Lack of diversity is a challenge in AI development and may also increase bias in AI. AI developers are often men who come from the same cultural and educational backgrounds. This has led, for example, to speech-recognition systems failing to ”recognize” women’s voices, or to digital assistants blocking access to information about women’s health. The point is not to hire only representatives of a certain group, but to build genuinely inclusive working environments and more comprehensive AI systems.

9. Non-technical perspectives: Continuing from the previous point, AI research should also recruit experts from outside computer science and engineering. As the use of AI spreads to ever new domains, it would be good to ensure that the contributions of, for example, legal scholars and social scientists are brought into the design of AI systems.

10. Defining the common good: Ethical codes concerning AI should be overseen by a capable supervisory body, and accountability mechanisms should be refined beyond their current state. The question is how to reconcile high ethical principles with the development of fair and safe AI. For now, however, adherence to ethical principles is largely voluntary, with the common good as the central emphasis. But what actually is the common good, and who defines it? Alongside the one-sidedness of developers and the principle of accountability, ethics will remain at the core of AI development.

This article is based on research by the AI Now Institute. The original article, ”The 10 Top Recommendations for the AI Field in 2017”, can be read here.

A Brief Report on AI and Machine Learning Fairness Initiatives

This report was created by Joni Salminen and Catherine R. Sloan. Publication date: December 10, 2017.

Artificial intelligence (AI) and machine learning are becoming more influential in society, as more decision-making power is being shifted to algorithms either directly or indirectly. Because of this, several research organizations and initiatives studying fairness of AI and machine learning have been started. We decided to conduct a review of these organizations and initiatives.

This is how we went about it. First, we used our prior information about different initiatives that we were familiar with. We used this information to draft an initial list and supplemented this list by conducting Google and Bing searches with key phrases relating to machine learning or artificial intelligence and fairness. Overall, we found 25 organizations or initiatives. We analyzed these in greater detail. For each organization / initiative, we aimed to retrieve at least the following information:

  • Name of the organization / initiative
  • URL of the organization / initiative
  • Founded in (year)
  • Short description of the organization / initiative
  • Purpose of the organization / initiative
  • University or funding partner

Based on the above information, we wrote this brief report. Its purpose is to chart current initiatives around the world relating to fairness, accountability and transparency of machine learning and AI. At the moment, several stakeholders are engaged in research on this topic area, but it is uncertain how well they are aware of each other and if there is a sufficient degree of collaboration among them. We hope this list increases awareness and encounters among the initiatives.

In the following, the initiatives are presented in alphabetical order.


AI100: Stanford University’s One Hundred Year Study on Artificial Intelligence

Founded in 2016, this is an initiative launched by computer scientist Eric Horvitz and driven by seven diverse academicians focused on the influences of artificial intelligence on people and society. The goal is to anticipate how AI will impact every aspect of how people work, live and play, including automation, national security, psychology, ethics, law, privacy and democracy.  AI100 is funded by a gift from Eric and Mary Horvitz.

AI for Good Global Summit

AI for Good Global Summit was held in Geneva, 7-9 June, 2017 in partnership with a number of United Nations (UN) sister agencies. The Summit aimed to accelerate and advance the development and democratization of AI solutions that can address specific global challenges related to poverty, hunger, health, education, the environment, and other social purposes.

AI Forum New Zealand

The AI Forum was launched in 2017 as a membership funded association for those with a passion for the opportunities AI can provide. The Forum connects AI tech innovators, investor groups, regulators, researchers, educators, entrepreneurs and the public.  Its executive council includes representatives of Microsoft and IBM as well as start-ups and higher education.  Currently the Forum is involved with a large-scale research project on the impact of AI on New Zealand’s economy and society.

AI Now Institute

The AI Now Institute at New York University (NYU) was founded by Kate Crawford and Meredith Whittaker in 2017.  It’s an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence.  Its work focuses on four core domains: 1) Rights & Liberties, 2) Labor & Automation, 3) Bias & Inclusion and 4) Safety & Critical Infrastructure.  The Institute’s partners include NYU’s schools of Engineering (Tandon), Business (Stern) and Law, the American Civil Liberties Union (ACLU) and the Partnership on AI.

Algorithms, Automation, and News

AAWS is an international conference focusing on the impact of algorithms on news. Among the studied topics, the call for papers lists 1) concerns around news quality, transparency, and accountability in general; 2) hidden biases built into algorithms deciding what’s newsworthy; 3) the outcomes of information filtering such as ‘popularism’ (some content is favored over other content) and the transparency and accountability of the decisions made about what the public sees; 4) the privacy of data collected on individuals for the purposes of newsgathering and distribution; 5) the legal issues of libel by algorithm; 6) private information worlds and filter bubbles; and 7) the relationship between algorithms and ‘fake news’. The acceptance rate for the 2018 conference was about 12%. The conference is organized by the Center for Advanced Studies at Ludwig-Maximilians-Universität München (LMU) and supported by the Volkswagen Foundation and the University of Oregon’s School of Journalism and Communication. The organizers aim to release a special issue of Digital Journalism and a book, and one of them (Neil Thurman) is engaged in a research project on ’Algorithmic News’.


This research project was founded in early 2017 at the University of Turku in Finland as a collaboration of its School of Economics with the BioNLP unit of University of Turku. There are currently three researchers involved, one from social science background and two from computer science. The project studies the societal impact and risks of machine decision-making. It has been funded by Kone Foundation and Kaute Foundation.

Center for Democracy and Technology (CDT)

CDT is a non-profit organization headquartered in Washington. They describe themselves as “a team of experts with deep knowledge of issues pertaining to the internet, privacy, security, technology, and intellectual property. We come from academia, private enterprise, government, and the non-profit worlds to translate complex policy into action.” The organization is currently focused on the following issues: 1) Privacy and data, 2) Free expression, 3) Security and surveillance, 4) European Union, and 5) Internet architecture. In August 2017, CDT launched a digital decisions tool to help engineers and product managers mitigate algorithmic bias in machine decision making. The tool translates principles for fair and ethical decision-making into a series of questions that can be addressed while designing and deploying an algorithm. The questions address developers’ choices: what data to use to train the algorithm, what features to consider, and how to test the algorithm’s potential bias.

Data & Society’s Intelligence and Autonomy Initiative

This initiative was founded in 2015 and is based in New York City. It develops grounded qualitative empirical research to provide nuanced understandings of emerging technologies to inform the design, evaluation and regulation of AI-driven systems, while avoiding both utopian and dystopian scenarios. The goal is to engage diverse stakeholders in interdisciplinary discussions to inform structures of AI accountability and governance from the bottom up. I&A is funded by a research grant from the Knight Foundation’s Ethics and Governance of Artificial Intelligence Fund, and was previously supported by grants from the John D. and Catherine T. MacArthur Foundation and Microsoft Research.

Facebook AI Research (FAIR)

Facebook’s research program engages with academics, publications, open source software, and technical conferences and workshops.  Its researchers are based in Menlo Park, CA, New York City and Paris, France. Its CommAI project aims to develop new data sets and algorithms to develop and evaluate general purpose artificial agents that rely on a linguistic interface and can quickly adapt to a stream of tasks.


FATE (Fairness, Accountability, Transparency, and Ethics in AI)

This internal Microsoft group focuses on fairness, accountability, transparency, and ethics in AI and was launched in 2014. Its goal is to develop, via collaborative research projects, computational techniques that are both innovative and ethical, while drawing on the deeper context surrounding these issues from sociology, history, and science.

Good AI

Good AI was founded in 2014 as an international group based in Prague, Czech Republic dedicated to developing AI quickly to help humanity and to understand the universe. Its founding CEO Marek Rosa funded the project with $10M. Good AI’s R&D company went public in 2015 and is comprised of a team of 20 research scientists. In 2017 Good AI participated in global AI conferences in Amsterdam, London and Tokyo and hosted data science competitions.

Google Jigsaw

Jigsaw is a technology incubator focusing on geopolitical challenges. It originated from Google Ideas as a ”think/do tank” for issues at the interface of technology and geopolitics. One of Jigsaw’s projects is the Perspective API, which uses machine learning to identify abuse and harassment online. Perspective rates comments based on the perceived impact a comment might have on the conversation. Perspective can be used to give real-time feedback to commenters, help moderators sort comments more effectively, or allow readers to find relevant information. The first model of the Perspective API identifies whether a comment is perceived as ”toxic” in a discussion.
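For illustration, a request to Perspective’s comments:analyze method has roughly the following shape. The field names follow the API’s public documentation at the time of writing and should be verified against current docs; no network call, API key, or HTTP client is included in this sketch.

```python
import json

# Sketch of a request body for the Perspective API's comments:analyze
# method (shape per its public docs; verify before use). Sending it
# would require an API key and an HTTP POST, both omitted here.
def toxicity_request(comment_text: str) -> str:
    body = {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    return json.dumps(body)

payload = toxicity_request("You are wrong, and that is fine.")
print(payload)
```

The response (not shown) contains a probability-like score per requested attribute, which a moderation tool can compare against its own threshold.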

IEEE Global Initiative for Ethical Considerations in AI and Autonomous Systems

In 2016, the Institute of Electrical and Electronics Engineers (IEEE) launched a project seeking public input on ethically designed AI. In April 2017, the IEEE hosted a related dinner for the European Parliament in Brussels. In July 2017, it issued a preliminary report entitled Prioritizing Human Well Being in the Age of Artificial Intelligence. IEEE is conducting a consensus-driven standards project for “soft governance” of AI that may produce a “bill of rights” regarding what personal data is “off limits” without the need for regulation. They set up 11 different active standards groups for interested collaborators to join in 2017 and were projecting new reports by the end of the year. IEEE has also released a report on Ethically Aligned Design in artificial intelligence, part of an initiative to ensure ethical principles are considered in systems design.

Internet Society (ISOC)

The ISOC is a non-profit organization founded in 1992 to provide leadership in Internet-related standards, education, access, and policy. It is headquartered in Virginia, USA. The organization published a paper in April 2017 that explains commercial uses of AI technology and provides recommendations for dealing with its management challenges, including 1) transparency, bias and accountability, 2) security and safety, 3) socio-economic impacts and ethics, and 4) new data uses and ecosystems. The recommendations include, among others, adopting ethical standards in the design of AI products and innovation policies, providing explanations to end users about why a specific decision was made, making it simpler to understand how algorithmic decision-making works, and introducing “algorithmic literacy” as a basic skill obtained through education.

Knight Foundation’s Ethics and Governance of Artificial Intelligence Fund

The AI Fund was founded in January 2017 by the Massachusetts Institute of Technology (MIT) Media Lab, Harvard University’s Berkman-Klein Center, the Knight Foundation, Omidyar Network and Reid Hoffman of LinkedIn.  It is currently housed at the Miami Foundation in Miami, Florida.

The goal of the AI Fund is to ensure that the development of AI becomes a joint multidisciplinary human endeavor that bridges computer scientists, engineers, social scientists, philosophers, faith leaders, economists, lawyers and policymakers.  The aim is to accomplish this by supporting work around the world that advances the development of ethical AI in the public interest, with an emphasis on research and education.

In May 2017, the Berkman Klein Center at Harvard kicked off its collaboration with the MIT Media Lab on their Ethics and Governance of Artificial Intelligence Initiative focused on strategic research, learning and experimentation. Possible avenues of empirical research were discussed, and the outlines of a taxonomy emerged. Topics of this initiative include: use of AI-powered personal assistants, attitudes of youth, impact on news generation, and moderating online hate speech.

Moreover, Harvard’s Ethics and Governance of AI Fund has committed an initial $7.6M in grants to support nine organizations to strengthen the voice of civil society in the development of AI. An excerpt from their post: “Additional projects and activities will address common challenges across these core areas such as the global governance of AI and the ways in which the use of AI may reinforce existing biases, particularly against underserved and underrepresented populations.” Finally, a report of a December 2017 BKC presentation on building AI for an inclusive society has been published and can be accessed from the above link.

MIT-IBM Watson Lab

Founded in September 2017, MIT’s new $240 million center, created in collaboration with IBM, is intended to help advance the field of AI by “developing novel devices and materials to power the latest machine-learning algorithms.” This project overlaps with the Partnership on AI. IBM hopes it will help the company reclaim its reputation in the AI space. In another industry sector, Toyota made a billion-dollar investment in funding for its own AI center, plus research at both MIT and Stanford. The MIT-IBM Lab will be one of the “largest long-term university-industry AI collaborations to date,” mobilizing the talent of more than 100 AI scientists, professors, and students to pursue joint research at IBM’s Research Lab. The lab is co-located with the IBM Watson Health and IBM Security headquarters in Cambridge, MA. The stated goal is to push the boundaries of AI technology in several areas: 1) AI algorithms, 2) the physics of AI, 3) the application of AI to industries, and 4) advancing shared prosperity through AI.

In addition to this collaboration, IBM argues its Watson platform has been designed to be transparent. David Kenny, who heads Watson, said the following in a press conference: “I believe industry has a responsibility to step up. We all have a right to know how that decision was made [by AI],” Kenny said. “It cannot be a blackbox. We’ve constructed Watson to always be able to show how it came to the inference it came to. That way a human can always make a judgment and make sure there isn’t an inherent bias.”

New Zealand Law Foundation Centre for Law & Policy in Emerging Technologies

Professor Colin Gavaghan of the University of Otago heads a research centre examining the legal, ethical and policy issues around new technologies including artificial intelligence.  In 2011, it hosted a forum on the Future of Fairness.  The Law Foundation provided an endowment of $1.5M to fund the NZLF Centre and Chair in Emerging Technologies.

Obama White House Report: Preparing for the Future of Artificial Intelligence

The Obama Administration’s report on the future of AI was issued on October 16, 2016 in conjunction with a “White House Frontiers” conference focused on data science, machine learning, automation and robotics in Pittsburgh, PA. It followed a series of initiatives conducted by the WH Office of Science & Technology Policy (OSTP) in 2016. The report contains a snapshot of the state of AI technology and identifies questions that the evolution of AI raises for society and public policy.  The topics include improving government operations, adapting regulations for safe automated vehicles, and making sure AI applications are “fair, safe, and governable.”  AI’s impact on jobs and the economy was another major focus. A companion paper laid out a strategic plan for Federally funded research and development in AI. President Trump has not named a Director for OSTP, so this plan is not currently being implemented. However, lawmakers in the US are showing further interest in legislation. Rep. John Delaney (D-Md.) said in a press conference in June 2017: “I think transparency [of machine decision making] is obviously really important. I think if the industry doesn’t do enough of it, I think we’ll [need to consider legislation] because I think it really matters to the American people.” These efforts are part of the Congressional AI Caucus launched in May 2017, focused on the implications of AI for the tech industry, the economy and society overall.


OpenAI

OpenAI is a non-profit artificial intelligence research company in California that aims to develop general AI in such a way as to benefit humanity as a whole. It has received more than 1 billion USD in commitments to promote research and other activities aimed at supporting the safe development of AI. The company focuses on long-term research. Founders of OpenAI include Elon Musk and Sam Altman. The sponsors include, in addition to individuals, YC Research, Infosys, Microsoft, Amazon, and the Open Philanthropy Project. The open source contributions can be found at

PAIR: People + AI Research Initiative

This is a Google initiative that was launched in 2017 to focus on discovering how AI can augment the expert intelligence of professionals such as doctors, technicians, designers, farmers, musicians and others. It also aims to make AI more inclusive and accessible to everyone. Visiting faculty members are Hal Abelson and Brendan Meade.  Current projects involve drawing and diversity in machine learning, an open library for training neural nets, training data for models, and design via machine learning.

Partnership on AI

The Partnership was founded in September 2016 by Eric Horvitz and Mustafa Suleyman to study and formulate best practices for AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.  Partnership on AI is funded financially and supported in-kind with research by its members, including founding members Amazon, Google/DeepMind, Facebook, IBM and Microsoft.  In 2017, it expanded corporate and NGO membership, adding members such as eBay, Intel, Salesforce and the Center for Democracy & Technology (CDT).  It hired an Executive Director, Terah Lyons, and boasts independent Board members from UC Berkeley and the ACLU.  The group has had affiliation discussions with the Association for the Advancement of Artificial Intelligence (AAAI) and the Allen Institute for Artificial Intelligence. In 2016 the Partnership expressed its support for the Obama White House Report.

Rajapinta

Rajapinta is a scientific association founded in January 2017 that advocates the social scientific study of ICT and ICT applications to social research in Finland.  Its short-term goal is to improve collaboration and provide opportunities for meetings and networking, in the hope of establishing a seat at the table in the global scientific community in the longer term.  Funding sources are not readily available.

Royal Society of UK’s Machine Learning Project

The Royal Society is a fellowship of many of the world’s most eminent scientists and is currently conducting a project on machine learning (as a branch of AI), which in April 2017 produced a very comprehensive report titled Machine learning: the power and promise of computers that learn by example.  It explores everyday ways in which people interact with machine learning systems, such as in social media image recognition, voice recognition systems, virtual personal assistants and recommendation systems used by online retailers.  The grant funding for this particular project within the much larger Royal Society is unclear.

Workshop on Fairness, Accountability, and Transparency in Machine Learning

Founded in 2014, FatML is an annual two-day conference that brings together researchers and practitioners concerned with fairness, accountability and transparency in machine learning, in recognition that ML raises novel challenges around ensuring non-discrimination, due process and the explainability of institutional decision-making.  According to the initiative, corporations and governments must be supervised in their use of algorithmic decision making.  FatML makes current scholarly resources on related subjects publicly available. The conference is funded in part by registration fees and possibly subsidized by corporate organizers such as Google, Microsoft and Cloudflare. Their August 2017 event was held in Halifax, Nova Scotia, Canada.

World Economic Forum (WEF)

WEF released a blog post in July 2017 on the risks of algorithmic decision making to civil rights, mentioning US law enforcement’s use of facial recognition technology, among other examples. The post argues humans are facing “algorithmic regulation”, for example in public entitlements or benefits. It cites self-reinforcing bias as one of the five biggest problems with allowing AI into the government policy arena. In September 2017, the WEF released another post suggesting that a Magna Carta (“charter of rights”) for AI is needed; this essentially refers to commonly agreed upon rules and rights for both individuals and wielders of algorithm-based decision making authority. According to the post, the foundational elements of such an agreement include making sure AI creates jobs for all, rules dealing with machine-curated news feeds and polarization, rules avoiding discrimination and bias in machine decision making, and safeguards for ensuring personal choice without sacrificing privacy for commercial efficiency.


From the above, we can conclude three points. First, stakeholders at many levels around the world have been mobilized to study the societal impact of machine decision making, as shown by the multitude of projects. On the research side, there are several recently founded research projects and conferences (e.g., AAWS, FatML). In a similar vein, industry players such as IBM, Microsoft and Facebook are showing commitment to solving the associated challenges on their platforms. Moreover, policy makers are investigating the issues as well, as shown by the Obama administration’s report and the new Congressional AI Caucus.

Second, in addition to being of interest to different stakeholders, the topic also involves a considerable number of perspectives, including but not limited to aspects of computer science, ethics, law, politics, journalism and economics. Such a degree of cross-sectoral, multidisciplinary effort is not common for research projects, which often focus on a narrower field of expertise; it may therefore be more challenging to produce solutions that are both theoretically sound and practically functional.

Third, there seems to be considerable overlap between the initiatives mentioned here: many of them focus on solving the same problems, but it is unclear how aware the initiatives are of each other, and whether a centralized research agenda with shared or jointly allocated resources might help achieve results faster.


Notice an initiative or organization missing from this report? Please send information to Dr. Joni Salminen:

9 ethical problems in artificial intelligence

Artificial intelligence is about more than technological progress. It is now widely recognized that upholding human values and managing risks are central concerns for AI systems. Technology companies such as Google’s parent company Alphabet, Amazon, Facebook, IBM, Nokia and Microsoft, as well as several well-known opinion leaders in science and technology, such as Stephen Hawking, Elon Musk and Bill Gates, believe that now is the best time to discuss not only the opportunities of AI but also its dark sides and potential risks.

It is therefore essential to work through the ethical and topical questions surrounding AI. This blog article presents nine societal risks of artificial intelligence.

  1. Unemployment. What happens when the jobs run out?

The general trend is that automation pushes people away from routine industrial work toward tasks of higher added value. In the transportation industry, for example, there are many autonomous driving trials around the world that also prompt reflection on the ethical side of automation. If autonomous driving can radically reduce traffic accidents, or the sinking of vessels in shipping due to human error, and the cost of this is the loss of human jobs, can the outcome be considered ethical on balance?

On the other hand, jobs are also a question of how we use our time. Has work played too central a role in human history? Automation and AI may offer people the opportunity to find a purpose in life other than work. Beyond this question, it has been suggested in connection with our algorithm research that the displacement of jobs is a temporary problem that has recurred many times throughout history.

  2. Inequality. How should the wealth generated by AI be distributed in society?

The prevailing economic system is based on compensating people for their time and abilities, most often valued in the form of an hourly wage. In the future, however, companies can use AI to significantly reduce their dependence on human labor. As a consequence, the individuals who hold the largest stakes in AI-focused companies will collect the largest profits. Will the world thus divide ever more sharply into winners and losers?

The growth of economic inequality is already visible, for example, among startup founders, who capture the majority of the economic value they create. In 2014, for instance, the three largest companies in Detroit generated roughly the same revenue in the US as the three largest Silicon Valley startups, with the difference that the latter employed roughly ten times fewer people.

If we want to develop a ”post-work” society, how do we create a fair economic system to go with it? This old Marxist question about the ownership of the means of production, and the distribution of the wealth they generate, thus raises its head in the context of AI as well. Universal basic income has been proposed as one solution; on the other hand, it involves several problems, such as the idleness problem of socialist economies.

  3. Humanity. How does AI affect our behavior?

AI-based bots are becoming ever more sophisticated at modeling human interaction. In 2014, a bot named Eugene Goostman was claimed to have passed the Turing test for the first time, convincing a third of its human judges that they were talking to a real person.

This milestone is unlikely to remain the last. Even now, many companies use chatbots in their customer service (in Finland, for example, the insurance company IF’s ”Emma”). Humans have a limited capacity to show attention and care to one another, whereas bots can invest in these without limit, which may make them more ideal customer servants in some respects. It remains questionable, however, how effectively bots can solve complex customer problems and, above all, show the empathy that is so important in customer service work.

In any case, we can already see how AI manages to activate the reward systems of our brains. Clickbait headlines, for example, which are designed to test how well different messages capture our interest, spread efficiently through the news feed algorithms of social media. Similar techniques are used, among other things, to make video games addictive. Although in these examples a human actually creates the content, the machine can pick the most engaging piece of content out of all the alternatives.

  4. Artificial stupidity. How can we prevent mistakes?

Intelligence is formed through learning, whether the learner is a human or an AI. Systems typically have a training phase in which they are ”taught” to detect the right patterns and to act on the input they are given. Once a system’s training phase is complete, it can be moved to a more demanding test phase and then released into public use.

Systems can, however, be fooled in ways that would not mislead a human. For example, random dot patterns can lead an AI to perceive something that does not actually exist. If we want to place ever more trust in AI in the future, we must make sure that it works as designed and that people cannot take it over to advance their own interests. So-called blackhat search engine optimization, i.e. the artificial manipulation of Google search results, and the spreading of fake news are examples of manipulating algorithm-based systems, with potentially large-scale societal consequences. Although people know, on the one hand, that not everything they encounter online can be believed, drifting into an information bubble and the polarization that follows are surprisingly common regardless of education level.
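How easily a model can be fooled by inputs a human would consider unchanged can be illustrated with a minimal sketch. The toy linear classifier and all numbers below are hypothetical; real adversarial attacks apply the same idea, a small perturbation in the direction of the model's gradient, to deep networks:

```python
import numpy as np

# Toy linear classifier: predicts class 1 if w . x > 0, else class 0.
w = np.array([0.4, -0.3, 0.2])
x = np.array([0.1, 0.5, 0.3])

# FGSM-style perturbation: nudge every input dimension slightly
# in the direction that increases the classifier's score.
eps = 0.1
x_adv = x + eps * np.sign(w)

print(np.dot(w, x) > 0)      # original input falls on the class-0 side
print(np.dot(w, x_adv) > 0)  # the barely-changed input flips to class 1
```

The perturbation is bounded by eps per dimension, which is why such inputs can look indistinguishable from the original to a human observer.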

  5. Robots and discrimination. How do we eliminate AI bias?

Although AI is capable of speed and performance beyond human reach, it cannot be trusted to always be fair and neutral. For example, software recently used to predict future criminals showed prejudice against black people. It is important to remember that AI systems are created by humans, and humans can be biased and judgmental, or at the very least fallible. In fact, an algorithm could in theory mitigate the harmful sides of human nature, since a machine’s decision-making process involves no deliberate dishonesty.

  6. Security. How do we keep AI safe from malicious actors?

As AI develops, every country in the world, including Finland, wants its share of the benefits. Whether it is a matter of robots intended to replace human soldiers, autonomous weapons systems, or AI systems in general, AI can be put to many kinds of purposes. Future conflicts will not be fought only on the ground, as remotely operated drones, for example, have already shown. Cybersecurity will therefore become an important part of the AI debate as well.

  7. ”Evil” genies. How do we guard against unintended consequences?

What if AI turned against humanity? This would not necessarily mean ”evil” in the sense in which a human would act, or in the way AI-caused catastrophes are portrayed in dystopian films. Rather, one can imagine a situation in which an advanced AI becomes a ”genie in a lamp” that can fulfill wishes and commands, but with terrifying consequences.

In that case, the issue may not be malice but misunderstanding and a failure to grasp context. Imagine, for example, an AI system commanded to eradicate cancer from the world. After processing the task, it derives a formula that does in fact eradicate cancer: by destroying all humans. The goal is fulfilled, but hardly in the way people intended.

  8. Singularity. How can we stay in control of complex AI systems?

Humanity’s dominance on this planet is based on ingenuity and intelligence. We get by better than bigger, faster and stronger animals because we are able to control and condition them.

But will AI one day have the same advantage over us? We cannot count on simply pressing a button to shut it down, because a sufficiently advanced AI system will anticipate such a move and defend itself. This is what some researchers call the singularity, which can be defined as the point in time at which humans are no longer the most intelligent beings on Earth.

  9. Robot rights. How do we define the humane treatment of AI?

Do robots have rights? Although neuroscientists are still working to uncover the secrets of consciousness, we already understand many of its basic mechanisms, such as the systems of reward and aversion. In a sense, we are now raising AI with a carrot and a stick. For example, the reinforcement of an algorithm’s learning is rewarded in much the same way as when training a dog: learning is reinforced with a virtual reward. At the moment these mechanisms are still in their infancy, but they will become more complex and closer to everyday life as the technology develops.

Once we perceive AI as an entity that can form impressions, feel, and act on those feelings, pondering its legal status is no longer far-fetched. Should AI be treated like an animal, or like an intelligence equal to our own? What do we make of the levels of suffering of an AI capable of emotion? Genetic algorithms, i.e. evolutionary algorithms used to develop a system by searching for the best-surviving iteration while the poorly performing ones are deleted, also pose complicated ethical problems. At what point do we begin to regard genetic algorithms as mass murder? At present, the common dividing line in these questions is consciousness: as long as a robot or machine is not self-aware in the same sense as a human, cannot feel pain, and cannot act independently, it is not considered a being with intrinsic value.
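The survive-or-be-deleted mechanism of a genetic algorithm can be made concrete with a minimal sketch. The fitness function here is hypothetical (maximize the number of ones in a bit string); the point is the selection step, where the fittest candidates survive each generation and the rest are simply discarded:

```python
import random

random.seed(0)

def fitness(candidate):
    # Hypothetical fitness: number of ones in the bit string.
    return sum(candidate)

# Start from 20 random 8-bit candidate solutions.
population = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]

for generation in range(30):
    # Selection: keep the best half, delete the poorly performing half.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Refill the population by mutating survivors (flip one random bit).
    offspring = []
    for parent in survivors:
        child = parent[:]
        i = random.randrange(len(child))
        child[i] = 1 - child[i]
        offspring.append(child)
    population = survivors + offspring

print(max(fitness(c) for c in population))  # best candidate converges toward 8
```

Because the best half always survives unchanged, the top fitness never decreases; everything else in each generation is erased, which is exactly the practice the ethical question above is about.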


The AI risks described above deserve serious consideration, and their ethical dimensions must be examined with care. AI has enormous potential, and it is society’s task to harness that potential to improve everyone’s life in the best possible way. This requires broad collaboration between researchers and industry.


This article is based on the World Economic Forum’s overview of the ethical problems of artificial intelligence. The original article, ”Top 9 ethical issues in artificial intelligence”, can be read here.

What jobs are safe from AI?

There is enormous concern about machine learning and AI replacing human workers. However, according to several economists, and according to past experience reaching all the way back to the industrial revolution of the 18th century (which caused major distress at the time), the replacement of human workers is not permanent: new jobs will emerge to replace the ones that are lost (as postulated by the Schumpeterian hypothesis). In this post, I will briefly share some ideas on which jobs are relatively safe from AI, and on how an individual member of the workforce can improve his or her chances of staying competitive in the job market of the future.

“Insofar as they are economic problems at all, the world’s problems in this generation and the next are problems of scarcity, not of intolerable abundance. The bogeyman of automation consumes worrying capacity that should be saved for real problems . . .” -Herbert Simon, 1966

What jobs are safe from AI?

The ones involving:

  1. creativity – a machine can ”draw” and ”compose”, but it cannot develop a business plan.
  2. interpretation – even in law, which is codified in most countries, lawyers use judgment and interpretation; this cannot be replaced as things currently stand.
  3. transaction costs – robots could conduct a surgery, and even evaluate beforehand whether surgery is needed, but in between you need people to explain things, to prepare the patients, and so on. Most service chains require a lot of mobility and communication, i.e. transaction costs, that have to be handled by people.

How to avoid losing your job to AI?

Make sure your skills are complementary to automation, not a substitute for it. For example, if you have great copywriting skills, there has never been a better time to be a marketer, as digital platforms enable you to reach all your audiences with a few clicks. The machine cannot write compelling ads, so your skills are complementary. Increased automation does not reduce the need for creativity; it amplifies it.

If machines were to learn to be creative in a meaningful way (which, realistically speaking, is far, far away), then you would move on to some other complementary task.

The point is: there is always some part of the process you can complement.

Fear not. Machines will not take all human jobs, because not all human jobs exist yet. Machines and software will take care of some parts of service chains, even to a great extent, but in fact this will enhance the functioning of the whole chain, and of human labor as well (consider the amplification example of online copywriting). New jobs that we cannot yet envision will be created, as needs and human imagination keep evolving.

The answer lies in creative destruction: people will not stop coming up with things to offer because of machines, and other people will not stop wanting those things because of machines. Jobs will remain in the era of AI as well. The key is not to complain about someone taking your job, but to think of other things to offer and to develop your personal competences accordingly. Even if you won’t, the next guy will. There’s no stopping creativity.

Read more:

  • Scherer, F. M. (1986). Innovation and Growth: Schumpeterian Perspectives (MIT Press Books). The MIT Press.
  • Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. The Journal of Economic Perspectives, 29(3), 3–30.

Research agenda for ethics and governance of artificial intelligence

The ethics of machine learning algorithms has recently been raised as a major research concern. Earlier this year (2017), a $27M USD fund was launched to support research on the societal challenges of AI. The group behind the fund includes e.g. the Knight Foundation, the Omidyar Network, and the startup founder and investor Reid Hoffman.

As stated on the fund’s website, the fund will support a cross-section of AI ethics and governance projects and activities, both in the United States and internationally. They advocate cross-disciplinary research between e.g. computer scientists, social scientists, ethicists, philosophers, economists, lawyers and policymakers.

The fund lays out a list of areas they’re interested in funding. The list can be seen as a sort of a research agenda. The items are:

  • Communicating complexity: How do we best communicate, through words and processes, the nuances of a complex field like AI?
  • Ethical design: How do we build and design technologies that consider ethical frameworks and moral values as central features of technological innovation?
  • Advancing accountable and fair AI: What kinds of controls do we need to minimize AI’s potential harm to society and maximize its benefits?
  • Innovation in the public interest: How do we maintain the ability of engineers and entrepreneurs to innovate, create and profit, while ensuring that society is informed and that the work integrates public interest perspectives?
  • Expanding the table: How do we grow the field to ensure that a range of constituencies are involved with building the tools and analyzing social impact?

As can be seen, the agenda emphasizes the big question: How can we maintain the benefits of the new technologies while making sure that their potential harm is minimized? To answer this question, a host of studies and perspectives is definitely needed. Read here a list of other initiatives working on the societal issues of AI and machine learning.

Ethics and Governance of Artificial Intelligence Initiative

I read about this initiative at Harvard’s website and thought of sharing it:

About the Ethics and Governance of Artificial Intelligence Initiative

Artificial intelligence and complex algorithms, fueled by the collection of big data and deep learning systems, are quickly changing how we live and work, from the news stories we see, to the loans for which we qualify, to the jobs we perform. Because of this pervasive impact, it is imperative that AI research and development be shaped by a broad range of voices—not only by engineers and corporations—but also social scientists, ethicists, philosophers, faith leaders, economists, lawyers, and policymakers.
To address this challenge, several foundations and funders recently announced the Ethics and Governance of Artificial Intelligence Fund, which will support interdisciplinary research to ensure that AI develops in a way that is ethical, accountable, and advances the public interest. The Berkman Klein Center and the MIT Media Lab will act as anchor academic institutions for this fund and develop a range of activities, research, tools, and prototypes aimed at bridging the gap between disciplines and connecting human values with technical capabilities. They will work together to strengthen existing and form new interdisciplinary human networks and institutional collaborations, and serve as a collaborative platform where stakeholders working across disciplines, sectors, and geographies can meet, engage, learn, and share.

Read more:

Feature analysis for detecting algorithmic bias

Feature analysis could be employed for bias detection when evaluating the procedural fairness of algorithms. (This is an alternative to the ”Google approach”, which emphasizes the evaluation of outcome fairness.)

In brief, feature analysis reveals how much each feature (i.e., variable) influenced the model’s decision. For example, see the following quote from Huang et al. (2014, p. 240):

”All features do not contribute equally to the classification model. In many cases, the majority of the features contribute little to the classifier and only a small set of discriminative features end up being used. (…) The relative depth of a feature used as a decision node in a tree can be used to assess the importance of the feature. Here, we use the expected fraction of samples each feature contributes to as an estimate of the importance of the feature. By averaging all expected fraction rates over all trees in our trained model, we could estimate the importance for each feature. It is important to note that feature spaces among our selected features are very diverse. The impact of the individual features from a small feature space might not beat the impact of all the aggregate features from a large feature space. So apart from simply summing up all feature spaces within a feature (i.e. sum of all 7,057 importance scores in hashtag feature), which is referred to as un-normalized in Figure 4, we also plot the normalized relative importance of each features, where each feature’s importance score is normalized by the size of the feature space.”

They go on to visualize the impact of each feature (see Figure 1).

Figure 1  Feature analysis example (Huang et al., 2014)

As you can see, this approach seems excellent for probing the impact of each feature on the model’s decision making. The impact of sensitive features, such as ethnicity, can be detected. Although this approach may be useful for supervised machine learning, where the data is clearly labelled, the applicability to unsupervised learning might be a different story.
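As a concrete sketch of this kind of probing, the importances of a tree ensemble can be read off directly with scikit-learn. The data below is synthetic and the "gender" column, the bias pattern, and all numbers are hypothetical, for illustration only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
n = 1000
income = rng.normal(50, 10, n)
gender = rng.randint(0, 2, n)  # stand-in for a sensitive feature

# Hypothetical biased outcome: the decision partly leaks the sensitive feature.
y = ((income + 15 * gender) > 60).astype(int)
X = np.column_stack([income, gender])

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

for name, importance in zip(["income", "gender"], model.feature_importances_):
    print(f"{name}: {importance:.2f}")
# A non-trivial importance for "gender" flags that the model's decisions
# depend on the sensitive attribute, warranting closer inspection.
```

Note that impurity-based importances such as these tend to favor high-cardinality features, so for a low-cardinality sensitive attribute a permutation-importance check is a useful complement.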


Huang, W., Weber, I., & Vieweg, S. (2014). Inferring Nationalities of Twitter Users and Studying Inter-National Linking. ACM HyperText Conference. Retrieved from