Short argument: when talking about automation, it’s important to define what level we mean. There are at least three levels:
- automation within a professional task = this level deals with understanding how automation impacts people’s work, e.g., replacing them or boosting their productivity (both the good and the bad)
- automation within an organization = this level deals with understanding how automation shapes organizations, e.g., what new skills are needed, what the impact is on overall productivity and quality of outputs, what new challenges emerge, etc.
- automation within a value chain / network = this level takes a holistic view of automation, inspecting its impact on a cluster or network of organizations and social systems rather than at the micro level. The questions are similar to those at the other levels, but they are analyzed for the whole, considering inter-organizational dynamics.
Depending on the level, the implications, recommendations, best practices, and societal impacts differ.
Notes from a CHI2018 workshop:
- creativity is not an excuse for ignorance
- software doing part of your work might be more work
- errors from the developers cascade through the system
- never trust the marketing aspect of automation
- frictions in the promise of automation
- automation surprises
- automation was expected to behave more consistently
- automation has an intended use — but people may use it differently: they revert to the easiest choice
- automation is already everywhere
- automation is inside the interaction technique
- automation is old
- ”automation — this word is useless!”
- transparency is a good property of automation
- notation of automation
- automation is not always good for humans – especially in the case of security
- you need to spend a lot of time on training because automation is not what was expected
- levels of automation are intertwined
Some of these are pretty good insights.
In social media sampling, there are many issues. Two of them are: 1) the silent majority problem and 2) the grouping problem.
The former refers to the imbalance between participants and spectators: can we trust that the vocal few represent the views of all?
The latter means that people of similar opinions tend to flock together, meaning that looking at one online community or even social media platform we can get a biased understanding of the whole population.
Solving these problems is hard, and requires understanding of the online communities, their polarity, sociology and psychology driving the participation, and the functional principles of the algorithms that determine visibility and participation in the platforms.
Prior knowledge of the online communities can be used as a basis for stratified sampling, which can be a partial remedy.
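To make this concrete, here is a minimal Python sketch of community-stratified sampling. The community names, post counts, and population shares are invented for illustration; in practice the shares would come from surveys or other prior knowledge about the population:

```python
import random

def stratified_sample(posts_by_community, population_shares, n):
    """Draw a sample of n posts so that each community is represented in
    proportion to its (estimated) share of the population, rather than
    its share of the observed posts."""
    sample = []
    for community, posts in posts_by_community.items():
        share = population_shares.get(community, 0)
        quota = round(n * share)
        # If a community has fewer posts than its quota, take what exists;
        # the shortfall itself is a signal of the silent-majority problem.
        sample.extend(random.sample(posts, min(quota, len(posts))))
    return sample

# Hypothetical data: vocal communities produce most of the posts,
# but the population shares tell a different story.
posts_by_community = {
    "forum_a": [f"a{i}" for i in range(900)],
    "forum_b": [f"b{i}" for i in range(80)],
    "forum_c": [f"c{i}" for i in range(20)],
}
population_shares = {"forum_a": 0.2, "forum_b": 0.5, "forum_c": 0.3}

print(len(stratified_sample(posts_by_community, population_shares, 100)))
```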
Web 2.0 was about all the pretty, shiny things of social media: user-generated content, blogs, customer participation, ”everyone has a voice,” etc. Now, Web 3.0 is all about the dark side: algorithmic bias, filter bubbles, group polarization, flame wars, cyberbullying, etc. We discovered that maybe everyone should not have a voice after all, or at least that the voice should be used with more attention to what one is saying.
While it is tempting to blame Facebook, the media, or ”technology” for all this (just as it is easy to praise them for the other things), the truth is that individuals should accept more responsibility for their own behavior. Technology provides platforms for communication and information, but it does not generate communication and information; people do.
In consequence, I’m very skeptical about technological solutions to the Web 3.0 problems; they seem not to be technological problems but social ones, requiring primarily social solutions and secondarily hybrid solutions. We should start respecting the opinions of others, get educated about different views, and learn how to debate based on facts and on finding fundamental differences, not by resorting to argumentation fallacies. Here, machines have only limited power – it’s up to us to re-learn these things and keep teaching them to new generations. It’s quite pitiful that even though our technology is 1000x better than in Ancient Greece, our ability to debate properly is a tenth of what it was 2,000 years ago.
Avoiding enslavement by machines requires going back to the basics of humanity.
Did you ever want to climb Mount Everest?
If you did, you would have to split such a goal into many tasks: You would first need to find out what resources are needed for it, who could help you, how to prepare mentally and physically, etc. You would come up with a list of tasks that, in a sequence, form your plan of achieving the goal.
The same logic applies to all goals we humans have, both in companies and in private life, and it also applies when evaluating which tasks, given a goal, can be outsourced to machine decision making.
The best way to conduct such an analysis is to view organizational goals as sequences of inter-related job tasks, and then evaluate which particular sub-tasks humans are best at handling, and vice versa:
- Define the end goal (e.g., launch a marketing campaign)
- Define the steps needed to achieve that goal (strategy) (e.g., decide targeting, write ads, define budget, optimize spend)
- Divide each step into sub-tasks (e.g., decide targeting: analyze past campaigns, analyze needs from social media)
- Evaluate (e.g., on a scale of 1-5) how well machine and human perform in each sub-task (e.g., write ads: human = 5, machine = 1)
- Look at the entire chain and identify points of synergy, where the machine can be used to enhance human work or vice versa (e.g., analyze social media with supervised machine learning where crowd workers tag tweets); a minimal scoring sketch follows this list.
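Below is a minimal sketch of the scoring step in Python. The sub-tasks and the 1-5 ratings are invented for illustration; the point is only to show how a simple rule over the two scores can flag candidates for human, machine, or synergy handling:

```python
# Sub-tasks are rated 1-5 for human and machine performance; synergy
# candidates are sub-tasks where the two scores are close.
subtasks = {
    "decide targeting":       {"human": 4, "machine": 3},
    "write ads":              {"human": 5, "machine": 1},
    "define budget":          {"human": 4, "machine": 2},
    "optimize spend":         {"human": 2, "machine": 5},
    "analyze past campaigns": {"human": 3, "machine": 5},
}

for task, score in subtasks.items():
    if score["machine"] >= score["human"]:
        owner = "machine (human reviews output)"
    elif score["machine"] >= score["human"] - 1:
        owner = "synergy: machine assists human"
    else:
        owner = "human"
    print(f"{task:25s} -> {owner}")
```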
We find, by applying such logic, that there are plenty of tasks in organizational workflows that currently cannot be outsourced to machines, for a variety of reasons. Sometimes the reasons relate to manual processes, i.e., the overall context does not support carrying out the tasks optimally. An example: currently, I manually download receipts from a digital marketing service account. I have to log in, retrieve the receipts as PDF files, and then send them as email attachments to book-keeping. Ideally, the book-keeping system would retrieve the receipts automatically via an application programming interface (API), eliminating this unnecessary piece of human labor.
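As a sketch of what that automated hand-off could look like, here is a short Python example. The endpoint, token, and response fields are entirely hypothetical, since the ad platform in question may not expose such an API at all:

```python
import requests

# Hypothetical endpoint and token: this only sketches what an automated
# receipt retrieval for book-keeping could look like.
API_URL = "https://ads.example.com/api/v1/receipts"
TOKEN = "replace-with-real-token"

resp = requests.get(API_URL,
                    headers={"Authorization": f"Bearer {TOKEN}"},
                    params={"month": "2017-11"}, timeout=30)
resp.raise_for_status()

for receipt in resp.json():  # assumed to be a list of {"id": ..., "pdf_url": ...}
    pdf = requests.get(receipt["pdf_url"], timeout=30)
    with open(f"receipt_{receipt['id']}.pdf", "wb") as f:
        f.write(pdf.content)
    # In a real pipeline, the file would be pushed straight into the
    # book-keeping system instead of being saved and e-mailed by hand.
```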
At the same time, we should a) work to remove unnecessary barriers to work automation where it is feasible, while b) thinking of ways to get optimal synergy from human and machine work inputs. This is not about optimizing individual work tasks but about optimizing entire workflows toward reaching a specific goal. At the moment, little research and attention is paid to this kind of comprehensive planning, which I call ”workflow engineering”.
The problem with online discussions and communities is that the extreme poles draw people effectively, causing group polarization in which a person’s original opinion becomes more radical due to the influence of the group. In Finnish, we have a saying, ”in a group, stupidity concentrates” (joukossa tyhmyys tiivistyy).
Here, I’m exploring the idea that this effect, namely the growth of polar extremes (for example, being for or against immigration, as many European citizens currently are), arises simply because people lack options to identify with. There are only the extremes, but no neutral or moderate group, even though, as I argue here, most people are in fact moderate and understand that extremes and absolutes are misleading simplifications either way.
In other words, when there are only two ”camps” of opinion, people are more easily split between them. However, my argument is that people have preferences that correspond to being in the middle, not in the extremes.
These preferences remain hidden because there are only two camps to subscribe to: One cannot be moderate because there is no moderate group.
For example, there are liberals and conservatives, but what about the people in the middle? What about those who share some ideas of the liberals and others of the conservatives? By having only these two groups, other combinations become socially impossible, because people are, again socially, pressured to adopt all the opinions of the group they subscribe to, even if they do not agree with a particular view. This effect has been studied in relation to the concept of groupthink, but no permanent remedy has been found.
How to solve the problem of extremes?
My idea is simple: we should start more camps, more views to subscribe to, especially those representing moderate views.
The argument is that with a greater supply of camps, people will distribute more evenly among them, and as a consequence we get less polarization.
This is illustrated in the picture (sketched quickly in Paint when inspiration struck).
In (A), public discourse is dominated by the extremes (the distribution of attention is skewed toward the extremes of a given opinion spectrum). In (B), the distribution is focused on the center of the opinion spectrum (=moderate views) while the extremes are marginalized (as they should be, according to the assumption of moderate majority).
An example: having several political parties results in more diverse views being presented. In the US, you are either a Democrat or a Republican (although, it must be noted, there are the marginal Green Party and the progressives), but in Finland you can also be many other things: Center Party, National Coalition Party, or Green Party, for example. The same applies to most countries in Europe. Although I don’t have facts for this, it seems that the public discourse in the US is exceptionally polarized compared to many other countries.
Giving the moderate ”silent majority” more choices to identify with, and thereby revealing the ”true” opinions of citizens, would ideally marginalize both extremes, avoiding the tyranny of the minority that currently dominates public discourse.
Finally, all this could be formalized in game theory by assuming heterogeneity of preferences over the opinion spectrum and parameters such as gravity (the ”pull factor” of the extremes), justifiable e.g. by the media attention given to extreme views over moderate ones. But the implication remains the same: diversity of camps reduces polarization under this set of assumptions (the toy simulation below illustrates the intuition).
Of course, there are other reasons, such as the media taking political sides.
This means extreme views are not representative of the whole population (which is more moderate than either extreme), but they get disproportionate attention in the media and public discourse. This is because the majority views are hidden; they would need to be revealed.
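Here is the toy simulation referred to above. It encodes the stated assumptions directly: latent opinions are drawn from a moderate-majority distribution, people join the camp nearest to their opinion, and a gravity parameter occasionally pulls them to the nearest extreme. The numbers are illustrative, not calibrated to any data:

```python
import random
import statistics

def simulate(camps, n_people=10000, gravity=0.3, seed=1):
    """People hold latent opinions centered on 0.5 (moderate majority).
    Each person joins the camp closest to their opinion, except that with
    probability `gravity` they are pulled to the nearest extreme camp
    (media attention, louder voices). Returns the spread of adopted positions."""
    random.seed(seed)
    extremes = (min(camps), max(camps))
    adopted = []
    for _ in range(n_people):
        opinion = min(max(random.gauss(0.5, 0.15), 0.0), 1.0)
        if random.random() < gravity:
            camp = min(extremes, key=lambda c: abs(c - opinion))
        else:
            camp = min(camps, key=lambda c: abs(c - opinion))
        adopted.append(camp)
    return statistics.pstdev(adopted)

print("two camps:  spread =", round(simulate([0.0, 1.0]), 3))
print("five camps: spread =", round(simulate([0.0, 0.25, 0.5, 0.75, 1.0]), 3))
```

With only two camps, roughly half of the moderate majority ends up at each extreme; adding moderate camps visibly shrinks the spread of adopted positions.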
The ambiguity problem illustrated:
User: ”Siri, call me an ambulance!”
Siri: ”Okay, I will call you ’an ambulance’.”
You’ll never reach the hospital, and end up bleeding to death.
Two potential solutions come to mind:
A. machine builds general knowledge (”common sense”)
B. machine identifies ambiguity & asks for clarification from humans (reinforcement learning)
The whole ”common sense” problem can be solved by introducing human feedback into the system. We really need to tell the machine what is what, just as we do with a child. This is iterative learning, in which trial and error take place. However, it is better than trying to map an inescapably finite dataset onto a practically infinite space of meanings.
But, in fact, A and B converge by doing so. Which is fine, and ultimately needed.
To determine which solution to an ambiguous situation is proper, the machine needs contextual awareness; this can be achieved by storing contextual information from each ambiguous situation and by being told ”why” a particular piece of information resolves the ambiguity. It is not enough to say ”you’re wrong”; there needs to be an explicit association with a reason (a concept, a variable). Equally, it is not enough to say ”you’re right”; again, the same association is needed.
1) try something
2) get told it’s not right, and why (linking to contextual information)
3) try something else, corresponding to why
4) get rewarded, if it’s right (a minimal sketch of this loop follows below).
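A minimal sketch of this loop in Python, with invented names and a toy association store keyed on (phrase, context) pairs; real systems would of course need far richer context representations:

```python
class DisambiguationAgent:
    """Steps 1-4: try an interpretation, get told why it was wrong,
    store the reason-linked correction, and get it right next time."""
    def __init__(self):
        self.reasons = {}  # (phrase, context) -> correct interpretation

    def interpret(self, phrase, context):
        return self.reasons.get((phrase, context), "literal reading")

    def feedback(self, phrase, context, correct, reason):
        # The trainer does not just say "wrong": the correction is tied
        # to an explicit reason and stored as an association.
        print(f"learned: in context '{context}', '{phrase}' means '{correct}' because {reason}")
        self.reasons[(phrase, context)] = correct

agent = DisambiguationAgent()
print(agent.interpret("call me an ambulance", "user is injured"))  # step 1: wrong guess
agent.feedback("call me an ambulance", "user is injured",
               "dial emergency services", "injury implies a request for help")  # steps 2-3
print(agent.interpret("call me an ambulance", "user is injured"))  # step 4: correct next time
```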
The problem is, currently machines are being trained by data, not by human feedback.
New thinking: Training AI pets
So we would need to build machine-training systems that enable training by direct human feedback, i.e., a new way to teach and communicate with the machine. This is not a trivial thing, since the whole machine-learning paradigm is based on data, not meanings. From data and probabilities, we would need to move to associations and concepts that capture social reality. A new methodology is needed. Potentially, individuals could train their own AIs like pets (think of having your own ”AI pet”, like a Tamagotchi), or we could use large numbers of crowd workers who would explain to the machine why things are the way they are (i.e., create associations). A specific type of markup (= communication with the machine) would probably also be needed, although conversational UIs would most likely be the best solution.
Through mimicking human learning we can teach the machine common sense. This is probably the only way; since common sense does not exist beyond human cognition, it can only be learnt from humans. An argument can be made that this is like going back in time, to an era when machines followed rule-based programming (as opposed to being data-driven). However, I would argue that rule-based learning is much closer to human learning than the current probability-based one, and if we want to teach common sense, we therefore need to adopt the human way.
Machine learning may be on par with human learning, but machine training certainly is not. The current machine-learning paradigm is data-driven, whereas we could look into concept-driven AI training approaches. Essentially, this is something like reinforcement learning for concept maps.
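As a rough illustration of what ”reinforcement learning for concept maps” could mean, here is a toy concept-association store whose edge weights are strengthened or weakened by human feedback. All names and thresholds are invented:

```python
from collections import defaultdict

class ConceptMap:
    """Concept associations with weights that human feedback adjusts,
    roughly in the spirit of a reward signal."""
    def __init__(self, step=0.1):
        self.weights = defaultdict(float)  # (concept_a, concept_b) -> strength
        self.step = step

    def reinforce(self, a, b, reward):
        # reward = +1 ("you're right, a relates to b")
        # reward = -1 ("you're wrong, a does not imply b")
        self.weights[(a, b)] += self.step * reward

    def related(self, a, threshold=0.2):
        return [b for (x, b), w in self.weights.items() if x == a and w >= threshold]

cm = ConceptMap()
for _ in range(3):
    cm.reinforce("ambulance", "emergency", +1)
cm.reinforce("ambulance", "nickname", -1)
print(cm.related("ambulance"))  # ['emergency']
```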
Introduction. Hal Daumé wrote an interesting blog post about language bias and the black sheep problem. In the post, he defines the problem as follows:
The ”black sheep problem” is that if you were to try to guess what color most sheep were by looking at language data, it would be very difficult for you to conclude that they weren’t almost all black. In English, ”black sheep” outnumbers ”white sheep” about 25:1 (many ”black sheep”s are movie references); in French it’s 3:1; in German it’s 12:1. Some languages get it right; in Korean it’s 1:1.5 in favor of white sheep. This happens with other pairs, too; for example ”white cloud” versus ”red cloud.” In English, red cloud wins 1.1:1 (there’s a famous Sioux named ”Red Cloud”); in Korean, white cloud wins 1.2:1, but four-leaf clover wins 2:1 over three-leaf clover.
Thereafter, Hal accurately points out:
”co-occurrence frequencies of words definitely do not reflect co-occurrence frequencies of things in the real world.”
But the mistake made by Hal is to assume language describes objective reality (”the real world”). Instead, I would argue that it describes social reality (”the social world”).
Black sheep in social reality. The higher occurrence of ’black sheep’ tells us that in social reality there is a concept called ’black sheep’ which is more common than the concept of a white (or any other color) sheep. People use that concept not to describe sheep, but as an abstract concept that in fact describes other people (”she is the black sheep of the family”). Then we can ask: Why is that? In what contexts is the concept used? And we can try to teach the machine its proper use through associations of that concept with other contexts (much as we teach kids when it is appropriate to say something and when it is not). As a result, the machine may create a semantic web of abstract concepts which, if not leading to it understanding them, at least helps guide its usage of them.
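A small sketch of the underlying counting phenomenon, using an invented toy corpus: the bigram counts reflect how people use the phrase, not the colors of actual sheep:

```python
from collections import Counter

# Toy corpus: most mentions of "black sheep" are figurative, so the raw
# bigram ratio says nothing about the colors of real-world sheep.
corpus = [
    "she is the black sheep of the family",
    "he was always the black sheep at work",
    "the farmer sheared a white sheep",
    "another black sheep of the industry",
]

bigrams = Counter()
for sentence in corpus:
    words = sentence.split()
    bigrams.update(zip(words, words[1:]))

print(bigrams[("black", "sheep")], "vs", bigrams[("white", "sheep")])  # 3 vs 1
```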
We, the human. That’s assuming we want it to get closer to the meaning of the word in social reality. But we don’t necessarily want to focus on that, at least as a short-term goal. In the short term, it might be more purposeful to understand that language is a reflection of social reality. This means that we, the humans, can understand human societies better by analyzing it. Rather than trying to teach machines to impute data in order to avoid what we label an undesired state of social reality, we should use the outputs provided by the machine to understand where and why those biases take place. And then we should focus on fixing them. Most likely, technology plays only a minor role in that, although it could be used to encourage a balanced view through a recommendation system, for example.
Conclusion. The ”correction of biases” is equivalent to burying your head in the sand: even if they magically disappeared from our models, they would still remain in the social reality, and through the connection of social reality and objective reality, echo in the everyday lives of people.
This report was created by Joni Salminen and Catherine R. Sloan. Publication date: December 10, 2017.
Artificial intelligence (AI) and machine learning are becoming more influential in society, as more decision-making power is being shifted to algorithms either directly or indirectly. Because of this, several research organizations and initiatives studying fairness of AI and machine learning have been started. We decided to conduct a review of these organizations and initiatives.
This is how we went about it. First, we used our prior information about different initiatives that we were familiar with. We used this information to draft an initial list and supplemented it by conducting Google and Bing searches with key phrases relating to machine learning or artificial intelligence and fairness. Overall, we found 25 organizations or initiatives, which we then analyzed in greater detail. For each organization / initiative, we aimed to retrieve at least the following information:
- Name of the organization / initiative
- URL of the organization / initiative
- Founded in (year)
- Short description of the organization / initiative
- Purpose of the organization / initiative
- University or funding partner
Based on the above information, we wrote this brief report. Its purpose is to chart current initiatives around the world relating to fairness, accountability and transparency of machine learning and AI. At the moment, several stakeholders are engaged in research on this topic area, but it is uncertain how well they are aware of each other and if there is a sufficient degree of collaboration among them. We hope this list increases awareness and encounters among the initiatives.
In the following, the initiatives are presented in alphabetical order.
AI100: Stanford University’s One Hundred Year Study on Artificial Intelligence
Founded in 2016, this is an initiative launched by computer scientist Eric Horvitz and driven by seven diverse academicians focused on the influences of artificial intelligence on people and society. The goal is to anticipate how AI will impact every aspect of how people work, live and play, including automation, national security, psychology, ethics, law, privacy and democracy. AI100 is funded by a gift from Eric and Mary Horvitz.
AI for Good Global Summit
The AI for Good Global Summit was held in Geneva on 7-9 June, 2017 in partnership with a number of United Nations (UN) sister agencies. The Summit aimed to accelerate and advance the development and democratization of AI solutions that can address specific global challenges related to poverty, hunger, health, education, the environment, and other social purposes.
AI Forum New Zealand
The AI Forum was launched in 2017 as a membership funded association for those with a passion for the opportunities AI can provide. The Forum connects AI tech innovators, investor groups, regulators, researchers, educators, entrepreneurs and the public. Its executive council includes representatives of Microsoft and IBM as well as start-ups and higher education. Currently the Forum is involved with a large-scale research project on the impact of AI on New Zealand’s economy and society.
AI Now Institute
The AI Now Institute at New York University (NYU) was founded by Kate Crawford and Meredith Whittaker in 2017. It’s an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence. Its work focuses on four core domains: 1) Rights & Liberties, 2) Labor & Automation, 3) Bias & Inclusion and 4) Safety & Critical Infrastructure. The Institute’s partners include NYU’s schools of Engineering (Tandon), Business (Stern) and Law, the American Civil Liberties Union (ACLU) and the Partnership on AI.
Algorithms, Automation, and News
AAWS is an international conference focusing on the impact of algorithms on news. Among the studied topics, the call for papers lists: 1) concerns around news quality, transparency, and accountability in general; 2) hidden biases built into algorithms deciding what’s newsworthy; 3) the outcomes of information filtering, such as ‘popularism’ (some content is favored over other content), and the transparency and accountability of the decisions made about what the public sees; 4) the privacy of data collected on individuals for the purposes of newsgathering and distribution; 5) the legal issues of libel by algorithm; 6) private information worlds and filter bubbles; and 7) the relationship between algorithms and ‘fake news’. The acceptance rate for the 2018 conference was about 12%. The conference is organized by the Center for Advanced Studies at Ludwig-Maximilians-Universität München (LMU) and supported by the Volkswagen Foundation and the University of Oregon’s School of Journalism and Communication. The organizers aim to release a special issue of Digital Journalism and a book, and one of them (Neil Thurman) is engaged in a research project on ’Algorithmic News’.
This research project was founded in early 2017 at the University of Turku in Finland as a collaboration between its School of Economics and its BioNLP unit. There are currently three researchers involved, one from a social science background and two from computer science. The project studies the societal impact and risks of machine decision-making. It has been funded by the Kone Foundation and the Kaute Foundation.
Center for Democracy and Technology (CDT)
CDT is a non-profit organization headquartered in Washington. They describe themselves as “a team of experts with deep knowledge of issues pertaining to the internet, privacy, security, technology, and intellectual property. We come from academia, private enterprise, government, and the non-profit worlds to translate complex policy into action.” The organization is currently focused on the following issues: 1) Privacy and data, 2) Free expression, 3) Security and surveillance, 4) European Union, and 5) Internet architecture. In August 2017, CDT launched a digital decisions tool to help engineers and product managers mitigate algorithmic bias in machine decision making. The tool translates principles for fair and ethical decision-making into a series of questions that can be addressed while designing and deploying an algorithm. The questions address developers’ choices: what data to use to train the algorithm, what features to consider, and how to test the algorithm’s potential bias.
Data & Society’s Intelligence and Autonomy Initiative
This initiative was founded in 2015 and is based in New York City. It develops grounded qualitative empirical research to provide nuanced understandings of emerging technologies to inform the design, evaluation and regulation of AI-driven systems, while avoiding both utopian and dystopian scenarios. The goal is to engage diverse stakeholders in interdisciplinary discussions to inform structures of AI accountability and governance from the bottom up. I&A is funded by a research grant from the Knight Foundation’s Ethics and Governance of Artificial Intelligence Fund, and was previously supported by grants from the John D. and Catherine T. MacArthur Foundation and Microsoft Research.
Facebook AI Research (FAIR)
Facebook’s research program engages with academics, publications, open source software, and technical conferences and workshops. Its researchers are based in Menlo Park, CA, New York City and Paris, France. Its CommAI project aims to develop new data sets and algorithms to develop and evaluate general purpose artificial agents that rely on a linguistic interface and can quickly adapt to a stream of tasks.
FATE: Fairness, Accountability, Transparency and Ethics in AI (Microsoft Research)
This internal Microsoft group focuses on Fairness, Accountability, Transparency and Ethics in AI and was launched in 2014. Its goal is to develop, via collaborative research projects, computational techniques that are both innovative and ethical, while drawing on the deeper context surrounding these issues from sociology, history and science.
GoodAI
GoodAI was founded in 2014 as an international group based in Prague, Czech Republic, dedicated to developing AI quickly to help humanity and to understand the universe. Its founding CEO Marek Rosa funded the project with $10M. GoodAI’s R&D company went public in 2015 and comprises a team of 20 research scientists. In 2017, GoodAI participated in global AI conferences in Amsterdam, London and Tokyo and hosted data science competitions.
Jigsaw
Jigsaw is a technology incubator focusing on geopolitical challenges. It originated from Google Ideas as a ”think/do tank” for issues at the interface of technology and geopolitics. One of Jigsaw’s projects is the Perspective API, which uses machine learning to identify abuse and harassment online. Perspective rates comments based on the perceived impact a comment might have on the conversation. Perspective can be used to give real-time feedback to commenters, to help moderators sort comments more effectively, or to allow readers to find relevant information. The first model of the Perspective API identifies whether a comment is perceived as “toxic” in a discussion.
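For orientation, a call to Perspective could look roughly like the sketch below. The endpoint and payload follow Google’s public documentation as I understand it and should be verified against the current docs; the API key is a placeholder:

```python
import requests

API_KEY = "replace-with-your-key"
url = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

payload = {
    "comment": {"text": "You are an idiot."},
    "requestedAttributes": {"TOXICITY": {}},
}
resp = requests.post(url, json=payload, timeout=30)
resp.raise_for_status()
score = resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"perceived toxicity: {score:.2f}")
```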
IEEE Global Initiative for Ethical Considerations in AI and Autonomous Systems
In 2016, the Institute of Electrical and Electronics Engineers (IEEE) launched a project seeking public input on ethically designed AI. In April 2017, the IEEE hosted a related dinner for the European Parliament in Brussels. In July 2017, it issued a preliminary report entitled Prioritizing Human Well-being in the Age of Artificial Intelligence. IEEE is conducting a consensus-driven standards project for “soft governance” of AI that may produce a “bill of rights” regarding what personal data are “off limits” without the need for regulation. They set up 11 different active standards groups for interested collaborators to join in 2017 and were projecting new reports by the end of the year. IEEE has also released a report on Ethically Aligned Design in artificial intelligence, part of an initiative to ensure ethical principles are considered in systems design.
Internet Society (ISOC)
The ISOC is a non-profit organization founded in 1992 to provide leadership in Internet-related standards, education, access, and policy. It is headquartered in Virginia, USA. The organization published a paper in April 2017 that explains commercial uses of AI technology and provides recommendations for dealing with its management challenges, including 1) transparency, bias and accountability, 2) security and safety, 3) socio-economic impacts and ethics, and 4) new data uses and ecosystems. The recommendations include, among others, adopting ethical standards in the design of AI products and innovation policies, providing explanations to end users about why a specific decision was made, making it simpler to understand how algorithmic decision-making works, and introducing “algorithmic literacy” as a basic skill obtained through education.
Knight Foundation’s Ethics and Governance of Artificial Intelligence Fund https://www.knightfoundation.org/aifund-faq
The AI Fund was founded in January 2017 by the Massachusetts Institute of Technology (MIT) Media Lab, Harvard University’s Berkman Klein Center, the Knight Foundation, Omidyar Network and Reid Hoffman of LinkedIn. It is currently housed at the Miami Foundation in Miami, Florida.
The goal of the AI Fund is to ensure that the development of AI becomes a joint multidisciplinary human endeavor that bridges computer scientists, engineers, social scientists, philosophers, faith leaders, economists, lawyers and policymakers. The aim is to accomplish this by supporting work around the world that advances the development of ethical AI in the public interest, with an emphasis on research and education.
In May 2017, the Berkman Klein Center at Harvard kicked off its collaboration with the MIT Media Lab on their Ethics and Governance of Artificial Intelligence Initiative focused on strategic research, learning and experimentation. Possible avenues of empirical research were discussed, and the outlines of a taxonomy emerged. Topics of this initiative include: use of AI-powered personal assistants, attitudes of youth, impact on news generation, and moderating online hate speech.
Moreover, Harvard’s Ethics and Governance of AI Fund has committed an initial $7.6M in grants to support nine organizations to strengthen the voice of civil society in the development of AI. An excerpt from their post: “Additional projects and activities will address common challenges across these core areas such as the global governance of AI and the ways in which the use of AI may reinforce existing biases, particularly against underserved and underrepresented populations.” Finally, a report of a December 2017 BKC presentation on building AI for an inclusive society has been published and can be accessed from the above link.
MIT-IBM Watson Lab
Founded in September 2017, MIT’s new $240 million center, created in collaboration with IBM, is intended to help advance the field of AI by “developing novel devices and materials to power the latest machine-learning algorithms.” This project overlaps with the Partnership on AI. IBM hopes it will help the company reclaim its reputation in the AI space. In another industry sector, Toyota made a billion-dollar investment in funding for its own AI center, plus research at both MIT and Stanford. The MIT-IBM Lab will be one of the “largest long-term university-industry AI collaborations to date,” mobilizing the talent of more than 100 AI scientists, professors, and students to pursue joint research at IBM’s Research Lab. The lab is co-located with the IBM Watson Health and IBM Security headquarters in Cambridge, MA. The stated goal is to push the boundaries of AI technology in several areas: 1) AI algorithms, 2) the physics of AI, 3) the application of AI to industries, and 4) advancing shared prosperity through AI.
In addition to this collaboration, IBM argues its Watson platform has been designed to be transparent. David Kenny, who heads Watson, said the following in a press conference: “I believe industry has a responsibility to step up. We all have a right to know how that decision was made [by AI],” Kenny said. “It cannot be a blackbox. We’ve constructed Watson to always be able to show how it came to the inference it came to. That way a human can always make a judgment and make sure there isn’t an inherent bias.”
New Zealand Law Foundation Centre for Law & Policy in Emerging Technologies
Professor Colin Gavaghan of the University of Otago heads a research centre examining the legal, ethical and policy issues around new technologies including artificial intelligence. In 2011, it hosted a forum on the Future of Fairness. The Law Foundation provided an endowment of $1.5M to fund the NZLF Centre and Chair in Emerging Technologies.
Obama White House Report: Preparing for the Future of Artificial Intelligence
The Obama Administration’s report on the future of AI was issued on October 16, 2016 in conjunction with a “White House Frontiers” conference focused on data science, machine learning, automation and robotics in Pittsburgh, PA. It followed a series of initiatives conducted by the WH Office of Science & Technology Policy (OSTP) in 2016. The report contains a snapshot of the state of AI technology and identifies questions that the evolution of AI raises for society and public policy. The topics include improving government operations, adapting regulations for safe automated vehicles, and making sure AI applications are “fair, safe, and governable.” AI’s impact on jobs and the economy was another major focus. A companion paper laid out a strategic plan for federally funded research and development in AI. President Trump has not named a Director for OSTP, so this plan is not currently being implemented. However, lawmakers in the US are showing further interest in legislation. Rep. John Delaney (D-Md.) said in a press conference in June 2017: “I think transparency [of machine decision making] is obviously really important. I think if the industry doesn’t do enough of it, I think we’ll [need to consider legislation] because I think it really matters to the American people.” These efforts are part of the Congressional AI Caucus launched in May 2017, focused on implications of AI for the tech industry, economy and society overall.
OpenAI
OpenAI is a non-profit artificial intelligence research company in California that aims to develop general AI in such a way as to benefit humanity as a whole. It has received more than 1 billion USD in commitments to promote research and other activities aimed at supporting the safe development of AI. The company focuses on long-term research. Founders of OpenAI include Elon Musk and Sam Altman. The sponsors include, in addition to individuals, YC Research, Infosys, Microsoft, Amazon, and the Open Philanthropy Project. The open source contributions can be found at https://github.com/openai.
PAIR: People + AI Research Initiative
This is a Google initiative that was launched in 2017 to focus on discovering how AI can augment the expert intelligence of professionals such as doctors, technicians, designers, farmers, musicians and others. It also aims to make AI more inclusive and accessible to everyone. Visiting faculty members are Hal Abelson and Brendan Meade. Current projects involve drawing and diversity in machine learning, an open library for training neural nets, training data for models, and design via machine learning.
Partnership on AI
The Partnership was founded in September 2016 by Eric Horvitz and Mustafa Suleyman to study and formulate best practices for AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society. The Partnership on AI is funded financially and supported in-kind with research by its members, including founding members Amazon, Google/DeepMind, Facebook, IBM and Microsoft. In 2017, it expanded corporate and NGO membership, adding members such as eBay, Intel, Salesforce and the Center for Democracy & Technology (CDT). It hired an Executive Director, Terah Lyons, and boasts independent Board members from UC Berkeley and the ACLU. The group has had affiliation discussions with the Association for the Advancement of Artificial Intelligence (AAAI) and the Allen Institute for Artificial Intelligence. In 2016, the Partnership expressed its support for the Obama White House Report.
Rajapinta
Rajapinta is a scientific association founded in January 2017 that advocates the social scientific study of ICT and ICT applications to social research in Finland. Its short-term goal is to improve collaboration and provide opportunities for meetings and networking, in the hopes of establishing a seat at the table in the global scientific community in the longer term. Funding sources are not readily available.
Royal Society of UK’s Machine Learning Project
The Royal Society is a fellowship of many of the world’s most eminent scientists and is currently conducting a project on machine learning (as a branch of AI), which in April 2017 produced a very comprehensive report titled Machine learning: the power and promise of computers that learn by example. It explores everyday ways in which people interact with machine learning systems, such as in social media image recognition, voice recognition systems, virtual personal assistants and recommendation systems used by online retailers. The grant funding for this particular project within the much larger Royal Society is unclear.
Workshop on Fairness, Accountability, and Transparency in Machine Learning (FatML)
Founded in 2014, FatML is an annual two-day conference that brings together researchers and practitioners concerned with fairness, accountability and transparency in machine learning, given a recognition that ML raises novel challenges around ensuring non-discrimination, due process and explainability of institutional decision-making. According to the initiative, corporations and governments must be supervised in their use of algorithmic decision making. FatML makes current scholarly resources on related subjects publicly available. The conference is funded in part by registration fees and possibly subsidized by corporate organizers such as Google, Microsoft and Cloudflare. Their August 2017 event was held in Halifax, Nova Scotia, Canada.
World Economic Forum (WEF)
WEF released a blog post in July 2017 on the risks of algorithmic decision making to civil rights, mentioning US law enforcement’s use of facial recognition technology, among other examples. The post argues humans are facing “algorithmic regulation”, for example in public entitlements or benefits. It cites self-reinforcing bias as one of the five biggest problems with allowing AI into the government policy arena. In September 2017, the WEF released another post suggesting that a Magna Carta (“charter of rights”) for AI is needed; this essentially refers to commonly agreed upon rules and rights for both individuals and wielders of algorithm-based decision-making authority. According to the post, the foundational elements of such an agreement include making sure AI creates jobs for all, rules dealing with machine-curated news feeds and polarization, rules avoiding discrimination and bias in machine decision making, and safeguards for ensuring personal choice without sacrificing privacy for commercial efficiency.
From the above, we can conclude three points. First, stakeholders at many different levels around the world have been mobilized to study the impact of machine decision-making, as shown by the multitude of projects. On the research side, there are several recently founded research projects and conferences (e.g., AAWS, FatML). In a similar vein, industry players such as IBM, Microsoft and Facebook show commitment to solving the associated challenges in their platforms. Moreover, policy makers are investigating the issues as well, as shown by the Obama administration’s report and the new Congressional AI Caucus.
Second, in addition to the topic being of interest to different stakeholders, it also involves a considerable number of different perspectives, including but not limited to aspects of computer science, ethics, law, politics, journalism and economics. Such a degree of cross-sectoral, multidisciplinary effort is not common for research projects, which often focus on a narrower field of expertise; thus, it may be more challenging to produce solutions that are both theoretically sound and practically functional.
Third, there seems to be much overlap between the initiatives mentioned here; many of the initiatives seem to focus on solving the same problems, but it is unclear how well the initiatives are aware of each other and whether a centralized research agenda and resource sharing or joint allocation might help achieve results faster.
Notice an initiative or organization missing from this report? Please send information to Dr. Joni Salminen: email@example.com.
There is enormous concern about machine learning and AI replacing human workers. However, according to several economists, and according to past experience reaching all the way back to the industrial revolution of the 18th century (which caused major distress at the time), the displacement of human workers is not permanent: new jobs emerge to replace the jobs that were lost (as postulated by the Schumpeterian hypothesis). In this post, I will briefly share some ideas on which jobs are relatively safe from AI and on how an individual member of the workforce can increase his or her chances of remaining competitive in the job market of the future.
“Insofar as they are economic problems at all, the world’s problems in this generation and the next are problems of scarcity, not of intolerable abundance. The bogeyman of automation consumes worrying capacity that should be saved for real problems . . .” -Herbert Simon, 1966
What jobs are safe from AI?
The ones involving:
- creativity – a machine can ”draw” and ”compose”, but it cannot develop a business plan.
- interpretation – even in law, which is codified in most countries, lawyers rely on judgment and interpretation. This cannot be replaced by AI as it currently stands.
- transaction costs – robots could conduct a surgery, and even evaluate beforehand whether a surgery is needed, but in between you need people to explain things, to prepare the patients, etc. Most service chains require a lot of mobility and communication, i.e., transaction costs, that have to be handled by people.
How to avoid losing your job to AI?
Make sure your skills are complementary to automation, not a substitute for it. For example, if you have great copywriting skills, there has never been a better time to be a marketer, as digital platforms enable you to reach all the audiences with a few clicks. The machine cannot write compelling ads, so your skills are complementary. Increased automation does not reduce the need for creativity; it amplifies it.
If the machine were to learn to be creative in a meaningful way (which, realistically speaking, is far, far away), then you would do some other complementary task.
The point is: there is always some part of the process you can complement.
Fear not. Machines will not take all human jobs, because not all human jobs exist yet. Machines and software will take care of some parts of service chains, even to a great extent, but that will in fact enhance the functioning of the whole chain, and also that of human labor (consider the amplification example of online copywriting). New jobs that we cannot yet envision will be created, as needs and human imagination keep evolving.
The answer is in creative destruction: People won’t stop coming up with things to offer because of machines. And other people won’t stop wanting those things because of machines. Jobs will remain also in the era of AI. The key is not to complain about someone taking your job, but to think of other things to offer, and develop your personal competences accordingly. Even if you won’t, the next guy will. There’s no stopping creativity.
- Scherer, F. M. (1986). Innovation and Growth: Schumpeterian Perspectives. Cambridge, MA: MIT Press.
- Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3–30.