
Machine decision making and workflow engineering

Did you ever want to climb Mount Everest?

If you did, you would have to split such a goal into many tasks: you would first need to find out what resources are needed, who could help you, how to prepare mentally and physically, etc. You would come up with a list of tasks that, in sequence, form your plan for achieving the goal.

The same logic applies to all goals we humans have, both in companies and in private life, and it also applies when evaluating which tasks, given a goal, can be outsourced to machine decision making.

The best way to conduct such an analysis is to view organizational goals as a sequence of inter-related job tasks, and then evaluate which particular sub-tasks humans are best at handling, and vice versa.

  1. Define the end goal (e.g., launch a marketing campaign)
  2. Define the steps needed to achieve that goal (strategy) (e.g., decide targeting, write ads, define budget, optimize spend)
  3. Divide each step into sub-tasks (e.g., decide targeting: analyze past campaigns, analyze needs from social media)
  4. Evaluate (e.g., on a scale of 1-5) how well machines and humans perform each sub-task (e.g., write ads: human = 5, machine = 1)
  5. Look at the entire chain and identify points of synergy, where the machine can be used to enhance human work or vice versa (e.g., analyze social media via supervised machine learning where crowd workers tag tweets); a minimal scoring sketch follows this list
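To make steps 4 and 5 concrete, here is a minimal Python sketch of how such an evaluation could be recorded and scanned for automation or synergy candidates. The sub-task names, scores, and the two-point decision margin are hypothetical illustrations, not data or rules from a real campaign.

```python
# Hypothetical scores (1-5) for how well humans and machines
# perform each sub-task of the campaign goal.
subtasks = [
    {"task": "analyze past campaigns", "human": 2, "machine": 5},
    {"task": "analyze social media",   "human": 3, "machine": 4},
    {"task": "write ads",              "human": 5, "machine": 1},
    {"task": "define budget",          "human": 4, "machine": 3},
    {"task": "optimize spend",         "human": 2, "machine": 5},
]

for st in subtasks:
    # Illustrative rule: a clear two-point margin decides; otherwise
    # the sub-task is a candidate for human-machine synergy.
    if st["machine"] >= st["human"] + 2:
        verdict = "automate"
    elif st["human"] >= st["machine"] + 2:
        verdict = "keep human"
    else:
        verdict = "synergy candidate (human + machine)"
    print(f'{st["task"]:25s} -> {verdict}')
```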

We find, by applying such logic, that there are plenty of tasks in organizational workflows that currently cannot be outsourced to machines, for a variety of reasons. Sometimes the reasons relate to manual processes, i.e., the overall context does not support carrying out the tasks optimally. An example: currently, I’m manually downloading receipts from a digital marketing service account. I have to log in, retrieve the receipts as PDF files, and then send them as email attachments to bookkeeping. Ideally, the bookkeeping system would just retrieve the receipts automatically via an application programming interface (API), eliminating this unnecessary piece of human labor.
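As a sketch of the automated alternative: assuming the marketing service exposed a receipts endpoint (the URL, parameters, token, and response shape below are invented for illustration), the bookkeeping system could fetch each month’s receipts without anyone clicking through a web interface.

```python
import requests  # third-party HTTP library

# Hypothetical endpoint and credentials; a real service would
# document its own API and authentication scheme.
API_URL = "https://api.example-adservice.com/v1/receipts"
TOKEN = "..."  # secret API token issued by the service

def fetch_receipts(year: int, month: int) -> list[bytes]:
    """Download all receipt PDFs for a given month."""
    resp = requests.get(
        API_URL,
        params={"year": year, "month": month},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    # Assume the API returns a JSON list of direct PDF links.
    return [requests.get(item["pdf_url"], timeout=30).content
            for item in resp.json()]
```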

At the same time, we should a) work to remove unnecessary barriers to work automation where it is feasible, while b) thinking of ways to get optimal synergy from human and machine work inputs. This is not about optimizing individual work tasks, but about optimizing entire workflows toward reaching a specific goal. At the moment, little research attention is paid to this kind of comprehensive planning, which I call “workflow engineering”.

A Brief Report on AI and Machine Learning Fairness Initiatives

This report was created by Joni Salminen and Catherine R. Sloan. Publication date: December 10, 2017.

Artificial intelligence (AI) and machine learning are becoming more influential in society, as more decision-making power is being shifted to algorithms either directly or indirectly. Because of this, several research organizations and initiatives studying fairness of AI and machine learning have been started. We decided to conduct a review of these organizations and initiatives.

This is how we went about it. First, we drew on our prior knowledge of initiatives we were familiar with to draft an initial list, and supplemented it by conducting Google and Bing searches with key phrases relating to machine learning or artificial intelligence and fairness. Overall, we found 25 organizations or initiatives, which we then analyzed in greater detail. For each organization or initiative, we aimed to retrieve at least the following information:

  • Name of the organization / initiative
  • URL of the organization / initiative
  • Founded in (year)
  • Short description of the organization / initiative
  • Purpose of the organization / initiative
  • University or funding partner

Based on the above information, we wrote this brief report. Its purpose is to chart current initiatives around the world relating to fairness, accountability and transparency of machine learning and AI. At the moment, several stakeholders are engaged in research on this topic area, but it is uncertain how well they are aware of each other and whether there is a sufficient degree of collaboration among them. We hope this list increases awareness of the initiatives and encourages encounters among them.

In the following, the initiatives are presented in alphabetical order.

***

AI100: Stanford University’s One Hundred Year Study on Artificial Intelligence

https://ai100.stanford.edu/

Founded in 2016, this is an initiative launched by computer scientist Eric Horvitz and driven by seven diverse academicians focused on the influences of artificial intelligence on people and society. The goal is to anticipate how AI will impact every aspect of how people work, live and play, including automation, national security, psychology, ethics, law, privacy and democracy.  AI100 is funded by a gift from Eric and Mary Horvitz.

AI for Good Global Summit

http://www.itu.int/en/ITU-T/AI/Pages/201706-default.aspx

The AI for Good Global Summit was held in Geneva, 7-9 June 2017, in partnership with a number of United Nations (UN) sister agencies. The Summit aimed to accelerate and advance the development and democratization of AI solutions that can address specific global challenges related to poverty, hunger, health, education, the environment, and other social purposes.

AI Forum New Zealand

https://aiforum.org.nz/about/

The AI Forum was launched in 2017 as a membership funded association for those with a passion for the opportunities AI can provide. The Forum connects AI tech innovators, investor groups, regulators, researchers, educators, entrepreneurs and the public.  Its executive council includes representatives of Microsoft and IBM as well as start-ups and higher education.  Currently the Forum is involved with a large-scale research project on the impact of AI on New Zealand’s economy and society.

AI Now Institute

https://ainowinstitute.org/

The AI Now Institute at New York University (NYU) was founded by Kate Crawford and Meredith Whittaker in 2017.  It’s an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence.  Its work focuses on four core domains: 1) Rights & Liberties, 2) Labor & Automation, 3) Bias & Inclusion and 4) Safety & Critical Infrastructure.  The Institute’s partners include NYU’s schools of Engineering (Tandon), Business (Stern) and Law, the American Civil Liberties Union (ACLU) and the Partnership on AI.

Algorithms, Automation, and News

http://www.algorithmic.news/call-for-papers.html

AAWS is an international conference focusing on the impact of algorithms on news. Among the studied topics, the call for papers lists: 1) concerns around news quality, transparency, and accountability in general; 2) hidden biases built into algorithms deciding what’s newsworthy; 3) the outcomes of information filtering, such as ‘popularism’ (some content is favored over other content), and the transparency and accountability of the decisions made about what the public sees; 4) the privacy of data collected on individuals for the purposes of newsgathering and distribution; 5) the legal issues of libel by algorithm; 6) private information worlds and filter bubbles; and 7) the relationship between algorithms and ‘fake news’. The acceptance rate for the 2018 conference was about 12%. The conference is organized by the Center for Advanced Studies at Ludwig-Maximilians-Universität München (LMU) and supported by the Volkswagen Foundation and the University of Oregon’s School of Journalism and Communication. The organizers aim to release a special issue of Digital Journalism and a book, and one of them (Neil Thurman) is engaged in a research project on ‘Algorithmic News’.

Algoritmitutkimus

http://www.algoritmitutkimus.fi

This research project (whose name is Finnish for “algorithm research”) was founded in early 2017 at the University of Turku in Finland as a collaboration between its School of Economics and its BioNLP unit. There are currently three researchers involved, one from a social science background and two from computer science. The project studies the societal impact and risks of machine decision making. It has been funded by the Kone Foundation and the Kaute Foundation.

Center for Democracy and Technology (CDT)

https://cdt.org/blog/digital-decisions-tool/

CDT is a non-profit organization headquartered in Washington, D.C. They describe themselves as “a team of experts with deep knowledge of issues pertaining to the internet, privacy, security, technology, and intellectual property. We come from academia, private enterprise, government, and the non-profit worlds to translate complex policy into action.” The organization is currently focused on the following issues: 1) Privacy and data, 2) Free expression, 3) Security and surveillance, 4) European Union, and 5) Internet architecture. In August 2017, CDT launched a digital decisions tool to help engineers and product managers mitigate algorithmic bias in machine decision making. The tool translates principles for fair and ethical decision-making into a series of questions that can be addressed while designing and deploying an algorithm. The questions address developers’ choices: what data to use to train the algorithm, what features to consider, and how to test the algorithm’s potential bias.

Data & Society’s Intelligence and Autonomy Initiative

http://autonomy.datasociety.net/

This initiative was founded in 2015 and is based in New York City. It develops grounded qualitative empirical research to provide nuanced understandings of emerging technologies to inform the design, evaluation and regulation of AI-driven systems, while avoiding both utopian and dystopian scenarios. The goal is to engage diverse stakeholders in interdisciplinary discussions to inform structures of AI accountability and governance from the bottom up. I&A is funded by a research grant from the Knight Foundation’s Ethics and Governance of Artificial Intelligence Fund, and was previously supported by grants from the John D. and Catherine T. MacArthur Foundation and Microsoft Research.

Facebook AI Research (FAIR)

https://research.fb.com/category/facebook-ai-research-fair/

Facebook’s research program engages with academics, publications, open source software, and technical conferences and workshops.  Its researchers are based in Menlo Park, CA, New York City and Paris, France. Its CommAI project aims to develop new data sets and algorithms to develop and evaluate general purpose artificial agents that rely on a linguistic interface and can quickly adapt to a stream of tasks.

FATE 

https://www.microsoft.com/en-us/research/group/fate/

This internal Microsoft group focuses on Fairness, Accountability, Transparency and Ethics in AI and was launched in 2014.  Its goal is to develop, via collaborative research projects, computational techniques that are both innovative and ethical, while drawing on the deeper context surrounding these issues from sociology, history and science.

Good AI

https://www.goodai.com/

Good AI was founded in 2014 as an international group based in Prague, Czech Republic, dedicated to developing AI quickly to help humanity and to understand the universe. Its founding CEO Marek Rosa funded the project with $10M. Good AI’s R&D company went public in 2015 and comprises a team of 20 research scientists. In 2017, Good AI participated in global AI conferences in Amsterdam, London and Tokyo and hosted data science competitions.

Google Jigsaw

https://jigsaw.google.com/

Jigsaw is a technology incubator focusing on geopolitical challenges. It originated from Google Ideas as a “think/do tank” for issues at the interface of technology and geopolitics. One of Jigsaw’s projects is the Perspective API, which uses machine learning to identify abuse and harassment online. Perspective rates comments based on the perceived impact a comment might have on the conversation. Perspective can be used to give real-time feedback to commenters, help moderators sort comments more effectively, or allow readers to find relevant information. The first model of the Perspective API identifies whether a comment is perceived as “toxic” in a discussion.
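To illustrate how thin the integration layer is, here is a minimal Python sketch of a Perspective API call. It follows the request and response shape in the public documentation at the time of writing (check the current docs before relying on it); the API key is a placeholder.

```python
import requests  # third-party HTTP library

API_KEY = "..."  # obtained from the Google API console
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(comment: str) -> float:
    """Return Perspective's perceived-toxicity score (0..1)."""
    body = {
        "comment": {"text": comment},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body, timeout=30)
    resp.raise_for_status()
    scores = resp.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]
```

A moderation tool could, for example, hold any comment scoring above some threshold for human review rather than publishing it directly.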

IEEE Global Initiative for Ethical Considerations in AI and Autonomous Systems

https://standards.ieee.org/develop/indconn/ec/autonomous_systems.html

In 2016, the Institute of Electrical and Electronics Engineers (IEEE) launched a project seeking public input on ethically designed AI. In April 2017, the IEEE hosted a related dinner for the European Parliament in Brussels. In July 2017, it issued a preliminary report entitled Prioritizing Human Well Being in the Age of Artificial Intelligence. IEEE is conducting a consensus-driven standards project for “soft governance” of AI that may produce a “bill of rights” regarding what personal data is “off limits” without the need for regulation. They set up 11 different active standards groups for interested collaborators to join in 2017 and were projecting new reports by the end of the year. IEEE has also released a report on Ethically Aligned Design in artificial intelligence, part of an initiative to ensure ethical principles are considered in systems design.

Internet Society (ISOC)

https://www.internetsociety.org/

The ISOC is a non-profit organization founded in 1992 to provide leadership in Internet-related standards, education, access, and policy. It is headquartered in Virginia, USA. The organization published a paper in April 2017 that explains commercial uses of AI technology and provides recommendations for dealing with its management challenges, including 1) transparency, bias and accountability, 2) security and safety, 3) socio-economic impacts and ethics, and 4) new data uses and ecosystems. The recommendations include, among others, adopting ethical standards in the design of AI products and innovation policies, providing explanations to end users about why a specific decision was made, making it simpler to understand how algorithmic decision-making works, and introducing “algorithmic literacy” as a basic skill obtained through education.

Knight Foundation’s Ethics and Governance of Artificial Intelligence Fund

https://www.knightfoundation.org/aifund-faq

The AI Fund was founded in January 2017 by the Massachusetts Institute of Technology (MIT) Media Lab, Harvard University’s Berkman Klein Center, the Knight Foundation, Omidyar Network and Reid Hoffman of LinkedIn. It is currently housed at the Miami Foundation in Miami, Florida.

The goal of the AI Fund is to ensure that the development of AI becomes a joint multidisciplinary human endeavor that bridges computer scientists, engineers, social scientists, philosophers, faith leaders, economists, lawyers and policymakers.  The aim is to accomplish this by supporting work around the world that advances the development of ethical AI in the public interest, with an emphasis on research and education.

In May 2017, the Berkman Klein Center at Harvard kicked off its collaboration with the MIT Media Lab on their Ethics and Governance of Artificial Intelligence Initiative focused on strategic research, learning and experimentation. Possible avenues of empirical research were discussed, and the outlines of a taxonomy emerged. Topics of this initiative include: use of AI-powered personal assistants, attitudes of youth, impact on news generation, and moderating online hate speech.

Moreover, Harvard’s Ethics and Governance of AI Fund has committed an initial $7.6M in grants to support nine organizations to strengthen the voice of civil society in the development of AI. An excerpt from their post: “Additional projects and activities will address common challenges across these core areas such as the global governance of AI and the ways in which the use of AI may reinforce existing biases, particularly against underserved and underrepresented populations.” Finally, a report of a December 2017 BKC presentation on building AI for an inclusive society has been published and can be accessed from the above link.

MIT-IBM Watson Lab

http://mitibmwatsonailab.mit.edu/

Founded in September 2017, MIT’s new $240 million center, created in collaboration with IBM, is intended to help advance the field of AI by “developing novel devices and materials to power the latest machine-learning algorithms.” This project overlaps with the Partnership on AI. IBM hopes it will help the company reclaim its reputation in the AI space. In another industry sector, Toyota made a billion-dollar investment in funding for its own AI center, plus research at both MIT and Stanford. The MIT-IBM Lab will be one of the “largest long-term university-industry AI collaborations to date,” mobilizing the talent of more than 100 AI scientists, professors, and students to pursue joint research at IBM’s Research Lab. The lab is co-located with the IBM Watson Health and IBM Security headquarters in Cambridge, MA. The stated goal is to push the boundaries of AI technology in several areas: 1) AI algorithms, 2) the physics of AI, 3) the application of AI to industries, and 4) advancing shared prosperity through AI.

In addition to this collaboration, IBM argues its Watson platform has been designed to be transparent. David Kenny, who heads Watson, said the following in a press conference: “I believe industry has a responsibility to step up. We all have a right to know how that decision was made [by AI],” Kenny said. “It cannot be a blackbox. We’ve constructed Watson to always be able to show how it came to the inference it came to. That way a human can always make a judgment and make sure there isn’t an inherent bias.”

New Zealand Law Foundation Centre for Law & Policy in Emerging Technologies

http://www.lawfoundation.org.nz/?page_id=171

Professor Colin Gavaghan of the University of Otago heads a research centre examining the legal, ethical and policy issues around new technologies including artificial intelligence.  In 2011, it hosted a forum on the Future of Fairness.  The Law Foundation provided an endowment of $1.5M to fund the NZLF Centre and Chair in Emerging Technologies.

Obama White House Report: Preparing for the Future of Artificial Intelligence

https://obamawhitehouse.archives.gov/blog/2016/10/12/administrations-report-future-artificial-intelligence

The Obama Administration’s report on the future of AI was issued on October 12, 2016 in conjunction with a “White House Frontiers” conference focused on data science, machine learning, automation and robotics in Pittsburgh, PA. It followed a series of initiatives conducted by the WH Office of Science & Technology Policy (OSTP) in 2016. The report contains a snapshot of the state of AI technology and identifies questions that the evolution of AI raises for society and public policy. The topics include improving government operations, adapting regulations for safe automated vehicles, and making sure AI applications are “fair, safe, and governable.” AI’s impact on jobs and the economy was another major focus. A companion paper laid out a strategic plan for Federally funded research and development in AI. President Trump has not named a Director for OSTP, so this plan is not currently being implemented. However, lawmakers in the US are showing further interest in legislation. Rep. John Delaney (D-Md.) said in a press conference in June 2017: “I think transparency [of machine decision making] is obviously really important. I think if the industry doesn’t do enough of it, I think we’ll [need to consider legislation] because I think it really matters to the American people.” These efforts are part of the Congressional AI Caucus launched in May 2017, focused on the implications of AI for the tech industry, economy and society overall.

OpenAI

https://www.openai.com/

OpenAI is a non-profit artificial intelligence research company in California that aims to develop general AI in such a way as to benefit humanity as a whole. It has received more than 1 billion USD in commitments to promote research and other activities aimed at supporting the safe development of AI. The company focuses on long-term research. Founders of OpenAI include Elon Musk and Sam Altman. The sponsors include, in addition to individuals, YC Research, Infosys, Microsoft, Amazon, and Open Philanthropy Project. The open source contributions can be found at https://github.com/openai.

PAIR: People + AI Research Initiative

https://ai.google/pair/

This is a Google initiative that was launched in 2017 to focus on discovering how AI can augment the expert intelligence of professionals such as doctors, technicians, designers, farmers, musicians and others. It also aims to make AI more inclusive and accessible to everyone. Visiting faculty members are Hal Abelson and Brendan Meade.  Current projects involve drawing and diversity in machine learning, an open library for training neural nets, training data for models, and design via machine learning.

Partnership on AI

https://www.partnershiponai.org/

The Partnership was founded in September 2016 by Eric Horvitz and Mustafa Suleyman to study and formulate best practices for AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society. The Partnership on AI is funded financially and supported in-kind with research by its members, including founding members Amazon, Google/DeepMind, Facebook, IBM and Microsoft. In 2017, it expanded corporate and NGO membership, adding members such as eBay, Intel, Salesforce and the Center for Democracy & Technology (CDT). It hired an Executive Director, Terah Lyons, and boasts independent Board members from UC Berkeley and the ACLU. The group has had affiliation discussions with the Association for the Advancement of Artificial Intelligence (AAAI) and the Allen Institute for Artificial Intelligence. In 2016 the Partnership expressed its support for the Obama White House Report.

Rajapinta.co

https://rajapinta.co/

Rajapinta is a scientific association founded in January 2017 that advocates the social scientific study of ICT and ICT applications to social research in Finland.  Its short-term goal is to improve collaboration and provide opportunities for meetings and networking in the hopes of establishing a seat at the table in the global scientific community in the longer term.  Funding sources are not readily available.

Royal Society of UK’s Machine Learning Project

https://royalsociety.org/topics-policy/projects/machine-learning/

The Royal Society is a fellowship of many of the world’s most eminent scientists and is currently conducting a project on machine learning (as a branch of AI), which in April 2017 produced a very comprehensive report titled Machine learning: the power and promise of computers that learn by example.  It explores everyday ways in which people interact with machine learning systems, such as in social media image recognition, voice recognition systems, virtual personal assistants and recommendation systems used by online retailers.  The grant funding for this particular project within the much larger Royal Society is unclear.


Workshop on Fairness, Accountability, and Transparency in Machine Learning (FatML)

https://www.fatml.org/

Founded in 2014, FatML is an annual two-day conference that brings together researchers and practitioners concerned with fairness, accountability and transparency in machine learning, recognizing that ML raises novel challenges around ensuring non-discrimination, due process and the explainability of institutional decision-making. According to the initiative, corporations and governments must be supervised in their use of algorithmic decision making. FatML makes current scholarly resources on related subjects publicly available. The conference is funded in part by registration fees and possibly subsidized by corporate organizers such as Google, Microsoft and Cloudflare. Its August 2017 event was held in Halifax, Nova Scotia, Canada.

World Economic Forum (WEF)

https://www.weforum.org/

WEF released a blog post in July 2017 on the risks of algorithmic decision making to civil rights, mentioning US law enforcement’s use of facial recognition technology, among other examples. The post argues humans are facing “algorithmic regulation”, for example in public entitlements or benefits. It cites self-reinforcing bias as one of the five biggest problems with allowing AI into the government policy arena. In September 2017, the WEF released another post suggesting that a Magna Carta (“charter of rights”) for AI is needed; this essentially refers to commonly agreed upon rules and rights for both individuals and wielders of algorithm-based decision making authority. According to the post, the foundational elements of such an agreement include making sure AI creates jobs for all, rules dealing with machine-curated news feeds and polarization, rules avoiding discrimination and bias in machine decision making, and safeguards for ensuring personal choice without sacrificing privacy for commercial efficiency.

Conclusion

From the above, we can conclude three points. First, stakeholders at different levels around the world have been mobilized to study the impact of machine decision making, as shown by the multitude of projects. On the research side, there are several recently founded research projects and conferences (e.g., AAWS, FatML). In a similar vein, industry players such as IBM, Microsoft and Facebook are showing commitment to solving the associated challenges in their platforms. Moreover, policy makers are investigating the issues as well, as shown by the Obama administration’s report and the new Congressional AI Caucus.

Second, in addition to being of interest to different stakeholders, the topic involves a considerable number of perspectives, including but not limited to computer science, ethics, law, politics, journalism and economics. Such a degree of multidisciplinary effort is uncommon for research projects, which tend to focus on a narrower field of expertise; it may therefore be challenging to produce solutions that are both theoretically sound and practically functional.

Third, there seems to be much overlap between the initiatives mentioned here; many focus on solving the same problems, but it is unclear how well they are aware of each other and whether a centralized research agenda, resource sharing, or joint allocation of effort might help achieve results faster.

***

Notice an initiative or organization missing from this report? Please send information to Dr. Joni Salminen: joolsa@utu.fi.

A.I. – the next industrial revolution?

Introduction

Many workers are concerned about robotization and automation taking away their jobs. The media has also been writing actively about this topic lately, as can be seen in publications such as The New York Times and Forbes.

Although there is undoubtedly some dramatization in the scenarios created by the media, it is true that automation took away manual jobs throughout the 20th century and has continued – perhaps even accelerated – doing so in the 21st.

Currently, the jobs taken away by machines are manual labor, but what happens if machines take away knowledge labor as well? I think it’s important to consider this scenario: most of the focus has been on manual jobs, whereas the future disruption is more likely to take place in knowledge jobs.

This article discusses what’s next, in particular from the perspective of artificial intelligence (A.I.). I’ve been developing a theory about this topic for a while now. (It’s still unfinished, so I apologize for the fuzziness of thought…)

Theory on the development of job markets

My theory on the development of job markets relies on two key assumptions:

  1. with each development cycle, fewer people are needed,
  2. and it becomes more difficult for average people to add value.

The idea here is that while it is relatively easy to replace a job taken away by simple machines (sewing machines still need people to operate them), it is much harder to replace jobs taken away by complex machines (such as an A.I.) that provide higher productivity. Consequently, fewer people are needed to perform the same tasks.

By “development cycles”, I refer to drastic shifts in job market productivity, i.e.

craftsmanship –> industrial revolution –> information revolution –> A.I. revolution

Another assumption is that labor skills follow a Gaussian curve. This means most people are best suited for manual jobs, while the information economy requires skills at the upper end of that curve (the smartest and brightest).

In other words, the average worker will find it more and more difficult to add value in the job market, due to the sophistication of the systems (a lot more learning is needed to add value than in manual jobs, where training takes a couple of days). Even now, the majority of global workers are better suited to manual labor than to information economy jobs, and so some economies are at a major disadvantage (consider Greece vs. Germany).
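As a back-of-the-envelope illustration of this assumption, consider how quickly the qualifying share of workers shrinks as the required skill level moves up a normal curve. The skill scale and thresholds below are invented for illustration, not empirical estimates.

```python
from statistics import NormalDist

# Hypothetical normally distributed skill scale (IQ-like, for illustration).
skill = NormalDist(mu=100, sigma=15)

for threshold in (100, 115, 130):  # mean, +1 sd, +2 sd
    share = 1 - skill.cdf(threshold)
    print(f"skill >= {threshold}: {share:.1%} of workers qualify")
# skill >= 115 -> ~15.9%; skill >= 130 -> ~2.3%
```

Under these assumptions, moving the bar from the mean to one standard deviation above it cuts the qualifying share of workers from one half to roughly one sixth.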

Consistent with the previous definition, we can see the job market as including two types of workers:

  • workers who create
  • workers who operate

The former create systems as their job, whereas the latter operate them as their job. For example, in the sphere of online advertising, Google’s engineers create the AdWords search-engine advertising platform, which is then used by online marketers running campaigns for their clients. In the current information economy, workers who are able to create systems are in the best position: their value-added is the greatest. With an A.I., however, both jobs can be overtaken by machine intelligence. This is the major threat to knowledge workers.

The replacement takes place due to what I call the errare humanum est effect (the disadvantage of humans vis-à-vis machines), according to which a machine is always superior at job tasks compared to a human, who is an erratic being controlled by biological constraints (e.g., the need for food and sleep). Consequently, even the brightest humans will still lose to an A.I.

Examples

Consider the ratio of workers to customers across industries. (Some of the figures I have in mind are a bit outdated, but in general they support my argument.) In the emerging platform economy, this ratio is much lower than in previous transitions. To build a car for one customer, you need tens of manufacturing workers. To serve customers in a supermarket, the ratio needs to be something like 1:20 (otherwise queues become too long). But when the ratio is 1:1,000,000, not many people are needed to provide a service for the whole market.
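Taking these illustrative ratios at face value, the arithmetic is stark:

```python
market = 1_000_000  # customers to be served

# Workers needed per customer in each regime (the article's
# illustrative ratios, not measured figures).
workers_per_customer = {
    "car manufacturing": 10,            # tens of workers per customer
    "supermarket":       1 / 20,        # one worker per ~20 customers
    "software platform": 1 / 1_000_000, # one worker per million users
}

for regime, ratio in workers_per_customer.items():
    print(f"{regime:18} -> ~{market * ratio:,.0f} workers")
# car manufacturing ~10,000,000; supermarket ~50,000; platform ~1
```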

As can be seen, the mobile application industry, which has been touted as a source of new employment, does indeed create new jobs, but it doesn’t create them for the masses. This is because not many people are needed to succeed in this business environment.

Further disintermediation takes place when platforms talk to each other, forming super-ecosystems. Currently, this happens through API logic (application programming interfaces), which is a “dumb” logic that only carries out prescribed tasks; an A.I. would dramatically change the landscape by introducing creative logic into API-based applications.
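For example, a “dumb” API integration between two platforms looks roughly like the sketch below (the endpoint and payload are hypothetical): it executes one prescribed task and nothing more, with no judgment or adaptation involved.

```python
import requests  # third-party HTTP library

# Prescribed task: whenever platform A records a sale, push the order
# to platform B's fulfillment API. Nothing here adapts or decides;
# the code does exactly what was written in advance, and nothing else.
def forward_order(order: dict) -> None:
    resp = requests.post(
        "https://api.platform-b.example/v1/orders",  # hypothetical endpoint
        json={"sku": order["sku"], "quantity": order["quantity"]},
        timeout=10,
    )
    resp.raise_for_status()
```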

Which jobs will an A.I. disrupt?

Many professional services are on the line. Here are some I can think of.

1. Marketing managers 

An A.I. can allocate budget and optimize campaigns far more efficiently than error-prone humans. The step from Google AdWords and Facebook Ads to fully automated marketing solutions is not that big – at the moment, the major advantage of humans is creativity, but the definition of an A.I. in this article assumes creative functions.

2. Lawyers 

An A.I. can recall all laws, find precedent cases instantly, and give correct judgments. I recently had a discussion with one of my developer friends who was particularly interested in applying A.I. to the legal system: it is currently too big for a human to comprehend, as there are thousands of laws, some of which contradict one another. An A.I. could quickly find contradicting laws and give all alternative interpretations. The current human advantage is a sense of morality (right and wrong), which can be hard to replicate with an A.I.

3. Doctors 

An A.I. makes faster and more accurate diagnoses; a robot performs surgical operations without flaw. I would say many standard diagnoses by human doctors could be replaced by an A.I. measuring the symptoms. There have been several cases of incorrect diagnoses due to haste and human error; as noted previously, an A.I. is immune to these limitations. The major human advantage is sympathy, although some doctors lack even that.

4. Software developers

Even developers face extinction; upon learning the syntax, an A.I. will improve itself faster than humans can. This would lead to an exponentially accelerating increase in intelligence, something commonly depicted in A.I. development scenarios.

Basically, all knowledge professions, if accessible to A.I., will be disrupted.

Which jobs will remain?

Actually, the only jobs left would be manual jobs – unless robots take them as well (there are some economic considerations against this scenario). I’m talking about low-level manual jobs: transportation, cleaning, maintenance, construction, etc. These require physical material and presence; due to the aforementioned supply and demand dynamics, it may be that people are cheaper to “build” than robots, and can therefore still take on simple jobs.

At the other extreme, there are experience services offered by people to other people – massage, entertainment. These can remain based on the previous logic.

How can workers prepare?

I can think of a couple of ways.

First, learn coding – i.e., talking to machines. People who understand machine logic are in a position to add value: they have access to the society of the future, whereas those who are unable to use systems are at a disadvantage.

The best strategy for a worker in this environment is continuous learning and re-education. From the schooling system, this requires a complete shift in thinking: currently most universities are far behind in teaching practical skills. I notice this every day in my job as a university teacher. Higher education must catch up, or it will completely lose its value.

Currently, higher education is shielded by governments through official diplomas appreciated by recruiters, but true skills trump such an advantage in the long run. Already I am advising my students to learn from MOOCs (massive open online courses) rather than relying on the education we give in my institution.

What are the implications for the society?

At a global scale, societies are currently facing two contrasting mega-trends:

  • rising productivity: fewer people are needed for the same output
  • population growth: more people are born and thus need jobs

It is not hard to see that these are contrasting. The increase in people is exponential, while the increase in productivity comes, according to my theory, in large shifts. A large shift is bad because before it takes place, everything seems normal. (It’s like a tsunami approaching – no way to know before it hits you.)

What are the scenarios to solve the mega-trend contradiction?

I can think of a couple of ways:

  1. Marxist approach – redistribution of wealth and re-discovery of “job”
  2. WYSIWYG approach – making the systems as easy as possible

By adopting a Marxist approach, we can see there are two groups who are best off in this new world order:

  • The owners of the best A.I. (system capital)
  • The people with capacity to use and develop A.I. further (knowledge capital)

Others, as argued previously, are at a disadvantage. The phenomenon is very similar to the concept of the “digital divide”, which can refer to 1) the difference in access to technologies between citizens of developed and developing countries, or 2) the ability of the elderly vs. the young to use modern technology (the former have, for example, worse opportunities in high-tech job markets).

There are some relaxations to the arguments I’ve made. First, we need to consider that the increased free time people have, as well as general population growth, creates demand for services relating to experiences and entertainment. Yet re-distribution of wealth still needs consideration, as people who are unable to work must consume in order to provide work for others (in other words, the service economy needs special support and encouragement from government vis-à-vis machine labor).

While it is a precious goal that everyone contribute to society through work, the future may require a re-examination of this protestant work ethic if the supply of work indeed drastically decreases. The major reason, in my opinion, behind the failure of policies reducing work hours, such as the 35-hour work-week in France, is that other countries are not adopting them and thus gain a comparative advantage in the global market. We are not yet at the stage where the supply of labor is dramatically reduced at a global scale, but according to my theory we are getting there.

Second, a major relaxation is that systems can be usable by people who lack an understanding of their technical finesse. This method is already widely applied: very few understand the operating principles of the Internet, yet they can use it without difficulty. Even more complex professional systems, like Google AdWords, can be used without a detailed understanding of Google’s algorithm or Vickrey second-price sealed-bid auctions.
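For reference, the core pricing rule of a Vickrey second-price sealed-bid auction is tiny, which is part of why platforms can hide it behind a simple interface: the highest bidder wins but pays only the second-highest bid. A minimal sketch follows (not Google’s actual implementation, which layers quality scores on top of the second-price idea):

```python
def vickrey(bids: dict[str, float]) -> tuple[str, float]:
    """Return (winner, price) for a second-price sealed-bid auction.

    Assumes at least two bidders; the winner pays the runner-up's bid.
    """
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up]

# Example: advertiser A bids 1.50, B bids 1.20, C bids 0.80.
print(vickrey({"A": 1.50, "B": 1.20, "C": 0.80}))  # -> ('A', 1.2)
```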

So, dumbing things down is one way to go. The problem with this approach in the A.I. context is that when the system is smart enough to use itself, there is no need to dumb it down; having humans use it would be a non-optimal use of resources. Already we can see this in some bidding algorithms in online advertising: the system optimizes better than people do. At the moment, we online marketers can add value through copywriting and other creative work, but an upcoming A.I. would take away this advantage from us.

Recommendations

It is the natural state of job markets that most workers are skilled only for manual labor or very simple machine work; if these jobs are lost, a new way of organizing society is needed. Rather than fighting the change, societies should approach it objectively (which is probably one of the hardest things for human psychology).

My recommendations for the policy makers are as follows:

  • decrease the cost of human labor (e.g., in Finland some services were exempted from taxes in the 1970s – a similar scheme could help)
  • reduce employment costs – the situation is in fact perverse, as companies are currently penalized through payroll side costs when they recruit workers. In a society where demand for labor is scarce, the reverse needs to take place: companies that recruit need to be rewarded.
  • retain/introduce monetary transfers à la welfare societies – because there is not enough work for everyone, the state needs to transfer money from capital holders to the underprivileged. The Nordic states are closer to a working model than more capitalistic states such as the United States.
  • push for education system changes – because the skills required in the job market are more advanced and more in flux than before, curriculum content needs to change faster than it currently does. Unnecessary learning should be eliminated in favor of the key skills needed in the job market at any given moment, with further education paths supporting lifelong learning.

Because the problem of declining demand for labor is not yet acute, these changes are unlikely to take place until there is no other choice (which is, by the way, the case for most political decision making).

Open questions

Up to what point can human labor be replaced? I call it the point of zero human: the point when no humans are needed to produce an equal or larger output than what was produced at an earlier point in time. The fortune of humans is that we are producing more all the time; if the production level were still at the stage of the 18th century, we would already be at the point of zero human. Job markets are therefore not developing in a predictable way towards the point of zero human, but it may nevertheless be a stochastic outcome of the current rate of technological development. Ultimately, time will tell. We are living in exciting times.