
How to teach machines common sense? Solutions for the ambiguity problem of AI

Introduction

The ambiguity problem illustrated:

User: ”Siri, call me an ambulance!”

Siri: ”Okay, I will call you ’an ambulance’.”

You’ll never reach the hospital, and end up bleeding to death.

Solutions

Two potential solutions come to mind:

A. machine builds general knowledge (”common sense”)

B. machine identifies ambiguity & asks for clarification from humans (reinforcement learning)

The whole ”common sense” problem can be addressed by introducing human feedback into the system. We really need to tell the machine what is what, just as we would teach a child. This is iterative learning, in which trial and error take place. Even so, it is better than trying to map an inescapably finite dataset onto a close-to-infinite space of meanings.

In fact, by doing so, A and B converge – which is fine, and ultimately necessary.

Contextual awareness

To determine the proper resolution of an ambiguous situation, the machine needs contextual awareness. This can be achieved by storing contextual information from each ambiguous situation and by explaining to the machine ”why” a particular piece of information resolves the ambiguity. It is not enough to say ”you’re wrong”; there needs to be an explicit association to a reason (a concept, a variable). Equally, it is not enough to say ”you’re right”; again, the same association is needed.

The process:

1) try something

2) get told it’s not right, and why (linking to contextual information)

3) try something else, guided by that ”why”

4) get rewarded if it’s right (a minimal code sketch of this loop follows below).
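Here is a minimal sketch of what such an explained-feedback loop could look like, assuming a hypothetical trainer that returns not just a right/wrong signal but also the concept that justifies it (all names and structures below are invented for illustration, not an existing API):

```python
import random

# Toy sketch of "explained feedback", not an existing API: every right/wrong
# signal is tied to a reason (a concept) that the machine stores and reuses.
concept_map = {}   # concept -> interpretation confirmed as right
rejected = set()   # (concept, interpretation) pairs explained as wrong

def choose(candidates, context_concepts):
    """Prefer interpretations confirmed for a concept in the context; avoid rejected ones."""
    for concept in context_concepts:
        if concept in concept_map and concept_map[concept] in candidates:
            return concept_map[concept]
    allowed = [c for c in candidates
               if all((concept, c) not in rejected for concept in context_concepts)]
    return random.choice(allowed or candidates)      # step 1: try something

def feedback_round(candidates, context_concepts, human_feedback):
    guess = choose(candidates, context_concepts)
    correct, reason = human_feedback(guess)          # step 2: right/wrong, and *why*
    if correct:
        concept_map[reason] = guess                  # step 4: the reward reinforces the link
        return guess, 1
    rejected.add((reason, guess))                    # step 3: the next try avoids this pairing
    return guess, 0

# Usage: a human explains that in an emergency context, "call me X" means "summon X".
def human_feedback(guess):
    return guess == "summon an ambulance", "emergency context"

candidates = ["address the user as 'an ambulance'", "summon an ambulance"]
for _ in range(3):
    print(feedback_round(candidates, ["emergency context"], human_feedback))
```

The essential difference from plain reinforcement learning is that the reward signal is always accompanied by a reason, so the machine accumulates concept associations rather than bare probabilities.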

The problem is that, at the moment, machines are trained on data, not through human feedback.

New thinking: Training AI pets

So we would need to build machine-training systems that enable training through direct human feedback, i.e. a new way to teach and communicate with the machine. This is not a trivial thing, since the whole machine-learning paradigm is based on data, not meanings. From data and probabilities, we would need to move to associations and concepts that capture social reality. A new methodology is needed. Potentially, individuals could train their own AIs like pets (think of having your own ”AI pet”, like a Tamagotchi), or we could use large numbers of crowd workers who would explain to the machine why things are the way they are (i.e., create associations). A specific type of markup (i.e., communication with the machine) would probably also be needed, although conversational UIs would most likely be the best solution.
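Purely as a hypothetical illustration of what such markup could record (the field names below are invented for this sketch, not an existing standard), a trainer’s correction might be captured as a structured annotation that links the error to a reason:

```python
# Hypothetical annotation format (invented for illustration): a human trainer
# corrects the machine and explicitly links the correction to a reason concept.
annotation = {
    "utterance": "Siri, call me an ambulance!",
    "machine_interpretation": "address the user as 'an ambulance'",
    "verdict": "wrong",
    "reason_concept": "emergency context",            # the explicit "why"
    "corrected_interpretation": "summon an ambulance",
}

# A conversational UI could collect the same information turn by turn
# ("Why is that wrong?" -> "Because it is an emergency") and emit this record.
print(annotation["reason_concept"])
```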

By mimicking human learning we can teach the machine common sense. This is probably the only way; since common sense does not exist beyond human cognition, it can only be learnt from humans. An argument can be made that this is like going back in time, to the era when machines followed rule-based programming (as opposed to being data-driven). However, I would argue that rule-based learning is much closer to human learning than the current probability-based approach, and if we want to teach common sense, we therefore need to adopt the human way.

Conclusion

Machine learning may be up to par, but machine training certainly is not. The current machine-learning paradigm is data-driven, whereas we should look into concept-driven AI training approaches. Essentially, this is something like reinforcement learning for concept maps.

What jobs are safe from AI?

There is enormous concern about machine learning and AI replacing human workers. However, according to several economists, and according to past experience reaching all the way back to the industrial revolution of the 18th century (which caused major distress at the time), the replacement of human workers is not permanent: new jobs will emerge to replace the ones that are lost (as postulated by the Schumpeterian hypothesis). In this post, I will briefly share some ideas on which jobs are relatively safe from AI, and how an individual member of the workforce can increase his or her chances of being competitive in the job market of the future.

“Insofar as they are economic problems at all, the world’s problems in this generation and the next are problems of scarcity, not of intolerable abundance. The bogeyman of automation consumes worrying capacity that should be saved for real problems . . .” -Herbert Simon, 1966

What jobs are safe from AI?

The ones involving:

  1. creativity – a machine can ”draw” and ”compose”, but it cannot develop a business plan.
  2. interpretation – even in law, which is codified in most countries, lawyers use judgment and interpretation. This cannot be replaced as things currently stand.
  3. transaction costs – robots could conduct surgery, and even evaluate beforehand whether surgery is needed, but in between you need people to explain things, to prepare the patients, and so on. Most service chains require a lot of mobility and communication, i.e. transaction costs, that have to be handled by people.

How to avoid losing your job to AI?

Make sure your skills are complementary to automation, not a substitute for it. For example, if you have great copywriting skills, there has never been a better time to be a marketer, as digital platforms enable you to reach any audience with a few clicks. The machine cannot write compelling ads, so your skills are complementary. Increased automation does not reduce the need for creativity; it amplifies it.

If machines were to learn to be creative in a meaningful way (which, realistically speaking, is far, far away), you would move on to some other complementary task.

The point is: there is always some part of the process you can complement.

Fear not. Machines will not take all human jobs, because not all human jobs exist yet. Machines and software will take care of some parts of service chains, perhaps even to a great extent, but that will in fact enhance the functioning of the whole chain, and of human labor too (consider the amplification example of online copywriting). New jobs that we still cannot envision will be created, as needs and human imagination keep evolving.

The answer lies in creative destruction: people won’t stop coming up with things to offer because of machines, and other people won’t stop wanting those things because of machines. Jobs will remain in the era of AI as well. The key is not to complain about someone taking your job, but to think of other things to offer and develop your personal competences accordingly. Even if you won’t, the next person will. There’s no stopping creativity.

Read more:

  • Scherer, F. M. (1986). Innovation and Growth: Schumpeterian Perspectives (MIT Press Books). The MIT Press.
  • Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3–30.

Research agenda for ethics and governance of artificial intelligence

The ethics of machine learning algorithms has recently been raised as a major research concern. Earlier this year (2017), a $27 million fund was launched to support research on the societal challenges of AI. The group behind the fund includes, among others, the Knight Foundation, the Omidyar Network, and startup founder and investor Reid Hoffman.

As stated on the fund’s website, the fund will support a cross-section of AI ethics and governance projects and activities, both in the United States and internationally. It advocates cross-disciplinary research among, for example, computer scientists, social scientists, ethicists, philosophers, economists, lawyers, and policymakers.

The fund lays out a list of areas it is interested in funding. The list can be seen as a sort of research agenda. The items are:

  • Communicating complexity: How do we best communicate, through words and processes, the nuances of a complex field like AI?
  • Ethical design: How do we build and design technologies that consider ethical frameworks and moral values as central features of technological innovation?
  • Advancing accountable and fair AI: What kinds of controls do we need to minimize AI’s potential harm to society and maximize its benefits?
  • Innovation in the public interest: How do we maintain the ability of engineers and entrepreneurs to innovate, create and profit, while ensuring that society is informed and that the work integrates public interest perspectives?
  • Expanding the table: How do we grow the field to ensure that a range of constituencies are involved with building the tools and analyzing social impact?

As can be seen, the agenda emphasizes the big question: how can we maintain the benefits of the new technologies while making sure that their potential harm is minimized? To answer this question, a host of studies and perspectives is definitely needed. A list of other initiatives working on the societal issues of AI and machine learning can be read here.

Ethics and Governance of Artificial Intelligence Initiative

I read about this great initiative on Harvard’s website and thought of sharing it here:

About the Ethics and Governance of Artificial Intelligence Initiative

Artificial intelligence and complex algorithms, fueled by the collection of big data and deep learning systems, are quickly changing how we live and work, from the news stories we see, to the loans for which we qualify, to the jobs we perform. Because of this pervasive impact, it is imperative that AI research and development be shaped by a broad range of voices—not only by engineers and corporations—but also social scientists, ethicists, philosophers, faith leaders, economists, lawyers, and policymakers.
To address this challenge, several foundations and funders recently announced the Ethics and Governance of Artificial Intelligence Fund, which will support interdisciplinary research to ensure that AI develops in a way that is ethical, accountable, and advances the public interest. The Berkman Klein Center and the MIT Media Lab will act as anchor academic institutions for this fund and develop a range of activities, research, tools, and prototypes aimed at bridging the gap between disciplines and connecting human values with technical capabilities. They will work together to strengthen existing and form new interdisciplinary human networks and institutional collaborations, and serve as a collaborative platform where stakeholders working across disciplines, sectors, and geographies can meet, engage, learn, and share.

Read more: https://cyber.harvard.edu/research/ai

A.I. – the next industrial revolution?

Introduction

Many workers are concerned about robotization and automation taking away their jobs. The media has also been writing actively about this topic lately, as can be seen in publications such as The New York Times and Forbes.

Although there is undoubtedly some dramatization in the scenarios created by the media, it is true that the trend of automation took away manual jobs throughout the 20th century and has continued – perhaps even accelerated – in the 21st century.

Currently, the jobs taken away by machines are in manual labor, but what happens if machines take away knowledge work as well? I think it is important to consider this scenario, as most of the focus has been on manual jobs, whereas the future disruption is more likely to take place in knowledge jobs.

This article discusses what’s next – in particular from the perspective of artificial intelligence (A.I.). I’ve been developing a theory about this topic for a while now. (It’s still unfinished, so I apologize for the fuzziness of thought…)

Theory on the development of job markets

My theory on the development of job markets relies on two key assumptions:

  1. with each development cycle, fewer people are needed
  2. and with each cycle, it becomes more difficult for average people to add value

The idea here is that while it is relatively easy to replace a job taken away by simple machines (sewing machines still need people to operate them), it is much harder to replace jobs taken away by complex machines (such as an A.I.) that provide higher productivity. Consequently, fewer people are needed to perform the same tasks.

By ”development cycles”, I refer to drastic shifts in job-market productivity, i.e.

craftsmanship → industrial revolution → information revolution → A.I. revolution

Another assumption is that labor skills follow a Gaussian curve. This means most people are best suited for manual jobs, while the information economy requires skills that are at the upper end of that curve (the smartest and brightest).

In other words, the average worker will find it more and more difficult to add value in the job market, due to the sophistication of the systems (far more learning is needed to add value than in manual jobs, where training takes a couple of days). Even now, the majority of workers globally are better suited to manual labor than to information-economy jobs, and so some economies are at a major disadvantage (consider Greece vs. Germany).
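To make the Gaussian assumption concrete, here is a minimal sketch (the skill scale and thresholds are arbitrary illustrations, not empirical estimates) of how quickly the share of workers above a rising skill bar shrinks:

```python
from statistics import NormalDist

# Illustrative only: assume skills are normally distributed on an IQ-like
# scale (mean 100, standard deviation 15), chosen arbitrarily for this sketch.
skills = NormalDist(mu=100, sigma=15)

for bar in (100, 115, 130):
    share = 1 - skills.cdf(bar)   # share of workers above the skill bar
    print(f"skill bar {bar}: {share:.1%} of workers qualify")

# Roughly 50%, 16% and 2%: the higher the bar set by sophisticated systems,
# the smaller the share of the workforce that can add value on top of them.
```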

Consistent with the previous definition, we can see the job market as including two types of workers:

  • workers who create
  • workers who operate

The former create the systems as their job, whereas the latter operate them as their job. For example, in the sphere of online advertising, Google’s engineers create the AdWords search-engine advertising platform, which is then used by online marketers running campaigns for their clients. In the current information economy, the best situation is for workers who are able to create systems – i.e. their value-added is the greatest. With an A.I., however, both jobs can be overtaken by machine intelligence. This is the major threat to knowledge workers.

The replacement takes place due to what I call the errare humanum est effect (the disadvantage of humans vis-à-vis machines), according to which a machine is always superior at job tasks compared to a human, who is an erratic being controlled by biological constraints (e.g., the need for food and sleep). Consequently, even the brightest humans will still lose to an A.I.

Examples

Consider these examples:

(Some of these figures are a bit outdated, but in general they serve to support my argument.)

Therefore, the ratio of workers to customers is much lower than in previous transitions. To build a car for one customer, you need tens of manufacturing workers. To serve customers in a supermarket, the ratio needs to be something like 1:20 (otherwise queues become too long). But when the ratio is 1:1,000,000, not many people are needed to provide a service for the whole market.
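A quick back-of-the-envelope calculation (the market size is an arbitrary illustration; the ratios are the ones mentioned above) shows how dramatically the headcount collapses as the worker-to-customer ratio changes:

```python
# Illustrative arithmetic only: workers needed to serve a market of 10 million
# customers at the worker-to-customer ratios mentioned in the text.
market = 10_000_000

for label, customers_per_worker in [("supermarket service, 1:20", 20),
                                    ("software platform, 1:1,000,000", 1_000_000)]:
    workers = market // customers_per_worker
    print(f"{label}: ~{workers:,} workers")

# ~500,000 workers versus ~10 workers: a platform can serve the whole market
# with a headcount that is negligible from an employment point of view.
```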

As can be seen, the mobile application industry, which has been touted as a source of new employment, does indeed create new jobs, but it doesn’t create them for the masses. This is because not many people are needed to succeed in this business environment.

Further disintermediation takes place when platforms talk to each other, forming super-ecosystems. Currently, this happens through an API logic (application programming interface), which is a ”dumb” logic that performs only prescribed tasks, but an A.I. would dramatically change the landscape by introducing creative logic into API-based applications.
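As a small illustration of what ”dumb” API logic means here (a toy sketch with invented function names, standing in for no particular platform), the integration below can only ever do the one prescribed task it was written for:

```python
# Toy sketch of prescribed, "dumb" API logic (all names invented for illustration):
# the integration can only ever perform the fixed task it was written for.

def shop_get_orders(since: str) -> list:
    """Stub standing in for a hypothetical shop platform's API."""
    return [{"id": 1, "total": 19.90}, {"id": 2, "total": 5.00}]

def accounting_create_invoice(amount: float) -> None:
    """Stub standing in for a hypothetical accounting platform's API."""
    print(f"invoice created for {amount:.2f}")

def nightly_sync() -> None:
    # The whole "super-ecosystem" behavior is this fixed pipeline; anything
    # creative (spotting anomalies, redesigning the workflow) is out of reach.
    for order in shop_get_orders(since="yesterday"):
        accounting_create_invoice(order["total"])

nightly_sync()
```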

Which jobs will an A.I. disrupt?

Many professional services are on the line. Here are some I can think of.

1. Marketing managers 

An A.I. can allocate budgets and optimize campaigns far more efficiently than error-prone humans. The step from Google AdWords and Facebook Ads to fully automated marketing solutions is not that big – at the moment, the major advantage of humans is creativity, but the definition of A.I. used in this post assumes creative functions.
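As a rough sketch of the kind of optimization meant here (a toy heuristic with made-up figures, not how AdWords or Facebook Ads actually allocate spend), a machine can keep shifting budget toward the channels that convert best, without tiring or forgetting:

```python
# Toy budget-allocation heuristic with invented figures; real ad platforms use
# far more sophisticated methods – this only illustrates the principle.
observed_conversion_rate = {"search_ads": 0.031, "social_ads": 0.012, "display_ads": 0.004}
total_budget = 10_000.0

total_rate = sum(observed_conversion_rate.values())
allocation = {channel: total_budget * rate / total_rate
              for channel, rate in observed_conversion_rate.items()}

for channel, budget in allocation.items():
    print(f"{channel}: {budget:,.0f}")

# The machine can repeat this reallocation continuously as new data arrives,
# something a human campaign manager can only approximate in periodic reviews.
```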

2. Lawyers 

An A.I. can recall all laws, find precedent cases instantly, and give correct judgments. I recently had a discussion with one of my developer friends – he was particularly interested in applying A.I. to the legal system. Currently it is too big for a human to comprehend, as there are thousands of laws, some of which contradict one another. An A.I. could quickly find contradicting laws and give all alternative interpretations. The current human advantage is a sense of morality (right and wrong), which can be hard to replicate with an A.I.
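A very reduced sketch of the contradiction-finding idea (the rules are invented toy examples; real statutes are of course not this neatly structured): if laws are encoded as condition/outcome pairs, conflicting pairs can be found mechanically.

```python
from itertools import combinations

# Toy sketch: encode rules as (condition, outcome) pairs and flag pairs that
# attach contradictory outcomes to the same condition. Rules are invented.
rules = [
    ("street parking on snow-clearing days", "permitted"),
    ("street parking on snow-clearing days", "prohibited"),
    ("dog off leash in city parks", "prohibited"),
]

conflicts = [(a, b) for a, b in combinations(rules, 2)
             if a[0] == b[0] and a[1] != b[1]]

for a, b in conflicts:
    print(f"conflict on '{a[0]}': {a[1]} vs. {b[1]}")
```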

3. Doctors 

An A.I. makes faster and more accurate diagnoses; a robot performs surgical operations without flaw. I would say many standard diagnoses by human doctors could be replaced by an A.I. measuring the symptoms. There have been several cases of incorrect diagnoses due to haste and the human error factor – as noted previously, an A.I. is immune to these limitations. The major human advantage is sympathy, although some doctors lack even that.

4. Software developers

Even developers face extinction; once it has learned the syntax, an A.I. will improve itself better than humans can. This would lead to an exponentially accelerating increase in intelligence, something commonly depicted in A.I. development scenarios.

Basically, all knowledge professions that are accessible to an A.I. will be disrupted.

Which jobs will remain?

Actually, the only jobs left would be manual jobs – unless robots take those as well (there are some economic considerations against this scenario). I am talking about low-level manual jobs: transportation, cleaning, maintenance, construction, and so on. These require physical presence and material, and due to the aforementioned supply-and-demand dynamics, it may be that people are cheaper to ”build” than robots, and can therefore still take on simple jobs.

At the other extreme, there are experience services offered by people to other people – massage, entertainment, and the like. Following the previous logic, these can remain.

How can workers prepare?

I can think of a couple of ways.

First, learn coding – i.e. talking to machines. People who understand the machines’ logic are in a position to add value – they have access to the society of the future, whereas those who are unable to use the systems are at a disadvantage.

The best strategy for a worker in this environment is continuous learning and re-education. For the schooling system, this requires a complete shift in thinking – currently, most universities are far behind in teaching practical skills. I notice this every day in my job as a university teacher – higher education must catch up, or it will completely lose its value.

Currently, higher education is shielded by governments through official diplomas appreciated by recruiters, but true skills trump such an advantage in the long run. Even now, I am advising my students to learn from MOOCs (massive open online courses) rather than rely solely on the education we give in my institution.

What are the implications for the society?

At a global scale, societies are currently facing two contrasting mega-trends:

  • productivity keeps increasing, so fewer people are needed for the same output
  • the population keeps growing, so more people need jobs

It is not hard to see that these are contrasting: fewer people are needed for the same output, whereas more people are born and thus need jobs. The increase in population is exponential, while the increase in productivity comes, according to my theory, in large shifts. A large shift is bad because, before it takes place, everything seems normal. (It is like a tsunami approaching – there is no way to know before it hits you.)

What are the scenarios to solve the mega-trend contradiction?

I can think of a couple of ways:

  1. Marxist approach – redistribution of wealth and re-discovery of “job”
  2. WYSIWYG approach – making the systems as easy as possible

By adopting a Marxist approach, we can see there are two groups who are best off in this new world order:

  • The owners of the best A.I. (system capital)
  • The people with capacity to use and develop A.I. further (knowledge capital)

Others, as argued previously, are at a disadvantage. The phenomenon is much like the concept of the ”digital divide”, which can refer to 1) the difference in access to technologies between citizens of developed and developing countries, or 2) the gap between the elderly and the young in the ability to use modern technology (the elderly having, for example, worse opportunities in high-tech job markets).

There are some relaxations to the arguments I have made. First, we need to consider that the increase in free time, as well as general population growth, creates demand for services relating to experiences and entertainment per se; yet redistribution of wealth needs to be considered, as people who are unable to work still need to consume in order to provide work for others (in other words, the service economy needs special support and encouragement from government vis-à-vis machine labor).

While it is a precious goal that everyone contribute to society through work, the future may require a re-examination of this Protestant work ethic if the supply of work does indeed decrease drastically. The major reason, in my opinion, behind the failure of policies reducing work hours, such as the 35-hour work week in France, is that countries other than these pioneers do not adopt them and thereby gain a comparative advantage in the global market. We are not yet at the stage where the supply of work is dramatically reduced on a global scale, but according to my theory we are getting there.

Secondly, a major relaxation, indeed, is that the systems can be made usable by people who do not understand their technical details. This approach is already widely applied – very few understand the operating principles of the Internet, and yet they can use it without difficulty. Even more complex professional systems, like Google AdWords, can be used without a detailed understanding of Google’s algorithm or of Vickrey (sealed-bid second-price) auctions.
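For the curious, the auction mechanism mentioned above is simple enough to sketch in a few lines (simplified – real ad auctions also weight bids by quality scores), which is exactly the point: the user never needs to know any of this to run a campaign.

```python
# Simplified Vickrey (sealed-bid second-price) auction: the highest bidder wins
# but pays the second-highest bid. Real ad auctions add quality scores on top.
def second_price_auction(bids: dict) -> tuple:
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

winner, price = second_price_auction({"alice": 2.50, "bob": 1.80, "carol": 0.90})
print(winner, price)   # alice wins the slot but pays 1.80
```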

So, dumbing things down is one way to go. The problem with this approach in the A.I. context is that when the system is smart enough to operate itself, there is no need to dumb anything down – having humans use it would be a suboptimal use of resources. We can already see this in some bidding algorithms in online advertising: the system optimizes better than people do. At the moment, we online marketers can add value through copywriting and other creative work, but an upcoming A.I. would take that advantage away from us.

Recommendations

It is the natural state of job markets that most workers are skilled only for manual labor or very simple machine work; if these jobs are lost, a new way of organizing society is needed. Rather than fighting the change, societies should approach it objectively (which is probably one of the hardest things for human psychology).

My recommendations for the policy makers are as follows:

  • decrease the cost of human labor (e.g., in Finland sometime in the 70s, services were exempted from taxes – this kind of measure would help)
  • reduce employment costs – the situation is in fact perverse, as companies are penalized through side costs if they recruit workers. In a society where demand for labor is scarce, the reverse needs to happen: companies that recruit need to be rewarded.
  • retain/introduce monetary transfers à la welfare societies – because there is not enough work for everyone, the state needs to pass money from capital holders to the underprivileged. The Nordic states are closer to a working model than more capitalistic states such as the United States.
  • push for education system changes – because the skills required in the job market are more advanced and more in flux than before, curriculum content needs to change faster than it currently does. Unnecessary learning should be eliminated in favor of the key skills the job market needs at the moment, alongside further education paths for lifelong learning.

Because the problem of decreasing demand for labor is not yet acute, these changes are unlikely to take place until there is no other choice (which is, by the way, the case for most political decision-making).

Open questions

Up to what point can human labor be replaced? I call it the point of zero human: the point at which no humans are needed to produce an output equal to or larger than what was produced at an earlier point in time. The fortune of humans is that we keep producing more – if production were still at the level of the 18th century, we would already have reached the point of zero human. Job markets are therefore not developing in a predictable way towards the point of zero human, but it may nevertheless be a stochastic outcome of the current pace of technological development. Ultimately, time will tell. We are living in exciting times.
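To close, one way to state the definition a bit more formally (notation introduced here purely for illustration): with output $Y(t, L)$ depending on time $t$ and human labor $L$, the point of zero human is the first time at which zero labor suffices to match some earlier reference output.

```latex
% A possible formalization of the "point of zero human" (notation invented here).
% Y(t, L): output producible at time t with L units of human labor;
% (t_0, L_0): a reference time and labor level.
\[
  t^{*} \;=\; \min \bigl\{\, t \;:\; Y(t, 0) \,\ge\, Y(t_0, L_0) \,\bigr\}
\]
% Because the reference output Y(t_0, L_0) keeps rising as we produce ever more,
% t^{*} keeps receding rather than arriving on a predictable schedule.
```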