
It’s not only about individuals – Three levels to consider with automation

Short argument: when talking about automation, it’s important to define what level we mean. There are at least three levels:

  • automation within a professional task = this level deals with understanding how automation impacts people’s work, e.g., replacing them or boosting their productivity (both the good and the bad)
  • automation within an organization = this level deals with understanding how automation shapes organizations, e.g., what new skills are needed, what the impact is on overall productivity and quality of outputs, what new challenges arise, etc.
  • automation within a value chain / network = this level takes a holistic view of automation, inspecting its impact on a cluster or network of organizations and social systems, as opposed to micro-level analysis. The questions are similar to those at the other levels, but they are analyzed for the whole, considering inter-organizational dynamics.

Depending on the level, the implications, recommendations, best practices, and societal impacts differ.

Random notes from automation workshop

Notes from a CHI2018 workshop:

  • creativity is not an excuse for ignorance
  • software doing part of your work might be more work
  • errors from the developers cascade through the system
  • never trust the marketing aspect of automation
  • frictions in the promise of automation
  • automation surprises
  • automation was expected to behave more consistently
  • automation has an intended use — but people may use it differently: they revert to the easiest choice
  • automation is already everywhere
  • automation is inside the interaction technique
  • automation is old
  • “automation — this word is useless!”
  • transparency is a good property of automation
  • notation of automation
  • automation is not always good for humans – especially in the case of security
  • you need to spend a lot of time on training because automation is not what was expected
  • levels of automation are intertwined

Some of these are pretty good insights.

On Social Media Sampling

In social media sampling, there are many issues. Two of them are: 1) the silent majority problem and 2) the grouping problem.

The former refers to the imbalance between participants and spectators: can we trust that the vocal few represent the views of all?

The latter means that people with similar opinions tend to flock together, so by looking at one online community, or even one social media platform, we can get a biased understanding of the whole population.

Solving these problems is hard and requires understanding the online communities, their polarity, the sociology and psychology driving participation, and the functional principles of the algorithms that determine visibility and participation on the platforms.

Prior knowledge of the online communities can be used as a basis for stratified sampling, which can be a partial remedy.
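
To make this concrete, here is a minimal sketch of community-stratified sampling in Python; the community names, population shares, and post counts are hypothetical illustrations, not real data.

    # A minimal sketch of community-stratified sampling. The community
    # names, population shares, and posts are hypothetical illustrations.
    import random

    def stratified_sample(posts_by_community, population_shares, n):
        """Draw ~n posts so each community is represented according to its
        (assumed known) share of the population, not its share of activity."""
        sample = []
        for community, share in population_shares.items():
            k = round(n * share)
            posts = posts_by_community.get(community, [])
            sample.extend(random.sample(posts, min(k, len(posts))))
        return sample

    # The vocal extremes produce most of the posts, but the moderate
    # stratum is weighted by its estimated population share.
    posts = {
        "extreme_a": [f"a{i}" for i in range(500)],
        "moderate": [f"m{i}" for i in range(50)],
        "extreme_b": [f"b{i}" for i in range(450)],
    }
    shares = {"extreme_a": 0.1, "moderate": 0.8, "extreme_b": 0.1}
    print(len(stratified_sample(posts, shares, 100)))

Note that when a stratum is underrepresented in the data (as the moderate one is here), the sketch simply takes what is available; in practice this is exactly where the silent majority problem bites.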

Web 3.0: The dark side of social media

Web 2.0 was about all the pretty, shiny things of social media: user-generated content, blogs, customer participation, “everyone has a voice,” etc. Now, Web 3.0 is all about the dark side: algorithmic bias, filter bubbles, group polarization, flame wars, cyberbullying, etc. We discovered that maybe not everyone should have a voice after all. Or at least that a voice should be used with more attention to what one is saying.

While it is tempting to blame Facebook, the media, or “technology” for all this (just as it is easy to praise them for the good things), the truth is that individuals should accept more responsibility for their own behavior. Technology provides platforms for communication and information, but it does not generate communication and information; people do.

Consequently, I’m very skeptical about technological solutions to the Web 3.0 problems; they are not technological problems but social ones, requiring primarily social solutions and secondarily hybrid ones. We should start respecting the opinions of others, get educated about different views, and learn how to debate based on facts and on finding the fundamental points of disagreement, instead of resorting to argumentation fallacies. Here, machines have only limited power – it’s up to us to re-learn these things and keep teaching them to new generations. It’s quite pitiful that even though our technology is a thousand times better than that of Ancient Greece, our ability to debate properly is a tenth of what it was 2,000 years ago.

Avoiding the enslavement of machines requires going back to the basics of humanity.

Machine decision making and workflow engineering

Did you ever want to climb Mount Everest?

If you did, you would have to split such a goal into many tasks: you would first need to find out what resources are needed, who could help you, how to prepare mentally and physically, etc. You would come up with a list of tasks that, in sequence, form your plan for achieving the goal.

The same logic applies to all goals we humans have, both in companies and in private life, and it also applies when evaluating which tasks, given a goal, can be outsourced to machine decision making.

The best way to conduct such an analysis is to view organizational goals as a sequence of interrelated job tasks, and then evaluate which particular sub-tasks humans are best at handling, and vice versa.

  1. Define the end goal (e.g., launch a marketing campaign)
  2. Define the steps needed to achieve that goal (strategy) (e.g., decide targeting, write ads, define budget, optimize spend)
  3. Divide each step into sub-tasks (e.g., decide targeting: analyze past campaigns, analyze needs from social media)
  4. Evaluate (e.g., on a scale of 1-5) how well machine and human perform in each sub-task (e.g., write ads: human = 5, machine = 1)
  5. Look at the entire chain and identify points of synergy where the machine can be used to enhance human work or vice versa (e.g., analyze social media with supervised machine learning where crowd workers tag tweets); a minimal sketch of this evaluation is given right after this list.
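
Here is a minimal sketch of steps 1–5 in Python; the workflow, sub-tasks, 1–5 scores, and decision thresholds are hypothetical illustrations of the logic, not measured data.

    # A minimal sketch of steps 1-5; the workflow, scores (1-5 scale),
    # and decision thresholds are hypothetical illustrations, not data.
    workflow = {
        "decide targeting": {
            "analyze past campaigns": {"human": 2, "machine": 5},
            "analyze needs from social media": {"human": 3, "machine": 4},
        },
        "write ads": {"draft ad copy": {"human": 5, "machine": 1}},
        "optimize spend": {"adjust bids": {"human": 2, "machine": 5}},
    }

    for step, subtasks in workflow.items():
        for subtask, s in subtasks.items():
            gap = s["machine"] - s["human"]
            if gap >= 2:
                verdict = "outsource to machine"
            elif gap <= -2:
                verdict = "keep with human"
            else:
                verdict = "candidate for human-machine synergy"
            print(f"{step} / {subtask}: {verdict}")

The middle band (small gaps in either direction) is where synergy lives: neither party dominates, so the sub-task is a candidate for a hybrid arrangement such as the crowd-tagging example in step 5.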

By applying such logic, we find that there are plenty of tasks in organizational workflows that currently cannot be outsourced to machines, for a variety of reasons. Sometimes the reasons relate to manual processes, i.e., the overall context does not support carrying out tasks optimally. An example: currently, I’m manually downloading receipts from a digital marketing service account; I have to log in, retrieve the receipts as PDF files, and then send them as email attachments to bookkeeping. Ideally, the bookkeeping system would just retrieve the receipts automatically via an application programming interface (API), eliminating this unnecessary piece of human labor.
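
For illustration, a minimal sketch of that ideal flow, assuming a hypothetical receipts endpoint and a hypothetical bookkeeping import API (neither is a real service):

    # A minimal sketch of the ideal, API-based flow. The endpoints,
    # token, and JSON fields below are hypothetical; a real integration
    # would use the marketing service's and bookkeeping system's actual APIs.
    import requests

    RECEIPTS_API = "https://ads.example.com/api/receipts"  # hypothetical
    BOOKS_API = "https://books.example.com/api/import"     # hypothetical
    TOKEN = "..."  # API token for the marketing service

    resp = requests.get(RECEIPTS_API, headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    for receipt in resp.json():  # assume a list of receipt records
        pdf = requests.get(receipt["pdf_url"]).content  # hypothetical field
        # Hand the PDF straight to bookkeeping instead of emailing it.
        requests.post(BOOKS_API, files={"file": (f"{receipt['id']}.pdf", pdf)})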

At the same time, we should a) work to remove unnecessary barriers to work automation where it is feasible, while b) thinking of ways to achieve optimal synergy between human and machine work inputs. This is not about optimizing individual work tasks, but about optimizing entire workflows toward reaching a specific goal. At the moment, little research and attention is paid to this kind of comprehensive planning, which I call “workflow engineering”.

From polarity to diversity of opinions

The problem with online discussions and communities is that the extreme poles draw people effectively, causing group polarization, in which a person’s original opinion becomes more radical due to the influence of the group. In Finnish, we have a saying: “in a group, stupidity concentrates” (joukossa tyhmyys tiivistyy).

Here, I’m exploring the idea that this effect, namely the growth of the polar extremes (for example, being for or against immigration, as many European citizens currently are), arises simply because people lack options to identify with. There are only the extremes, but no neutral or moderate group, even though, as I argue here, most people are in fact moderate and understand that extremes and absolutes are misleading simplifications either way.

In other words, when there are only two “camps” of opinion, people are more easily split between them. However, my argument is that people have preferences that correspond to being in the middle, not at the extremes.

These preferences remain hidden because there are only two camps to subscribe to: one cannot be moderate because there is no moderate group.

For example, there are liberals and conservatives, but what about the people in the middle? What about those who share some ideas of liberals and others of conservatives? With only these two groups, other combinations become socially impossible, because people are, again socially, pressured to adopt all the opinions of the group they subscribe to, even when they disagree with a particular view. This effect has been studied in relation to the concept of groupthink, but no permanent remedy has been found.

How to solve the problem of extremes?

My idea is simple: we should create more camps, more views to subscribe to, especially ones representing moderate positions.

The argument is that with a greater supply of camps, people will distribute more evenly among them, and we will have less polarization as a consequence.

This is illustrated in the picture below (sketched quickly in Paint when inspiration struck).

[Figure: opinion distributions in scenarios (A) and (B)]

In (A), public discourse is dominated by the extremes (the distribution of attention is skewed toward the ends of a given opinion spectrum). In (B), the distribution is concentrated on the center of the opinion spectrum (= moderate views) while the extremes are marginalized (as they should be, according to the assumption of a moderate majority).

An example: having several political parties results in more diverse views being represented. In the US, you are either a Democrat or a Republican (although, it must be noted, there are the marginal Green Party and the progressives), but in Finland you can also be many other things: Center Party, National Coalition Party, or Green Party, for example. The same applies to most countries in Europe. Although I don’t have hard evidence for this, public discourse in the US seems exceptionally polarized compared to many other countries [1].

Giving the moderate, “silent majority” more choices to identify with would reveal the “true” opinions of citizens and ideally marginalize both extremes, avoiding the tyranny of the minority [2] that currently dominates public discourse.

Finally, all this could be formalized in game theory by assuming heterogeneity of preferences over the opinion spectrum and parameters such as gravity (the “pull factor” of the extremes), justifiable e.g. by the media attention given to extreme views over moderate ones. But the implication remains the same: under these assumptions, diversity of camps reduces polarization.
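
A minimal simulation sketch of the argument: agents hold opinions drawn from a moderate-centered distribution and join the nearest camp on a 0–1 spectrum, with the extreme camps exerting an extra pull. The distribution, the pull parameter, and the camp placement are all assumptions made for illustration, not an empirical model.

    # A minimal simulation of the argument: opinions are drawn from a
    # moderate-centered distribution, agents join the nearest camp on a
    # 0-1 spectrum, and the extreme camps exert an extra pull. The
    # distribution and the pull parameter are modeling assumptions.
    import random

    def share_in_extremes(n_camps, pull=0.15, n_agents=10_000):
        camps = [i / (n_camps - 1) for i in range(n_camps)]  # evenly spaced
        extreme = 0
        for _ in range(n_agents):
            opinion = min(max(random.gauss(0.5, 0.15), 0.0), 1.0)
            # Perceived distance to a camp, discounted by the extremes' pull.
            choice = min(camps, key=lambda c: abs(opinion - c)
                         - (pull if c in (0.0, 1.0) else 0.0))
            extreme += choice in (0.0, 1.0)
        return extreme / n_agents

    for k in (2, 3, 5, 7):
        print(f"{k} camps -> share of agents in the extremes: {share_in_extremes(k):.2f}")

With two camps, everyone lands in an extreme by construction; adding moderate camps shrinks the extremes’ share even though their pull factor is unchanged, which is exactly the claimed mechanism.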

Footnotes

[1] Of course there are other reasons, such as media taking political sides.

[2] This means that extreme views are not representative of the whole population (which is more moderate than either extreme), yet they get disproportionate attention in the media and public discourse. This happens because the majority’s views are hidden; they would need to be revealed.

How to teach machines common sense? Solutions for the ambiguity problem of AI

Introduction

The ambiguity problem illustrated:

User: “Siri, call me an ambulance!”

Siri: “Okay, I will call you ‘an ambulance’.”

You’ll never reach the hospital, and end up bleeding to death.

Solutions

Two potential solutions come to mind:

A. machine builds general knowledge (”common sense”)

B. machine identifies ambiguity & asks for clarification from humans (reinforcement learning)
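
As a minimal sketch of solution B, the snippet below detects that a request has more than one plausible parse and defers to the human instead of guessing; the parses and the toy pattern matching are hypothetical illustrations, not a real semantic parser.

    # A minimal sketch of solution B: enumerate plausible parses and,
    # if there is more than one, ask the human instead of guessing.
    # The toy pattern matching below is a hypothetical illustration.
    def interpret(utterance):
        parses = []
        if utterance.startswith("call me "):
            rest = utterance[len("call me "):].rstrip("!")
            parses.append(("place_call", rest))   # summon an ambulance for me
            parses.append(("rename_user", rest))  # address me as "an ambulance"
        return parses

    parses = interpret("call me an ambulance!")
    if len(parses) > 1:
        # Ambiguity detected: defer to the human rather than picking one.
        options = " or ".join(action for action, _ in parses)
        print(f"Did you mean {options}?")
    else:
        print("Executing:", parses[0])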

The whole “common sense” problem can be solved by introducing human feedback into the system. We really need to tell the machine what is what, just as we would tell a child. This is iterative learning, in which trials and errors take place. However, it is better than trying to map an inescapably finite dataset onto a near-infinite space of meanings.

But, in fact, A and B converge by doing so. Which is fine, and ultimately needed.

Contextual awareness

To determine the proper resolution of an ambiguous situation, the machine needs contextual awareness. This can be achieved by storing contextual information from each ambiguous situation and by being told why a particular piece of information resolves the ambiguity. It is not enough to say “you’re wrong”; there needs to be an explicit association with a reason (a concept, a variable). Equally, it is not enough to say “you’re right”; again, the same association is needed.

The process:

  1. Try something.
  2. Get told it’s not right, and why (linking to contextual information).
  3. Try something else, corresponding to that why.
  4. Get rewarded if it’s right.
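
A minimal sketch of this loop in code, where rejection must always carry a reason that is stored as an explicit association; all names and data structures here are hypothetical illustrations of the idea.

    # A minimal sketch of the four-step loop. Rejection must carry a
    # reason, stored as an explicit (context -> interpretation) association.
    # All names and data structures here are hypothetical illustrations.
    associations = {}  # (utterance, context) -> interpretation

    def propose(utterance, context):
        # Step 1: try something (a naive default when nothing is known).
        return associations.get((utterance, context), "rename_user")

    def feedback(utterance, context, correct, reason, better=None):
        if correct:
            return  # step 4: reward; keep the current association
        # Steps 2-3: record *why* it was wrong as an explicit association.
        associations[(utterance, context)] = better
        print(f"learned: in context '{context}', because {reason}, "
              f"interpret as '{better}'")

    ctx = "user is injured"
    guess = propose("call me an ambulance", ctx)
    feedback("call me an ambulance", ctx, guess == "place_call",
             reason="'call me X' means summoning when X is a service",
             better="place_call")
    print(propose("call me an ambulance", ctx))  # now returns place_call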

The problem is that machines are currently trained on data, not by human feedback.

New thinking: Training AI pets

So we would need to build machine-training systems that enable training by direct human feedback, i.e., a new way to teach and communicate with the machine. This is not trivial, since the whole machine-learning paradigm is based on data, not meanings. From data and probabilities, we would need to move to associations and concepts that capture social reality. A new methodology is needed. Potentially, individuals could train their own AIs like pets (think of having your own “AI pet,” like a Tamagotchi), or we could use large numbers of crowd workers to explain to the machine why things are how they are (i.e., to create associations). A specific type of markup (= communication with the machine) would probably also be needed, although conversational UIs would most likely be the best solution.

By mimicking human learning, we can teach the machine common sense. This is probably the only way: since common sense does not exist beyond human cognition, it can only be learned from humans. An argument can be made that this is like going back in time, to the era when machines followed rule-based programming (as opposed to being data-driven). However, I would argue that rule-based learning is much closer to human learning than the current probability-based kind, and if we want to teach common sense, we therefore need to adopt the human way.

Conclusion

Machine learning may be up to par, but machine training certainly is not. The current machine-learning paradigm is data-driven, whereas we should look into concept-driven AI training approaches. Essentially, this would be something like reinforcement learning for concept maps.