Tagged: society

Web 3.0: The dark side of social media

Web 2.0 was about all the pretty, shiny sides of social media: user-generated content, blogs, customer participation, “everyone has a voice,” and so on. Now, Web 3.0 is all about the dark side: algorithmic bias, filter bubbles, group polarization, flame wars, cyberbullying, and the rest. We discovered that maybe not everyone should have a voice after all, or at least that a voice should be used with more attention to what one is saying.

While it is tempting to blame Facebook, the media, or “technology” for all this (just as it is easy to praise them for the good things), the truth is that individuals should accept more responsibility for their own behavior. Technology provides platforms for communication and information, but it does not generate communication and information; people do.

Consequently, I’m very skeptical about technological solutions to the Web 3.0 problems; they do not seem to be technological problems but social ones, requiring primarily social solutions and only secondarily hybrid ones. We should start respecting the opinions of others, get educated about different views, and learn how to debate based on facts and on finding the fundamental points of disagreement, instead of resorting to argumentation fallacies. Here, machines have only limited power; it’s up to us to re-learn these things and keep teaching them to new generations. It is quite pitiful that even though our technology is a thousand times better than that of Ancient Greece, our ability to debate properly is a tenth of what it was 2,000 years ago.

Avoiding enslavement by machines requires going back to the basics of humanity.

From polarity to diversity of opinions

The problem with online discussions and communities is that the extreme poles draw people in effectively, causing group polarization, in which a person’s original opinion becomes more radical due to the influence of the group. In Finnish, we have a saying: “in a group, stupidity concentrates” (joukossa tyhmyys tiivistyy).

Here, I’m exploring the idea that this effect, namely the growth of polar extremes (for example, being for or against immigration, as many European citizens currently are), exists simply because people lack options to identify with. There are only the extremes, with no neutral or moderate group, even though, as I argue here, most people are in fact moderate and understand that extremes and absolutes are misleading simplifications either way.

In other words, when there are only two “camps” of opinion, people are more easily split between them. However, my argument is that people have preferences that correspond to being in the middle, not at the extremes.

These preferences remain hidden because there are only two camps to subscribe to: One cannot be moderate because there is no moderate group.

For example, there are liberals and conservatives, but what about the people in the middle? What about those who share some ideas with liberals and others with conservatives? With only these two groups, other combinations become socially impossible, because people are, again socially, pressured to adopt all the opinions of the group they subscribe to, even if they would not agree with a particular view. This effect has been studied in relation to the concept of groupthink, but no permanent remedy has been found.

How to solve the problem of extremes?

My idea is simple: we should start more camps, more views to subscribe to, especially those representing moderate views.

The argument is that with a greater supply of camps, people will distribute themselves more evenly among them, and as a consequence we get less polarization.

This is illustrated in the picture below (sketched quickly in Paint in a moment of inspiration).

[Figure: panels (A) and (B), two distributions of attention over the opinion spectrum]

In (A), public discourse is dominated by the extremes: the distribution of attention is skewed toward the ends of a given opinion spectrum. In (B), the distribution is concentrated on the center of the opinion spectrum (i.e., moderate views), while the extremes are marginalized (as they should be, under the assumption of a moderate majority).
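Since the original Paint sketch is not reproduced here, the following minimal Python sketch (my own illustration, not the original figure; the distribution shapes and parameters are arbitrary assumptions) recreates the idea: panel (A) shows attention concentrated at the two extremes, panel (B) shows it concentrated on the moderate center.

```python
import numpy as np
import matplotlib.pyplot as plt

# Opinion spectrum from -1 (one extreme) to +1 (the other extreme).
x = np.linspace(-1, 1, 400)

def bump(x, mu, sigma):
    """Unnormalized Gaussian bump used purely for illustration."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# (A) Attention skewed toward the two extremes.
attention_a = bump(x, -0.85, 0.15) + bump(x, 0.85, 0.15)

# (B) Attention concentrated on moderate, centrist views.
attention_b = bump(x, 0.0, 0.3)

fig, axes = plt.subplots(1, 2, figsize=(10, 3), sharey=True)
axes[0].plot(x, attention_a)
axes[0].set_title("(A) Two extreme camps dominate")
axes[1].plot(x, attention_b)
axes[1].set_title("(B) Moderate majority visible")
for ax in axes:
    ax.set_xlabel("opinion spectrum")
axes[0].set_ylabel("share of public attention")
plt.tight_layout()
plt.show()
```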

An example: having several political parties results in more diverse views being presented. In the US, you are either a Democrat or a Republican (although, it must be said, there are the marginal Green Party and the progressives), but in Finland you can also be many other things: a supporter of the Center Party, the National Coalition Party, or the Green Party, for example. The same applies to most countries in Europe. Although I don’t have hard data on this, public discourse in the US appears exceptionally polarized compared to many other countries [1].

Giving the moderate “silent majority” more choices to identify with, and thereby revealing the “true” opinions of citizens, would ideally marginalize both extremes, avoiding the tyranny of the minority [2] that currently dominates public discourse.

Finally, all this could be formalized in game theory by assuming heterogeneity of preferences over the opinion spectrum and parameters such as gravity (a “pull factor” exerted by the extremes), justifiable e.g. by the media attention given to extreme views over moderate ones. But the implication remains the same: under these assumptions, a greater diversity of camps reduces polarization.
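As a rough sanity check of that claim, rather than a formal game-theoretic treatment, here is a minimal simulation sketch in Python. The preference distribution, the camp positions, and the gravity parameter are my own illustrative assumptions: agents with mostly moderate preferences join the camp that is cheapest for them to identify with, the extreme camps get a small discount representing their extra pull, and polarization is measured as the average distance of the adopted positions from the center.

```python
import random

def simulate(camp_positions, n_agents=10_000, gravity=0.3, seed=42):
    """Average distance from the center of the opinions agents end up adopting.

    Agents have moderate-leaning true preferences (clipped Gaussian around 0).
    Each agent joins the camp with the smallest cost, where the distance to an
    extreme camp (|position| == 1) is discounted by `gravity`, mimicking the
    extra attention extreme views receive.
    """
    rng = random.Random(seed)
    polarization = 0.0
    for _ in range(n_agents):
        preference = max(-1.0, min(1.0, rng.gauss(0.0, 0.35)))

        def cost(camp):
            discount = gravity if abs(camp) == 1.0 else 0.0
            return abs(preference - camp) - discount

        chosen = min(camp_positions, key=cost)
        polarization += abs(chosen)
    return polarization / n_agents

# Two extreme camps only vs. extremes plus moderate alternatives.
print("two camps:  ", simulate([-1.0, 1.0]))
print("five camps: ", simulate([-1.0, -0.5, 0.0, 0.5, 1.0]))
```

Under these assumptions, the two-camp setting yields maximal polarization (everyone ends up adopting an extreme position), while adding moderate camps pulls the average adopted position back toward the center.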

Footnotes

[1] Of course there are other reasons, too, such as the media taking political sides.

[2] This means that extreme views are not representative of the whole population (which is more moderate than either extreme) but get disproportionate attention in the media and in public discourse. This happens because the majority’s views remain hidden; they would need to be revealed.

The black sheep problem in machine learning

Introduction. Hal Daumé wrote an interesting blog post about language bias and the black sheep problem. In the post, he defines the problem as follows:

The “black sheep problem” is that if you were to try to guess what color most sheep were by looking at language data, it would be very difficult for you to conclude that they weren’t almost all black. In English, “black sheep” outnumbers “white sheep” about 25:1 (many “black sheep”s are movie references); in French it’s 3:1; in German it’s 12:1. Some languages get it right; in Korean it’s 1:1.5 in favor of white sheep. This happens with other pairs, too; for example “white cloud” versus “red cloud.” In English, red cloud wins 1.1:1 (there’s a famous Sioux named “Red Cloud”); in Korean, white cloud wins 1.2:1, but four-leaf clover wins 2:1 over three-leaf clover.

Thereafter, Hal accurately points out:

”co-occurance frequencies of words definitely do not reflect co-occurance frequencies of things in the real world.”

But the mistake Hal makes is to assume that language describes objective reality (“the real world”). Instead, I would argue that it describes social reality (“the social world”).

Black sheep in social reality. The higher frequency of “black sheep” tells us that in social reality there is a concept called “black sheep” which is more common than the concept of a white (or any other color) sheep. People use the concept not to describe sheep, but as an abstract concept that in fact describes other people (“she is the black sheep of the family”). Then we can ask: Why is that? In what contexts is the concept used? And we can try to teach the machine its proper use through associations of the concept with other contexts (much like we teach kids when saying something is appropriate and when it is not). As a result, the machine may build a semantic web of abstract concepts which, even if it does not lead to the machine understanding them, at least helps guide its usage of them.
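To make the co-occurrence point concrete, here is a minimal sketch of how such counts are typically obtained; the toy corpus string and the color list are placeholders, not Hal’s data or method.

```python
from collections import Counter
import re

def color_sheep_counts(text, colors=("black", "white")):
    """Count how often each color word immediately precedes 'sheep'."""
    tokens = re.findall(r"[a-z]+", text.lower())
    bigrams = Counter(zip(tokens, tokens[1:]))
    return {color: bigrams[(color, "sheep")] for color in colors}

# Placeholder corpus; in practice this would be a large text collection.
corpus = "the black sheep of the family ... a white sheep grazed ... another black sheep"
print(color_sheep_counts(corpus))
# On web-scale English text, "black sheep" outnumbers "white sheep" by roughly
# 25:1 (per Hal's post), even though most real sheep are white.
```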

We, the human. That’s assuming we want the machine to get closer to the meaning of the word in social reality. But we don’t necessarily want to focus on that, at least as a short-term goal. In the short term, it might be more purposeful to understand that language is a reflection of social reality, which means that we, the humans, can understand human societies better by analyzing it. Rather than teaching machines to impute data so as to avoid what we label an undesired state of social reality, we should use the outputs provided by the machine to understand where and why those biases take place, and then focus on fixing them. Most likely, technology plays only a minor role in that, although it could be used to encourage a balanced view through a recommendation system, for example.
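As a minimal sketch of what “encouraging a balanced view through a recommendation system” could look like, here is a simple greedy re-ranker; the items, viewpoint labels, and scores are hypothetical, and this is only one of many possible diversity-aware ranking schemes.

```python
def balanced_rerank(items, k=4):
    """Greedily pick k items so that no single viewpoint dominates the selection.

    `items` is a list of (title, viewpoint, relevance) tuples. At each step the
    item whose viewpoint is least represented so far is chosen; ties are broken
    by relevance.
    """
    chosen = []
    counts = {}
    remaining = list(items)
    while remaining and len(chosen) < k:
        pick = min(remaining, key=lambda it: (counts.get(it[1], 0), -it[2]))
        chosen.append(pick)
        counts[pick[1]] = counts.get(pick[1], 0) + 1
        remaining.remove(pick)
    return chosen

# Hypothetical recommendation candidates: (title, viewpoint, relevance score).
candidates = [
    ("Op-ed strongly for X",     "pro",      0.95),
    ("Another piece for X",      "pro",      0.90),
    ("Op-ed strongly against X", "contra",   0.85),
    ("A measured take on X",     "moderate", 0.60),
]
for title, viewpoint, score in balanced_rerank(candidates):
    print(f"{viewpoint:<9} {score:.2f}  {title}")
```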

Conclusion. The “correction of biases” is equivalent to burying your head in the sand: even if they magically disappeared from our models, they would still remain in social reality, and, through the connection between social reality and objective reality, echo in the everyday lives of people.

What jobs are safe from AI?

There is enormous concern about machine learning and AI replacing human workers. However, according to several economists, and according to past experience going back all the way to the industrial revolution of the 18th century (which caused major distress at the time), the replacement of human workers is not permanent: new jobs emerge to replace the ones that disappear (as postulated by the Schumpeterian hypothesis). In this post, I will briefly share some ideas on which jobs are relatively safe from AI, and how an individual member of the workforce can improve his or her chances of staying competitive in the job market of the future.

“Insofar as they are economic problems at all, the world’s problems in this generation and the next are problems of scarcity, not of intolerable abundance. The bogeyman of automation consumes worrying capacity that should be saved for real problems . . .” -Herbert Simon, 1966

What jobs are safe from AI?

The ones involving:

  1. creativity – a machine can “draw” and “compose,” but it cannot develop a business plan.
  2. interpretation – even in law, which is codified in most countries, lawyers rely on judgment and interpretation. This cannot be replaced by AI as it currently stands.
  3. transaction costs – robots could perform a surgery, and even evaluate beforehand whether surgery is needed, but in between you need people to explain things, prepare the patients, and so on. Most service chains involve a lot of coordination and communication, i.e. transaction costs, that have to be handled by people.

How to avoid losing your job to AI?

Make sure your skills are complementary to automation, not a substitute for it. For example, if you have great copywriting skills, there has never been a better time to be a marketer, as digital platforms enable you to reach all your audiences with a few clicks. The machine cannot write compelling ads, so your skills are complementary. Increased automation does not reduce the need for creativity; it amplifies it.

If machines did learn to be creative in a meaningful way (which, realistically speaking, is far, far away), then you would move on to some other complementary task.

The point is: there is always some part of the process you can complement.

Fear not. Machines will not take all human jobs, because not all human jobs exist yet. Machines and software will take care of some parts of service chains, even to a great extent, but that will in fact enhance the functioning of the whole chain, and also that of human labor (consider the amplification example of online copywriting). New jobs that we cannot yet envision will be created, as needs and human imagination keep evolving.

The answer lies in creative destruction: people won’t stop coming up with things to offer because of machines, and other people won’t stop wanting those things because of machines. Jobs will remain in the era of AI as well. The key is not to complain about someone taking your job, but to think of other things to offer and to develop your personal competences accordingly. Even if you won’t, the next person will. There’s no stopping creativity.

Read more:

  • Scherer, F. M. (1986). Innovation and Growth: Schumpeterian Perspectives. The MIT Press.
  • Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3–30.

Research agenda for ethics and governance of artificial intelligence

The ethics of machine learning algorithms has recently been raised as a major research concern. Earlier this year (2017), a $27 million fund was established to support research on the societal challenges of AI. The group behind the fund includes the Knight Foundation, the Omidyar Network, and the startup founder and investor Reid Hoffman, among others.

As stated on the fund’s website, the fund will support a cross-section of AI ethics and governance projects and activities, both in the United States and internationally. It advocates cross-disciplinary research between, for example, computer scientists, social scientists, ethicists, philosophers, economists, lawyers, and policymakers.

The fund lays out a list of areas it is interested in funding. The list can be seen as a sort of research agenda. The items are:

  • Communicating complexity: How do we best communicate, through words and processes, the nuances of a complex field like AI?
  • Ethical design: How do we build and design technologies that consider ethical frameworks and moral values as central features of technological innovation?
  • Advancing accountable and fair AI: What kinds of controls do we need to minimize AI’s potential harm to society and maximize its benefits?
  • Innovation in the public interest: How do we maintain the ability of engineers and entrepreneurs to innovate, create and profit, while ensuring that society is informed and that the work integrates public interest perspectives?
  • Expanding the table: How do we grow the field to ensure that a range of constituencies are involved with building the tools and analyzing social impact?

As can be seen, the agenda emphasizes the big question: how can we maintain the benefits of the new technologies while making sure that their potential harm is minimized? To answer this question, a host of studies and perspectives is definitely needed. A list of other initiatives working on the societal issues of AI and machine learning can be found here.