Web 3.0: The dark side of social media

Web 2.0 was about all the pretty, shiny sides of social media: user-generated content, blogs, customer participation, "everyone has a voice," and so on. Now, Web 3.0 is all about the dark side: algorithmic bias, filter bubbles, group polarization, flame wars, cyberbullying, etc. We discovered that maybe not everyone should have a voice after all, or at least that a voice should be used with more attention to what is being said.

While it is tempting to blame Facebook, the media, or "technology" for all this (just as it is easy to praise them for the good things), the truth is that individuals should accept more responsibility for their own behavior. Technology provides platforms for communication and information, but it does not generate communication and information; people do.

Consequently, I’m very skeptical about technological solutions to the Web 3.0 problems; they are not really technological problems but social ones, requiring primarily social solutions and only secondarily hybrid ones. We should start respecting the opinions of others, get educated about different views, and learn how to debate based on facts and fundamental differences rather than resorting to logical fallacies. Here, machines have only limited power – it’s up to us to re-learn these skills and keep teaching them to new generations. It is quite pitiful that even though our technology is a thousand times better than in ancient Greece, our ability to debate properly is a tenth of what it was 2,000 years ago.

Avoiding enslavement by machines requires going back to the basics of humanity.

Machine decision making and workflow engineering

Did you ever want to climb Mount Everest?

If you did, you would have to split such a goal into many tasks: you would first need to find out what resources are needed, who could help you, how to prepare mentally and physically, etc. You would come up with a list of tasks that, in sequence, form your plan for achieving the goal.

The same logic applies to all goals we humans have, both in companies and in private life, and it also applies when evaluating which tasks, given a goal, can be outsourced to machine decision making.

The best way to conduct such an analysis is to view organizational goals as a sequence of inter-related job tasks, and then evaluate which particular sub-tasks humans are best at handling, and vice versa.

  1. Define the end goal (e.g., launch a marketing campaign)
  2. Define the steps needed to achieve that goal (strategy) (e.g., decide targeting, write ads, define budget, optimize spend)
  3. Divide each step into sub-tasks (e.g., decide targeting: analyze past campaigns, analyze needs from social media)
  4. Evaluate (e.g., on a scale of 1-5) how well a machine and a human perform each sub-task (e.g., write ads: human = 5, machine = 1)
  5. Look at the entire chain and identify points of synergy, where the machine can be used to enhance human work or vice versa (e.g., analyze social media by supervised machine learning where crowd workers tag tweets); a rough sketch of this scoring follows the list.
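
As a rough illustration of steps 4 and 5, the evaluation can be captured in a simple scoring table. The sub-task names, scores, and thresholds below are hypothetical examples for a marketing-campaign goal, not prescriptions:

```python
# A minimal sketch of steps 4-5: score each sub-task for human and machine
# suitability (1-5), then flag candidates for automation or human-machine synergy.
# All sub-task names and scores are hypothetical examples.

subtasks = [
    # (sub-task, human score, machine score)
    ("analyze past campaigns", 3, 5),
    ("analyze needs from social media", 2, 4),
    ("write ads", 5, 1),
    ("define budget", 4, 3),
    ("optimize spend", 2, 5),
]

for task, human, machine in subtasks:
    if machine >= 4 and human <= 2:
        verdict = "automate"
    elif human >= 4 and machine <= 2:
        verdict = "keep human"
    else:
        verdict = "synergy candidate (machine assists human)"
    print(f"{task:35s} human={human} machine={machine} -> {verdict}")
```

The point of such a table is not the numbers themselves but looking at the chain as a whole: the "synergy candidates" are where workflow design matters most.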

We find, by applying such logic, that there are plenty of tasks in organizational workflows that currently cannot be outsourced to machines, for a variety of reasons. Sometimes the reasons relate to manual processes, i.e., the overall context does not support carrying out the tasks optimally. An example: currently, I download receipts from a digital marketing service account by hand; I have to log in, retrieve the receipts as PDF files, and then send them as email attachments to bookkeeping. Ideally, the bookkeeping system would retrieve the receipts automatically via an application programming interface (API), eliminating this unnecessary piece of human labor.
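
To make the automation concrete, here is a minimal sketch of what such retrieval could look like. The service URL, endpoint, authentication scheme, and field names are all assumptions for illustration; a real ad platform's API will differ:

```python
# Hypothetical sketch: fetch receipt PDFs from an ad service's API so they can
# be passed on to bookkeeping without manual logins and downloads.
import os
import requests

API_TOKEN = "..."  # credentials for the (hypothetical) ad service
BASE_URL = "https://api.example-adservice.com/v1"  # hypothetical endpoint

def fetch_receipts(month: str) -> list:
    """Retrieve receipt metadata for a given month (e.g. '2019-05')."""
    resp = requests.get(
        f"{BASE_URL}/receipts",
        params={"month": month},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["receipts"]

def download_pdf(receipt: dict, folder: str = "receipts") -> str:
    """Download a single receipt PDF to a local folder."""
    os.makedirs(folder, exist_ok=True)
    path = os.path.join(folder, f"{receipt['id']}.pdf")
    pdf = requests.get(receipt["pdf_url"], timeout=30)
    pdf.raise_for_status()
    with open(path, "wb") as f:
        f.write(pdf.content)
    return path

if __name__ == "__main__":
    for receipt in fetch_receipts("2019-05"):
        print("saved", download_pdf(receipt))
```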

At the same time, we should (a) work to remove unnecessary barriers to work automation where it is feasible, and (b) think of ways to get optimal synergy from human and machine work inputs. This is not about optimizing individual work tasks, but about optimizing entire workflows toward reaching a specific goal. At the moment, little research and attention is paid to this kind of comprehensive planning, which I call "workflow engineering".

Mediated relationship between users and algorithms

The relationship between users and algorithms is always a mediated one, meaning that there is always a proxy between the algorithm and the user. The proxy can be understood differently depending on the level we are interested in. For example, it can be a social media platform (e.g., Facebook, Twitter) where people retrieve their news content (Nielsen & Schrøder, 2014). Or, at a closer level of interaction, it can be understood as the user interface (UI). The following figure illustrates this thinking.

Figure 1. Mediated relationship between users and algorithms

In both cases, however, the interaction – and therefore the experience of the user – is mediated by a proxy entity. This is a critical notion when examining the interaction between algorithms and users, because such interaction cannot exist in pure form. Essentially, the research of algorithms deals with how algorithms transform into user experience. Through this mediating nature we can build phenomenological bridges to technology adoption models such as TAM2 and UTAUT (Venkatesh & Davis, 2000; Venkatesh et al., 2003, respectively), or more generally to the experience of technology use as examined, e.g., in the human-computer interaction (HCI) literature (see Card et al., 1983; Dix et al., 2003).
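
To make the layering concrete, here is a toy sketch of the idea: the user never interacts with the ranking algorithm directly; a platform/UI proxy sits in between and shapes what the user actually experiences. The class and method names are illustrative only, not a reference architecture:

```python
# Toy illustration of the mediated relationship: algorithm -> proxy (UI) -> user.

class RankingAlgorithm:
    """The underlying algorithm: scores items, knows nothing about presentation."""
    def rank(self, items):
        return sorted(items, key=len)  # stand-in for a real relevance model

class PlatformUI:
    """The proxy (platform / user interface) that mediates the algorithm's output."""
    def __init__(self, algorithm):
        self.algorithm = algorithm

    def render_feed(self, items, limit=3):
        ranked = self.algorithm.rank(items)
        # The UI truncates, labels, and orders the output -- this, not the raw
        # algorithmic ranking, is what the user experiences.
        return [f"{i + 1}. {item}" for i, item in enumerate(ranked[:limit])]

feed = PlatformUI(RankingAlgorithm()).render_feed(
    ["long analysis piece", "short post", "medium-length update", "headline"]
)
print("\n".join(feed))
```

The design point is that any study of the user's experience necessarily observes the proxy's output, not the algorithm itself.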

References

Card, S. K., Moran, T. P., & Newell, A. (1983). The Psychology of Human-Computer Interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.
Dix, A., Finlay, J. E., Abowd, G. D., & Beale, R. (2003). Human-Computer Interaction (3rd ed.). Harlow, England: Pearson.
Venkatesh, V., & Davis, F. D. (2000). A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies. Management Science, 46(2), 186–204.
Venkatesh, V., Morris, M., Davis, G., & Davis, F. (2003). User Acceptance of Information Technology: Toward a Unified View. Management Information Systems Quarterly, 27(3). Retrieved from http://aisel.aisnet.org/misq/vol27/iss3/5