
Web 3.0: The dark side of social media

Web 2.0 was about all the pretty, shiny sides of social media: user-generated content, blogs, customer participation, “everyone has a voice,” and so on. Now, Web 3.0 is all about the dark side: algorithmic bias, filter bubbles, group polarization, flame wars, cyberbullying, etc. We discovered that maybe not everyone should have a voice after all, or at least that a voice should be used with more attention to what one is saying.

While it is tempting to blame Facebook, the media, or “technology” for all this (just as it is easy to praise them for the good things), the truth is that individuals should accept more responsibility for their own behavior. Technology provides platforms for communication and information, but it does not generate communication and information; people do.

Consequently, I’m very skeptical about technological solutions to the Web 3.0 problems; they are not technological problems but social ones, requiring primarily social solutions and only secondarily hybrid ones. We should start respecting the opinions of others, get educated about different views, and learn how to debate based on facts and on finding fundamental differences, rather than resorting to logical fallacies. Here, machines have only limited power; it’s up to us to re-learn these skills and keep teaching them to new generations. It’s quite pitiful that even though our technology is 1000x better than that of ancient Greece, our ability to debate properly is one tenth of what it was 2,000 years ago.

Avoiding enslavement by machines requires going back to the basics of humanity.

How to teach machines common sense? Solutions to the ambiguity problem of AI

Introduction

The ambiguity problem illustrated:

User: “Siri, call me an ambulance!”

Siri: “Okay, I will call you ‘an ambulance’.”

You’ll never reach the hospital and end up bleeding to death.

Solutions

Two potential solutions come to mind:

A. the machine builds general knowledge (“common sense”)

B. the machine identifies ambiguity and asks humans for clarification (reinforcement learning)

The whole “common sense” problem can be solved by introducing human feedback into the system. We really need to tell the machine what is what, just as we would a child. This is iterative learning, in which trial and error take place. However, it is better than trying to map an inescapably finite dataset onto a near-infinite space of meanings.
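As a minimal sketch of solution B, assuming a hypothetical parse() stub and a simple store of learned resolutions (neither is a real assistant API):

def parse(utterance):
    # A real assistant would return every candidate reading; the
    # ambulance example is hard-coded here for illustration.
    if utterance == "call me an ambulance":
        return ["dial an ambulance for the user",
                "address the user as 'an ambulance'"]
    return [utterance]

resolutions = {}  # learned disambiguations: utterance -> chosen meaning

def interpret(utterance, ask_human):
    if utterance in resolutions:        # resolved before, no need to ask
        return resolutions[utterance]
    candidates = parse(utterance)
    if len(candidates) == 1:            # unambiguous, just act
        return candidates[0]
    choice = ask_human(candidates)      # solution B: ask for clarification
    resolutions[utterance] = choice     # iterative learning: remember it
    return choice

Here, ask_human could be anything from a Siri-style follow-up question to a crowd-worker task.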

But, in fact, A and B converge by doing so, which is fine and ultimately necessary.

Contextual awareness

To determine the proper resolution of an ambiguous situation, the machine needs contextual awareness. This can be achieved by storing contextual information from each ambiguous situation and by being told why a particular piece of information resolves the ambiguity. It is not enough to say “you’re wrong”; there needs to be an explicit association to a reason (a concept, a variable). Equally, it is not enough to say “you’re right”; again, the same association is needed.
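As a sketch, a single piece of such feedback might be stored like this (the record and its field names are illustrative assumptions, not an existing format):

from dataclasses import dataclass

@dataclass
class Feedback:
    situation: str   # the ambiguous input, e.g. "call me an ambulance"
    attempt: str     # what the machine tried
    correct: bool    # the verdict: "you're right" / "you're wrong"
    reason: str      # the explicit concept the verdict is linked to

# "You're wrong" alone teaches nothing; the reason is what the machine
# can generalize from later.
example = Feedback(
    situation="call me an ambulance",
    attempt="address the user as 'an ambulance'",
    correct=False,
    reason="in an emergency context, 'call me X' means 'dial X'",
)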

The process:

1) try something

2) get told it’s not right, and why (linking to contextual information)

3) try something else, corresponding to why

4) get rewarded if it’s right.
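Put together, the four steps look roughly like the loop below; get_feedback is a hypothetical stand-in for a human teacher returning a Feedback record like the one sketched above:

def train(situation, candidate_actions, get_feedback):
    # Runs the four-step process until the human rewards an attempt.
    reasons = []                              # the accumulated "whys"
    for action in candidate_actions:          # 1) try something
        fb = get_feedback(situation, action)
        if fb.correct:                        # 4) get rewarded if it's right
            return action, reasons
        reasons.append(fb.reason)             # 2) told it's wrong, and why
        # 3) a real system would use the collected reasons to choose the
        #    next attempt, rather than naively trying candidates in order
    return None, reasons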

The problem is that machines are currently trained on data, not through human feedback.

New thinking: Training AI pets

So we would need to build machine-training systems that enable training through direct human feedback, i.e., a new way to teach and communicate with the machine. This is not trivial, since the whole machine-learning paradigm is based on data, not meanings. From data and probabilities, we would need to move to associations and concepts that capture social reality. A new methodology is needed. Potentially, individuals could train their own AIs like pets (think of having your own “AI pet,” like a Tamagotchi), or we could use large numbers of crowd workers to explain to the machine why things are the way they are (i.e., to create associations). A specific type of markup (= communication with the machine) would probably also be needed, although conversational UIs would most likely be the best solution.
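A single round of such “AI pet” training over a conversational UI might look like the sketch below; the ai object and its parse() and associate() methods are assumptions, not an existing toolkit:

def training_session(ai):
    # One conversational training round: the AI proposes readings,
    # the human picks one and, crucially, explains why.
    utterance = input("Teach me a phrase: ")
    candidates = ai.parse(utterance)          # assumed method
    for i, candidate in enumerate(candidates):
        print(f"{i}: {candidate}")
    choice = int(input("Which reading did you mean? "))
    reason = input("Why? ")
    ai.associate(utterance, candidates[choice], reason)  # assumed method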

By mimicking human learning, we can teach the machine common sense. This is probably the only way: since common sense does not exist beyond human cognition, it can only be learned from humans. An argument can be made that this is like going back in time, to the era when machines followed rule-based programming (as opposed to being data-driven). However, I would argue that rule-based learning is much closer to human learning than the current probability-based approach, and if we want to teach common sense, we therefore need to adopt the human way.

Conclusion

Machine learning may be on par, but machine training certainly is not. The current machine-learning paradigm is data-driven, whereas we could look into concept-driven AI training approaches. Essentially, this is something like reinforcement learning for concept maps.
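As a closing sketch, “reinforcement learning for concept maps” could be as simple as reward-weighted associations between concepts; everything here is an assumption, not an established method:

concept_map = {}  # (concept, concept) -> association strength in [0, 1]

def reinforce(a, b, reward, lr=0.1):
    # Nudge the strength of the a -> b association toward the reward.
    old = concept_map.get((a, b), 0.0)
    concept_map[(a, b)] = old + lr * (reward - old)

reinforce("'call me X' in an emergency", "dial X", reward=1.0)           # "you're right"
reinforce("'call me X' in an emergency", "name the user X", reward=0.0)  # "you're wrong"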