Social media sampling faces many issues. Two of them are: 1) the silent majority problem and 2) the grouping problem. The former refers to the imbalance between participants and spectators: can we trust that the vocal few represent the views of all? The latter means that people of similar opinions tend to flock […]
Keyword: social media
Web 3.0: The dark side of social media
Web 2.0 was about all the pretty, shiny aspects of social media: user-generated content, blogs, customer participation, ”everyone has a voice,” etc. Now, Web 3.0 is all about the dark side: algorithmic bias, filter bubbles, group polarization, flame wars, cyberbullying, etc. We discovered that maybe everyone should not have a voice, after all. Or at […]
From polarity to diversity of opinions
The problem with online discussions and communities is that the extreme poles draw people in effectively, causing group polarization, in which a person’s original opinion becomes more radical due to the influence of the group. In Finnish, we have a saying, ”In a group, stupidity concentrates” (joukossa tyhmyys tiivistyy). Here, I’m exploring the idea that […]
Questions from ICWSM17
In the ”Studying User Perceptions and Experiences with Algorithms” workshop, many interesting questions popped up. Here are some of them: Will increased awareness of algorithm functionality change user behavior? How can we build better algorithms to diversify the information users are exposed to? Do most people care about knowing how Google works? What’s […]
Reading list from ICWSM17
In one of the workshops on the first conference day, ”Studying User Perceptions and Experiences with Algorithms”, the participants recommended papers to each other. Here are, if not all, then most of them, along with their abstracts. Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, […]
This is a brief post describing a key problem in using fact-checking sites as inputs for filtering undesirable content (e.g., fake news) from social media newsfeeds (e.g., Facebook). The premise sounds good, right? We use human raters to verify the truthfulness of an article, and use that information as part of the decision-making algorithm. However, there […]