Social media sampling faces many issues. Two of them are 1) the silent majority problem and 2) the grouping problem.
The former refers to the imbalance between participants and spectators: can we trust that the vocal few represent the views of all?
The latter means that people with similar opinions tend to flock together, so that looking at a single online community, or even a single platform, can give a biased understanding of the whole population.
Solving these problems is hard, and requires an understanding of the online communities themselves, their polarization, the sociology and psychology driving participation, and the principles of the algorithms that determine visibility and participation on the platforms.
Prior knowledge of the online communities can serve as a basis for stratified sampling, which can be a partial remedy.
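As a minimal sketch of the idea, suppose we knew (from prior research) what share of the overall user base each community represents. We could then draw a sample whose community composition matches those shares, instead of letting one vocal community dominate. The community names, shares, and user lists below are purely illustrative assumptions, not data from any real platform.

```python
import random

# Hypothetical prior knowledge: the share of the user base in each
# online community (illustrative numbers, not real data).
community_shares = {"forum_a": 0.5, "forum_b": 0.3, "forum_c": 0.2}

def stratified_sample(users_by_community, total_n, shares, seed=0):
    """Draw a sample whose community composition matches the prior shares.

    users_by_community: dict mapping community -> list of user ids
    total_n: desired total sample size
    shares: dict mapping community -> assumed population proportion
    """
    rng = random.Random(seed)
    sample = []
    for community, share in shares.items():
        # Allocate this stratum's quota proportionally to its share.
        k = round(total_n * share)
        pool = users_by_community[community]
        # Sample without replacement within the stratum.
        sample.extend(rng.sample(pool, min(k, len(pool))))
    return sample

# Toy data: user ids per community.
users = {
    "forum_a": [f"a{i}" for i in range(100)],
    "forum_b": [f"b{i}" for i in range(100)],
    "forum_c": [f"c{i}" for i in range(100)],
}
picked = stratified_sample(users, total_n=20, shares=community_shares)
```

This only addresses the grouping problem; within each stratum, the silent majority problem remains, since lurkers still leave little content to sample.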
The ethics of machine learning algorithms has recently emerged as a major research concern. Earlier this year (2017), a $27M fund was launched to support research on the societal challenges of AI. The group behind the fund includes, among others, the Knight Foundation, the Omidyar Network, and the startup founder and investor Reid Hoffman.
As stated on the fund’s website, it will support a cross-section of AI ethics and governance projects and activities, both in the United States and internationally. The backers advocate cross-disciplinary research involving, for example, computer scientists, social scientists, ethicists, philosophers, economists, lawyers, and policymakers.
The fund lays out a list of areas it is interested in funding, which can be read as a sort of research agenda:
- Communicating complexity: How do we best communicate, through words and processes, the nuances of a complex field like AI?
- Ethical design: How do we build and design technologies that consider ethical frameworks and moral values as central features of technological innovation?
- Advancing accountable and fair AI: What kinds of controls do we need to minimize AI’s potential harm to society and maximize its benefits?
- Innovation in the public interest: How do we maintain the ability of engineers and entrepreneurs to innovate, create and profit, while ensuring that society is informed and that the work integrates public interest perspectives?
- Expanding the table: How do we grow the field to ensure that a range of constituencies are involved with building the tools and analyzing social impact?
As can be seen, the agenda emphasizes the big question: how can we retain the benefits of these new technologies while minimizing their potential harm? Answering it will require a host of studies and perspectives. See here for a list of other initiatives working on the societal issues of AI and machine learning.
I read about this initiative on Harvard’s website and thought of sharing it:
About the Ethics and Governance of Artificial Intelligence Initiative
Artificial intelligence and complex algorithms, fueled by the collection of big data and deep learning systems, are quickly changing how we live and work, from the news stories we see, to the loans for which we qualify, to the jobs we perform. Because of this pervasive impact, it is imperative that AI research and development be shaped by a broad range of voices—not only by engineers and corporations—but also social scientists, ethicists, philosophers, faith leaders, economists, lawyers, and policymakers.
To address this challenge, several foundations and funders recently announced the Ethics and Governance of Artificial Intelligence Fund, which will support interdisciplinary research to ensure that AI develops in a way that is ethical, accountable, and advances the public interest. The Berkman Klein Center and the MIT Media Lab will act as anchor academic institutions for this fund and develop a range of activities, research, tools, and prototypes aimed at bridging the gap between disciplines and connecting human values with technical capabilities. They will work together to strengthen existing and form new interdisciplinary human networks and institutional collaborations, and serve as a collaborative platform where stakeholders working across disciplines, sectors, and geographies can meet, engage, learn, and share.
Read more: https://cyber.harvard.edu/research/ai