Earlier, I had a brief exchange of tweets with @jonathanstray about algorithms. It started from his tweet:
Perhaps the biggest technical problem in making fair algorithms is this: if they are designed to learn what humans do, they will.
To which I replied:
Yes, and that’s why learning is not the way to go. ”Fair” should not be goal, is inherently subjective. ”Objective” is better
Then he wrote:
lots of things that are really important to society are in no way objective, though. Really the only exception is prediction.
And I wrote:
True, but I think algorithms should be as neutral (objective) as possible. They should be decision aids for humans.
And he answered:
what does ”neutral” mean though?
After which I decided to write a post about it, since the idea is challenging to explain in 140 characters.
So, what is a neutral algorithm? I would define it like this:
A neutral algorithm is a decision-making program whose operating principles are minimally influenced by the values or opinions of its creators.
An example of a neutral algorithm is a standard ad-optimization algorithm: it decides whether to show Ad1, Ad2, or Ad3. Instead of asking designers or corporate management which ad to display, it makes the decision based on objective measures, such as click-through rate (CTR).
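To make the idea concrete, here is a minimal sketch of such a selection rule. The function name, the click/impression counts, and the epsilon-greedy exploration step are all my own illustrative assumptions, not a description of any real ad platform:

```python
import random

def pick_ad(stats, epsilon=0.1):
    """Choose an ad by observed CTR (clicks / impressions).

    With probability epsilon, show a random ad instead, so that
    new or rarely shown ads still get a chance to collect data.
    """
    if random.random() < epsilon:
        return random.choice(list(stats))
    # Otherwise pick the ad with the highest observed CTR;
    # max(..., 1) avoids division by zero for unseen ads.
    return max(stats, key=lambda ad: stats[ad]["clicks"] / max(stats[ad]["impressions"], 1))

# Hypothetical observed data for three ads:
stats = {
    "Ad1": {"clicks": 12, "impressions": 400},  # CTR 3.0%
    "Ad2": {"clicks": 30, "impressions": 500},  # CTR 6.0%
    "Ad3": {"clicks": 5,  "impressions": 300},  # CTR ~1.7%
}
```

The point is that nothing in this rule encodes anyone's opinion about which ad *deserves* to win; the outcome is determined entirely by how users have actually behaved.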
All ads (read: content, users) receive the same fair treatment – they are diffused based on their merits (measured objectively by an unambiguous metric), not on favoritism of any sort.
The roots of algorithm neutrality stem from freedom of speech and net neutrality. No outsiders can impose their values and opinions (e.g., by censoring politically sensitive content) or interfere with the operating principles of the algorithm. Free from external manipulation, the decision-making of the algorithm is as value-free (neutral) as possible. In the case of social media, for example, it displays information that accurately reflects the sentiment and opinions of the people at a particular point in time.
Now, I grant there are issues with ”freedom”, some of which are considerable. For example: 1) for media, CTR incentives lead to clickbaiting (alternative goal metrics should be considered); 2) for politicians and the electorate, facts can be overshadowed by misinformation and by short videos taken out of context to give a false impression of individuals; and 3) for regular users, harmful misinformation can spread as a consequence of neutrality (e.g., anti-vaccination propaganda). But these are ”true” social issues that the algorithm accurately reflects. If we want more ”just” outcomes, we actually need to make neutral algorithms biased. Among other questions, this leads to the problem space of positive discrimination. It is also valid to ask: who determines what is just?
A natural limitation to machine decisions, and an answer to the previous question, is legislation – illegal content should be kept out by the algorithm. In this sense, the neutral algorithm needs to adhere to a larger institutional and regulatory context; but provided the laws themselves are ”fair”, this poses no fundamental threat to the objective of neutral algorithms: free decision-making and, consequently, freedom of speech. I wrote a separate post about the neutrality dilemma.
In spite of the aforementioned issues, a neutral algorithm gives each media outlet, candidate, or user a level playing field. In time, they must learn to use it to argue in a way that merits the diffusion of their message.
The rest is up to humans – educated people respond to smart content, whereas ignorant people respond to and spread nonsense. A neutral algorithm cannot influence this; it can only honestly display the state of ignorance or sophistication in a society. A good example is Microsoft’s infamous bot Tay, a machine-learning experiment gone bad. The alarming thing about the bot is not that ”machines are evil” but that *humans are evil*; the machine merely reflects that. Hence my original point of curbing human evilness by keeping algorithms as free of human values as possible.
Perhaps in the future an algorithm could, figuratively speaking, save us from ourselves, but at the moment that act requires conscious effort from us humans. We need to make critical decisions based on our own judgment, instead of outsourcing ethically difficult choices to algorithms. Just as there is a separation of church and state, there should be a separation of humans and algorithms to the greatest possible extent.
 Initially, I considered a definition saying ”not influenced”, but it is not safe to assume that the subjectivity of its creators would not in some way be reflected in the algorithm. ”Minimally”, in turn, leads to the normative argument that this subjectivity should be mitigated.
 Wikipedia (2016): ”Net neutrality (…) is the principle that Internet service providers and governments should treat all data on the Internet the same, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication.”
 A part of the story is that Tay was heavily trolled and consequently adopted a derogatory way of speaking.