Just spent 1.5hrs talking to a journalist about algorithms.
Sharing my notes, containing many “unpopular opinions” that I nonetheless believe should be part of the public discussion about these topics. TL;DR: It’s easy to blame algorithms, hard to take individual responsibility.
Here’s what’s wrong with the debate on algorithms:
(1) algorithms are used as scapegoats for human action by media and researchers.
(2) “algorithms” is the wrong terminology in the first place; what matters is the whole system or app experience, as well as platforms’ network effects bringing people from different backgrounds together => a natural clash of cultures.
(3) addictive design / addictive UX is the biggest issue — greedy algorithms feed you content that makes it harder to distance yourself from the screen (looks, notifications, multimedia experience, sounds… everything optimized for sticking) (the worst is TikTok).
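To make the “greedy” point concrete, here’s a toy sketch (purely illustrative — real recommenders are vastly more complex, and the item names and `predicted_watch_s` scores are made up). “Greedy” here means: at every slot, serve whatever is predicted to keep you watching longest, with no term for the user’s wellbeing or total screen time.

```python
# Hypothetical items with made-up predicted watch times (seconds).
items = [
    {"id": "cat_video",   "predicted_watch_s": 45},
    {"id": "news_report", "predicted_watch_s": 20},
    {"id": "rage_bait",   "predicted_watch_s": 90},
]

def greedy_feed(items):
    # Greedy objective: maximize predicted engagement at each position.
    # Nothing in the objective rewards letting the user put the phone down.
    return sorted(items, key=lambda x: x["predicted_watch_s"], reverse=True)

print([i["id"] for i in greedy_feed(items)])
# -> ['rage_bait', 'cat_video', 'news_report']
```

Note how the most provocative item wins purely because it keeps eyes on the screen — that’s the addictive-UX mechanism in one line of `sorted`.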
(4) small advertisers are brought onto a level playing field by access to online ad platforms (e.g., a $5 USD budget is enough to get started with FB/GOOG Ads, with the same access as the big players) — data is not EVIL, it’s the great democratizer.
(5) ad auctions ARE fair; big brands can’t buy all the inventory (quality and relevance mechanisms prevent it). Compare with TV, where a small player cannot even enter the marketplace!
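A toy model of the quality-weighting idea (this is NOT any platform’s actual formula — the bids and quality scores are invented, and real systems use many more signals). The point is simply that rank is not bid alone, so a relevant small advertiser can beat a bigger budget:

```python
def ad_rank(bid_usd, quality_score):
    """Toy rank: bid weighted by ad quality/relevance (0..1)."""
    return bid_usd * quality_score

# Hypothetical bidders: deep pockets + generic ad vs. tight budget + relevant ad.
bidders = {
    "big_brand":        ad_rank(bid_usd=5.0, quality_score=0.2),  # rank 1.0
    "small_advertiser": ad_rank(bid_usd=2.0, quality_score=0.9),  # rank 1.8
}

winner = max(bidders, key=bidders.get)
print(winner)  # -> small_advertiser, despite bidding 2.5x less
```

This is why “big brands buy all the inventory” doesn’t hold: outbidding everyone on price still loses if the ad is irrelevant to the audience.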
(6) media don’t know crap about algorithms; they hold a lot of fallacies about how these systems work (in reality, the systems don’t work like that — they have a lot of flaws, and not everything can be or is tracked).
(7) lack of knowledge results in “data xenophobia”, which is the same as any other fear of the unknown; people assume the worst (people need to educate themselves and think with their own brains).
(8) media are looking for scandals instead of objective information; clicks sell! For example, my scapegoat ideas were not considered interesting enough by one journalist – they wanted a scoop! (they WANT to blame the algorithms)
(9) politicians treat Twitter as part of their decision-making process – a huge problem.
(10) ad platforms ARE transparent already; there is tons of information about how they work, how privacy settings can be changed or opted out of, and even why a particular ad was shown. It’s people’s responsibility to inform themselves instead of just complaining and accusing.
(11) online platforms are a social contract – free use in exchange for giving your information (people need to accept or move on).
(12) your information is already being used in an aggregated format that protects your privacy – another fallacy, because people don’t know about this (“big bad corporation exploits my information” — NOT true!).
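What “aggregated format” means in practice can be sketched like this (an assumption about how reporting pipelines commonly protect individuals — threshold-based suppression — not any specific platform’s implementation; the `MIN_GROUP_SIZE` of 3 is made up):

```python
from collections import Counter

MIN_GROUP_SIZE = 3  # hypothetical suppression threshold

def aggregate_interests(user_rows, min_size=MIN_GROUP_SIZE):
    """Return only segment counts large enough to hide any individual in.

    Individual rows never leave this function; segments smaller than
    min_size are suppressed entirely rather than reported.
    """
    counts = Counter(row["interest"] for row in user_rows)
    return {interest: n for interest, n in counts.items() if n >= min_size}

rows = [{"user": i, "interest": t}
        for i, t in enumerate(["hiking"] * 4 + ["chess"] * 2)]
print(aggregate_interests(rows))  # -> {'hiking': 4}; the 2 chess users are suppressed
```

The advertiser sees that “4 people like hiking,” never who they are — and segments too small to be anonymous don’t show up at all.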
(13) changing what information Google shows by manually removing content is WRONG. It’s a form of censorship practiced in the Soviet Union and East Germany; the end state is thousands of exceptions to what can and cannot be spoken about.
(14) the Cambridge Analytica scandal was political scapegoating – people are not stupid; their behavior is not decisively directed by “psychological profiling” and seeing a couple of FB Ads. If it were this easy, all advertisers would be millionaires.
(15) people are lazy and like free things but still complain about them! (and don’t find out how the ads really work and what’s the benefit of giving data.)
(16) asking people “do you want more privacy?” is a moot question – everybody will say yes, but if they are made to pay for services, then they’d be ready to give their data or wouldn’t care (see how many people use paid YouTube; very few!).
(17) multiple stakeholders should be considered, not only consumers but also advertisers – especially small ones that are most hurt by taking away data.
(18) taking away data is already reality; platforms like iOS, Google Analytics, Facebook… consistently REDUCING data shared with advertisers (not good for anyone; but worse for small players).
(19) the inadvertent effect of “wanting good” via privacy legislation can be that Google gains even stronger monopoly power => Google blocks third-party cookies in Chrome and moves to Federated Learning => this kills off third-party ad platforms (Google’s competition) => the advertiser still has access to aggregated groups, but only via Google’s platform!
(20) there is no perfect solution to complex socio-technological problems, but we should aim for Pareto-optimal solutions (ones where nobody can be made better off without making someone else worse off – i.e., we shouldn’t leave easy improvements on the table just because perfection is impossible).
(21) legislation should focus on the PROCESS of how false positives / false negatives in automated decision-making are handled (e.g., I was unfairly banned from FB Marketplace => my appeal was handled by another algorithm that made the same wrong decision!) (in reality, a human should be checking the mistakes).
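The appeal process I’m arguing for can be sketched in a few lines (a hypothetical design, not any platform’s actual system — the function names and case format are invented). The key flaw: a deterministic model re-run on appeal just reproduces its own false positive, so appeals must escalate to a person:

```python
def automated_check(case):
    # Stand-in for the moderation model. It is deterministic, so
    # re-running it on appeal reproduces the exact same false positive.
    return case["flagged"]

def handle_appeal(case):
    """Route an appeal: auto-overturn if the model now clears it,
    otherwise escalate to a human instead of looping the same algorithm."""
    if automated_check(case):          # same algorithm, same mistake
        return "escalate_to_human"     # a person reviews the edge case
    return "auto_overturn"

case = {"user": "seller_42", "flagged": True}  # a false positive
print(handle_appeal(case))  # -> escalate_to_human
```

The legislative point is the `escalate_to_human` branch: regulation should guarantee that branch exists, rather than letting the appeal bottom out in the same model that made the error.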