(How) can democracy, as we know it, survive in times of strongly increasing market power of a few online media aggregators?
Let us take this fundamental question piece by piece. First, why should “democracy, as we know it,” be at stake? A few months ago, an article by Robert Epstein and Ronald Robertson, two behavioral researchers, attracted a lot of media attention (google “search engine manipulation effect” and get lost for hours). In short, Epstein and Robertson conducted a series of experiments in which voters could collect information about political candidates using a search engine. Some of the accessible (real) websites favored one candidate, others favored the other candidate. The researchers’ only intervention was to manipulate the ranking of the search results differently across treatment groups. They found astonishingly large effects of these manipulations on the voting behavior of the subjects in the different groups. Here is a nice summary of the article.
Recently, the question was taken up again: Could a large social media provider (say, Facebook) influence political elections by rigging the political news its users see? The crucial common features of both cases are that any “manipulation” would occur at the level of the media aggregator’s algorithm, which selects news for a given user (and can be made contingent on that user’s known characteristics) – and that the media aggregator becomes more effective the more users it has …
… which brings us to the second claim made in the initial statement above: the “strongly increasing market power of a few media aggregators.” Admittedly, thorough empirical studies proving the increasing market power of online media aggregators (such as Google and Facebook) are rare. But the available statistics all point in the same direction. Argenton and Prüfer documented how market shares in the search engine market started to tip after Google took over market leadership in 2003. A similar pattern is documented for social networking sites, dominated by Facebook, and for other “data-driven markets,” such as geographical maps. The intuition of the theory is that, if an online platform manages to attract more users than its competitors, these users generate more data about their preferences, e.g. by clicking certain links on a website or preferring certain media articles over others, than the competitors have access to. Data about user preferences today improve the services of a platform tomorrow. Because the user-generated data are the private property of the platform collecting them, more users today allow a platform to improve its services by more than competitors with lower market shares can, giving rise to an ever greater divergence of perceived quality levels and market shares, or “market tipping.”
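The feedback loop described above can be illustrated with a deliberately simple simulation (a toy sketch with made-up parameters, not the formal model in Argenton and Prüfer): users pick whichever platform looks slightly better to them, their activity becomes that platform’s proprietary data, and the data raise next period’s quality. Even from a symmetric start, random early fluctuations get amplified and the market tips.

```python
import random

random.seed(1)

def simulate(periods=50, users=1000, data_to_quality=0.001, noise=0.5):
    """Toy model of market tipping in a data-driven market.

    Two ex-ante identical platforms; each period every user chooses the
    platform whose quality looks higher to them (with idiosyncratic noise),
    and each platform's quality grows with its accumulated user data.
    Returns platform 0's market share per period.
    """
    data = [0.0, 0.0]      # proprietary user data accumulated by each platform
    quality = [1.0, 1.0]   # perceived service quality, starts symmetric
    shares = []
    for _ in range(periods):
        choices = [0, 0]
        for _ in range(users):
            # each user perceives quality with independent noise
            perceived = [q + random.gauss(0, noise) for q in quality]
            choices[perceived.index(max(perceived))] += 1
        for i in (0, 1):
            # usage generates data, which feeds back into next period's quality
            data[i] += choices[i]
            quality[i] = 1.0 + data_to_quality * data[i]
        shares.append(choices[0] / users)
    return shares

shares = simulate()
print(f"platform 0's share: start {shares[0]:.2f}, end {shares[-1]:.2f}")
```

The share starts near one half and ends near zero or one: a small random lead in period one compounds, because the leader’s extra data raise its quality, which attracts more users, which yields more data. The direction of tipping depends only on noise, which is the sense in which early luck, not intrinsic merit, can decide which platform wins.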
Given the upcoming U.S. presidential elections, the topic is especially timely. From the institutional perspective, a session on “How to deal with big data?” will discuss these and related questions at the upcoming SIOE conference in Paris.
In Epstein and Robertson’s article, one of the findings is that “[a]mong the most vulnerable groups [to search engine rigging] we identified were Moderate Republicans.” A highly respected co-author of mine explained this finding as follows:
“Apparently the authors of that search engine paper don’t recognize that they are a meta-illustration of their thesis. The first clue is their characterization of Fox News as “biased,” implying that other news sources are unbiased. But if all information sources are biased, then adding new but differently biased sources may serve to offset existing biases. Similarly, “vulnerable to manipulation” could be more neutrally characterized as “willing to update in response to new information” (moderate Republicans are just Bayesians), the opposite of which would be “closed minded.””
Given this characterization, researchers themselves might be more vulnerable to such manipulation of their opinions than others. A fact we might want to remember.
[This post was first published at sioe.org]