News

Job Opening: Postdoc on Economic Governance of Data-driven Markets

The Tilburg Law and Economics Center (TILEC) advertises a position for a Postdoctoral Researcher on the Economic Governance of Data-driven Markets (starting date: September 2018). This is a three-year position targeted at promising researchers in economics or related fields who work on the effects of digitalization and datafication on institutions, including markets and legal and political institutions. We are looking for a researcher who combines expertise in the quantitative techniques of modern economics with both an interest in the technologies currently transforming our societies, industries, polities, and jurisdictions and a sincere openness to interdisciplinary research approaches from other social sciences. Within this field, all types of research interests are welcome, with a slight preference for researchers performing empirical work.

TILEC also advertises another Postdoc Position, on the Economics of Innovation. This is a two-year position targeted at promising researchers in economics or related fields who work on the Economics of Innovation. Within this field, all types of research interests are welcome but preference will be given to researchers performing empirical work and/or working on standardization and standard-setting.

Details on the jobs and the application procedure are available via EconJobMarket. 

“Faithful Strategies: How Religion Shapes Nonprofit Management” to be published in Management Science

Nonprofit organizations, religious values, and a complete dataset on the strategic choices of German hospitals: these are the main ingredients of a paper co-authored with my colleague Lapo Filistrucchi, which is forthcoming in Management Science.

This paper makes three key contributions. First, it confirms that the values and beliefs of organizational decision makers, as required by their employers, influence the strategic actions of firms. This study is the first to show that this premise of some behavioral scholars also holds for nonprofits.

Second, this paper is among the first to investigate the strategic effects of decision makers’ religious values. In doing so, as Laurence Iannacone told me recently, we are the first to show that the content of religious teachings actually influences firms’ strategies in a coherent and therefore predictable way: roughly speaking, we propose that, because Catholicism is more communal and Protestantism is more individualistic (and education-oriented), Catholic hospitals are larger, generate more revenue, and serve more medical fields, whereas Protestant hospitals are smaller, focus on a few medical fields, specialize in more complex (and more expensive) treatments, and have more links to universities.

Third, we tackle the difficulty that the empirical and theoretical literatures on nonprofits have had in predicting their strategic choices, by drawing lines of distinction between nonprofit subgroups that can be identified by observable organizational characteristics (here: whether a given organization is Catholic, Protestant, or neither of the two).

Earlier press coverage related to the paper:

“Firms, Nonprofits and Cooperatives” in high demand

We just received a nice piece of appreciation: the Annals of Public and Cooperative Economics informed us that our paper, “Firms, Nonprofits and Cooperatives: A Theory of Organizational Choice” (joint with Patrick Herbst), was among the top 5 most downloaded papers of the journal in 2016. Apparently, this amounts to more than 500 downloads.

Given that the business of publishing research in economics is tough—and rejections are very frequent (and necessary to get acceptance rates of 10% and below at the top journals)—receiving such positive information is balm for the soul…

A brief description of the paper is here.

New working paper: Competing with Big Data

One of the hot topics these days is big data – and the consequences of the increasing datafication of our world for all parts of our lives: polities, jurisdictions, societies, and markets. Regarding the latter, the key questions frequently asked by researchers and policy makers include: Is there anything special about markets driven by (big) data? If so, what? How important is the “big” part, and what is a “data-driven” market in the first place? What does this imply for the business models of sellers in data-driven markets? In which way, if at all, should (competition) policy react?

In “Competing with Big Data,” Christoph Schottmüller (University of Copenhagen) and I try to shed some light on these questions. A nontechnical summary of the paper recently appeared in the blog of the Data Science Center Tilburg. A summary combined with some further thinking (not all of it in line with the paper) was written by a Danish consulting firm.

Workshop on “Economic Governance of Data-driven Markets”

For the Tilburg Law and Economics Center (TILEC) and the Governance and Regulation Chair (GovReg) at University Paris-Dauphine, PSL Research University, I will organize a two-day Workshop on “Economic Governance of Data-driven Markets” at Tilburg University, the Netherlands, on October 12 and 13, 2017.

We aim to discuss the specific problems that arise in markets and in political and legal systems through the ongoing process of datafication. Once such a problem and the optimal response or intervention are identified, the enforcement institution must be endogenized: How should data-driven markets or political systems be governed? By national or supranational regulation (public ordering)? Or by some form of self-governance by industry participants (private ordering)?

Scholars from various backgrounds, including institutional economics, industrial organization, and law & economics, and extending to neighboring disciplines such as political science, management, and information science, will meet in a pleasant atmosphere to advance our knowledge.

Keynote Speakers:

• Yochai Benkler (Harvard)
• Paul Seabright (Toulouse)
• Joshua Tucker (NYU)
• Marshall Van Alstyne (Boston U)

Important Dates:

The deadline for submissions is May 14, 2017. The call for papers is here. Further details are here.

Why Do (Some) Consumers Pay for Privacy?

In a recent working paper, “Consumers’ Privacy Choices in the Era of Big Data,” written together with job market candidate Sebastian Dengler, I study a question that is at the heart of the economics of privacy: Why would consumers with standard preferences facing a monopolistic seller ever anonymize themselves if it comes at a cost?

Empirical background

This paper is driven by two recent empirical developments. The first is the increasing ability of sellers, often via the Internet, to obtain many data points about the preferences and characteristics of individual consumers. This enables them to come closer and closer to perfect price discrimination, a case that the literature in industrial organization dismissed for many decades due to its strong information requirements. The second is sellers’ combined access to large datasets and to algorithms powerful enough to respond quickly to information about a given consumer. By contrast, many consumers feel overwhelmed by the choices they have to make while shopping and face cognitive constraints.

A guessing game between sellers and buyers

We combine these ingredients in a generic model, where consumers with heterogeneous willingness-to-pay for a product can either use a direct sales channel, expecting that the seller will charge them their full willingness-to-pay, or incur a cost to hide their identity from the seller via an anonymous sales channel. For anonymized shoppers, the seller only sees that they anonymized; for direct shoppers, the seller knows the exact willingness-to-pay.

The tricky part of the model is the guessing game between a consumer and the seller: the seller sets prices for the product depending on the information she has, and the consumer chooses the sales channel based on his expectation of the prices in both channels. This is implemented as a level-k cognitive hierarchy model. Consumers are characterized by a level k, starting at k=0, which stands for the number of iterations of the seller’s best pricing response they can anticipate. The traditional assumption of unlimited cognitive sophistication is nested in this model as k = ∞. The seller, having access to big data and powerful algorithms, has a higher k than consumers.

Who anonymizes — and why? Microfounding privacy choices

The information assumptions turn out to be crucial for the model’s results. We show that consumers with high willingness-to-pay have an incentive to anonymize, whereas those with low willingness-to-pay use the direct channel, face perfectly discriminated prices, and walk home with zero consumer surplus. Anonymous shoppers, however, do not (fully) anticipate that the seller understands that their cost of anonymization is sunk at the moment of purchase. The seller therefore increases the price in the anonymous market segment by the cost of anonymization, which leaves some consumers with negative and others with positive consumer surplus. If consumers understand this threat (at k=1), those with moderate willingness-to-pay also prefer the direct channel. This process repeats as consumers’ k increases: the anonymous market unravels level by level. Importantly, it unravels already for finite k. That is, unlimited sophistication of consumers is a sufficient but not a necessary condition for the breakdown of the anonymous channel. In essence, the higher consumers’ sophistication, the fewer consumers choose to anonymize and the lower is aggregate consumer surplus. The seller, in contrast, benefits from this unraveling, as her profits are maximized on consumers in the direct channel. We also show that k and the cost of anonymization interact in equilibrium: for a given k, higher anonymization costs also lead to a smaller anonymized market and, eventually, to the breakdown of that segment.
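To make the unraveling logic concrete, here is a minimal numerical sketch in Python. It is not the model from the paper: the uniform distribution of willingness-to-pay, the size of the anonymization cost, and the naive level-0 belief that everyone anonymizes are illustrative assumptions; the sketch merely iterates the seller’s best response k times to form a level-k consumer’s price expectation.

```python
# Minimal sketch of level-k unraveling in a costly anonymous sales channel.
# Illustrative only: the uniform valuation distribution, the anonymization cost,
# and the level-0 anchor below are assumptions, not the paper's specification.
import numpy as np

rng = np.random.default_rng(0)
V = rng.uniform(0, 1, 10_000)      # consumers' willingness-to-pay
c = 0.05                           # cost of anonymizing (assumed)

def best_price(pool):
    """Seller's profit-maximizing uniform price against a pool of anonymous buyers."""
    if pool.size == 0:
        return np.inf              # empty segment: no sales possible
    grid = np.linspace(0, 1, 401)
    profits = [p * np.mean(pool >= p) for p in grid]
    return grid[int(np.argmax(profits))]

def believed_price(k):
    """Level-k belief: start from the naive anchor (everyone anonymizes) and
    iterate the seller's best response k times."""
    pool, price = V, best_price(V)
    for _ in range(k):
        pool = V[V - price - c > 0]    # who would still anonymize at that price?
        price = best_price(pool)
    return price

for k in range(0, 13, 2):
    p = believed_price(k)
    share = np.mean(V - p - c > 0)     # mass of consumers choosing anonymity
    print(f"k={k:2d}: believed anonymous price {p:.2f}, share anonymizing {share:.0%}")
# The believed price ratchets up by roughly c per level, so the anonymous
# segment shrinks step by step and breaks down at a finite k.
```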

The key contribution of this paper is to show that, and under which conditions, a costly anonymous sales channel is used in equilibrium. This offers a micro-foundation of privacy choices, whereas other works often simply assume the existence of privacy preferences.

Policy implications

Moreover, this paper has clear and important policy implications. Given that consumers’ level of sophistication is exogenous, the key parameter that policy makers can influence is the cost of anonymization. We show that consumers fare best if these costs are zero, whereas the seller maximizes profits if no consumer anonymizes because anonymization costs are prohibitive. From a total welfare perspective, these two cases have identical consequences. Hence, the model suggests that authorities focusing on consumers’ interests should invest in cheaper and easier-to-use privacy-protective tools and legislate that the anonymous sales channel be the default. That is, sellers would be allowed to offer consumers registration or tracking technologies in exchange for a price discount, but consumers could no longer be tracked and targeted by default and forced to anonymize at a cost (which is the standard today).

Does an individual’s religiosity influence her moral attitudes and economic behavior?

The economics of religion is a relatively new field within economics (it only received its own JEL code, Z12, not long ago). By contrast, connecting economics and the moral attitudes of man has a long history that includes the philosophers of ancient Greece as well as the 18th-century whizkid Adam Smith.

What these scholarly giants did not scrutinize, however, is whether and, if so, how the moral attitudes of individuals are shaped by their personal religiosity. Religiosity can be measured by church membership, the frequency of church attendance or private prayer, or the intensity of a believer’s belief in God or in religious concepts.

In “Religion, Moral Attitudes & Economic Behavior,” written with Isadora Kirchmaier and Stefan Trautmann (both at the University of Heidelberg), I study the link between religion and moral attitudes and behavior, using data for a representative sample of the Dutch population that includes information about participants’ religious backgrounds. We find that religious people are less accepting of unethical behavior and report more volunteering. They also report a lower preference for redistribution. Religious people are as likely as non-religious people to betray trust in an anonymous experimental game. Distinguishing between Christian denominations, we find that Catholics betray less than non-religious people, while Protestants betray more than Catholics and are indistinguishable from the non-religious. We also explore the intergenerational transmission of these religiosity effects.

Competing with Big Data

The recent process of “datafication,” also dubbed “the rise of big data” (Mayer-Schönberger and Cukier, 2013), is explained by two simultaneous, recent technological innovations: first, the increasing availability of data, owing to the fact that more and more economic and social transactions take place aided by information and communication technologies that easily and inexpensively store the information such transactions produce or transmit; second, the increasing ability of firms and governments to analyze the novel big data sets. Einav and Levin (2014:3) write: “But what exactly is new about [big data]? The short answer is that data is now available faster, has greater coverage and scope, and includes new types of observations and measurements that previously were not available.”

In “Competing with Big Data” (working paper expected late summer), Christoph Schottmüller (University of Copenhagen) and I attempt to better understand what we call data-driven markets and to study the associated competitive behavior and outcomes. In doing so, we focus on the consequences of datafication in the economic sphere and largely leave its effects on the social, legal, and political domains out of consideration.

Both policy makers and researchers working on digital markets have repeatedly underlined that the most relevant dimension of competition in such markets, e.g. search engines or online services, is not the quantity a firm produces or the price it charges but the innovation effort it invests. Moreover, to understand the long-term effects of competition in such markets (and to make predictions about their likely development), it is necessary to employ dynamic models, which study the behavior of firms over time. Therefore, we construct and analyze a dynamic model in which competing firms choose their innovation efforts again and again. The important feature of the model is that it incorporates indirect network effects that arise on the supply side of a market, via decreasing marginal costs of innovation, but that are driven by user demand.

Indirect Network Effects

Demand for the services of one provider generates data about its users’ preferences or characteristics (henceforth: user information), as a natural and machine-generated by-product of using the service. This user information, which Zuboff (2016) calls “behavioral surplus,” is private information of the provider who collected it and can be used to innovate by adapting the product better to users’ preferences, thereby increasing perceived quality in the future. Thus, higher initial demand reduces the marginal cost of innovation: it makes it cheaper to produce one additional unit of product or service quality, as perceived by users.

To illustrate these indirect network effects, think of a search engine. If a user places a query and then clicks on the third link she is shown, the search engine – and only this one search engine – knows which links the user could choose from and which one she preferred, thereby revealing her preferences. This information can be used by the search engine’s algorithm the next time a user enters a similar search term, thereby improving this search engine’s quality, as perceived by users, over competitors who do not have this user information.

Data-driven Markets

We define markets that are subject to these indirect network effects fueled by user information as data-driven markets. We show that such markets tip under very mild conditions, moving towards monopoly. That is, we expect one firm to become dominant and other firms to serve small niches or to exit the market. We also identify a strong first-mover advantage: the first firm with a business model that makes use of user information is likely to win the entire market over time.
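To see the tipping mechanism at work, here is a minimal simulation sketch in Python. It is not the model from the paper: the logit-style demand split and the reduced-form rule linking a firm’s data stock to its quality improvements are illustrative assumptions; the only point is to show how a small initial data advantage compounds.

```python
# Minimal sketch of the tipping feedback loop in a data-driven market.
# Illustrative only: the logit-style demand split and the reduced-form
# innovation rule are assumptions, not the paper's specification.
import numpy as np

quality = np.array([1.0, 1.0])   # perceived qualities of firms A and B
data = np.array([1.0, 0.9])      # firm A starts with a small data head start
shares_A = []

for t in range(30):
    z = 4 * (quality - quality.max())
    share = np.exp(z) / np.exp(z).sum()   # demand split by relative quality
    shares_A.append(share[0])
    data += share                         # usage generates proprietary user data
    marginal_cost = 1.0 / data            # more data -> cheaper innovation
    quality += 0.1 / marginal_cost        # cheaper innovation -> bigger quality step

print(np.round(shares_A, 2))   # firm A's share drifts from 0.5 towards 1: the market tips
```

Because the user data are proprietary, the early leader’s quality advantage, and with it its demand advantage, is self-reinforcing under these assumptions.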

Shallow Incentives to Innovate

An important result of the model concerns the innovation incentives of the competing firms once the market has tipped, that is, when one firm is dominant and the other firm(s) have only small (and shrinking) market shares: these incentives are close to nil.

The reason is that a firm with very little demand that considers investing heavily in research, even if it has the idea for a radical innovation, faces a dominant firm with a much lower marginal cost of innovation. Hence, the dominant firm only has to wait for the small firm to invest – and can then invest itself, delivering high quality at much lower cost and ruining the profitability of the smaller firm’s investment. This is why the small firm, foreseeing this threat from the dominant firm with its wealth of user information, will not invest further. The dominant firm, in turn, understands that the small firm has no incentive to innovate – which is why the dominant firm itself, serving a large share of the market, is best off saving all investment expenses as well and not innovating further.
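A back-of-the-envelope payoff comparison illustrates this waiting game; the numbers below are purely hypothetical and only meant to show why matching is cheap for the leader and investing is futile for the laggard.

```python
# Hypothetical payoffs illustrating why the laggard stops innovating after tipping.
value_of_jump = 10.0   # market value of a radical quality jump (assumed)
cost_laggard = 8.0     # laggard's cost of the jump: little data, high marginal cost (assumed)
cost_dominant = 2.0    # dominant firm's cost for the same jump, thanks to its data (assumed)

# If the laggard invests, the dominant firm can simply match and keep the market:
print(value_of_jump - cost_dominant)   #  8.0 -> matching is always worthwhile for the leader
print(0.0 - cost_laggard)              # -8.0 -> the laggard wins nothing and pays the cost
# Anticipating this, the laggard does not invest; facing no threat, the dominant
# firm saves the expense too, and innovation stalls.
```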

Connected Markets and the Domino Effect

Going a step further, we study under which circumstances a dominant position in one data-driven market can be used to gain a dominant position in another market that is (initially) not data-driven. We show that, if market entry costs are not too high, a firm that manages to find a “data-driven” business model can dominate any market in the long term. We define two markets as connected if the data on user preferences or characteristics collected in one market have some value in the innovation process in the other.

Consequently, if technology firms realize that user information constitutes a key input into the production of quality in data-driven markets, they need to identify connected markets where these data can be used as well. In those follow-up markets, the same results as in the initial markets apply, suggesting a domino effect: a first mover in market A can leverage its dominant position, which comes with an advantage in user information, to gain a first-mover advantage in market B and let that market tip, too.

This result of our model suggests a race. On the one hand, technology firms with large stocks of existing data on user preferences and characteristics will be looking to identify data-driven business models that utilize these data stocks in other industries. On the other hand, traditional companies will be trying to increase data-independent product quality in order to make it prohibitively costly for those data-driven firms to enter their markets. In addition, they will try to collect as much user information as possible themselves (and preemptively) in order to avoid losing the entire market once some firm identifies a data-driven business model and actually enters their market.

So what? – A Theory of Harm for Data-driven Markets

Finally, we study the normative implications of our results. Because a tipped market provides no incentives for firms of any size to innovate further, market tipping is negative for consumers in this market (this is, in legal jargon, our “theory of harm” in data-driven markets). It also deters market entry of new firms, even if they come along with a revolutionary technology.

Therefore, we analyze within our model the effects of a specific market intervention that was recently proposed (by regulation, not competition law). Based on Argenton and Prüfer (2012), we ask: what if firms with data-driven business models were legally required to share their (anonymized) data about user preferences or characteristics with their competitors?

Mandatory Data-sharing?!

Contrary to the claims of some commentators on that earlier paper, we show that a first mover’s incentives to innovate further do not decline after such forced sharing of user information, even in a dynamic model. Instead, we show that with data sharing (voluntary or not), data-driven markets do not tip; that is, the level of competition – and, hence, the level of innovation – in these markets remains high, which benefits consumers.

The intuition is that with mandatory data sharing, both competitors face the same cost structure; a firm with initially higher demand does not have a comparative advantage in producing quality. As a result, the sharing of user information avoids the negative consequences for innovation that are specific to data-driven markets.
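The same illustrative simulation as in the tipping sketch above, with the single change that the data stock is pooled, shows this intuition: with identical innovation costs, qualities no longer diverge. Again, the functional forms are my assumptions, not the paper’s specification.

```python
# Same illustrative setup as the tipping sketch, but with a pooled (shared) data stock.
import numpy as np

quality = np.array([1.0, 1.0])
data = np.array([1.0, 0.9])      # same initial asymmetry as before

for t in range(30):
    z = 4 * (quality - quality.max())
    share = np.exp(z) / np.exp(z).sum()
    data += share
    pooled = data.sum()          # shared data: both firms innovate at the same cost
    quality += 0.1 * pooled      # identical quality steps for both competitors

z = 4 * (quality - quality.max())
print(np.round(np.exp(z) / np.exp(z).sum(), 2))   # shares remain at [0.5, 0.5]: no tipping
```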

As a caveat, these markets can still be dominated by one or a few firms, just like any other market. But in that case, we could be more confident that the source of dominance is a fundamentally superior sales proposition and not a windfall reduction in innovation costs from earlier success in the market.

[This post was originally written for the blog of the Data Science Center Tilburg.]

References:

  • Argenton, Cédric and Jens Prüfer. 2012. “Search Engine Competition with Network Externalities,” Journal of Competition Law & Economics 8(1): 73-105.
  • Einav, Liran and Jonathan Levin. 2014. “The Data Revolution and Economic Analysis,” in: Josh Lerner and Scott Stern (eds.), Innovation Policy and the Economy 14(1), NBER Books, National Bureau of Economic Research: 1-24.
  • Mayer-Schönberger, Viktor and Kenneth Cukier. 2013. “The Rise of Big Data,” Foreign Affairs, May/June Issue.
  • Zuboff, Shoshana. 2016. “The Secrets of Surveillance Capitalism,” Frankfurter Allgemeine Zeitung, March 5, 2016.

 

Academic Director MSc Economics Program

Since 2011, I have served as Education Coordinator Microeconomics at Tilburg University’s CentER Graduate School, being in charge of mentoring all Research Master and PhD students in Microeconomics and supporting them in making academic (and some non-academic) decisions. This appointment will end after the summer.

Instead, I have been asked to succeed Sjak Smulders as Academic Director of the MSc Economics program. In this function I will be responsible for the Master program, run by the Tilburg School of Economics and Management, which aims to further educate talented bachelor graduates from economics and neighboring disciplines and to provide them with a targeted profile of expertise in one of six tracks, thereby preparing them for the highest positions in private and public organizations.

The succession of Sjak Smulders will proceed gradually over the academic year 2016-17.

Democracy and Online Platforms

(How) can democracy, as we know it, survive in times of strongly increasing market power of a few online media aggregators?

Let us break this fundamental question into pieces. First, why should “democracy, as we know it,” be at stake? A few months ago, an article by Robert Epstein and Ronald Robertson, two behavioral researchers, attracted a lot of media attention (google “search engine manipulation effect” and get lost for hours). In short, Epstein and Robertson conducted a series of experiments in which voters could collect information about political candidates using a search engine. Some of the accessible (real) websites favored one candidate, others favored the other candidate. The researchers’ only treatment was to manipulate the ranking of the search results differently across treatment groups. They found astonishingly large effects of these manipulations on the voting behavior of subjects in the different groups. Here is a nice summary of the article.

Recently, the question was taken up again: Could a large social media provider (say, Facebook) influence political elections by rigging the political news its users see? The crucial common features of both cases are that any “manipulation” would occur at the level of the algorithm of the media aggregator that is in charge of selecting news for a given user (and can be made contingent on that user’s known characteristics) – and that the media aggregator is more effective the more users it has …

… which brings us to the second claim made in the initial statement above: “strongly increasing market power of a few media aggregators.” Admittedly, thorough empirical studies proving the increasing market power of online media aggregators (such as Google and Facebook) are rare. But the available statistics all point in the same direction. Argenton and Prüfer documented how market shares in the search engine market started to tip after Google had taken over market leadership in 2003. A similar pattern is documented for social networking sites, dominated by Facebook, and for other “data-driven markets,” such as geographical maps. The intuition of the theory is that, if an online platform manages to attract more users than its competitors, these users generate more data about their preferences, e.g. by clicking certain links on a website or preferring certain media articles over others, than the competitors have access to. Data about user preferences today improve the services of a platform tomorrow. As the user-generated data are the private property of the platform collecting them, more users today allow a platform to improve its services by more than its competitors with lower market shares can, giving rise to an ever greater divergence of perceived quality levels and market shares, or “market tipping.”

Given the upcoming U.S. presidential elections, the topic is especially relevant. From the institutional perspective, a session on “How to deal with big data?” at the upcoming SIOE conference in Paris will discuss these and related questions.

Post scriptum

In Epstein and Robertson’s article, one of the findings is that “[a]mong the most vulnerable groups [to search engine rigging] we identified were Moderate Republicans.” A highly respected co-author of mine explained this finding as follows:

“Apparently the authors of that search engine paper don’t recognize that they are a meta-illustration of their thesis. The first clue is their characterization of Fox News as ‘biased,’ implying that other news sources are unbiased. But if all information sources are biased, then adding new but differently biased sources may serve to offset existing biases. Similarly, ‘vulnerable to manipulation’ could be more neutrally characterized as ‘willing to update in response to new information’ (moderate Republicans are just Bayesians), the opposite of which would be ‘closed minded.’”

Given this characterization, researchers might be more vulnerable to such manipulation of their opinions than others. A fact we might want to remember.

[This post was first published at sioe.org]