
Author Archives: pruferj

Annual Research Conference of EU Commission on Institutions

On November 13-15, 2023, the European Commission will hold a large conference, the Annual Research Conference (ARC 2023), which aims to bring together academic researchers, Commission staff members, and policy makers. This year’s conference should be especially interesting for researchers studying institutions and organizations because:

  • The topic is “European integration, institutions and development.”
  • The EU Commission has taken the initiative and expressed strong interest in getting in touch with the research community on Institutional and Organizational Economics. They recognize a deficit of knowledge in these areas and are very open to scholarly and policy-oriented input. This is reflected by “satellite events” around the main conference day, which offer researchers, Commission staff members, and policy makers various forms of interaction.
  • Some selected papers will be published in a special issue of the Journal of Comparative Economics.
  • There will be a keynote lecture by Tim Besley (LSE).
  • A number of highly knowledgeable scholars of institutions, all with an eye on policy making, are involved in the selection of papers.

The submission deadline for research papers is May 31, 2023. A full call for papers is here.

“Membership, Governance, and Lobbying in Standard-Setting Organizations” forthcoming in Research Policy

Standard-setting organizations (SSOs) are collectively self-governed industry associations, formed by innovators and implementers. They are a key organizational form for agreeing on and managing technical standards, and they form the foundation for many technological and economic sectors. Clemens Fiedler, Maria Larrain, and I develop a model of endogenous SSO participation that highlights different incentives for joining (namely licensing, learning, and implementation). We analyze equilibrium selection and conduct comparative statics for a policy parameter related to implementer-friendly Intellectual Property Rights policies or, alternatively, minimum viable implementation. The results can reconcile existing evidence, including the fact that many SSO member firms are small. The extent of statutory participation of implementers in SSO control has an inverted U-shaped effect on industry profits and welfare.

The former TILEC Discussion Paper is now forthcoming in Research Policy. More background is here.

ChatGPT vs. Bard in search engines: good to have a theory

On November 30, 2022, OpenAI released its chatbot ChatGPT [1]. Many users were immediately thrilled by the bot’s unprecedented quality, and some firms saw strategic opportunities: on January 23, 2023, Microsoft announced a deepening of its partnership with OpenAI, ChatGPT’s developer [2], and confirmed a US$10bn investment soon after [3].

While the chatbot has many (potential) applications, an obvious one is to use it in search engines, in order to improve the quality of search results. Alphabet, the owner of the leading search engine, Google, had the same thought: they reacted on February 6 by announcing Bard, “a conversational AI similar to ChatGPT” [4]. Microsoft reacted immediately and confirmed on February 7 that they are actually planning to add ChatGPT’s functionality to their own search engine, Bing [5].

So much for the empirics to date. Naturally, many companies, investors, decision makers in politics and society, and a zillion users are asking themselves what will happen now [6]. The scene looks like a clash of titans. But is it really?

While I cannot comment on the many other battlefields the two chatbots will compete on, for the (economically and societally highly relevant) search engine market (and related “data-driven markets”) we have a scientific theory, at least. The idea and mechanism of “Competing with Big Data” (joint work with Christoph Schottmueller) is described in earlier blog posts and was published as an academic journal article in 2021.

The theory of data-driven markets

The theory studies a market with two (main) service providers. We call such a market “data-driven” if (i) the interaction between users and a provider runs via computers, which enables a provider to store many characteristics (e.g. IP address) and decisions of users (e.g. where to click) in search log files, and (ii) this information about users’ preferences and characteristics is a valuable input into the provider’s innovation process (e.g. because the provider learns which extra product features users really want). In economic terms: the marginal cost of innovation must be decreasing in the amount of user-generated data (which is a function of demand).

If a market is data-driven, this has tremendous consequences. Look at the left panel of the following figure, which is taken from the academic article. It displays a numerical simulation of (equilibrium) investment decisions of both firms over time (red dots for firm 1 and black stars for firm 2) and the resulting market shares (blue curve: the vertical value 0.0 means 50% market share of each firm, 1.0 means firm 1 has 100% market share and firm 2 has 0%).

At the start of the simulation, we assumed 50:50 market shares. Firm 1 had only one advantage: it could invest in innovation first, which increased its product’s quality and, hence, its market share. Only in period 2 could firm 2 innovate itself (and so forth, i.e. firm 1 invests in odd periods, firm 2 in even periods; this alternating order has game-theoretic reasons. For details, see the paper).

What the figure shows is that, for several periods, both firms invest heavily in innovation: they compete for the market. However, due to the steadily decreasing market share of firm 2 (the blue curve on average goes up), its marginal innovation cost increases. Consequently, it invests less and less in innovation (the black stars decrease). By contrast, firm 1 benefits from its first-mover advantage and, thanks to more data and a lower marginal cost of innovation than its competitor, can innovate heavily (the red dots increase). Crucially, this process of intense innovation by firm 1 stops once it has reached a very high market share (the blue curve approaches 1.0). Then firm 2 has given up (the black stars remain at a low level). Firm 1 dominates the market, constantly renewing its stream of user-generated data and enjoying monopoly profits. Conveniently, it no longer even needs to spend large sums on innovation (the red dots also sink and then remain at a low level).

The right panel of the figure captures a situation where profits in the future are even more important. Here, firm 1 is incentivized to innovate until firm 2 is literally kicked out of the market (has 0% market share) and does not innovate at all. As a best response, firm 1 also does not innovate at all (red dots and black stars remain at zero). Both figures show that, after the initial phase of competing for the market, firm 1 virtually monopolizes the market: economists say, the market has tipped.
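The tipping dynamic described above can be sketched in a few lines of code. The toy simulation below is not the model from the paper; the functional forms (the cost function, the investment rule, the logistic market-share rule) are illustrative assumptions, chosen only to reproduce the qualitative pattern in the figures: heavy investment while the market is contested, then tipping, then little innovation by either firm.

```python
import math

def innovation_cost(share, kappa=1.0, eps=0.05):
    # The "data-driven" assumption: marginal innovation cost falls in
    # user-generated data, proxied here by current market share.
    return kappa / (eps + share)

def simulate(periods=20, sensitivity=2.0):
    q1 = q2 = 0.0        # accumulated product qualities
    share1 = 0.5         # firm 1's market share (50:50 start)
    log = []             # (period, firm, investment, share1 afterwards)
    for t in range(periods):
        share2 = 1.0 - share1
        contest = 4.0 * share1 * share2   # 1 at 50:50, -> 0 once tipped
        if t % 2 == 0:                    # firms take turns investing
            invest = contest / innovation_cost(share1)
            q1 += invest
            firm = 1
        else:
            invest = contest / innovation_cost(share2)
            q2 += invest
            firm = 2
        # demand responds to the quality gap (logistic market-share rule)
        share1 = 1.0 / (1.0 + math.exp(-sensitivity * (q1 - q2)))
        log.append((t, firm, invest, share1))
    return log

log = simulate()
```

Running it reproduces the qualitative story: firm 1’s market share climbs toward 1, firm 2’s investments shrink toward zero, and once the market has tipped, firm 1 also stops investing much.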

Key intuition

Before we go back to the case of ChatGPT vs. Bard, it is crucial to understand why firm 2 does not innovate once the market has tipped. Assume firm 2 has a great idea for a better product. Its problem is its very low market share, which implies that its algorithm has access to relatively little user information. Consequently, its marginal cost of innovation is high. If it does roll out its great idea, for which it will need deep financial pockets, it can convince some users and gain a bit of market share (see the left panel after market tipping/period t=12, where the blue curve decreases a bit for one period). Firm 2’s problem is that firm 1 also learns about the great idea and will try to copy it (i.e. invest in innovation), albeit at a much lower cost of innovation, because it has such an advantage in valuable user-generated data. Consequently, after a brief fight, firm 2 will be back at its very low market share, but it will still have to pay back the debt for the attempted innovation leap. Therefore, in equilibrium (i.e., in a stable situation), firm 2 (or any other firm considering market entry) is deterred from attempting to innovate heavily. This explains the low degree of innovation in a tipped data-driven market.
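The deterrence logic can also be put in back-of-the-envelope numbers. All figures below are made-up assumptions, not estimates from the paper; they only illustrate why the laggard’s cost disadvantage makes a big innovation push unattractive once the market has tipped.

```python
# All numbers are made-up illustrations, not estimates from the paper.
kappa, eps = 1.0, 0.05
share_leader, share_laggard = 0.95, 0.05      # a tipped market

# Same cost idea as in the theory: more data (market share) -> cheaper innovation.
cost_leader = kappa / (eps + share_leader)    # ~1.0: copying is cheap
cost_laggard = kappa / (eps + share_laggard)  # ~10.0: innovating is expensive

idea = 1.0                     # size of the laggard's "great idea"
temporary_gain = 0.10 * 0.5    # say, one period of +10% share at a 0.5 margin

# The laggard pays roughly ten times what the leader pays to implement the
# same idea, and its market-share gain evaporates once the leader copies it.
laggard_payoff = temporary_gain - cost_laggard * idea
```

With these (hypothetical) numbers, the laggard’s payoff from the innovation leap is clearly negative, so in equilibrium it does not attempt the leap in the first place.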

Back to the case!

One interpretation of the developments around ChatGPT and Bard sketched above is that Google has simply enjoyed its super-dominant position on the search engine market (with a global market share of 92.9% over the past 12 months [7]) for a very long time. Alphabet may have innovated a lot elsewhere, but improvements in Google’s organic search result quality over the past decade or so have been relatively modest, while its profits and market capitalisation soared.

Evidence: In an experiment with a small search engine, we recently showed that, for popular search queries, where the algorithms of all search engines have sufficient user-generated data, Google’s quality is not assessed as better than Bing’s, or even than that of a small search engine (Cliqz), by human assessors (the assessors did not know which engine the search results came from). Only for rare queries, where Google has much more user-generated data to train its algorithm on what users are looking for, could the small search engine not compete anymore. Unfortunately for the small search engine, 74% of the traffic in our data consisted of rare queries, which may explain why Cliqz went out of business shortly after our experiment in April 2020.

That paper also showed that the search engine market is data-driven and that, hence, the theory of data-driven markets explained above is applicable.

Bringing the theory to the case

Google’s relative long-term idleness in search engine innovation is no surprise in light of the theory. What is a surprise is that Microsoft, whose search engine Bing has an underwhelming 3.03% market share globally [7], invests heavily in this data-driven market (in contrast to the “key intuition” above, which explains why firm 2 should not try to innovate). Apparently, Microsoft thinks that ChatGPT’s technology is so revolutionary that it can overcome its huge disadvantage in access to user-generated data. By reacting so quickly, Alphabet even lent some credence to this interpretation.

These decisions, however, can also be interpreted from the perspective of the theory of data-driven markets, namely as what economists call “off-equilibrium behavior”: the “key intuition” part above predicts no innovation attempts after market tipping. However, if firm 2 (here: Microsoft) invests heavily in innovation (US$10bn for ChatGPT), this forces firm 1 (Alphabet) to innovate as well (which is positive for users, by the way). Crucially, because firm 1 has much more data and, hence, a much lower cost of innovation (recall that it was able to announce a competing chatbot only about two months after ChatGPT was launched), it can innovate relatively easily by copying the challenger’s innovation. This is expected to bring the market development back on the “equilibrium path” (see the figures above after market tipping): after one of the spikes where firm 2 invested in innovation, firm 1 regains its dominant position, whereas firm 2 returns to its low market share.

Even if ChatGPT has fantastic features, it is an algorithm that relies on access to data, and the experiment cited above has shown that it is not just any data found on the internet but data about users’ characteristics and preferences, as captured in earlier searches, that is key on this market. Even more, if “generative AI” within a search engine is set up to reply to all kinds of questions, including much richer language, this will only increase the share of rare queries among all search queries. Consequently, the dominant firm’s data advantage will become even more pronounced.

Summing up, the theory of data-driven markets does not suggest that Microsoft’s investment in OpenAI will be a game changer for the search engine industry. Now let us watch how life plays out!


(NB: the best policy remedy for this situation seems to be mandatory data sharing, which will soon become regulatory reality in the EU [8]. Here is a proposal for how to implement mandatory data sharing on data-driven markets.)


[1] https://en.wikipedia.org/wiki/ChatGPT

[2] https://blogs.microsoft.com/blog/2023/01/23/microsoftandopenaiextendpartnership/

[3] https://www.forbes.com/sites/qai/2023/01/27/microsoft-confirms-its-10-billion-investment-into-chatgpt-changing-how-microsoft-competes-with-google-apple-and-other-tech-giants/?sh=496f8d853624

[4] https://www.freethink.com/robots-ai/conversational-ai-google-bard

[5] https://www.wsj.com/articles/microsoft-adds-chatgpt-ai-technology-to-bing-search-engine-11675793525

[6] https://www.reuters.com/technology/bard-vs-chatgpt-what-do-we-know-about-googles-ai-chatbot-2023-02-07/

[7] https://gs.statcounter.com/search-engine-market-share

[8] EU’s Digital Markets Act, Art. 6 (10,11)

Potential competition impacts of Big Tech entry in retail financial services — a consultation response

One of the nice features of the UK’s political landscape is the openness of regulatory and competition authorities to academic input. Recently, the Financial Conduct Authority (FCA) published a discussion paper titled “The potential competition impacts of Big Tech entry and expansion in retail financial services”. In parallel, they organized a webinar introducing the topic as well as several roundtables, where interested parties (big tech, start-ups, academics, consultants, other policy makers) could raise their voices. They also opened a consultation and asked for written comments.

This consultation is academically interesting because (i) retail finance, covering markets for deposits, payments, insurance, and consumer lending, is economically relevant for many banks, financial intermediaries, and very many consumers, and (ii) big tech firms have started to venture into this highly regulated sector, e.g. by offering Apple Pay and Google Pay, but might be planning a more massive market entry. Whoever understands the theory of connected markets may know why. Hence, getting the pros and cons of such potential market entries right is important for the respective regulators, in this case the FCA.

My CCP and UEA School of Economics colleague Andrea Calef, who knows much more about finance than I do, and I tried to contribute our perspective in a response submitted to the FCA. In a nutshell, we identify the key question for the FCA as whether each of these markets is “data-driven” or not. Being “data-driven” implies that a market is subject to data-driven indirect network effects and, hence, very likely to tip in the future. If so, this would negatively impact the innovation incentives of both dominant and smaller firms (and potential entrants) and consequently be very bad for consumers.

In the past, we developed a test for data-drivenness of a market (some details here and here), which would also serve the FCA well to answer this crucial question. It suggests that, if a market is found to be data driven, the regulator should intervene. If it is not data driven, let unhampered competition have its way!

Institutional Economics at the EU Commission

The EU Commission’s Directorate-General for Economic and Financial Affairs (DG ECFIN) is responsible for the implementation of several investment programs of the EU. Among them is the legendary Recovery and Resilience Facility (RRF), a program set up to fight the consequences of the coronavirus crisis across the EU (endowed with > EUR 650 billion!) and to foster the bloc’s major economic policy goals.

To complement existing expertise, DG ECFIN is now collaborating with several Institutional Economists, whose task it is to bring in new approaches, insights, and ideas about institutional design and economic governance, to help in the implementation of these programs.

I have the privilege of being on board this team, putting some existing initiatives, such as LearnIOE (see also here), in touch with policy makers in areas where knowledge about institutions can prove very valuable.

Relatedly, the Annual Research Conference 2023 of the EU Commission will be dedicated to “European Integration, Institutions, and Development.” I am very impressed by these developments and hope the EU can also take a globally leading role beyond the regulation of the digital world, namely in institutional design.

New working paper: “How important are user-generated data for search engine quality? Experimental results”

Online search engines are a key “platform market” and are used by billions of users every day. They offer the basic infrastructure for many other industries and are, therefore, of very high economic, political, and social importance. Over the past few years, an intense policy debate has formed around the question: do some search engines produce better search results because their algorithm is better, or because they have access to more data from past searches? In the former case, it may be best to refrain from interventions in the market in order not to stifle the innovation incentives of successful entrepreneurs (and their potential contestants). In the latter case, mandatory data sharing of user-generated data, a policy that is currently discussed and already contained in the EU’s Digital Markets Act, could trigger innovation and would benefit all users of search engines.

Together with Tobias Klein, Madina Kurmangaliyeva, and Patricia Prufer, I have had the opportunity to study this question empirically (theory is here). In 2020, we conducted an experiment with a small search engine, Cliqz, on behalf of the German Finance Ministry, who wanted to know when a market is “data driven” (results are here).

Now, the core academic paper is available, which reports background, methodology, and results in detail. The results show that the mandatory sharing of user data — a provision in the EU’s Digital Markets Act that is planned to be enforced in 2023 — may be an appropriate remedy on the search engine market: it would likely allow entrants, such as Cliqz, to successfully compete with the incumbent (Google) by enabling Cliqz to provide search results that are also of high quality for rare queries.

Unlike in other contexts, this remedy does not directly harm the incumbent, as it makes use of the non-rivalry of information: the incumbent will still be able to use the same data. Only the exclusivity of data access would be reduced. Consequently, users would benefit. 

A CCP Policy Brief, explaining the study in a nutshell, is here.

If you are interested in how to implement mandatory data sharing on data-driven markets in an economically efficient way that is in line with European competition law, consumer protection law, IP law, and privacy law, read the article linked here.

Go West

I am happy to announce that, as of September 2022, I will move to the University of East Anglia’s School of Economics as Professor in Economics and join the Centre for Competition Policy. I am very much looking forward to the new environment and to contributing to interdisciplinary, policy-relevant research in economics, law, and political science.

CfP: Workshop on Economic Governance and Legitimacy

TILEC, the Tilburg Law and Economics Center, will be organizing a workshop on “Economic Governance and Legitimacy” at Tilburg University, the Netherlands, on May 19-20, 2022.

A foundational question for any economic governance system concerns the legitimacy of its rules, where legitimacy is defined as the degree to which individual citizens believe they have a moral obligation to obey the ruler (Bisin, Rubin, Seror, and Verdier, 2021). Obviously, if (most) people believe the ruler (president, queen, chieftain, dictator, association director, influencer, etc.) has the right to rule, ruling becomes cheaper, quicker, and more efficient. But what are the origins of legitimacy in political, legal, and social systems across the world? Why do some players have a lot of influence and are listened to by many followers, whereas others do not (even if their arguments or proposals may be better)?

During a multidisciplinary and discussion-intensive two-day on-site workshop, we aim to learn from theoretical, empirical, experimental, and conceptual papers addressing the topic from various angles.

Keynote addresses will cross disciplinary boundaries between economics and law (Gillian Hadfield, Toronto), sociology (Sonja Opper, Bocconi), political science (Gérard Roland, Berkeley), and religion studies (Jared Rubin, Chapman).

The deadline for paper submissions is January 16, 2022. Papers should be submitted in PDF format to TILECgovernance@tilburguniversity.edu. More details are in the call for papers and at the Workshop website.

“Consumers’ Privacy Choices in the Era of Big Data” forthcoming in Games and Economic Behavior

In 2013, Edward Snowden shocked the world by revealing large surveillance programs of US intelligence services. In 2012, Sebastian Dengler and I had started to think about privacy from an economic perspective. Of course, we were not the only ones, as this interim review article shows. It turned out to be a hard task to trade off the costs and benefits of privacy against other goods. Therefore, we are very happy that this work has now borne fruit.

Our paper, “Consumers’ Privacy Choices in the Era of Big Data” (working paper version), has just been accepted for publication in Games and Economic Behavior. There, combining Industrial Organization, Behavioral Economics, and insights about digital markets, we start from the observation that recent progress in information technologies provides sellers with detailed knowledge about consumers’ preferences, approaching perfect price discrimination in the limit. We construct a model where consumers with less strategic sophistication than the seller’s pricing algorithm face a trade-off when buying. They choose between a direct, transaction cost-free sales channel and a privacy-protecting, but costly, anonymous channel. We show that the anonymous channel is used even in the absence of an explicit taste for privacy if consumers are not too strategically sophisticated. This provides a micro-foundation for consumers’ privacy choices. Some consumers benefit but others suffer from their anonymization.

Video “Mandatory Data Sharing on Data-driven Markets: Why, When, and How?”

The University of Passau (Germany) dedicated a series of talks to the platform economy this summer. A diverse set of scholars had the opportunity (and time) to browse through various research projects and to point out connections and uncharted territories. In my contribution, now on video, I could tell the full story of the idea to implement mandatory sharing of user-generated data on data-driven markets: from economic theory via the development of a “test for data-drivenness” and its exemplification (through experimental testing with a search engine and in a representative consumer panel) up to the current draft of the Digital Markets Act and our proposal for how to implement mandatory data sharing in practice.