
20th Anniversary Conference of TILEC

For the past 20 years, the Tilburg Law and Economics Center (TILEC) has supported and stimulated academic research on the governance of economic activity. Within its research themes of competition & regulation, innovation, and institutions, TILEC members from law and economics study how interactions between users and firms, organizations and their members, public agencies and regulated entities, courts and litigants, as well as voters and political parties are governed. To mark the occasion, we are organizing an anniversary conference in Tilburg on April 11 & 12, 2024. It is a time to gather outstanding researchers and policy makers in the domains of TILEC's activities, to reflect and celebrate, and to meet old friends and make new ones.

More background, a program, and a registration form are here.

Big Tech’s Entry and Expansion in Retail Financial Services: an AI & Data Story

The British regulator of financial services, the Financial Conduct Authority (FCA), is highly interested in understanding the potential competition impacts of entry by Big Tech firms (especially Alphabet, Meta, Apple, and Microsoft) into retail financial services. In other words, what can high-powered AI and masses of data about users' preferences and characteristics, paired with very deep pockets, do to existing banks and other financial intermediaries (and to fintech start-ups)?

The FCA published a first discussion paper on the state of the market in fall 2022 and opened a public consultation, to which I responded together with my UEA colleague Andrea Calef a year ago.

After assembling the feedback received, the FCA updated the discussion paper and asked specific open questions, such as:

  • To what extent does this data asymmetry hold between Big Tech firms and financial services firms in retail financial services markets?
  • Do you expect that data asymmetry to become more significant over time? If so, how?
  • Should wholesale markets be scrutinized as well by the regulator, next to retail markets?
  • Where is the evidence for what we think we know?

Now, as part of a team of researchers at the Centre for Competition Policy, we have reacted and tried to answer the questions posed.

It is great to see how open-minded the FCA is. We now very much hope that it will take the feedback on board and move forward to action. Economically, the big problem is that retail financial markets are most likely "data-driven markets." If that is true, these markets will most likely tip and be dominated by the firm that has the most data about users' preferences and characteristics and draws the most valuable insights from it. Without proper regulation, which we discuss in the response, prospects look dim for traditional financial services firms, in the UK and elsewhere.

“Microtargeting, Voters’ Unawareness, and Democracy” to be published in Journal of Law, Economics, and Organization

Together with Wieland Müller and Freek van Gils, I have studied the consequences of digitization and datafication for democratic elections since early 2016. We were inspired by news reports right after the Brexit referendum and the 2016 U.S. presidential election, which claimed that those votes had been influenced, if not tilted, by misinformation spread via social media; we first asked whether and how this is possible in theory. Notably, our thinking went, if voters no longer trust election outcomes, they may also lose trust in democracy as a political system in the first place. Indeed, public trust has declined strongly over the past 20 years, which at least correlates with the rise of social media, the main source of political information for many voters (proving causality is more difficult).

Now, this first paper, titled "Microtargeting, Voters' Unawareness, and Democracy," is forthcoming in the Journal of Law, Economics, and Organization. There, we study two recent technological developments that have raised concerns about threats to democracy because of their potential to distort election outcomes: (a) data-driven voter research enabling political microtargeting, and (b) growing news consumption via social media and news aggregators that obfuscate the origin of news items, leaving voters unaware of a news sender's identity. We provide a theoretical framework in which we can analyze the effects that microtargeting by political interest groups and unawareness have on election outcomes, in comparison to "conventional" news reporting. We show which voter groups suffer from which technological development, (a) or (b). While both microtargeting and unawareness have negative effects on voter welfare, we show that only unawareness can flip an election. Our model framework allows a theory-based discussion of policy proposals, such as banning microtargeting or requiring news platforms to signal the political orientation of a news item's originator.

The second paper by this co-author team, which is described here and features a large lab experiment testing the theory empirically, is still under review.

Director of Tilburg Law and Economics Center

TILEC, the Tilburg Law and Economics Center, is a fascinating and very successful interdisciplinary research center at Tilburg University. Gathering more than 40 legal scholars and economists and over 50 extramural fellows mainly working at other organizations, TILEC was named a “global leader in its field of law and economics” by an external assessment committee.

I am very happy to announce that, as of September 2023, I have taken over the TILEC directorship from Panos Delimatsis, who performed the job over an impressive 12-year period. Here is my take on the Center, as published on TILEC’s website:

“TILEC’s main purpose is to organize interdisciplinary academic events related to the governance of economic activity in the digital age. It serves as a platform of exchange, especially between lawyers and economists. TILEC also strives to support policy-relevant, well-founded research by its members and to connect TILEC members with other researchers and decision makers in politics, firms, public agencies, and society at large. I am very grateful to be connected to this globally renowned research center, which has attracted attention from highest-level policy makers, judges, regulators, and academics. Truly interdisciplinary research is the unique feature of the Center: it enriches a scholar’s life to see different perspectives on a ‘well-known’ subject, and it keeps us humble and awake to know that there is always somebody in the room who knows more. If you feel similarly, please do not be shy but get in contact with us!”

Roundtable on international effects of the EU’s Digital Services Act (video)

The European Commission recently held a large conference on the Digital Services Act (DSA), where they gathered perspectives on this monumental piece of legislation, the sister of the Digital Markets Act. One of the sessions asked: “What are the implications of the EU’s new social media rules for the rest of the world?” As envoy of the EU Observatory on the Online Platform Economy, I participated in the roundtable alongside delegates from the UN, UNESCO, LIRNEasia (a Sri Lankan think tank), and the EU’s “ambassador” to Silicon Valley. Here is the video of the session:

New working paper: “Social Media and Democracy: Experimental Results”

Western-style representative democracy is under pressure. Both US President Biden and EU-Commission President von der Leyen, as well as US Secretary of State Blinken and EU-Commission Vice President Vestager recently declared: “Democracies must deliver.” A key problem is the deteriorating trust of many voters in political institutions and the democratic system itself. Correlational evidence suggests that the information distributed via social media platforms contributes to this problem. Two technology-rooted characteristics of social media appear to be main drivers of the issues: political interest groups have the ability (i) to microtarget news based on individual-level voter data and (ii) to obfuscate their identities, which can be exploited to spread disinformation.

Social media platforms and legislators on both sides of the Atlantic have started to take action to mitigate the perceived negative effects. Specifically, the discussions concern a ban on microtargeting and mandatory disclosure of political interest groups'/advertisers' interests. For instance, the EU's Digital Services Act (DSA) mandates Very Large Online Platforms to fight disinformation: they have to deliver public annual risk-assessment and risk-mitigation reports, including annual audits by independent parties.

However, a solid empirical foundation for the suggested interventions is missing.

Together with Freek van Gils and Wieland Müller, in the new working paper "Social Media and Democracy: Experimental Results" I develop, theoretically analyze, and experimentally test a series of games to study the effects of two interventions, a ban on microtargeting and mandatory disclosure of senders' interests, on individual voting behavior and election outcomes. The games are implemented in a laboratory setting that follows key features of a social media environment. This approach can mimic the behavior of political interest groups and voters in a stylized and framing-free environment, and thereby "look" behind the curtain of social media firms' proprietary data. It also suggests causal relationships in line with the evidence, which are necessary to inform policy implications.

We find that mandatory disclosure of interests, with or without a microtargeting ban, increases the efficiency of aggregate voter decision-making. However, only the combination of disclosure of interests and a microtargeting ban counteracts election manipulation. The implementation of a microtargeting ban without disclosure requirements has adverse effects. The latter result, in particular, should be of interest to legislators and regulators.

A short policy briefing is here.

Annual Research Conference of EU Commission on Institutions

On November 13-15, 2023, the European Commission will organize a large conference, called Annual Research Conference (ARC 2023), which aims at bringing together academic research(ers), Commission staff members, and policy makers. This year’s conference should be especially interesting for researchers studying institutions and organizations because:

  • The topic is “European integration, institutions and development.”
  • The EU Commission took the initiative and expressed strong interest in getting in touch with the research community on Institutional and Organizational Economics. They recognize a deficit of knowledge in these areas and are very open to scholarly and policy-oriented input. This is reflected by "satellite events" around the main conference day, which offer researchers, Commission staff members, and policy makers various forms of interaction.
  • Some selected papers will be published in a special issue of the Journal of Comparative Economics.
  • There will be a keynote lecture by Tim Besley (LSE).
  • A number of highly knowledgeable scholars of institutions, all with an eye on policy making, are involved in the selection of papers.

The submission deadline for research papers is May 31, 2023. A full call for papers is here.

“Membership, Governance, and Lobbying in Standard-Setting Organizations” forthcoming in Research Policy

Standard-setting organizations (SSOs) are collectively self-governed industry associations, formed by innovators and implementers. They are a key organizational form for agreeing on and managing technical standards, and they form the foundation of many technological and economic sectors. Together with Clemens Fiedler and Maria Larrain, I develop a model of endogenous SSO participation that highlights different incentives for joining (namely licensing, learning, and implementation). We analyze equilibrium selection and conduct comparative statics for a policy parameter related to implementer-friendly Intellectual Property Rights policies or, alternatively, minimum viable implementation. The results can reconcile existing evidence, including the fact that many SSO member firms are small. The extent of statutory participation of implementers in SSO control has an inverted U-shaped effect on industry profits and welfare.

The former TILEC Discussion Paper is now forthcoming in Research Policy. More background is here.

ChatGPT vs. Bard in search engines: good to have a theory

[Slight update below, 18 August 2023]*

On November 30, 2022, OpenAI released its chatbot ChatGPT [1]. Many users were immediately thrilled by the bot's unprecedented quality, and some firms saw strategic opportunities: on January 23, 2023, Microsoft announced a deepening of its partnership with OpenAI, ChatGPT's developer [2], and confirmed a US$10bn investment soon after [3].

While the chatbot has many (potential) applications, an obvious one is to use it in search engines in order to improve the quality of search results. Alphabet, the owner of the leading search engine, Google, had the same thought: it reacted on February 6 by announcing Bard, "a conversational AI similar to ChatGPT" [4]. Microsoft responded immediately, confirming on February 7 that it is indeed planning to add ChatGPT's functionality to its own search engine, Bing [5].

So much for the empirics until today. Naturally, many companies, investors, decision makers in politics and society, and a zillion users are asking themselves what will happen now [6]. The scene looks like a clash of titans. But is it really?

While I cannot comment on the many other battlefields the two chatbots will compete on, for the (economically and societally highly relevant) search engine market (and related “data-driven markets”) we have a scientific theory, at least. The idea and mechanism of “Competing with Big Data” (joint work with Christoph Schottmueller) is described in earlier blog posts and was published as an academic journal article in 2021.

The theory of data-driven markets

The theory studies a market with two (main) service providers. We call such a market "data-driven" if (i) the interaction between users and a provider works via computers, which enables a provider to save many characteristics (e.g., IP address) and decisions of users (e.g., where to click) in search log files, and (ii) this information about users' preferences and characteristics is a valuable input into the provider's innovation process (e.g., because it reveals which extra product features users really want). In economic terms: the marginal cost of innovation must be decreasing in the amount of user-generated data (which is a function of demand).

If a market is data-driven, this has tremendous consequences. Look at the left panel of the following figure, which is taken from the academic article. It displays a numerical simulation of (equilibrium) investment decisions of both firms over time (red dots for firm 1 and black stars for firm 2) and the resulting market shares (blue curve: the vertical value 0.0 means 50% market share of each firm, 1.0 means firm 1 has 100% market share and firm 2 has 0%).

At the start of the simulation, we assumed 50:50 market shares. Firm 1 had only one advantage: it could invest in innovation first, which increases its product's quality and, hence, its market share. Only in period 2 could firm 2 innovate itself (and so forth, i.e., firm 1 invests in odd periods, firm 2 in even periods; this alternating order has game-theoretic reasons; for details, see the paper).

What the figure shows is that, for several periods, both firms invest heavily in innovation: they compete for the market. However, due to the steadily decreasing market share of firm 2 (the blue curve on average goes up), its marginal innovation cost increases. Consequently, it invests less and less in innovation (the black stars decrease). By contrast, firm 1 benefits from its first-mover advantage and, with more data and lower marginal cost of innovation than its competitor, can innovate heavily (the red dots increase). Crucially, this process of intense innovation by firm 1 stops once it has reached a very high market share (the blue curve approaches 1.0). By then, firm 2 has given up (the black stars remain at a low level). Firm 1 dominates the market, constantly renewing its stream of user-generated data and enjoying monopoly profits. Conveniently, it no longer even needs to spend large sums on innovation (the red dots also sink and then remain at a low level).

The right panel of the figure captures a situation where future profits matter even more. Here, firm 1 is incentivized to innovate until firm 2 is literally pushed out of the market (0% market share) and stops innovating altogether. As a best response, firm 1 then also stops innovating (red dots and black stars remain at zero). Both panels show that, after the initial phase of competing for the market, firm 1 virtually monopolizes it: economists say the market has tipped.
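A minimal numerical sketch can illustrate this tipping mechanism. To be clear, the following Python snippet is not the model from the paper: the functional forms (marginal innovation cost inversely proportional to a firm's data stock, proxied by market share, and a logistic market-share response to the quality gap) and all parameter values are invented purely for illustration.

```python
import math

# Illustrative sketch of a tipping "data-driven" market (NOT the model from
# the paper: functional forms and parameters are invented for illustration).
# Firms alternate investment moves; a firm's marginal cost of innovation
# falls with its stock of user-generated data, proxied by its market share.

def simulate(periods=30, benefit=3.0, cost_scale=1.0):
    share = 0.5            # firm 1's market share (firm 2 has 1 - share)
    quality = [0.0, 0.0]   # accumulated product qualities
    history = []
    for t in range(periods):
        i = t % 2                          # firm 1 moves first, then they alternate
        data = share if i == 0 else 1.0 - share
        mc = cost_scale / max(data, 1e-9)  # more user data -> cheaper innovation
        # rule of thumb: invest only while the (fixed) benefit exceeds marginal cost
        invest = max(0.0, benefit - mc) / benefit
        quality[i] += invest
        # market share follows the quality gap (logistic response)
        share = 1.0 / (1.0 + math.exp(quality[1] - quality[0]))
        history.append((t, i, invest, share))
    return history

history = simulate()
print(f"firm 1's final market share: {history[-1][3]:.4f}")  # close to 1: the market has tipped
```

Even in this toy version, the qualitative pattern from the figures emerges: firm 2's investments hit zero within a few periods because its shrinking data stock makes innovation too expensive, while firm 1, innovating ever more cheaply, takes over (almost) the entire market.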

Key intuition

Before we go back to the case of ChatGPT vs. Bard, it is crucial to understand why firm 2 does not innovate once the market has tipped. Assume firm 2 has a great idea for a better product. Its problem is its very low market share, which implies that its algorithm has access to relatively little user information. Consequently, its marginal cost of innovation is high. If it does roll out its great idea, for which it will need deep financial pockets, it can convince some users and gain a bit of market share (see the left panel after market tipping/period t=12, where the blue curve decreases a bit for one period). Firm 2's problem is that firm 1 also learns about the great idea and will try to copy it (i.e., invest in innovation), however at a much lower cost of innovation because of its advantage in valuable user-generated data. Consequently, after a brief fight, firm 2 will be back at its very low market share but will still have to pay back the debt for the attempted innovation leap. Therefore, in equilibrium (i.e., in a stable situation), firm 2 (or any other firm considering market entry) is deterred from attempting to innovate heavily. This explains the low degree of innovation in a tipped data-driven market.

Back to the case!

One interpretation of the developments around ChatGPT and Bard sketched above is that Google has just enjoyed its super-dominant position on the search engine market (with a global market share of 92.9% over the past 12 months [7]) for a very long time. Alphabet may have innovated a lot elsewhere but improvements in Google’s organic search result quality over the past decade, or so, have been relatively modest, while its profits and market capitalisation soared.

Evidence: In an experiment with a small search engine, we recently showed that, for popular search queries where the algorithms of all search engines have sufficient user-generated data, Google's quality is not assessed as better than Bing's, or even than that of a small search engine (Cliqz), by human assessors (the assessors did not know which engine the search results came from). Only for rare queries, where Google has much more user-generated data to train its algorithm on what users are looking for, could the small search engine no longer compete. Unfortunately for the small search engine, 74% of the traffic in our data consisted of rare queries, which may explain why Cliqz went out of business shortly after our experiment in April 2020.

That paper also showed that the search engine market is data-driven and that, hence, the theory of data-driven markets explained above is applicable.

Bringing the theory to the case

Google's relative long-term idleness in search engine innovation is no surprise in light of the theory. What is a surprise is that Microsoft, whose search engine Bing has an underwhelming 3.03% market share globally [7], is investing heavily in this data-driven market (in contrast to the "key intuition" above, which explains why firm 2 should not try to innovate). Apparently, Microsoft thinks that ChatGPT's technology is so revolutionary that it can overcome its huge disadvantage in access to user-generated data. By reacting so quickly, Alphabet to some extent even lent credence to this interpretation.

These decisions, however, can also be interpreted from the perspective of the theory of data-driven markets, namely as what economists call "off-equilibrium behavior": the "key intuition" above predicts no innovation attempts after market tipping. However, if firm 2 (here: Microsoft) invests heavily in innovation (US$10bn for ChatGPT), this forces firm 1 (Alphabet) to innovate as well (which is positive for users, by the way). Crucially, because firm 1 has much more data and, hence, much lower cost of innovation (recall that Alphabet was able to announce a competing chatbot only about two months after ChatGPT was launched), it can innovate relatively easily by copying the challenger's innovation. This is expected to bring the market development back onto the "equilibrium path" (see the figures above after market tipping): after one of the spikes where firm 2 invested in innovation, firm 1 regains its dominant position, whereas firm 2 returns to its low market share.

Even if ChatGPT has fantastic features, it is an algorithm that relies on access to data, and the experiment cited above has shown that the key input on this market is not just any data found on the internet, but data about users' characteristics and preferences as captured in earlier searches. Moreover, if "generative AI" within a search engine is set up to reply to all kinds of questions, including much richer language, this will only increase the share of rare queries among all search queries. Consequently, the dominant firm's data advantage will become even more pronounced.

Summing up, the theory of data-driven markets does not suggest that Microsoft’s investment in OpenAI will be a game changer for the search engine industry. Now let us watch how life plays out!

——————————–

(NB: the best policy remedy for this situation seems to be mandatory data sharing, which will soon become regulatory reality in the EU [8]. Here is a proposal for how to implement mandatory data sharing on data-driven markets.)

*[Update]: On August 18, 2023, i.e., 5+ months after this blog post was published, other industry observers agreed with this assessment.

References

[1] https://en.wikipedia.org/wiki/ChatGPT

[2] https://blogs.microsoft.com/blog/2023/01/23/microsoftandopenaiextendpartnership/

[3] https://www.forbes.com/sites/qai/2023/01/27/microsoft-confirms-its-10-billion-investment-into-chatgpt-changing-how-microsoft-competes-with-google-apple-and-other-tech-giants/?sh=496f8d853624

[4] https://www.freethink.com/robots-ai/conversational-ai-google-bard

[5] https://www.wsj.com/articles/microsoft-adds-chatgpt-ai-technology-to-bing-search-engine-11675793525

[6] https://www.reuters.com/technology/bard-vs-chatgpt-what-do-we-know-about-googles-ai-chatbot-2023-02-07/

[7] https://gs.statcounter.com/search-engine-market-share

[8] EU’s Digital Markets Act, Art. 6 (10,11)

Potential competition impacts of Big Tech entry in retail financial services — a consultation response

One of the nice features of the UK's political landscape is the openness of regulatory and competition authorities to academic input. Recently, the Financial Conduct Authority (FCA) published a discussion paper titled "The potential competition impacts of Big Tech entry and expansion in retail financial services." In parallel, they organized a webinar introducing the topic as well as several roundtables, where interested parties (big tech, start-ups, academics, consultants, other policy makers) could raise their voices. They also opened a consultation and asked for written comments.

This consultation is academically interesting because (i) retail finance, covering markets for deposits, payments, insurance, and consumer lending, is economically relevant for many banks and financial intermediaries and for very many consumers, and (ii) big tech firms have started to venture into this highly regulated sector, e.g. by offering Apple Pay and Google Pay, but might be planning a more massive market entry. Whoever understands the theory of connected markets may know why. Hence, getting the pros and cons of such potential market entries right is important for the respective regulators, in this case the FCA.

Together with my CCP and UEA School of Economics colleague Andrea Calef, who knows much more about finance than I do, I tried to contribute our perspective in a response submitted to the FCA. In a nutshell, we identify as the key question for the FCA whether each of these markets is "data-driven" or not. Being data-driven implies that a market is subject to data-driven indirect network effects and, hence, very likely to tip in the future. If so, this would negatively impact the innovation incentives of both dominant and smaller firms (and potential entrants) and consequently be very bad for consumers.

In the past, we developed a test for the data-drivenness of a market (some details here and here), which would also serve the FCA well in answering this crucial question. It suggests that, if a market is found to be data-driven, the regulator should intervene. If it is not data-driven, let unhampered competition have its way!