Session 1: Empirical Implementation of Theoretical Models of Strategic Interaction and Dynamic Behavior
- Andres Santos, UCLA
- Azeem Shaikh, University of Chicago
- Frank Wolak, Stanford University
In a departure from previous years of this session on “Economic Theory-Based Models,” this year’s session will focus on the econometric methodology underlying theory-based empirical models in empirical Industrial Organization (IO), Labor Economics, Energy and Environmental Economics, Health Economics, and the Economics of Education. Topics for papers include: (1) methods for estimating and drawing inferences about partially identified econometric models, (2) methods for identifying and estimating dynamic single-agent models, (3) methods for identifying and estimating static and dynamic models of non-cooperative games, (4) nonparametric and semiparametric methods for estimating economic primitives that impose shape and homogeneity restrictions implied by economic theory, and (5) methods for estimating empirically relevant features of complex models of economic behavior. Papers may be purely methodological or have a significant empirical component. Contributions are welcome from scholars in econometric theory or any of the above empirical fields. The unifying theme is a theoretical model of an economic interaction together with an empirical implementation of that model. A major goal of the session is to encourage cross-field interaction between researchers in econometric theory and researchers undertaking theory-based empirical research in all fields of applied microeconomics.
In This Session
Monday, July 12, 2021
9:00 am - 9:45 am PDT
Deep Learning for Individual Heterogeneity
We propose a methodology for effectively modeling individual heterogeneity using deep learning while still retaining the interpretability and economic discipline of classical models. We pair a transparent, interpretable modeling structure with rich data environments and machine learning methods to estimate heterogeneous parameters based on potentially high-dimensional or complex observable characteristics. Our framework is widely applicable, covering numerous settings of economic interest. We recover, as special cases, well-known examples such as average treatment effects and parametric components of partially linear models. However, we also seamlessly deliver new results for diverse examples such as price elasticities, willingness-to-pay, and surplus measures in choice models, average marginal and partial effects of continuous treatment variables, fractional outcome models, count data, heterogeneous production function components, and more. Deep neural networks are particularly well-suited to structured modeling of heterogeneity in economics: we show how the network architecture can be easily designed to match the global structure of the economic model, yielding novel methodology for deep learning as well as, more formally, improved rates of convergence. Our results on deep learning have consequences for other structured modeling environments and applications, such as additive models or other varying coefficient models. Our inference results are based on an influence function we derive, which we show to be flexible enough to encompass all settings with a single, unified calculation, removing any requirement for case-by-case derivations. The usefulness of the methodology in economics is shown in two empirical applications: we study the response of 401(k) participation rates to firm matching and the impact of prices on subscription choices for an online service. Extensions of the main ideas to instrumental variables and multinomial choices are shown.
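To make the structured-heterogeneity idea concrete, here is a minimal sketch (our construction, not the authors' code) in which a deep network outputs observation-specific parameters that then enter a fixed parametric form, here a partially linear specification E[y | x, t] = alpha(x) + beta(x)*t; the architecture, simulated data, and training details are illustrative placeholders.

```python
# Hypothetical sketch (not the authors' code): the network body learns a
# representation of x, and two small heads output the heterogeneous
# parameters alpha(x) and beta(x), which enter the known economic structure
#   E[y | x, t] = alpha(x) + beta(x) * t
import torch
import torch.nn as nn

class StructuredHeterogeneity(nn.Module):
    def __init__(self, d_x, hidden=64):
        super().__init__()
        # shared representation of the observable characteristics x
        self.body = nn.Sequential(
            nn.Linear(d_x, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # heterogeneous intercept alpha(x) and slope beta(x)
        self.alpha = nn.Linear(hidden, 1)
        self.beta = nn.Linear(hidden, 1)

    def forward(self, x, t):
        h = self.body(x)
        a, b = self.alpha(h), self.beta(h)
        return a + b * t, a, b  # structured prediction plus the parameters

# toy training loop on simulated data
torch.manual_seed(0)
n, d_x = 1000, 5
x = torch.randn(n, d_x)
t = torch.randn(n, 1)
y = 1.0 + x[:, :1] + (0.5 + x[:, 1:2]) * t + 0.1 * torch.randn(n, 1)

model = StructuredHeterogeneity(d_x)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    y_hat, _, _ = model(x, t)
    loss = ((y - y_hat) ** 2).mean()
    loss.backward()
    opt.step()
```

Keeping the economic structure in the final layer is what preserves interpretability: the fitted alpha(x) and beta(x) can be read off the network and used for downstream objects such as average effects.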
9:45 am - 10:00 am PDT
Chat with Max Farrell
10:00 am - 10:45 am PDT
Inference on Average Welfare with High-Dimensional State Space
10:45 am - 11:00 am PDT
Chat with Vira Semenova
11:00 am - 12:00 pm PDT
Mid-Day Break
12:00 pm - 12:45 pm PDT
An Instrumental Variable Approach to Dynamic Models
We present a new class of methods for identification and inference in dynamic models with serially correlated unobservables, which typically imply that state variables are econometrically endogenous. In the context of Industrial Organization, these state variables often reflect econometrically endogenous market structure. We propose the use of Generalized Instrumental Variables methods to identify those dynamic policy functions that are consistent with instrumental variable (IV) restrictions. Extending popular “two-step” methods, these policy functions then identify a set of structural parameters that are consistent with the dynamic model, the IV restrictions, and the data. We provide computed illustrations for both single-agent and oligopoly examples. We also present a simple empirical analysis that, among other things, supports the counterfactual study of an environmental policy entailing an increase in sunk costs.
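A stylized way to read the two-step structure described above (our notation, which need not match the paper's): the first step collects the dynamic policy functions compatible with the IV restriction and the data, and the second step collects the structural parameters under which some such policy is optimal.

```latex
a_{it} = \sigma(s_{it}, \nu_{it}), \qquad \nu_{it} \perp Z_{it},
\qquad
\Sigma_I = \bigl\{\sigma : \sigma \text{ is consistent with the IV restriction and the law of } (a_{it}, s_{it}, Z_{it})\bigr\},
\qquad
\Theta_I = \bigl\{\theta : \text{some } \sigma \in \Sigma_I \text{ solves the dynamic problem at } \theta\bigr\}.
```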
12:45 pm - 1:00 pm PDT
Chat with Giovanni Compiani
1:00 pm - 1:45 pm PDT
Counterfactual Analysis for Structural Dynamic Discrete Choice Models
Discrete choice data allow researchers to recover differences in utilities, but these differences may not suffice to identify policy-relevant counterfactuals of interest. This fundamental tension is important for dynamic discrete choice models because agents' behavior depends on value functions, which require utilities in levels. We propose a unified approach to investigate how much one can learn about counterfactual outcomes under mild assumptions, for a large and empirically relevant class of counterfactuals. We derive analytical properties of sharp identified sets under alternative model restrictions and develop a valid inference approach based on subsampling. To aid practitioners, we propose computationally tractable procedures that bypass model estimation and directly obtain the identified sets for the counterfactuals and the corresponding confidence sets. We illustrate, in Monte Carlo simulations as well as in an empirical exercise on firms' export decisions, the informativeness of the identified sets, and we assess the impact of (common) model restrictions on the results.
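A standard way to see the tension the abstract describes (our notation): static choice probabilities pin down only utility differences, while the continuation values in the dynamic model depend on utilities in levels.

```latex
P(j \mid x) = \frac{e^{u_j(x)}}{\sum_{k} e^{u_k(x)}}
\;\Longrightarrow\; u_j(x) - u_J(x) \ \text{identified, but } u_j(x) \ \text{not};
\qquad
v_j(x) = u_j(x) + \beta \, E\bigl[V(x') \mid x, j\bigr].
```

Counterfactuals that operate through the value function V therefore require more than utility differences, which is why they may be only partially identified.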
1:45 pm - 2:00 pm PDT
Chat with Eduardo Souza-Rodrigues
Tuesday, July 13, 2021
9:00 am - 9:45 am PDT
Estimation of (Static or Dynamic) Games under Equilibrium Multiplicity
We propose a multiplicity-robust estimation method for (static or dynamic) games. The method allows for distinct behaviors and strategies across markets by treating market-specific behaviors as correlated latent variables, with their conditional probability measure treated as an infinite-dimensional nuisance parameter. Instead of solving the intermediate problem, which requires optimization over the infinite-dimensional set, we consider the equivalent dual problem, which entails optimization over only a finite-dimensional Euclidean space. This property allows for a practically feasible characterization of the identified region for the structural parameters. We apply the estimation method to the newspaper market previously studied in Gentzkow et al. (2014) to characterize the identified region of marginal costs.
9:45 am - 10:00 am PDT
Chat with Yuya Sasaki
10:00 am - 10:45 am PDT
Disequilibrium Play in Tennis
Are the serves of the world’s best tennis pros consistent with the theoretical prediction of Nash equilibrium in mixed strategies? We analyze their serve direction choices (to the returner’s left, right or body) with data from an online database called the Match Charting Project. Using a new methodology, we test and decisively reject a key implication of a mixed strategy Nash equilibrium, namely, that the probability of winning a service game is the same for all serve directions. We also use dynamic programming (DP) to numerically solve for the best-response serve strategies to probability models of service game outcomes estimated for individual server-returner pairs, such as Novak Djokovic serving to Rafael Nadal. We show that for most elite pro servers, the DP serve strategy significantly increases their service game win probability compared to the mixed strategies they actually use, which we estimate using flexible reduced-form logit models. Stochastic simulations verify that our results are robust to estimation error.
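As a toy version of the dynamic-programming step (our illustration, not the paper's code or estimates), the sketch below solves a single service game by backward induction for the direction choice that maximizes the game-win probability; the point-win probabilities are made-up placeholders standing in for the estimated server-returner models.

```python
# Hypothetical illustration in the spirit of the abstract: backward induction
# over the score states of one service game.  The point-win probabilities by
# serve direction are assumed placeholders; in the paper they come from models
# estimated for a specific server-returner pair.
from functools import lru_cache

P_WIN = {"left": 0.66, "body": 0.62, "right": 0.69}  # assumed point-win probabilities

def deuce_value():
    """Game-win probability from deuce.  With score-independent point
    probabilities the problem is stationary at deuce, so the server always
    picks the highest-probability direction and wins the game from deuce
    with probability p^2 / (p^2 + (1 - p)^2)."""
    p = max(P_WIN.values())
    return p * p / (p * p + (1.0 - p) * (1.0 - p))

@lru_cache(maxsize=None)
def value(s, r):
    """Server's game-win probability from the pre-deuce score (s, r) when
    the serve direction is chosen optimally at every point."""
    if s == 3 and r == 3:
        return deuce_value()
    if s == 4:
        return 1.0      # server has won the game
    if r == 4:
        return 0.0      # returner has won the game
    return max(p * value(s + 1, r) + (1 - p) * value(s, r + 1)
               for p in P_WIN.values())

best_first = max(P_WIN, key=lambda d: P_WIN[d] * value(1, 0)
                 + (1 - P_WIN[d]) * value(0, 1))
print(f"DP game-win probability: {value(0, 0):.3f}, first-point direction: {best_first}")
```

With score-independent probabilities the optimal rule collapses to always serving in the highest-probability direction; in the paper the estimated probabilities presumably vary with the state of the game, which is what makes the DP strategy differ from a constant rule and from the observed mixed strategies.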
10:45 am - 11:00 am PDT
Chat with John Rust
11:00 am - 12:00 pm PDT
Mid-Day Break
12:00 pm - 12:45 pm PDT
Identifying Return to Schooling in Constrained School Choice
A growing number of educational institutions use centralized assignment mechanisms to allocate students into programs in a way that reflects student preferences and institution priorities. When the choice set of students is constrained, being truthful is no longer a weakly dominant strategy for students, so they may decide to be strategic. In this paper, we provide an identification approach to construct informative sharp bounds on the effect of the program-institution of graduation on later outcomes when students are not necessarily truth-tellers but are still playing undominated strategies.
12:45 pm - 1:00 pm PDT
Chat with Ismael Mourifie
1:00 pm - 1:45 pm PDT
On the Use of Outcome Tests for Detecting Bias in Decision Making
The decisions of judges, lenders, journal editors, and other gatekeepers often lead to disparities in outcomes across affected groups. An important question is whether, and to what extent, these group-level disparities are driven by relevant differences in underlying individual characteristics, or by biased decision makers. Becker (1957) proposed an outcome test for bias leading to a large body of related empirical work, with recent innovations in settings where decision makers are exogenously assigned to cases and vary progressively in their decision tendencies. We carefully examine what can be learned about bias in decision making in such settings. Our results call into question recent conclusions about racial bias among bail judges, and, more broadly, yield four lessons for researchers considering the use of outcome tests of bias. First, the so-called generalized Roy model, which is a workhorse of applied economics, does not deliver a logically valid outcome test without further restrictions, since it does not require an unbiased decision maker to equalize marginal outcomes across groups. Second, the more restrictive "extended" Roy model, which isolates potential outcomes as the sole admissible source of analyst-unobserved variation driving decisions, delivers both a logically valid and econometrically viable outcome test. Third, this extended Roy model places strong restrictions on behavior and the data generating process, so detailed institutional knowledge is essential for justifying such restrictions. Finally, because the extended Roy model imposes restrictions beyond those required to identify marginal outcomes across groups, it has testable implications that may help assess its suitability across empirical settings.
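A stylized formulation of the marginal outcome test at issue (our notation, not the paper's): suppose the decision maker applies a group-specific threshold to a latent index, and the test compares outcomes of marginal cases across groups.

```latex
D = \mathbf{1}\{V \ge c_r\}, \quad r \in \{a, b\};
\qquad \text{unbiasedness: } c_a = c_b;
\qquad \text{outcome test: compare } E[\,Y \mid V = c_a,\, R = a\,] \ \text{and}\ E[\,Y \mid V = c_b,\, R = b\,].
```

The abstract's distinction then amounts to what is allowed to enter V: in the generalized Roy model V can reflect unobserved benefits and costs unrelated to the potential outcome Y, so equal thresholds need not equalize marginal outcomes, whereas the extended Roy model ties the unobserved variation in V to the potential outcomes and restores the logic of the test.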
1:45 pm - 2:00 pm PDT
Chat with Ivan Canay
Wednesday, July 14, 2021
9:00 am - 9:45 am PDT
Discrete Choice Models with Heterogeneous Preferences and Consideration
This paper proposes a discrete choice model in which decision makers differ both in their preferences and in the products they consider – their consideration sets. The paper shows how to point identify both the preference distribution and the consideration set formation mechanism under a wide range of assumptions and consideration set formation mechanisms. In particular, we show that identification can be attained even when consideration sets change with preferences as well as with (many of the) product characteristics. We compare our model with the standard mixed logit model and the pure characteristics demand model. We illustrate the properties of our approach and its computational advantages in large-scale simulations and in an empirical application.
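For intuition about the objects involved, here is a minimal sketch under assumptions that are ours, not the paper's: each product is considered independently with some probability and choice is logit within the realized consideration set. The paper's framework is much richer, allowing consideration to depend on preferences and product characteristics, and preference heterogeneity would be integrated over on top of this.

```python
# Hypothetical sketch (not the paper's estimator): choice probabilities when
# each inside good j is considered independently with probability phi[j] and
# choice is logit among the considered goods plus an always-available outside
# option.  All parameter values are illustrative.
from itertools import combinations
import numpy as np

def choice_probs(v, phi):
    """v: mean utilities of the J inside goods (outside good normalized to 0);
    phi: probability each inside good enters the consideration set."""
    J = len(v)
    probs = np.zeros(J + 1)
    for size in range(J + 1):
        for C in combinations(range(J), size):
            # probability that exactly the set C is considered
            pC = np.prod([phi[j] if j in C else 1 - phi[j] for j in range(J)])
            ev = np.exp(np.array([v[j] for j in C] + [0.0]))  # outside always available
            within = ev / ev.sum()                            # logit within the set
            for pos, j in enumerate(C):
                probs[j] += pC * within[pos]
            probs[J] += pC * within[-1]
    return probs

print(choice_probs(v=np.array([1.0, 0.5, -0.2]), phi=np.array([0.9, 0.6, 0.3])))
```

Mixing this expression over a distribution of (v, phi) would give the heterogeneous-preference analogue.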
9:45 am - 10:00 am PDT
Chat with Francesca Molinari
10:00 am - 10:45 am PDT
Spectral and Post-Spectral Estimators for Grouped Panel Data Models
In this paper, we develop spectral and post-spectral estimators for grouped panel data models. Both estimators are consistent in the asymptotics where the number of observations N and the number of time periods T simultaneously grow large. In addition, the post-spectral estimator is root-NT consistent and asymptotically normal with mean zero under the assumption of well-separated groups, even if T grows much more slowly than N. The post-spectral estimator therefore has theoretical properties similar to those of the grouped fixed-effects estimator developed by Bonhomme and Manresa (2015). In contrast to the grouped fixed-effects estimator, however, our post-spectral estimator is computationally straightforward.
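A generic spectral-grouping sketch in the spirit of the abstract (not the paper's exact estimator; the simulation design and tuning choices are ours): cluster units with k-means on the leading singular vectors of the outcome matrix, then re-estimate group-specific effects given the estimated groups as the "post" step.

```python
# Generic sketch, not the paper's estimator: spectral embedding of the panel,
# k-means grouping, then group-specific time effects re-estimated on the
# estimated groups.  Data are simulated.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
N, T, G = 200, 20, 3

# simulate grouped panel data: y_it = lambda_{g(i), t} + noise
true_groups = rng.integers(0, G, size=N)
group_effects = rng.normal(size=(G, T)) * 2.0
Y = group_effects[true_groups] + rng.normal(size=(N, T))

# step 1 (spectral): embed units via the top-G left singular vectors of Y
U, S, Vt = np.linalg.svd(Y, full_matrices=False)
embedding = U[:, :G] * S[:G]

# step 2: k-means on the spectral embedding to recover group membership
labels = KMeans(n_clusters=G, n_init=10, random_state=0).fit_predict(embedding)

# step 3 (post): re-estimate group-specific time effects by within-group means
estimated_effects = np.vstack([Y[labels == g].mean(axis=0) for g in range(G)])
print(estimated_effects.shape)  # (G, T)
```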
10:45 am - 11:00 am PDT
Chat with Elena Manresa
11:00 am - 12:00 pm PDT
Mid-Day Break
12:00 pm - 12:45 pm PDT
Risk and Information in Dispute Resolution: An Empirical Study of Arbitration
This paper studies arbitration, a widespread dispute resolution method. We develop an arbitration model where disputing parties choose strategic actions given asymmetric risk attitudes and learning by the arbitrator. We also model arbitration’s effect on negotiated settlements. Upon establishing identification, we estimate the model using public sector wage disputes in New Jersey. Counterfactual simulations find that the more risk-averse party obtains superior outcomes in arbitration but inferior outcomes upon accounting for negotiated settlements. Simulations comparing two popular arbitration designs, final-offer and conventional, support the view that final-offer arbitration leads to less divergent offers and superior information revelation but higher-variance awards.
12:45 pm - 1:00 pm PDT
Chat with Bernardo Silveira
1:00 pm - 1:45 pm PDT
Demand Analysis With Supply-Side Rationing: An Application to the US Dialysis Market
Supply-side rationing is common in markets that do not rely exclusively on prices to determine allocations. Examples include matching markets for schools and colleges, entry-level labor markets, and healthcare services. In these cases, consumer choice sets are constrained endogenously by supply-side admission policies induced by capacity constraints. We use a random utility model for consumer preferences and a reduced-form rationing rule for the supply side that can be derived from models of the examples described above. We study identification of this model, propose an estimator, and apply these methods to study admissions in the market for kidney dialysis in California. Our results establish identification of the model using two sets of instruments: one that affects only consumer preferences and another that affects only the rationing rule. These results also suggest tests of supply-side rationing, which we apply to the dialysis market. We find that dialysis facilities are less likely to admit new patients when they have higher-than-normal caseloads, and that patients are more likely to travel farther when nearby facilities, and high-quality facilities in particular, have high caseloads. Finally, we use these data to estimate consumer preferences and the rationing rules used by facilities using a Gibbs sampler.
1:45 pm - 2:00 pm PDT