Gateway to Think Tanks
Source Type | Working Paper
Document Type | Report
DOI | 10.3386/w21124 |
Source ID | Working Paper 21124
Title | Improving Policy Functions in High-Dimensional Dynamic Games
Authors | Carlos A. Manzanares; Ying Jiang; Patrick Bajari
Publication Date | 2015-05-04
Publication Year | 2015
Language | English
Abstract | In this paper, we propose a method for finding policy function improvements for a single agent in high-dimensional Markov dynamic optimization problems, focusing in particular on dynamic games. Our approach combines ideas from literatures in Machine Learning and the econometric analysis of games to derive a one-step improvement policy over any given benchmark policy. In order to reduce the dimensionality of the game, our method selects a parsimonious subset of state variables in a data-driven manner using a Machine Learning estimator. This one-step improvement policy can in turn be improved upon until a suitable stopping rule is met as in the classical policy function iteration approach. We illustrate our algorithm in a high-dimensional entry game similar to that studied by Holmes (2011) and show that it results in a nearly 300 percent improvement in expected profits as compared with a benchmark policy.
Subjects | Econometrics ; Estimation Methods ; Microeconomics ; Game Theory ; Industrial Organization ; Market Structure and Firm Performance
URL | https://www.nber.org/papers/w21124 |
Source Think Tank | National Bureau of Economic Research (United States)
Citation Statistics |
Resource Type | Think Tank Publication
Item Identifier | http://119.78.100.153/handle/2XGU8XDN/578799
Recommended Citation (GB/T 7714) | Carlos A. Manzanares, Ying Jiang, Patrick Bajari. Improving Policy Functions in High-Dimensional Dynamic Games. 2015.
Files in This Item | No files are associated with this item.
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.
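
The abstract above describes the method at a high level: estimate the continuation value of a benchmark policy, let a machine-learning estimator select a parsimonious subset of state variables, and take a one-step improvement that can be iterated as in policy function iteration. The Python sketch below is a minimal, hypothetical illustration of that generic recipe, not the paper's algorithm or its Holmes (2011)-style entry game: the 20-dimensional toy decision problem, the use of scikit-learn's LassoCV as the sparse estimator, and the names `step`, `benchmark_policy`, and `improved_policy` are all assumptions introduced here for illustration.

```python
# Minimal illustrative sketch: one-step policy improvement with a sparse
# (Lasso) value-function estimate. Toy environment and all names are
# assumptions for illustration, not the paper's model or estimator.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
N_STATE, N_ACTIONS, HORIZON, BETA = 20, 2, 10, 0.95

def step(state, action):
    # Toy transition: the flow payoff depends on only 2 of the 20 state variables.
    payoff = action * (state[0] - 0.5 * state[1]) + (1 - action) * 0.1
    next_state = 0.9 * state + 0.1 * rng.normal(size=N_STATE)
    return payoff, next_state

def benchmark_policy(state):
    # Crude benchmark rule that the one-step improvement builds on.
    return int(state[0] > 1.0)

def discounted_payoff(policy, state, horizon=HORIZON):
    # Simulated discounted payoff from following `policy` starting at `state`.
    total, discount = 0.0, 1.0
    for _ in range(horizon):
        payoff, state = step(state, policy(state))
        total += discount * payoff
        discount *= BETA
    return total

# Step 1: estimate the benchmark policy's continuation value with a sparse
# regression, which selects a parsimonious subset of state variables.
states = rng.normal(size=(2000, N_STATE))
values = np.array([discounted_payoff(benchmark_policy, s) for s in states])
value_fit = LassoCV(cv=5).fit(states, values)

def improved_policy(state):
    # Step 2: one-step improvement -- choose the action maximizing the flow
    # payoff plus the discounted fitted continuation value.
    candidates = []
    for a in range(N_ACTIONS):
        payoff, nxt = step(state, a)
        candidates.append(payoff + BETA * value_fit.predict(nxt.reshape(1, -1))[0])
    return int(np.argmax(candidates))

# Step 3: compare expected payoffs; the estimate-and-improve step could be
# repeated until a stopping rule is met, as in policy function iteration.
test_states = rng.normal(size=(200, N_STATE))
print("benchmark:", np.mean([discounted_payoff(benchmark_policy, s) for s in test_states]))
print("improved :", np.mean([discounted_payoff(improved_policy, s) for s in test_states]))
```

In this toy setting the improved policy should earn a higher average simulated payoff than the benchmark, but nothing here reproduces the paper's entry-game application or its reported near-300 percent profit improvement.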