Arthur Lewbel, insightful as always, asserts in a recent post that:
The people who argue that machine learning, natural experiments, and randomized controlled trials are replacing structural economic modeling and theory are wronger than wrong.
As ML and experiments uncover ever more previously unknown correlations and connections, the desire to understand these newfound relationships will rise, thereby increasing, not decreasing, the demand for structural economic theory and models.
I agree. New measurement produces new theory, and new theory produces new measurement — it’s hard to imagine stronger complements. And as I said in an earlier post,
Measurement and theory are rarely advanced at the same time, by the same team, in the same work. And they don’t need to be. Instead we exploit the division of labor, as we should. Measurement can advance significantly with little theory, and theory can advance significantly with little measurement. Still, each disciplines the other in the long run, and science advances.
If the theory/measurement pendulum tends to swing widely, it nevertheless swings. If the 1970s and 1980s were a golden age of theory, recent decades have witnessed explosive advances in measurement linked to the explosion of Big Data. But Big Data presents both measurement opportunities and pitfalls — dense fogs of “digital exhaust” — which fresh theory will help us penetrate. Theory will be back.
[Related earlier posts: “Big Data the Big Hassle” and “Theory gets too Much Respect, and Measurement Doesn’t get Enough”]
Please comment on the article here: No Hesitations