The economics of everything

Angus Deaton writes an entertaining letter for the Royal Economic Society’s quarterly newsletter. In his latest missive, he discusses how the scope of US economics is changing, drawing on the presentations of this year’s Princeton job market candidates.

Among the topics presented on this year’s job market were studies of the prison parole system in Georgia, (several) of HIV/AIDS in Africa, of child immunization in India, of the political bias of newspapers, of child soldiering, of racial profiling, of rain and leisure choices, of mosquito nets, of malaria, of treatment for leukemia, of the stages of child development, of special education, of war and democracy, of the effects of TV coverage on democracy, of bilingualism and democracy, and many others. (Among the leading departments, only Stanford’s graduate students appear to be working almost exclusively on traditional topics.)

Twenty years ago, there was essentially none of this. Applied theses were mostly applied price theory, using a set of generally agreed-upon (preferably ‘frontier’) econometric methods. Issues that seem central now (like poverty, inequality, national and international health, education, the environment, and much of economic development) were left to other disciplines on the grounds that (standard) economics had no framework for analyzing such ill-defined topics.

So what is it that economics brings to malaria, child soldiering, or the consequences of parole boards? Price theory is certainly no longer our comparative advantage. It is not that it cannot be applied to a wide range of topics, as Gary Becker and others have repeatedly shown. But if current graduate students know anything of price theory, it would have had to have been self-taught, because it is no longer on the curriculum in the ‘best’ American departments. (Except Chicago, where it hangs on by a whisker, and where, in a last-ditch attempt to preserve it from extinction, Becker, Kevin Murphy, and Steve Levitt are running an intensive price theory summer camp for graduate students from outside of Chicago.)

The advantage that economists have, if advantage it is, is their data handling skills (most social sciences are far from comfortable with millions of observations, to say the least), as well as their well-developed armoury of econometric techniques. If the typical thesis of the eighties was an elaborate piece of price theory estimated by non-linear maximum likelihood on a very small number of observations, the typical thesis of today uses little or no theory, much simpler econometrics, and hundreds of thousands of observations. (The amount of computing time has remained more or less constant.) The extent to which data can effectively be substituted for theory is clearly a topic that is being actively explored, at least empirically. …

In the end, it is hard not to think that the quality of research owes more to people than to methods. Certainly, the best of the job market candidates this year made important advances and showed great imagination and skill, irrespective of the unresolved methodological debates that divide the profession. Given this abundant talent, and the new-found (or re-found) commitment of young economists to the great issues of poverty and health around the world, there is surely no fear for the future of economics. And perhaps one day soon, there will once again be a closer dialogue between theory and application.


10 Responses to The economics of everything

  1. Damien Eldridge says:

    Andrew, the material that you quoted from Angus Deaton also includes the following passage:

    The extent to which data can effectively be substituted for theory is clearly a topic that is being actively explored, at least empirically.

    In one sense, more data can be substituted for theory. The more observations that you have, the more degrees of freedom you have. As such, you can estimate more parameters. This allows you to use more flexible functional forms. For example, the CES utility and production functions contain Cobb-Douglas, perfect complements and perfect substitutes as special cases. There are also various functional forms for utility and production functions that nest even more preference orderings and production technologies as special cases. These include the translog, generalised Leontief, normalised quadratic and generalised Barnett specifications. However, moving from more restrictive specifications to less restrictive specifications does not seem to me to reduce the role of theory. It is simply a change in the particular specification of the theoretical model that is employed in the econometric study.
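    For concreteness, the two-good CES utility function and the special cases it nests can be sketched as follows (a standard textbook parameterisation, shown here purely for illustration):

    ```latex
    % Two-good CES utility with share parameter \alpha and substitution parameter \rho
    u(x_1, x_2) = \left( \alpha x_1^{\rho} + (1 - \alpha) x_2^{\rho} \right)^{1/\rho},
    \qquad 0 < \alpha < 1, \quad \rho \leq 1, \quad \rho \neq 0
    % Limiting cases:
    % \rho = 1          perfect substitutes:  u = \alpha x_1 + (1 - \alpha) x_2
    % \rho \to 0        Cobb-Douglas:         u = x_1^{\alpha} x_2^{1 - \alpha}
    % \rho \to -\infty  perfect complements:  u = \min\{ x_1, x_2 \}
    ```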

    In my view, economic theory will always be an essential part of most applied economic studies. While data may tell you something about what might have happened, economic theory is a necessary component of any convincing explanation about why it happened. 

  2. Andrew Leigh says:

    Damien, I think applied economics has much to learn from theory, but it doesn’t therefore follow that every applied paper must also have a theory section. (If you think that theory-free empirical studies are a waste of time, are empirics-free theory papers also a waste of time?).* For example, an applied paper on estimating the elasticity of labour demand with respect to the minimum wage might do best to skip the theory, and just give us its findings.

    That said, I freely admit that my own papers are theoretically underpowered, and I might do better to even the balance.

    * This line responds to a comment from Damien that “Theory-free empirical studies are, for the most part, a waste of time.” Damien subsequently asked me to replace his comment with one that did not include this sentence.

  3. Damien Eldridge says:

    Andrew,

    I think that theory papers can stand alone. They are exploring the implications of particular artificial economies or artificial segments of economies. This is a necessary first step in any attempt to provide a sensible explanation of observed phenomena. I similarly think that papers in econometric theory can stand alone. This is a necessary first step in exploring the properties of particular estimation strategies in particular settings. As I indicated earlier, empirical papers that elicit stylised facts can potentially stand alone. But I am very wary of other empirical papers that do not have a theory section. The authors of these papers typically do have theory in mind, but it is simply left unspecified. To use your example of the elasticity of labour demand with respect to a change in the minimum wage, there are a number of theoretical issues that need to be dealt with. These include the nature of the underlying production technology, the presence or absence of market power, and the nature of market outcomes in the presence of the minimum wage. Does the minimum wage bind initially for everyone, for some people, or for nobody? Does it bind for everyone, for some, or for nobody after it is raised? What is the relationship between the minimum wage and the actual wages that are paid? Are we on the short side of the market, so that we can be sure we are estimating a demand curve, or do we need to worry about identification problems?
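    To see why these questions matter for the estimating equation itself, a stripped-down version of the kind of regression at issue might be sketched as follows (the specification and variable names are purely illustrative):

    ```latex
    % Illustrative state-year panel regression of employment on the minimum wage
    \ln E_{st} = \beta \, \ln MW_{st} + \gamma' X_{st} + \mu_s + \tau_t + \varepsilon_{st}
    % E_{st}: employment of the affected group in state s and year t
    % MW_{st}: statutory minimum wage; X_{st}: controls; \mu_s, \tau_t: state and year effects
    % \beta is a reduced-form elasticity. It coincides with the labour-demand elasticity
    % only under conditions like those listed above: the wage floor binds, actual wages
    % move with the minimum, and observed employment lies on the demand curve (the short
    % side of the market) rather than being determined by supply or bargaining.
    ```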

  4. Kevin Cox says:

    I was trained as an engineer in the days of the slide rule. We learnt about modelling systems with the tools we had at the time, which were theories about the behaviour of materials and physical systems expressed in mathematical formulae. The modelling tools for the maths were essentially mathematical short cuts that enabled us to do the calculations. Of course these tools were imperfect, and our mathematical expressions were often gross simplifications of the “real world”, which has more randomness, more non-linear behaviour and more interactions.

    Since the slide rule was supplanted by the computer, the tools and the way modelling is done have changed. The tool has become the model. Problems are divided up into smaller bits where the mathematics works reliably, while the interactions, randomness and non-linearity are handled by the computer model.

    I know little of economics beyond the occasional paper I try to read, the popular books on economics, and trying to understand enough to talk sensibly to “economic advisers”. But as an outsider it seems there is not enough emphasis on building models and experimenting with those models. I may be wrong, but economists who write papers appear to do little complex simulation and modelling of their problems using computers, and instead try to stretch the mathematics beyond its capabilities. I suspect, but do not know, that “practical economists” depend very much on computer models.

    Why is it that I don’t see reports of computer simulations of complex systems and the running of experiments on those simulations?

  5. allen says:

    Look up some modern macro – DSGE and the like.

  6. conrad says:

    “Why is it that I don’t see reports of computer simulations of complex systems and the running of experiments on those simulations?”

    I’m not an economist, but the area I work in overlaps a lot with some areas of economics (although the general underlying assumptions and theories are very different, often for essentially identical behavioral data). At least in my area, the answer to your question is that you aren’t looking in the right journals. Complex simulations of things like group dynamics, decision making, and so on abound.

  7. derrida derider says:

    Kevin, try these websites:

    http://comp-econ.org/

    http://www.econ.iastate.edu/tesfatsi/ace.htm

    I think the big problems in ACE (Agent-based Computational Economics) are methodological. Reproducibility, transparency and external validity are real issues, and are the main reason the approach does not dominate applied economics. But there’s still a lot of work being done in it, as conrad notes.

  8. If economics becomes “about everything”, and economic theory plays less of a role, then maybe economics is becoming about nothing in particular, and about “social science” (which would be ok by me).

  9. Uncle Milton says:

    “I think that theory papers can stand alone”

    Sure, but there is nothing worse than a theory paper that contains a section on “policy implications” of the theory.

    Measurement without theory is not without its problems, but policy without facts is much worse.

  10. Damien Eldridge says:

    Andrew, part of the passage that you quote from Angus Deaton says the following.

    Begin quote:

    “If the typical thesis of the eighties was an elaborate piece of price theory estimated by non-linear maximum likelihood on a very small number of observations, the typical thesis of today uses little or no theory, much simpler econometrics, and hundreds of thousands of observations. (The amount of computing time has remained more or less constant.) The extent to which data can effectively be substituted for theory is clearly a topic that is being actively explored, at least empirically.”

    End quote.

    If it were the case that economics has moved from a position in which theory plays a central role in applied studies to one in which theory plays almost no role in applied studies, then I think this would suggest that the profession had lost some important knowledge that it once possessed. However, I do not think this is the case. In my view, theory does play a central role in most good applied work in economics.

    Economics is not applied statistics. Economists are interested in the underlying causes of economic phenomena. While I think there is an important role for theory-free empirical papers in economics, that role is limited. In essence, theory-free empirical papers can do little more than develop a set of stylised facts that need explaining. Data alone can sometimes give you some indication of what might have happened, but by themselves they cannot explain why it happened. Any attempt to understand what causes particular economic phenomena will necessarily require some economic theory.

    There are two very good examples of the need for economic theory in empirical studies. These are the identification problem and the Lucas critique. These problems suggest that attempts to design policy based on a theory-free analysis of the facts will potentially fail.

    On the other hand, it is often possible to say sensible things about economic policy without any recourse to economic data. In particular, a theoretical analysis of the impacts of a policy can identify the particular parameters that are likely to influence the policy outcomes. This allows policy makers to focus their data analysis on estimating the magnitudes of those parameters. It also alerts policy makers to the potential for those parameters themselves to vary with the policy being analysed. This is the essence of the Lucas critique of econometric policy evaluation.
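    A textbook sketch of this point (purely illustrative notation): suppose the outcome of interest depends on expectations of future policy, and policy follows a simple rule. Then the coefficient recovered by a theory-free regression mixes structural and policy-rule parameters:

    ```latex
    % Structural relationship: the outcome depends on expected future policy
    y_t = \theta \, \mathbb{E}_t[g_{t+1}] + \varepsilon_t
    % Policy rule currently followed by the government
    g_{t+1} = \gamma g_t + \eta_{t+1}, \qquad \mathbb{E}_t[\eta_{t+1}] = 0
    % Reduced form that a regression of y_t on g_t would estimate
    y_t = \underbrace{\theta \gamma}_{\pi} \, g_t + \varepsilon_t
    % The reduced-form coefficient \pi shifts whenever the policy rule \gamma changes,
    % even though the structural parameter \theta does not.
    ```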

    There is a need to differentiate between structural parameters and reduced form parameters. Identifying the underlying structural parameters will require an understanding of the economic forces driving the data generating process. This will necessarily require researchers to make use of economic theory. A classic example of this problem is the need to account for the fact that both supply forces and demand forces will affect prices and quantities. If you want to separately identify demand and supply equations, you will need some variables that enter only demand equations and some variables that enter only supply equations. This is the essence of the identification problem.
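    In the classic linear supply-and-demand case, that exclusion argument can be sketched as follows (again, purely illustrative notation):

    ```latex
    % Demand and supply for a single market; p and q are determined jointly
    q^d = \alpha_0 - \alpha_1 p + \alpha_2 y + u_d    % y (income) shifts demand only
    q^s = \beta_0 + \beta_1 p + \beta_2 r + u_s       % r (an input cost) shifts supply only
    q^d = q^s = q                                     % market clearing
    % A regression of q on p alone recovers neither \alpha_1 nor \beta_1.
    % The demand equation is identified because r is excluded from it (it can serve as an
    % instrument for p); the supply equation is identified because y is excluded from it.
    ```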
