D

Regular commenter Derrida Derrider (double-D) draws my attention to the fact that Daniel Davies (D-squared) has posted the third part of his review of Freakonomics. 

Derrida likes Daniel’s review. I wasn’t so enamoured. Much of it is about the problems that can arise from using instrumental variables. Yet rather than being directed at particular papers of Levitt’s, it’s mostly a broad-brush ‘these are the kinds of mistakes he makes’ posting. I kept waiting for a killer example, the moment when Daniel would tell me about a crucial error that both Steve and the referees at the AER/QJE/JPE had missed. But it never came.


5 Responses to D

  1. As a fellow D-squared I feel I should have something to say about this…

  2. Bruce Bradbury says:

    I also think the criticisms are overstated even as a general critique of IV approaches.

    Re Weak instruments. These are not such a problem (or at least not in the way D2 describes). Yes, they can lead to biased estimates of effect sizes. However, often we are mainly interested in whether there is an effect at all. This can be ascertained by testing whether there is a relationship between the instrument (i.e. the natural experiment) and the outcome variable using simple statistical techniques like OLS. If the instrument is weak, then this association will be weak, but if the sample size is large enough we can still identify an association. We do, however, need to be quite sure that our assumptions about the non-existence of spurious causal links are accurate.
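
    A quick sketch of what I mean, in simulation form (the numbers are invented purely for illustration, and I’m assuming the exclusion restriction holds exactly):

    ```python
    # Reduced-form test with a weak instrument (illustrative sketch only;
    # every parameter value here is made up).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1_000_000                           # a very large sample
    z = rng.normal(size=n)                  # instrument (the natural experiment)
    u = rng.normal(size=n)                  # unobserved confounder
    x = 0.02 * z + u + rng.normal(size=n)   # weak first stage: z barely moves x
    y = 0.5 * x + u + rng.normal(size=n)    # true causal effect of x on y is 0.5

    # Reduced form: regress the outcome directly on the instrument.
    # If the exclusion restriction holds, a non-zero coefficient here means
    # x has *some* causal effect on y, even though an IV point estimate of
    # its size would be unreliable with an instrument this weak.
    rf = sm.OLS(y, sm.add_constant(z)).fit()
    print(rf.params[1], rf.pvalues[1])      # tiny coefficient, but detectable
    ```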

    Re data mining. Yes, this is a problem, but I don’t see how it is any different from other empirical research. It is even more likely to be relevant to case studies (anecdotes). It even applies to experiments – where the ‘non-interesting’ experiments don’t get written up or published.

    I agree that a problem with using natural experiments is simply that they are few and far between. They can give us good answers to questions that are often not that important.

  3. dsquared says:

    ahhh, vanity searches.

    “If the instrument is weak, then this association will be weak, but if the sample size is large enough we can still identify an association.”

    This isn’t true, Bruce. The weak instruments problem is serious even in studies with hundreds of thousands of data points. Asymptotically, the bias disappears, but even in very large finite samples it is still there, and makes you tend to find spurious relationships. I don’t think the simple OLS test you describe works either; the point about instrumental variables is that the IV estimator is a ratio of two (approximately) normal quantities, so its test statistic isn’t even nearly t-distributed unless the instrument is strong.
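
    A two-minute Monte Carlo illustrates the point (the numbers are purely illustrative and have nothing to do with any actual paper):

    ```python
    # Weak-instrument bias in large finite samples (invented setup).
    import numpy as np

    rng = np.random.default_rng(1)
    n, reps = 100_000, 500
    estimates = []
    for _ in range(reps):
        z = rng.normal(size=n)
        u = rng.normal(size=n)                   # confounder
        x = 0.002 * z + u + rng.normal(size=n)   # very weak first stage
        y = u + rng.normal(size=n)               # true effect of x on y is ZERO
        # Just-identified IV (Wald) estimator: cov(z, y) / cov(z, x)
        estimates.append(np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1])

    # Median, because with an instrument this weak the estimator is so
    # heavy-tailed that its mean is useless. The median sits well away from
    # the true zero, drifting towards the confounded OLS value of about 0.5.
    print(np.median(estimates))
    ```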

    Data mining is much more of a problem with natural experiments than in normal empirical research, because in normal empirical work the miner can only do his stuff by playing around with functional forms. It is also possible to say things about the space of possible models, which lets you build algorithms like PcGets that help you say something sensible about the effect of model-dredging. With natural experiments, there are nothing like as many limits on the ability of the miner.
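
    To see how cheap the dredging can get, here is an invented sketch (no resemblance to any actual study): generate pure noise, audition twenty candidate ‘natural experiments’, and keep the best one.

    ```python
    # Instrument-dredging sketch (all data are pure noise by construction).
    import numpy as np

    rng = np.random.default_rng(3)
    n, candidates = 10_000, 20
    y = rng.normal(size=n)                 # outcome with no real cause among the z's
    best_t = 0.0
    for _ in range(candidates):
        z = rng.normal(size=n)             # an irrelevant candidate instrument
        r = np.corrcoef(z, y)[0, 1]
        best_t = max(best_t, abs(r) * np.sqrt(n))   # approx. reduced-form t-stat
    print(best_t)  # exceeds 1.96 in roughly two runs out of three
    ```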

    I agree that there isn’t a smoking gun in the case of Levitt, but what were you expecting? It isn’t good reviewing practice to play Stephen Milloy with someone else’s empirical work, and Levitt is actually a pretty ethical researcher. I could have tricked up one or two of his papers into a “gotcha” (in particular, his response to criticism of the abortion & crime thing was pretty wretched), but it wouldn’t have been representative. It’s the baleful trend in economics that he represents that I have a problem with, not Levitt himself – I’ll get on to this in part 5.

  4. derrida derider says:

    Well I did enjoy dsquared’s review – as I enjoy all his rants. I reckon his analysis of the economics in that Pisan canto is about the best blog post I’ve ever read. de gustibus non est disputandum.

    I’m more of a user (though a heavy one) than a perpetrator of this sort of stuff – my dabbling would probably attract derision from some of the people posting here. In fact Paul Frijters in this recent thread gently accused me of being too gullible in trusting econometric hocus-pocus. I have a professional interest in following these arguments but less ability to contribute to them. But this is the blogosphere, where ignorance is happily no barrier to opinion, so:

    “[In IV estimation w]e do, however, need to be quite sure that our assumptions about the non-existence of spurious causal links are accurate.” – Bruce

    And how do you do that? The world is chock full of unrecognised weak causal links and my intuition suggests that the weaker the link between your instrument and what it’s supposed to be proxying for, the more vulnerable it will be to omitted variable bias. And this is not a property that disappears with sample size. Is my intuition wrong?

    On IV strategies generally, I found this paper – a more complete version of a JEP paper – enlightening.

  5. Bruce Bradbury says:

    dsquared: The OLS method is sensible because the IV estimate is essentially a ratio a/b, where a is the strength of the link between the instrument and the outcome variable and b is the link between the instrument and the hypothesised causal variable. As you rightly say, if b is close to zero (the instrument is weak) this gets ugly no matter how big the sample size. However, we can usually make sensible assumptions about the sign of b, and so b can be thought of as just an inflator for a. Knowing whether a is significantly different from zero (e.g. using OLS) can therefore tell us a lot about whether the IV relationship is likely to be significant.
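
    In simulation form (invented numbers again; the point is just the arithmetic of the decomposition):

    ```python
    # The a/b decomposition of the just-identified IV estimate (sketch only).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 50_000
    z = rng.normal(size=n)
    u = rng.normal(size=n)
    x = 0.3 * z + u + rng.normal(size=n)   # a reasonably strong first stage
    y = 0.5 * x + u + rng.normal(size=n)   # true effect is 0.5

    a = sm.OLS(y, sm.add_constant(z)).fit().params[1]  # reduced form: z -> y
    b = sm.OLS(x, sm.add_constant(z)).fit().params[1]  # first stage: z -> x
    # a/b recovers roughly 0.5; testing a != 0 is the 'any effect at all' test.
    print(a, b, a / b)
    ```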

    However, I agree with you that weak instruments are a bad thing – essentially for the reasons derrida derider gives. If the instrument is weak, then even a weak violation of the exclusion restriction will stuff things up.
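
    For the just-identified case the standard asymptotics make this explicit (u is the error in the outcome equation, so a non-zero Cov(z, u) is exactly a violation of the exclusion restriction):

    ```latex
    \operatorname{plim}\,\hat{\beta}_{IV} \;=\; \beta \;+\; \frac{\operatorname{Cov}(z,u)}{\operatorname{Cov}(z,x)}
    ```

    A weaker instrument shrinks the denominator, so whatever small exclusion violation sits in the numerator gets inflated – and, as derrida derider says, more data does nothing about it.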

    But I still can’t follow your data mining point. There are so many ways for researchers to data mine (e.g. getting a new database and looking to see what is associated with what). It seems to me that the relative scarcity of natural experiments will make it harder rather than easier for researchers to data mine.
