More Random Musings

I’m attending a Productivity Commission roundtable in Canberra today on the topic ‘Strengthening Evidence-Based Policy in the Australian Federation’. In an attempt to provoke, I’ve titled my paper ‘Evidence-Based Policy: Summon the Randomistas?’. Full text here. I’ll have a month to revise it, so all comments are welcome.


6 Responses to More Random Musings

  1. conrad says:

    I still think your view, at least for many (probably most) educational problems, is far too biased towards the need for running truly randomized trials. All you really need is (a) a good theoretical basis to work from; and (b) a longitudinal study. You are never going to be able to work out (a) via truly randomized designs in your lifetime (half your life would be spent fighting ethics committees…); and for (b), finding two or more groups (i.e., a control group [e.g., a program of similar effort in a different domain], experimental group 1, experimental group 2, …) that are matched at the first time-point is all that is necessary – if you find a group-by-time interaction at the second time point (or later), the program works (see the sketch at the end of this comment).

    A good example of this is the Scottish phonics study, where the government there obviously finally got sick of sociologists telling them how to teach children to read. They therefore ran a 7-year longitudinal study (see http://www.scotland.gov.uk/Publications/2005/02/20688/52452), and pretty much found the results that everyone who knows anything about early reading development expected. The important point, however, is that this is already a big, hard study to run, and very few people are going to dispute the results. It should also have a major influence over a large chunk of the curriculum. Now, if this had been run as a truly randomized study, where they went out and found a truly random group of kids, stuck them into truly randomized classes and so on, it would have taken forever to run for much the same results. So at least in this case, there is simply no need for these uber-expensive, perfectly randomized experiments which take forever to run.
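
The matched-groups interaction test conrad describes is easy to make concrete. Below is a minimal sketch in Python on simulated data; the group labels, sample sizes, and effect sizes are all invented for illustration, not taken from the Scottish study or any other real one.

```python
# Minimal sketch of the group-by-time interaction test described above.
# All numbers here are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # children per group per time point (hypothetical)

# Two groups matched at time 0; the program group improves more by time 1.
rows = []
for group, gain in [("control", 2.0), ("program", 5.0)]:
    for t in (0, 1):
        score = rng.normal(50 + (gain if t == 1 else 0), 10, n)
        rows.append(pd.DataFrame({"score": score, "group": group, "time": t}))
df = pd.concat(rows, ignore_index=True)

# The coefficient on the group:time interaction is the quantity of interest:
# if it is significantly positive, the program group gained more between the
# two time points than the control group did.
model = smf.ols("score ~ C(group) * time", data=df).fit()
print(model.summary().tables[1])
```

The interaction term carries exactly conrad's logic: because the groups start matched, any differential gain at the later time point is attributed to the program rather than to pre-existing differences.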

  2. Jennifer Doggett says:

    The main challenge to improving the evidence base of public health policy is convincing people that not everything that looks like a parachute (health warnings on alcohol products, junk food ad bans, pro-breastfeeding campaigns, etc.) actually is a parachute.

  3. ChrisPer says:

    Not to mention, Jennifer, the activist basis of public health campaigns leading to alarmism and the erosion of intellectual honesty, as well as unrelated activist causes co-opting the ‘public health paradigm’ to build credibility.

  4. Bruce Bradbury says:

    I think the piece would benefit from more discussion of the reasons for _not_ doing randomised controlled trials (RCTs). My take…

    Most social/economic policy evaluations are what are commonly termed ‘process evaluations’. Investigators collect descriptive data about programs, gather the opinions of various participants, and synthesise this with existing knowledge to come up with recommendations. The problem with ‘outcome evaluations’ (of which RCTs are the gold standard) is that they can only test a small number of characteristics. Randomised trials can tell you if the program works, but not how to fix or improve it. Process evaluations, even if flawed, are often the most efficient way to address the latter question.

    We need to know the answer to both questions. Those people implementing the program need to know how to improve it. Those deciding whether the program should continue to exist, or be expanded, need to know if it works.

    Right now, the main people commissioning evaluations are working at the implementation level. For them, process evaluations make more sense. Randomised controlled trials are expensive, too narrow and too late in providing results.

    Governments (and researchers) need to recognise that outcome evaluations are going to be of most value to those higher up in the decision-making process. If central agencies want to know if programs work, they need to take the lead in demanding randomised outcome evaluations.

  5. Simon says:

    A very interesting paper. My main suggestion is this: I think you should encourage policymakers in the paper to conduct randomised trials and the consequent analysis in a manner that tries to explain WHY a given policy works, not merely WHETHER it works. I thought this was an interesting aspect of Deaton’s paper: e.g., looking for heterogeneous treatment effects in randomised trials, or building structural models to understand the theoretical/behavioural underpinnings of results from randomised trials. I think the danger otherwise is that policymakers may understand you to be advocating policy on the basis of average treatment effects alone, with a one-size-fits-all interpretation of social policy. For example, we might spend lots to reduce class sizes in rural NSW merely because a randomised trial across the state showed that smaller class sizes increased test scores by x% (even if, say, that result was driven by a heterogeneous effect in elite Sydney private boys’ schools). A toy sketch of this danger follows below.

    On a side note: I find it strange that Box 2 omits any reference to good old econometrics… Presumably, if I have data without a natural experiment and I run old-fashioned OLS to correct for a lot of observables (or, for that matter, construct and estimate a structural model), this is still more valuable than expert opinion or mere conjecture. I agree with the value of randomisation, at least in many cases, but I think you’re selling short the value of careful econometric analysis in contexts where we don’t have clean identification. The alternative, I think, is that we allow the need for clean identification to guide not only our policy answers but also our questions.
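
Simon's heterogeneity worry is also easy to illustrate. Here is a toy sketch in Python; the 'elite' subgroup flag, effect sizes, and data are all made up. The whole-sample regression reports a moderate average effect, while an interaction term reveals that the true effect is confined to one subgroup.

```python
# Toy illustration of a heterogeneous treatment effect hiding behind
# an average treatment effect. All data here are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
treated = rng.integers(0, 2, n)   # randomised assignment (0/1)
elite = rng.integers(0, 2, n)     # hypothetical subgroup indicator (0/1)

# True effect: +8 points for treated pupils in the 'elite' subgroup, 0 elsewhere.
score = 50 + 8 * treated * elite + rng.normal(0, 10, n)
df = pd.DataFrame({"score": score, "treated": treated, "elite": elite})

# Whole-sample regression: the average treatment effect looks like roughly +4...
print(smf.ols("score ~ treated", data=df).fit().params["treated"])
# ...but adding the interaction shows the gain is driven entirely by one subgroup.
print(smf.ols("score ~ treated * elite", data=df).fit().params)
```

This is Simon's class-sizes example in miniature: a policymaker reading only the first regression might roll the program out statewide, even though the second shows the effect is concentrated in one kind of school.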

  6. conrad says:

    “I find it strange that Box 2 omits any reference to good old econometrics”

    Actually, most of what we know about education comes from psychological models (models of cognitive processes, covering almost everything we know about the three Rs, learning and motivation, student-teacher interaction, etc.), not econometric ones. That’s why it’s always strange to see economists telling people how to fix educational problems, often without any consideration of well-known factors and well-accepted methodologies.
