I’m attending a Productivity Commission roundtable in Canberra today on the topic ‘Strengthening Evidence-Based Policy in the Australian Federation’. In an attempt to provoke, my paper is titled ‘Evidence-Based Policy: Summon the Randomistas?’ Full text here. I’ll have a month to revise it, so all comments are welcome.
Comments
I still think your view, at least for many (probably most) educational problems, is far too biased towards the need for running truly randomized trials. All you really need is (a) a good theoretical basis from which to work; and (b) a longitudinal study. You are never going to be able to work out (a) via truly randomized designs in your lifetime (half your life would be spent fighting ethics committees…); and for (b), finding two or more groups (i.e., a control group [e.g., a program of similar effort in a different domain], experimental group 1, experimental group 2, …) that are matched at the first time point is all that is necessary. If you find a group-by-time interaction at the second (or a later) time point, the program works.
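For concreteness, here is a minimal sketch of the matched-groups test described above, using Python and statsmodels (the data file, variable names, and two-wave structure are all hypothetical): with groups matched at baseline, a significant group-by-time interaction is the evidence that the program worked.

```python
# Sketch: testing for a group-by-time interaction in a two-wave
# matched-groups design (hypothetical data and variable names).
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per child per wave.
#   score    - reading test score
#   group    - 'control' or 'program' (matched at wave 0)
#   wave     - 0 (baseline) or 1 (follow-up)
#   child_id - identifies repeated measures on the same child
df = pd.read_csv("phonics_study.csv")

# OLS with an interaction term; clustering standard errors by child
# accounts for the repeated measures.
model = smf.ols("score ~ C(group) * C(wave)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["child_id"]}
)
print(model.summary())

# The C(group)[T.program]:C(wave)[T.1] coefficient is the
# difference-in-differences estimate: if the groups really were
# matched at baseline, a significant interaction suggests the
# program made a difference.
```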
A good example of this is the Scottish phonics study, where the government obviously finally got sick of sociologists telling them how to teach children to read. They therefore ran a 7-year longitudinal study (see http://www.scotland.gov.uk/Publications/2005/02/20688/52452), and pretty much found the results that everyone who knows anything about early reading development expected. The important point, however, is that this was already a big, hard study to run, and very few people are going to dispute the results. It should also have a major influence over a large chunk of the curriculum. Now, if this had been run as a truly randomized study, where they went out and found a truly random group of kids, stuck them into truly randomized classes and so on, it would have taken forever to run, for much the same results. So at least in this case, there is simply no need for these uber-expensive, perfectly randomized experiments that take forever to run.
The main challenge in improving the evidence base of public health policy is convincing people that not everything that looks like a parachute (health warnings on alcohol products, junk food ad bans, pro-breastfeeding campaigns, etc.) actually is a parachute.
Not to mention, Jennifer, the activist basis of public health campaigns, which leads to alarmism and the erosion of intellectual honesty, as well as unrelated activist causes co-opting the ‘public health paradigm’ to build credibility.
I think the piece would benefit from more discussion of the reasons for _not_ doing randomised controlled trials (RCTs). My take…
Most social/economic policy evaluations are what are commonly termed ‘process evaluations’. Investigators collect descriptive data about programs, gather the opinions of various participants, and synthesise this with existing knowledge to come up with recommendations. The problem with ‘outcome evaluations’ (of which RCTs are the gold standard) is that they can only test a small number of characteristics. Randomised trials can tell you if the program works, but not how to fix or improve it. Process evaluations, even if flawed, are often the most efficient way to address the latter question.
We need to know the answer to both questions. Those people implementing the program need to know how to improve it. Those deciding whether the program should continue to exist, or be expanded, need to know if it works.
Right now, the main people commissioning evaluations are working at the implementation level. For them, process evaluations make more sense. Randomised controlled trials are expensive, too narrow, and too late in providing results.
Governments (and researchers) need to recognise that outcome evaluations are going to be of most value to those higher up in the decision-making process. If central agencies want to know if programs work, they need to take the lead in demanding randomised outcome evaluations.
A very interesting paper. My main suggestion is this: I think the paper should encourage policymakers to conduct randomised trials, and the consequent analysis, in a manner that tries to explain WHY a given policy works, not merely WHETHER it works. I thought this was an interesting aspect of Deaton’s paper, e.g. looking for heterogeneous treatment effects in randomised trials, or building structural models to understand the theoretical/behavioural underpinnings of results from randomised trials. The danger otherwise is that policymakers may understand you to be advocating policy on the basis of average treatment effects alone, with a one-size-fits-all interpretation of social policy. For example, we might spend a lot to reduce class sizes in rural NSW merely because a randomised trial across the state showed that smaller class sizes increased test scores by x%, even if that result was driven by a heterogeneous effect in, say, elite Sydney private boys’ schools.
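To illustrate the heterogeneity point (all data and variable names here are hypothetical), one simple check is to interact the treatment indicator with a subgroup indicator, so the average treatment effect can be decomposed rather than reported as a single number:

```python
# Sketch: checking whether a class-size effect is driven by one
# subgroup (hypothetical data and variable names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("class_size_trial.csv")

#   treated     - 1 if randomly assigned to a small class, else 0
#   test_score  - outcome of interest
#   school_type - e.g. 'rural_public' vs 'elite_private'
model = smf.ols("test_score ~ treated * C(school_type)", data=df).fit()
print(model.summary())

# 'treated' alone gives the effect for the baseline school type;
# the treated:C(school_type) interactions show how the effect
# differs elsewhere. A healthy average effect can mask a near-zero
# effect for most schools if one subgroup drives the result.
```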
On a side note, I find it strange that Box 2 omits any reference to good old econometrics. Presumably, if I have data without a natural experiment, and I run old-fashioned OLS to control for a lot of observables (or, for that matter, construct and estimate a structural model), this is still more valuable than expert opinion or mere conjecture. I agree with the value of randomisation, at least in many cases, but I think you’re selling short the value of careful econometric analysis in contexts where we don’t have clean identification. The alternative, I think, is that we allow the need for clean identification to dictate not only our policy answers but also our questions.
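For concreteness, a minimal version of the ‘old-fashioned OLS’ approach described above (the dataset, outcome, and covariates are placeholders), with the usual caveat that conditioning on observables does nothing about unobserved confounders:

```python
# Sketch: OLS controlling for observables when no natural
# experiment is available (hypothetical dataset and names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("observational_policy_data.csv")

# Regress the outcome on the policy variable plus observable
# controls. Identification rests on the (strong) assumption that,
# conditional on these covariates, exposure to the policy is as
# good as random.
model = smf.ols(
    "outcome ~ policy + income + parent_education + C(region)",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
print(model.summary())
```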
“I find it strange that Box 2 omits any reference to good old econometrics”
Actually, most of what we know about education comes from psychological models (of cognitive processes, including almost everything we know about the three Rs, learning and motivation, student-teacher interaction, etc.), not econometric ones. That’s why it’s always strange to see economists telling people how to fix educational problems, often without any consideration of well-known factors and well-accepted methodologies.