Last year, the Productivity Commission ran an event on the topic ‘Strengthening Evidence-based Policy in the Australian Federation’, in which I was one of the participants (my contribution was titled ‘Evidence-based policy: summon the randomistas?’). The PC has now produced a mighty two-volume set, which is also available on their website. If you’re short on time, go to the background report (volume two), which is likely to become a standard reference for anyone thinking about evidence-based policy in Australia. We’ll see if George Pell gets around to reading Appendix A.
I was interested in your response to Patricia Rogers.
It’s certainly true that “Australia is in ‘no danger of over-relying on randomised trials’, or blindly accepting them as the only method of evaluation.” But there’s still the issue of whether other kinds of evidence should count.
Writing about the Moving to Opportunity experiments, sociologist Robert Sampson argued:
“Many causal conclusions, including the consensus that smoking causes cancer, have come about after years of careful observational research linked to rigorous thinking about causal mechanisms. The early discovery of penicillin and the cause of cholera outbreaks were similarly observation based.”
(Sampson’s paper: 2008_AJS_Moving_to_Inequality.pdf)
Sampson’s interest is in neighbourhood effects. And since the unit of analysis there is an entire community, it would be very bad news indeed if RCTs were the only way to know whether an intervention worked.
If we placed too much weight on experiments, we’d risk devoting too much effort to interventions that target individuals, and too little to those that target communities and social processes.
I think it’s worth looking at whether alternatives to experiments can deliver useful results.
Don, your concern is an important one – randomised experiments give precise causal inference, but suffer from drawbacks relating to spillovers and generalisability. So I agree that we’d never want 100% of our evaluations to be randomised.
In practice, with less than 1% of evaluations being randomised, there’s virtually no chance that we will place too much weight on experiments anytime in my lifetime.
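The point about spillovers and generalisability can be made concrete with a toy simulation (my illustration, not part of the original exchange — the numbers and the “controls receive half the effect” assumption are invented for the example). In a clean randomised trial, a simple difference in means recovers the true treatment effect; but if the programme spills over onto nearby control units, as a community-level intervention might, the same estimator understates it.

```python
# Toy simulation: randomisation and spillover bias (illustrative only).
import random

random.seed(0)
N = 10_000
TRUE_EFFECT = 2.0

# Each unit has a random baseline outcome; treatment is assigned by coin flip.
baseline = [random.gauss(10.0, 3.0) for _ in range(N)]
treated = [random.random() < 0.5 for _ in range(N)]

def diff_in_means(outcomes, assignment):
    """Average outcome of treated units minus average outcome of controls."""
    t = [y for y, d in zip(outcomes, assignment) if d]
    c = [y for y, d in zip(outcomes, assignment) if not d]
    return sum(t) / len(t) - sum(c) / len(c)

# 1. Clean RCT: difference in means recovers the true effect (about 2.0).
y_clean = [b + TRUE_EFFECT * d for b, d in zip(baseline, treated)]
print(round(diff_in_means(y_clean, treated), 2))

# 2. Spillovers (assumed for illustration): controls also receive half the
#    effect, e.g. a community programme leaking across neighbours. The naive
#    estimate now captures only the *difference* in exposure (about 1.0),
#    understating the programme's true impact.
y_spill = [b + TRUE_EFFECT * d + 0.5 * TRUE_EFFECT * (not d)
           for b, d in zip(baseline, treated)]
print(round(diff_in_means(y_spill, treated), 2))
```

With 10,000 units the sampling noise in the difference in means is small, so the first estimate lands near 2.0 and the second near 1.0 — the gap between them is exactly the spillover problem the comments above are worried about.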
“but suffer from drawbacks relating to spillovers and generalisability”
They’re also exceptionally expensive to run, so there’s a huge opportunity cost. Do you want a series of well-controlled non-randomized experiments that are fast to run, or a single randomized experiment that takes ages? I guess it depends both on what you are looking at and on what funding agencies are willing to fund.
Seems to me that Chapter 4 of Volume 1 is the real keeper. 🙂