Summit Stuff

The Australia 2020 summit has asked participants for a big idea and an issue upon which they’ve changed their minds. On Monday, I posted big ideas and mind-changing experiences from Amy King and Joshua Gans. Here are mine.

What’s your big idea?

Randomised trials are the gold standard in policy evaluation. Overseas randomised trials have taught us much about early childhood intervention, improving school attendance, job training, health insurance and neighbourhood spillovers. Yet Australia does very few randomised policy trials. To remedy this, the federal government should establish a fund to support states and territories in conducting randomised policy trials. Indigenous policies, education policies and social policies ought not be driven by rhetoric and ideology, but by bold, persistent experimentation. Policymakers should be more modest about the limits of their knowledge, and more rigorous in putting policies to the test.
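To make the mechanics concrete, here is a minimal sketch in Python, using entirely hypothetical outcome data, of how the headline result of a randomised trial is estimated: because participants are randomly assigned, the difference in mean outcomes between the treatment and control groups is an unbiased estimate of the policy’s effect.

```python
import random
import statistics

# Hypothetical data: an outcome (say, a test score) for participants randomly
# assigned to a pilot programme (treatment) or to business as usual (control).
random.seed(42)
treatment = [random.gauss(52.0, 10.0) for _ in range(200)]  # assumed true effect: +2
control = [random.gauss(50.0, 10.0) for _ in range(200)]

def treatment_effect(treated, untreated):
    """Difference-in-means estimate and its standard error (unequal variances)."""
    diff = statistics.mean(treated) - statistics.mean(untreated)
    se = (statistics.variance(treated) / len(treated)
          + statistics.variance(untreated) / len(untreated)) ** 0.5
    return diff, se

effect, se = treatment_effect(treatment, control)
print(f"Estimated effect: {effect:.2f} "
      f"(95% CI roughly {effect - 1.96 * se:.2f} to {effect + 1.96 * se:.2f})")
```

Real trials add pre-registration, power calculations and adjustments for attrition, but the core comparison is this simple – which is part of what makes the design so credible.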

What have you changed your mind about?

A decade ago, I was confident that intuition would guide me to the right answer on many policy questions. Now, I am much more willing to be guided by the data. I used to think that students benefited more from class size cuts than teacher pay rises; that the minimum wage was a better antipoverty tool than wage subsidies; that schools and hospitals should be allowed to keep their performance secret; and that water restrictions were good policy. On each of these issues, empirical evidence has convinced me that my intuition was wrong. I hope the next decade will bring better evidence on other policies, and that I will be open to changing my mind on them if the data disproves my intuitions.


3 Responses to Summit Stuff

  1. conrad says:

    I’ll disagree with you on randomised trials for perhaps most educational stuff (sorry to be repetitive). The opportunity cost is huge given the extra expense and time involved. In addition, a lot of work isn’t suited to it, since progress has historically been achieved via small-scale experimental designs that have been refined and permuted over time. Even the bigger stuff, like the recent Scottish study comparing different types of reading teaching methods (e.g. http://www.scotland.gov.uk/Publications/2005/02/20688/52449), hasn’t needed such extra complexity to come to strong conclusions.

  2. AndrewN says:

    I think randomised trials are a great idea – as you say, the gold standard. Much policy analysis currently wouldn’t even get on the podium to claim bronze – I’m amazed at how little genuine policy evaluation occurs (I am a policy adviser and have at times fought it, and other times have been guilty of it).

    When policy is evaluated, it is generally against a false ‘control’ (e.g. ‘the average’), and is retro-fitted. Most commonly it is just longitudinal trends! (The sketch at the end of this comment illustrates why that misleads.) The trouble is that a lot of policies don’t have quantifiable success criteria, or a design that makes them measurable, built in from the start.

    I think a fund might help, and I’d be tempted to throw in expertise as well (an independent ‘policy auditor’ team at their disposal?).

    But your solution might overlook potential causes other than financial cost – namely the political cost (to the careers of public servants and politicians) of failed policies. I think many public servants and politicians secretly think: “ignorance might not be bliss, but it’s safer than everyone knowing I’m wrong!”

    Somehow the solution needs to make it more costly to avoid transparent analysis than to create failed policies. Why not introduce minimum thresholds (a setup that facilitates, and analysis that evaluates, the impact) in order to receive federal (COAG/SPP) funding? These would be lower than you were hoping (bronze?), but they would make rigorous analysis compulsory. Then your idea – extra money for those who do randomised trials – might be an incentive to go the extra mile, given you have to be out jogging anyway.

    You could also set up rules that reduce the cost of failure: maybe gold-standard trials don’t have to publish results publicly for a while, or they’re anonymised, or success isn’t linked to funding, or the evaluation is only shared among the states … I’m not sure how best to reduce the cost of failure, but it’s key.

    The reality is that failure is difficult to admit (for policy makers and, even more so, for politicians). We might not be able to do double-blind trials in many situations, but until we can assess the actual impact of policies we can’t calculate the opportunity cost. So we’re all blind (or at least severely short-sighted). Without a better understanding of what is working and what isn’t, we’re destined to continue the ‘drunkard’s stagger’ from the left side of the road to the right (depending on who is in power), hoping we’re taking the quickest way to the light on the hill.
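
    To make the ‘false control’ problem above concrete, here is a minimal sketch in Python with entirely hypothetical numbers: the outcome improves over time for everyone, so a naive before/after comparison attributes the background trend to the policy, while a comparison against a randomised control group recovers the true (null) effect.

    ```python
    import random
    import statistics

    # Hypothetical setting: the policy has no true effect, but the outcome
    # drifts upward between the 'before' and 'after' periods for everyone.
    random.seed(1)
    TRUE_EFFECT = 0.0   # assume the policy does nothing
    TREND = 3.0         # background improvement between periods

    before = [random.gauss(50.0, 8.0) for _ in range(300)]
    after_treated = [random.gauss(50.0 + TREND + TRUE_EFFECT, 8.0) for _ in range(300)]
    after_control = [random.gauss(50.0 + TREND, 8.0) for _ in range(300)]

    # Naive 'longitudinal' evaluation: after vs before, no control group.
    naive = statistics.mean(after_treated) - statistics.mean(before)
    # Randomised evaluation: treated vs control, measured at the same time.
    randomised = statistics.mean(after_treated) - statistics.mean(after_control)

    print(f"Naive before/after estimate:    {naive:.2f}")       # ~3.0 (mostly trend)
    print(f"Randomised comparison estimate: {randomised:.2f}")  # ~0.0 (the truth)
    ```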

  3. Andrew Leigh says:

    AN, I’ve also been thinking about whether there are ways that we can create more space for policy failure. I agree with you – it’s absolutely critical. On this theme, I now regret writing this op-ed, which argued that a randomised trial of P-plater training would show that it didn’t work. A better response would have been: great way to test policy; let’s wait and see what it shows.
