Good news, you're in the control group

I like to think I’m as much a fan of randomised trials as anyone. But I’m not sure that even I would go so far as suggesting randomisation when it comes to working out the deterrent effects of the death penalty. From today’s NYT.

Professor Wolfers said the answer to the question of whether the death penalty deterred was “not unknowable in the abstract,” given enough data.

“If I was allowed 1,000 executions and 1,000 exonerations, and I was allowed to do it in a random, focused way,” he said, “I could probably give you an answer.”

This entry was posted in Law, Randomisation.

9 Responses to Good news, you're in the control group

  1. Randomised trials are great but this may be taking things a bit too far.

    He’s a joker, that Wolfers.

  2. Kevin Cox says:

    I am not a fan of randomised trials, and the execution example helps show why. Even if he did carry out his experiment and come to a conclusion, so what?

    The environment in which the experiment was carried out would have changed, and there would now be different factors at work. Lots of people may have converted to a religion in which being put to death is an honour, and so sought out ways to be executed. The society might have banned the manufacture and ownership of handguns, or taking drugs may have been decriminalised – and in the end, even if it were shown that executions were not a deterrent, there are lots of other reasons why people would still want others to be executed. In other words, the experiment is unlikely to make any difference or to change policy on whether people are executed.

    If you think a policy is sensible and likely to work, implement it and measure what happens – or, as with the death penalty, remove it and see what happens. It will tell you more if you measure things well and find out why people kill each other.

    Randomised trials were designed for “static” systems with “equilibrium” conditions. They are an inappropriate tool for adaptable, dynamic learning systems that are in a constant state of adjustment. Even if you get “a result”, there will be a thousand and one arguments against taking any notice of it, because the system has changed.

    The argument is not whether randomised trials in dynamic systems tell you anything. The argument is whether they make any practical difference to decision makers. I would suggest not, and that you are better off going for pilot trials of something you think is a good idea and measuring whether there are any differences, rather than wasting your time on randomised trials.

  3. Sinclair Davidson says:

    Chicken.

  4. Mark Cully says:

    In the spirit that one can take randomised trials too far, see this piece in the British Medical Journal recommending randomised trials on the effectiveness of parachutes: http://www.bmj.com/cgi/content/full/327/7429/1459

  5. Andrew Leigh says:

    Kevin, your points about the difficulty of policy evaluation in dynamic systems are well taken. But pilot trials are no better – and often worse.

    As to the death penalty, tons of research looks at the moratoriums, but in a country where lightning strikes kill more people than the death penalty, it’s very hard to discern effects. Justin’s trial would put to death in a single year about as many people as have been executed in the US over the past 30 years (the count from 1976–2007 is 1099).

  6. Sinclair Davidson says:

    and you’re suggesting those lightning strikes are not random. hmmm..

  7. Kevin Cox says:

    What I am trying to say is that the effectiveness of randomised trials depends on whether anyone takes any notice of the results. Because it is so easy to come up with arguments for why you believe or don’t believe the results, they are unlikely to change the mind of a decision maker or of the population in general.

    I am suggesting that social economists are more likely to get support to run experiments disguised as trials than to get anyone to fund a randomised trial. You may not be as confident in the trial’s “results”, but you are more likely to get it run, and you are more likely to get action taken on trial results than if you had run a randomised experiment.

    Now perhaps we can devise a randomised experiment to test this hypothesis 🙂

  8. Andrew Leigh says:

    “I am suggesting that social economists are more likely to get support to run experiments disguised as trials than to get anyone to fund a randomised trial. You may not be as confident in the trial’s ‘results’, but you are more likely to get it run, and you are more likely to get action taken on trial results than if you had run a randomised experiment.”

    It’s also easier to get politicians to give money to orangutans than randomised trials. But ‘what will politicians fund?’ is a different question from ‘what’s the gold standard in policy evaluation?’.

  9. Kevin Cox says:

    Andrew

    I agree that they are two different questions but:

    If you want to run some experiments, it is better to do something than nothing at all.

    It is easier to convince people to build measuring tools into something they are going to do anyway, and that way you get many more experiments. My plea is to spend your efforts thinking of ways to put measuring tools into trials and pilots, rather than taking on the very difficult task of getting randomised trials run. If you look at what you are trying to do as a “marketing exercise”, your marketing advisers will tell you that randomised trials are a hard sell and that good measuring tools are an easier one.

    Are there any evaluations showing that randomised trials for social experiments give “better” results than well-measured trials? My guess is no – or, if there are, they are controversial :)

    If you run trials where people volunteer to join or not, you will still get subjects in two or more treatments, and with appropriate statistical adjustment you can get pretty close to your gold standard.
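    Kevin’s closing claim – that a volunteer design plus statistical adjustment can get close to the gold standard – is the crux of the disagreement, and a quick simulation shows what the adjustment is up against. The following is a minimal sketch (population size, effect size, and the logistic opt-in rule are all invented for illustration): when the same hidden trait drives both the outcome and the decision to volunteer, a naive volunteer comparison overstates the effect, while a coin flip recovers it.

    ```python
    import math
    import random

    random.seed(0)

    N = 100_000
    TRUE_EFFECT = 2.0  # the treatment effect the trial should recover

    def mean(xs):
        return sum(xs) / len(xs)

    # Each person has a latent trait that raises their outcome directly and,
    # in the volunteer design, also raises their chance of opting in.
    traits = [random.gauss(0.0, 1.0) for _ in range(N)]

    # Volunteer design: logistic take-up, so high-trait people opt in more often.
    vol_treat, vol_ctrl = [], []
    for t in traits:
        if random.random() < 1.0 / (1.0 + math.exp(-t)):
            vol_treat.append(t + TRUE_EFFECT)  # outcome under treatment
        else:
            vol_ctrl.append(t)                 # outcome under control

    # Randomised design: a fair coin decides who is treated.
    rct_treat, rct_ctrl = [], []
    for t in traits:
        if random.random() < 0.5:
            rct_treat.append(t + TRUE_EFFECT)
        else:
            rct_ctrl.append(t)

    volunteer_estimate = mean(vol_treat) - mean(vol_ctrl)   # inflated by self-selection
    randomised_estimate = mean(rct_treat) - mean(rct_ctrl)  # close to TRUE_EFFECT

    print(f"true effect:         {TRUE_EFFECT:.2f}")
    print(f"volunteer estimate:  {volunteer_estimate:.2f}")
    print(f"randomised estimate: {randomised_estimate:.2f}")
    ```

    The gap between the two estimates is exactly the selection bias the “appropriate statistical adjustment” would have to remove – which it can only do if the trait driving opt-in is measured.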

Comments are closed.