Random Thoughts

The late Senator Peter Cook used to tell me that your message is only starting to get out there when you feel like a broken record. I did a talk at a Victorian State Services Authority "futures conference" today (chaired by energetic futurist Richard Neville), and discussed the Australian project, randomised trials, strategic foresight and trust in government. Afterwards, a couple of people commented that I should spend more time spruiking randomised trials among public servants. It seems a good idea, though I don’t feel I’ve had anything new to say on the topic since first writing about it in 2003.


12 Responses to Random Thoughts

  1. I used to say to people, as a throwaway line, that randomised or double-blind trials are seen as un-Australian, but I’m beginning to think there might be some deeper meaning to be dredged from the statement.

    Even well-trained scientists will accept double-blind and triple-blind trials as useful for many new treatments or drugs, but refuse to let that interfere with their own favourite hobby horse. Even within clinical medical fields, where some treatments can be tested under almost “laboratory” conditions, I have heard clinicians dismissing meta-analyses and Cochrane reviews as “not real life” when applied to a rusted-on favourite procedure of theirs.

    Take pharmacists (chemists): scientifically well trained, yet openly flogging herbal remedies, aromatherapy and weight-loss pills.

    Or take the Federal Government’s emergency call centre: all the evidence from the UK, WA and the Hunter Valley shows that you would NOT set up a national call centre, or even a statewide one, though you MIGHT have a localised one tightly tied to local services. It’s un-Australian to look at the evidence; we want grand gestures that play well on A Current Affair and make us feel “relaxed and comfortable” but achieve little.

  2. Don says:

    It’s not hard for an intelligent person to understand why randomised control trials are the gold standard in evaluation design. I don’t think that ignorance is the major reason public servants avoid policy experiments.

    The question to ask is this — What incentives are there for senior public servants to engage in policy experiments and rigorous impact evaluations?

    Policy experiments have obvious costs. They are:

    * administratively difficult — for example, street level staff will often ignore instructions and compromise the evaluation design in order to help clients;

    * ethically controversial — people will ask “what gives bureaucrats the right to use disadvantaged citizens as guinea pigs?”;

    * politically risky — experiments report net impacts rather than gross outcomes. The results usually end up sounding dismal. If the result for a politically popular program turns out to be zero or negative, what does a bureaucrat do?

    You might argue that governments ought to be interested in programs that get results. After all, if you could get the unemployment rate down, surely that would improve their re-election chances. But the truth is, most programs are so small in scale and produce such modest impacts that the public is unlikely to notice the results (small impacts relative to error are exactly the reason true experimental design is so much better than alternatives like quasi-experimental designs).

    So rather than targeting public servants, maybe you should be educating people like journalists, think tank researchers, and influential academics. When the attentive public stops accepting gross outcome measures and starts insisting on net impact maybe the incentives will change.
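    Don’s distinction between gross outcomes and net impacts can be sketched with a small simulation (all names and numbers here are invented for illustration, not drawn from any real program):

    ```python
    import random

    random.seed(1)  # deterministic toy numbers

    # Invented illustration: 1,000 job seekers, half randomly assigned to a
    # training program. Assume half would have found work anyway, and the
    # program adds a modest 5 percentage points on top of that.
    BASE_RATE = 0.50    # chance of finding work without the program
    TRUE_EFFECT = 0.05  # the program's true (modest) impact

    treated = [random.random() < BASE_RATE + TRUE_EFFECT for _ in range(500)]
    control = [random.random() < BASE_RATE for _ in range(500)]

    # Gross outcome: the headline figure a program report would quote.
    gross_outcome = sum(treated) / len(treated)

    # Net impact: what a randomised trial actually measures.
    net_impact = gross_outcome - sum(control) / len(control)

    print(f"gross outcome: {gross_outcome:.0%} of participants found work")
    print(f"net impact:    {net_impact:+.1%} versus the control group")
    ```

    The gross figure sounds like a success story; the net figure is the modest, dismal-sounding number Don describes.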

  3. Don, in software there is a methodology — test-driven development — that uses unit tests. The test is written before the code is: in effect, the outcome is specified before the mechanism that produces it. It demands rigour and discipline, but when it is done, quality goes through the roof.

    Different analysis methodologies might be more palatable if we forced politicians and policy makers to write a unit test first: stating, in empirically testable terms, what the expected outcome of the policy is.

    It will also make bad policy and programs easier to kill if there is an empirical trail of failure.
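    The test-first idea above can be sketched in a few lines of Python (the program, target and rates are all invented for illustration): the pass/fail criterion is written down before the program runs.

    ```python
    # Hypothetical sketch of a "policy unit test", written test-first.

    def net_impact(treated_rate, control_rate):
        """Net impact of a program: change relative to the control group."""
        return treated_rate - control_rate

    # The 'unit test', written BEFORE the program runs: the program passes
    # only if it cuts unemployment among participants by at least
    # 2 percentage points relative to the control group.
    def program_passes(treated_rate, control_rate, target=-0.02):
        return net_impact(treated_rate, control_rate) <= target

    # Later, when the evaluation reports in, the test either passes or fails,
    # leaving the empirical trail of failure Cameron mentions:
    print(program_passes(treated_rate=0.08, control_rate=0.11))  # clears the target
    print(program_passes(treated_rate=0.10, control_rate=0.11))  # falls short
    ```

    Because the target is fixed in advance, a failing program can’t retreat to vague objectives after the fact.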

  4. Andrew Leigh says:

    Thanks for those three comments. Sounds like the blogosphere is well across the issues involved in randomised trials! Don, you make a good point – I can feel an op-ed coming on…

  5. Don says:

    Cameron – People have certainly tried to force policy makers to specify measurable objectives and set out an evaluation strategy in advance.

    Part of the trouble is political. To build support for a new program you often need to form a coalition — a cluster of groups who agree about what to do. The trouble is that’s just about the only thing they’ll agree on. These groups generally won’t agree on why the policy is a good idea or on what problem it’s meant to solve. This makes vague objectives and fuzzy (or non-existent) program logics highly desirable.

    Andrew – How about an op-ed on the pay-for-outcomes fad. I’d like a contract for a rain-making program. I’ll undertake to provide rain to drought affected areas. It’s very low risk to the taxpayer — I’ll only get paid if I achieve a result. No rain no pay.

  6. I’m on the run – about to go out and look at evaluating a program (true). I’m not sure if my rant above was intelligible.

    But Andrew – wasn’t it you who was sneering at the experiment to supply grog to alcoholics in Canada or somewhere a while back? Or was it on another blog?
    Seems relevant to this issue to me. Perhaps more from me later in the weekend. I am very interested in evidence based interventions (of all sorts) as opposed to warm fuzzy feeling based decisions.

  7. My stretch limo hasn’t arrived yet but check out this thread at LP for some insight into the evidence dilemma.
    http://larvatusprodeo.net/2006/01/26/catch-my-disease/#comments

  8. Don says:

    FXH – Andrew’s exact words were “The sample size is 17, and there is no control group”. Dr L was sneering but the study he was sneering at wasn’t an experiment.

  9. Don – nothing wrong with sneering. Without a good sneer there would be no blogging, no “Anarchy in the UK” and no Sex Pistols, indeed perhaps no Rolling Stones, and the world would be a poorer place.

    From memory, the Canadian “experiment” gave 17 homeless alcoholics a controlled daily dose and found that their lives improved, with fewer health problems and less illegal activity. Pretty much what I would have expected.

    Yes, the sample size is small, there is no control group, and there is likely bias – expectation bias as well as “Hawthorne effect” type bias. But with this group of people there is likely little alternative but to “have a go” – or “poke it with a stick and see what happens”.

    Homeless agencies have few funds, even fewer funds for research, and a dearth of researchers. What chance of an NHMRC grant to give 100 homeless alcoholics grog and set up a control group of another 100 homeless alcoholics? Bugger all, I’d say. Imagine Brendan Nelson or Tony Abbott seeing this come before them, or Andrew Bolt writing it up.

    The only real possibility is someone doing this sort of trial out of petty cash, writing it up and perhaps stimulating something more rigorous.

    After all, our H. pylori Nobel Prize winner Dr Marshall made a breakthrough when he swallowed the bacteria himself.

    Whilst the grog study is clearly a lower level of evidence, it is at least one step up from individual case studies.

  10. Andrew Leigh says:

    FXH, Don’s defence said pretty much what I was going to say. Sure, trials without control groups are better than nothing – but let’s not write newspaper articles about studies without control groups (incidentally, my recollection of the Marshall incident was that it was about experimentation, and that he never tried to publish on the basis of it).

  11. Andrew – I’m not sure I really disagree on anything.

    One issue is the way the media, mainly newspapers and TV (with radio to a lesser extent), report “breakthroughs”. I’m particularly unhappy about cancer cures and the like that are announced when sometimes all that has occurred is a successful application for an NHMRC grant.

    Getting back to your point about policy and Don’s point about net impact: I have no idea what journalists are formally taught about such things, but I’d agree that a target for “re-education” would be print journalists (I’d assume tabloid TV and radio people are beyond help) and various mobs of academics. Think-tank mobs seem to work from ideology backwards, so it’s hard to see how to influence them.

    I’d think the easiest entry point for influence in Australia is the large NGOs, with a few well-placed and well-timed short, sharp op-eds or conference or industry-journal articles. One could avoid the obvious head-against-brick-wall targets, like convincing the Salvos about harm reduction rather than abstinence in drug and alcohol.

    If you want noise and publicity, how about an article on last year’s Australian of the Year and the burns treatment. Issues (possibly) of: evidence levels, peer review, publication, private IP of publicly funded research findings, cost/benefit, economic studies, comparative outcomes to existing treatments etc.

  12. Andrew Leigh says:

    FXH, between following up on all your posts, I think you’re offering me a full-time research agenda!

Comments are closed.