Correlation, causation and c-sections

A recent report from the New York Times (subscription) showed how a top-class newspaper clearly distinguishes correlation from causation:

A much-anticipated report from the largest and longest-running study of American child care has found that keeping a preschooler in a day care center for a year or more increased the likelihood that the child would become disruptive in class — and that the effect persisted through the sixth grade.

The effect was slight, and well within the normal range for healthy children, the researchers found. And as expected, parents’ guidance and their genes had by far the strongest influence on how children behaved. …

The research, being reported today as part of the federally financed Study of Early Child Care and Youth Development, tracked more than 1,300 children in various arrangements, including staying home with a parent; being cared for by a nanny or a relative; or attending a large day care center. Once the subjects reached school, the study used teacher ratings of each child to assess behaviors like interrupting class, teasing and bullying. … 

Other experts were quick to question the results. The researchers could not randomly assign children to one kind of care or another; parents chose the kind of care that suited them. That meant there was no control group, so determining cause and effect was not possible.

By contrast, a report in today’s Sydney Morning Herald shows what happens when you assume that correlation equals causation.

Women will not be allowed to insist on caesarean deliveries in NSW public hospitals without a medical reason under a new health department policy. …

The policy cites a US study of more than 5 million births, which found last year that babies born by medically unnecessary caesarean were three times as likely to die in the newborn period as those born vaginally. The death rate for the caesarean babies was 1.77 for every 1000 live births, compared with 0.62 from normal delivery.

To be fair to SMH journalist Julie Robotham, her report was principally about the policy announcement, not the study. But she should at least have reminded readers that a study that merely looks at correlations cannot tell us whether c-sections are more dangerous – regardless of the sample size. To answer the causal question, we would need to have some random (or quasi-random) assignment of women to c-sections and vaginal births. In the absence of that kind of study, the question of whether c-sections are more dangerous remains open.
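A tiny simulation makes the confounding point concrete. This is a hypothetical sketch, not the actual US study's data: all the rates below are invented. Suppose a hidden complication both makes a c-section more likely and raises neonatal mortality, while the c-section itself has no causal effect on death. The raw comparison still makes c-sections look several times more dangerous:

```python
import random

random.seed(42)

n = 500_000
csections = vaginals = 0
csection_deaths = vaginal_deaths = 0

for _ in range(n):
    # Hidden confounder: a pre-existing complication (hypothetical 10% rate)
    complication = random.random() < 0.10
    # Complicated pregnancies are far more likely to end in a c-section
    p_csection = 0.60 if complication else 0.05
    # The c-section itself has NO causal effect on death in this toy model;
    # only the complication raises mortality
    p_death = 0.005 if complication else 0.0005
    if random.random() < p_csection:
        csections += 1
        csection_deaths += random.random() < p_death
    else:
        vaginals += 1
        vaginal_deaths += random.random() < p_death

rate_c = 1000 * csection_deaths / csections
rate_v = 1000 * vaginal_deaths / vaginals
print(f"c-section deaths per 1000 births: {rate_c:.2f}")
print(f"vaginal deaths per 1000 births:   {rate_v:.2f}")
```

The c-section group shows a death rate several times the vaginal-birth rate purely because it contains more complicated pregnancies. Randomisation breaks exactly this link between the hidden complication and the mode of delivery.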

This entry was posted in Health economics.

8 Responses to Correlation, causation and c-sections

  1. Verdurous says:

    Andrew, that’s a really good observation you’ve made. I definitely agree that in this day and age journalists would be best to learn more about the nature of studies, statistics and notions of cause and effect. I think there was a study last year (reported in the papers) that noted a correlation between teens who listen to music with explicit lyrics and early sexual activity/multiple sexual partners. I remember thinking that the causal relationship might work in reverse, or indeed that both activities are influenced by some unmentioned causal factor. I don’t remember all the details however.

    Having said that, in the example you’ve given there may be some barriers to randomisation, although I think it is probably achievable. It is certainly not possible to use placebos or blinding though this is probably less important. Sometimes we must use the best available methods and studies to draw inferences. There are the jaundiced few in medicine who resist evidence-based medicine to some degree. They often use arguments such as “well, no-one’s going to do a double-blinded placebo controlled randomised study of whether parachutes are a life-saving measure”. Hmmm.

  2. derrida derider says:

    Verdurous, in the social sciences controlled experiments are often simply impossible even leaving aside ethical considerations (of course, they are sometimes impossible in the natural sciences too, but this is definitely rarer).

    Non-experimental methods (ie fancy statistics) then have to be used, but in spite of all their sophisticated attempts to do so, these often cannot properly isolate causal relations. Still, some evidence is better than none.

    But yeah, all we can do is once again bemoan the lack of analytic and quantitative skills amongst journalists. These skills simply aren’t in their selection criteria – the ability to deal with people and to write quickly and well are rated much higher.

  3. David Lynch says:

    The following article by Daniel Trefler sparked an interest in this stuff for me a few years ago:

    There’s also a .ppt version on his website:

    The recent study by Gruber and others was also quite striking and also covered in the NYTimes:

    Other stuff that might interest you and your readers:

    I sometimes think the researchers take this stuff a bit too seriously – Baby Einstein is often used as a means of distracting one’s child long enough so that a parent can take a shower, do the dishes, etc., etc. Personal experience suggests Baby Einstein holds their attention in a way that other TV programs don’t!

  4. Andrew Leigh says:

    “They often use arguments such as ‘well, no-one’s going to do a double-blinded placebo controlled randomised study of whether parachutes are a life-saving measure’.”

    If anyone thought there was some doubt over whether parachutes saved lives, I’d be happy to do such a trial. I just wouldn’t use human subjects.

  5. derrida derider says:

    But Andrew, parachutes cost lives. Ya gotta get the causal relations right – people wouldn’t jump out of planes if they didn’t exist.

  6. Verdurous says:

    Fair points both. I think the nub of the issue though is two-fold. Firstly, causality questions are not only a feature of which type of study is used.

    Secondly, there are examples in the real world of hypotheses which are difficult or impossible to test through experiment. However as AB Hill said:

    “All scientific work is incomplete – whether it be observational or experimental. All scientific work is liable to be upset or modified by advancing knowledge. That does not confer upon us a freedom to ignore the knowledge we already have, or to postpone the action that it appears to demand at a given time.”

    However, this also doesn’t let journos, statisticians or politicians off the hook when they imply that all evidence is equal.

  7. See the British Medical Journal systematic review of parachute effectiveness:

    Their conclusions:
    “As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.”

    I also like their discussion of the ‘healthy cohort effect’:
    “One of the major weaknesses of observational data is the possibility of bias, including selection bias and reporting bias, which can be obviated largely by using randomised controlled trials. The relevance to parachute use is that individuals jumping from aircraft without the help of a parachute are likely to have a high prevalence of pre-existing psychiatric morbidity. Individuals who use parachutes are likely to have less psychiatric morbidity and may also differ in key demographic factors, such as income and cigarette use. It follows, therefore, that the apparent protective effect of parachutes may be merely an example of the “healthy cohort” effect.”

    The ‘rapid responses’ are also interesting.

  8. On the original topic (preschool), I have read a related paper: Magnuson, Ruhm and Waldfogel (2005), ‘Does prekindergarten improve school preparation and performance?’, Economics of Education Review.

    This paper does address the causation question seriously. They also use state variations in pre-kindergarten spending as an ‘instrument’ to identify the effect. (Though from memory these statistically preferred estimates were not very precise.)
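    The instrumental-variables idea mentioned above can be sketched in a few lines. This is an illustrative toy, not the Magnuson et al. estimation: all the coefficients and the data-generating process are invented. An unobserved parental "advantage" confounds pre-K attendance and outcomes, while state funding shifts attendance but (by assumption) affects outcomes only through attendance, so the simple IV estimator cov(z, y)/cov(z, x) recovers the true effect where OLS does not:

```python
import random
import statistics

random.seed(0)
n = 10_000

# Toy data: z is the instrument (state funding), u is an unobserved
# confounder (parental advantage) that raises both pre-K exposure and scores
z = [random.gauss(0, 1) for _ in range(n)]
u = [random.gauss(0, 1) for _ in range(n)]
# First stage: funding shifts pre-K exposure; the confounder does too
x = [0.5 * zi + 1.0 * ui + random.gauss(0, 1) for zi, ui in zip(z, u)]
# Outcome: true causal effect of pre-K is 0.3; the confounder also helps
true_effect = 0.3
y = [true_effect * xi + 1.0 * ui + random.gauss(0, 1) for xi, ui in zip(x, u)]

def cov(a, b):
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)

beta_ols = cov(x, y) / cov(x, x)  # biased upward by the confounder
beta_iv = cov(z, y) / cov(z, x)   # uses only the funding-driven variation
print(f"OLS estimate: {beta_ols:.2f}")
print(f"IV estimate:  {beta_iv:.2f}")
```

    The OLS estimate overstates the effect because children with greater (unobserved) advantage both attend pre-K more and score higher; the IV estimate isolates the variation in attendance driven by funding alone, at the cost of the imprecision the comment mentions when the instrument is weak.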

Comments are closed.