All early childhood interventions are not created equal

A 2003 post by John Quiggin sent me to a 1995 review of The Bell Curve by James Heckman. One line jumped out:

Chapter 17 of their book discusses the mixed evidence on the success of early childhood interventions designed to boost IQ. By no means does the evidence they discuss rule out the possibility of boosting IQ through programs that enrich the learning environments of young children. Indeed, the authors acknowledge that there are strong indications that very intensive programs can be effective. Half-hearted interventions like Head Start are definitely not effective. [My emphasis]

Yet funnily enough, Australian policymakers are falling over themselves to invoke Heckman as the reason for implementing low-impact, universal, Head Start-style early childhood intervention programs. Such programs probably do no harm, and they may even do some good. But it’s very unlikely they have the same high benefit-cost ratios as the super-intensive, super-targeted programs that Heckman spends his time advocating. Programs like Elmira, Perry Preschool, and Abecedarian had highly trained early childhood staff working with disadvantaged kids for up to 8 hours a day, from an extremely young age.

So next time you hear a politician or bureaucrat citing Heckman in support of a new initiative, why not ask him or her: are you advocating a universal light-touch program, of the kind that Heckman hates? Or the kind of targeted, expensive program that won’t affect most voters’ children, but might make a real difference to the life chances of the very poorest?

This entry was posted in Economics of Education.

15 Responses to All early childhood interventions are not created equal

  1. Andrew – spot on.

    The State government adoption of British early childhood policy makes for good PR, but in terms of doing anything significant for

    I’ve discussed this issue in my chapter on childcare in my upcoming book “Idolising Children”.

    Part of me feels early childhood advocates in Australia have become so carried away with their success in alerting government to the importance of the early years that they have lost sight of the real issues in regard to human capital.

    Graham Vimpani is one who is doing well in continually pointing to the need for intensive programs for the most disadvantaged children.

    My concern is with projects like the Raising Children Network – a program that received Federal government funding, and can be used by the government to say, “We are giving the experts the money; they recommended this website” – when really the parents and children who most need the support are unlikely to access such a site. But it is a broad program, accessible to all taxpayers who have access to the net.

    Heckman’s programs are very targeted, and very expensive…neither of these helps politicians come election time…excuse my cynicism.

  2. Fred Argy says:

    Thanks for that, Andrew. As I have always argued that social investment (including childhood intervention) should be well targeted at the most disadvantaged, I am heartened by what you say. Unfortunately, Andrew, targeted benefits, while more cost-effective, are less popular with the electorate than universal benefits. Incidentally, you and Daniel say targeted programs are ‘expensive’. I don’t quite get that. Are they not cheaper than universal schemes for any given desired social outcome?

  3. Paul Frijters says:

    Early childhood intervention programs are definitely the talk of the town, and the interesting question is indeed why intensive programs seem to struggle to find political support. That politicians use any fig leaf they can find to support whatever they choose is normal. We all use whatever fig leaves we can pretend are big enough, so I would not chide anyone for hiding bogusly behind Heckman. That at least gives you a handle to pressure them with, and it furthers the prestige of economists, which can only benefit the likes of Andrew.

    No, the question of why politicians don’t go for intensive programs is the biggie. I’m not sure that Daniel is right when he says it’s due to the expense and the limited number of votes in it (though these are undoubtedly factors). I am generally skeptical that these intensive intervention programs are actually as good value-for-money as they are purported to be. Just think about the trade-offs involved: having a one-on-one ‘surrogate parent’ relationship with a disadvantaged child basically means paying a full salary to a highly trained (and therefore expensive) teacher now for a couple of years, with a payback decades into the future. One would, for instance, need to increase the wages of the eventual adult by something like 60,000 dollars in every year of that person’s life to justify an expense of 25,000 dollars in each of the first 5 years of life (this effectively takes a discount rate of about 6 percent). This is not just a number plucked out of the air: a highly skilled teacher can maybe be assigned to 4 ‘problem children’ and would – including on-costs – cost in the region of 100,000 dollars a year.
    One can only get that kind of improvement in expected earnings for very gifted disadvantaged children, and my sneaking suspicion is that the small programs in the US with the high benefit-cost ratios took exceptionally gifted poor children. The notion that the same bang-for-the-buck could be achieved for any major proportion of the disadvantaged strikes me as highly improbable: the improvements needed to justify the costs of intensive programs are just too big to be believable.
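    The break-even arithmetic in the comment above can be checked with a short discounting sketch. The 6% discount rate and the 25,000-dollar-per-year, five-year cost are taken from the comment; the working life of ages 18 to 64 is an assumption the comment does not state, added only for illustration.

```python
# Rough check of the cost-benefit arithmetic in the comment above.
# Assumptions: 6% discount rate, program costs of $25,000/year in the
# first five years of life (both from the comment), and wage gains
# accruing from age 18 to 64 inclusive (an added assumption).

DISCOUNT = 0.06

def present_value(amount, years):
    """Discount a constant annual amount over the given years back to year 0."""
    return sum(amount / (1 + DISCOUNT) ** t for t in years)

# PV of the intervention: $25,000 in each of the first five years of life.
pv_cost = present_value(25_000, range(0, 5))

# Annual wage gain needed so that PV(benefits) just matches PV(costs).
benefit_factor = present_value(1, range(18, 65))
break_even_gain = pv_cost / benefit_factor

print(f"PV of cost: ${pv_cost:,.0f}")
print(f"Break-even annual wage gain: ${break_even_gain:,.0f}")
```

    The break-even figure is highly sensitive to the assumed discount rate and working life, so plugging in different horizons can move it by a large factor – which is part of why reasonable people disagree about these rates of return.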

    I’m also more positive about the general early-start programs than Andrew because they create the infrastructure to try more things and to gradually work out what the most effective way is to help disadvantaged children without stigmatising them by having a separate program just for them.

  4. Steve Edney says:


    Heckman’s advocacy of these programs is more nuanced than you state. While he agrees, as you point out, that Head Start made no difference to IQ, and that a different program would be needed for that, he also points out that this doesn’t mean it makes no difference to life outcomes.

    As he points out here, it has been demonstrated that these programs can raise life outcomes through increases in non-cognitive skills even though there is no impact on IQ.

    A great deal of American public policy discussion judges the success or failure of education programs by their effects on cognitive test-score measurements. Head Start, for instance, was deemed a failure because it did not raise IQ scores. But such judgments are unwise. Consider the Perry Preschool Program, a family environment enrichment given to disadvantaged minority children that was evaluated by a randomized trial. The Perry intervention group had no higher IQ test scores than the control group. Yet, in a follow up to age 40, the Perry treatment children had higher achievement test scores than did the control children and on many dimensions the Perry children are far more successful than the controls. In terms of employment, schooling and participation in crime, among other measures, early interventions can partially compensate for early disadvantage. The Perry program’s economic benefits are substantial: Rates of return are 15 percent to 17 percent. The benefit-cost ratio is eight to one. Participant noncognitive skills were raised even if their IQs were not.

  5. backroom girl says:

    One thing I’ve always wondered about in this area is whether the degree to which the intervention involves parents as well as the children makes a difference. It seems to make sense to me that if parental attitudes to various things (eg education, parental employment, even reading books) can be changed, or at least parents are enlisted in trying to reinforce at home some of the lessons being taught at school, early intervention would have a greater prospect of long-term success. On the other hand, if nothing changes in the disadvantaged home environment, there would seem to be a reasonable chance of any early gains being lost. I understand that for many US programs, this is what happened – they seemed to make a difference in the short, but not in the long, term.

    Does anyone know whether this aspect has ever been looked at in evaluations of these programs?

  6. Thinking in old ways says:

    Andrew you suggest “So next time you hear a politician or bureaucrat citing Heckman in support of a new initiative, why not ask him or her: are you advocating a universal light-touch program, of the kind that Heckman hates? Or the kind of targeted, expensive program that won’t affect most voters’ children, but might make a real difference to the life chances of the very poorest?”

    I would suggest that you might need to go one step further and also ask just how cost-effective the targeted, expensive programs are – and not just take the spin that is being put on the findings of the Perry and Abecedarian projects.

    Looking at the Perry pre-school project, we find that around half (in some studies two-thirds) of the benefit is a result of reduced criminality by participants (the control group had an average of 4.6 arrests, compared with 2.3 for participants). In contrast, the Abecedarian project had no impact on criminality, but generated almost all of its savings from increased maternal employment (that is, employment of the participants’ mothers) and reduced smoking by the participants in adult life. While I think Abecedarian found some ongoing improvement in IQ, this was not found in Perry.

    That is, these interventions appear to have generated outcomes through quite different mechanisms. Further, in areas where one has had a positive outcome, the other has failed. A consequence is that they provide little information as to what it is that works.

    Also, we have to understand what these programs were – Perry was a program involving 58 African American kids in 1962-1967 who had recorded low IQ scores, while the Carolina Abecedarian project was pitched at kids “whose family situation were believed to put the children at risk of retarded intellectual and social development”. There are very clear social contexts around these in the US which may not be relevant to Australia – to say nothing of questions of scalability.

    In some ways I think these are projects which we want to work (because the alternative – that no sort of intervention will assist kids born into adverse family situations – is perhaps too frightening to contemplate) – but where the available evidence is quite weak – and certainly massively overspun.

  7. derrida derider says:

    While I have great respect for your econometric finesse, Paul, you’re being pretty courageous in taking on Heckman’s program evaluation – it’s most unlikely that he would not have picked up serious residual selection bias.

    On less intensive, more broadly based early childhood interventions, it’s true that the evidence is not as clear-cut – but that’s partly because it’s inherently harder to do a proper long-term evaluation of a very broad-based program, and partly because few have tried to do so. Heckman’s distaste for this approach stems more from his Chicago views – a dislike of large-scale government interference – than from hard evidence about them.

  8. backroom girl says:


    You are probably quite right that reduced criminality or adult smoking were not specific objectives of the Perry and Abecedarian programs, but surely they are still positive outcomes producing net benefits for participants. I guess they are also a reminder that we often get outcomes from social programs that we didn’t intend at the beginning – some good, some bad, but I guess all of them have to go into the cost/benefit equation.

    It really is a pity that we have such an entrenched bias against random-assignment trials in Australia, though. It would be good to see some of these ideas trialled in a rigorous way here, rather than having to rely on evidence from the US that, as you quite rightly point out, may have limited applicability here. Especially as we have a sub-population (indigenous) that is probably at least as disadvantaged as those in the US that were targeted for these programs, and is crying out for effective interventions.

  9. Thinking in old ways says:


    I accept that these broader social outcomes might be beneficial. I was though trying to make some other points:

    a) The fact that these interventions appear to have had quite different outcomes in some of the core areas (one has an impact on criminality and one does not, one has an impact on IQ and one does not, etc) suggests that if early childhood interventions are effective then the specific outcome is somehow linked to some specific issues of content – whereas most of the debate is around the concept of intervening, with very little reference to the nature of the intervention;

    b)cost-effectiveness through

  10. Paul Frijters says:

    🙂 In answer to Derrida, I wouldn’t think Heckman is overly sure about these rates of return either. He has changed his mind before on how to deal with selection issues – i.e. he’s moved away from functional-form assumptions (which led to the famed Heckman selection procedure) towards relying on the perfect instrument (i.e. believable random variation). Perfect instruments don’t exist in this type of program, though.

    A good example of how even randomised experiments are bedevilled by selection problems is the STAR experiment on schooling in the US, where they attempted to randomly split thousands of students into various groups, giving different groups different types of schooling. It was a big, prestigious, and costly experiment on which many a top-tier paper has been written. And guess what? Maybe up to 30% of the students switched classes and treatments, and certainly not in a randomised way. The notion that educators in the field are going to stick to a given treatment and a given population simply to inform some far-away researcher is wrong. Such people will stop educating this kid and extend their efforts to bringing another one around, on the basis of where they think their effort is best spent and where their time has greatest use. Like doctors, they won’t just do experiments willy-nilly. Very understandable, but it also implies that you shouldn’t put too much faith in the rates of return of small programs (usually run by dedicated and self-selected people) for targeted groups. I think policy makers are quite right to ignore such programs at the moment, even if Heckman aficionados applaud them.
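    The point about non-random switching can be illustrated with a toy simulation (entirely hypothetical numbers, not STAR data): if higher-ability students disproportionately cross over into the treatment classes, a naive comparison of students by the treatment they actually received inflates the estimated effect, while the intention-to-treat comparison by original random assignment is diluted but not inflated.

```python
import random

random.seed(0)

TRUE_EFFECT = 2.0  # the real gain from treatment, in test-score points
N = 20_000

scores_by_assigned = {0: [], 1: []}
scores_by_received = {0: [], 1: []}

for _ in range(N):
    assigned = random.randint(0, 1)   # randomised assignment
    ability = random.gauss(0, 5)      # unobserved student ability

    # Non-random crossover: some high-ability controls get moved into
    # the treatment classes (hypothetical mechanism and numbers).
    received = assigned
    if assigned == 0 and ability > 5 and random.random() < 0.5:
        received = 1

    score = 50 + ability + TRUE_EFFECT * received
    scores_by_assigned[assigned].append(score)
    scores_by_received[received].append(score)

def mean(xs):
    return sum(xs) / len(xs)

# Intention-to-treat: compare by original random assignment (diluted by crossover).
itt = mean(scores_by_assigned[1]) - mean(scores_by_assigned[0])
# As-treated: compare by treatment actually received (inflated by selective switching).
as_treated = mean(scores_by_received[1]) - mean(scores_by_received[0])

print(f"True effect:         {TRUE_EFFECT:.2f}")
print(f"Intention-to-treat:  {itt:.2f}")
print(f"As-treated contrast: {as_treated:.2f}")
```

    The sketch only shows the direction of the bias under one assumed crossover mechanism; with 30% switching, as reported for STAR, the gap between the two estimates could be considerably larger.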

  11. Thinking in old ways says:

    .. sorry hit the wrong button


    I accept that these broader social outcomes might be beneficial. I was though trying to make some other points:

    a) The fact that these interventions appear to have had quite different outcomes in some of the core areas (one has an impact on criminality and one does not, one has an impact on IQ and one does not, etc) suggests that if early childhood interventions are effective then the specific outcome is somehow linked to some specific issues of content – whereas most of the debate is around the concept of intervening, with very little reference to the nature of the intervention let alone identifying this as being the critical component;

    b) In the same way that unanticipated outcomes might be important in terms of cost-effectiveness, it might be an unanticipated impact of the program that is effective in achieving the outcome. Various mechanisms can be suggested: getting the children away from their parents, or the fact that parents were aware that if the kid had been bashed at home this could be seen by teachers the next day.

    c) If the social benefits were obtained through unexpected mechanisms, it might be even more cost-effective to target these directly – for example, whether the maternal employment gains could have been achieved without the intensity of the program.

    While you may be correct in decrying the lack of random assignment to experimental interventions – surely there is a wealth of data out there. There have been hundreds of different programs tried, especially in the US. There are different school starting ages, different childcare centres have differing degrees of pedagogy versus play, etc. If there are big effects, surely these have been and can be observed. (At the same time I have some reservations about the claim

    DD on Heckman and evaluation – I really wonder in this debate whether Heckman should stick to the two-step on his own data rather than waltzing along to other people’s analysis.

  12. derrida derider says:

    Well, Paul, I’m not about to argue the relative merits of nonexperimental vs random methods with you (beyond noting as an aside that the Heckman procedure is hardly the last word in nonexperimental methods, even for Heckman). Your expertise in these things surpasses mine.

    It seems to me, though, that what you’re saying is not merely that you’re agnostic about the particular issue at hand – the effects of selective early childhood intervention – but that you have a radical agnosticism about all program evaluation. Which leaves those of us who actually have to try and formulate policy in a bit of a hole.

    For myself I prefer evidence-based policy, even where the evidence is highly imperfect, to the alternative we usually see – faith based policy justified by policy-based evidence (vide Iraq).

  13. Paul Frijters says:

    The issue of what to do when experimental evidence on longitudinal social interventions is exceptionally weak is important. I’m not that sympathetic to Derrida’s argument of ‘well, it may be weak, but it’s the best we’ve got, so we’ll run with the outcomes of these experiments anyway’. Mainly I think it understates how much knowledge and information we have on exactly the same issue from other sources.
    In particular, we know from historical ‘events’ how easy or difficult it is to change the behaviour of whole swathes of young children. The Ottomans tried to change the behaviour of the kids in their possession (they simply took some of the kids of the conquered with them), and Australia too has had recent experience of actively changing children by essentially switching parents. Many more such historical ‘experiments’ have been tried, and they give much larger numbers than anything in a recent journal. Also, as human beings ourselves, we experiment all the time in terms of manipulating and changing the children and adults around us, whether as parent, boss, or friend. The many thousands of experiments we’ve been part of as children and adults hence give us a lot of information as to how much and how fast others can be changed in various circumstances. Hence it is simply not true that ‘all we have’ is a couple of small studies that suffer from dozens of problems. When the information from official small-scale experiments is subject to extreme uncertainty, it simply makes sense to revert to historical knowledge and personal experience.
    A good example of how nearly all of us discount some ‘scientific information’ if it violates our other sources of knowledge is the effect of class sizes on schooling outcomes. If you take the ‘available evidence’ seriously, you’d not bother with reducing class sizes. Yet policy makers in Australia, backed by their population, simply don’t believe that available evidence and go with their ‘gut feeling’ that it surely helps a child substantially to get more attention. As parents, too, we complain when there are too many children in a class. And as scientists we thus shrug our shoulders when yet another paper says the authors can’t find an effect, and simply presume that selection somewhere has perverted the outcomes.

    And if you don’t like the example of class size, try the effect of negative amenities on wages. You almost invariably (though not quite always) find that the dirtiest, least healthy jobs are the ones paying less, whilst any economic theory will tell you they should pay more to compensate for the risks. Our gut feeling that surely selection was to blame nevertheless prevented scholars for decades from taking these ‘preliminary findings’ seriously.
    Hence it’s folly for a policy maker or a politician to use ‘official’ information in the scientific journals as the only source of information to base decisions on. He or she would quickly be booted out. And rightly so.

  14. derrida derider says:

    But Paul, your examples convincingly demonstrate that policy in fact is often not evidence based, not that it shouldn’t be evidence based.

    It seems to me in fact that the cases you cite support my position, not yours. Hard, if imperfect, evidence was ignored in favour of “historical knowledge and personal experience”. Such “historical knowledge and personal experience” is much more subject to the postmodern critique than more systematically collected data (Personal experience differs for everyone. Historical knowledge is innately highly selective. The consequence is that the version of “knowledge” from these sources that comes to be accepted is that which is the view of those with power to impose it).

    I’ve had plenty of chances over my career to observe first-hand the results of policymakers “knowing” what they should do and not allowing their moral clarity to be blurred by inconvenient evidence. And as I noted, 600k Iraqis are dead because of just such an approach.

  15. Pingback: CoreEcon » Blog Archive » A tale of two graphs

Comments are closed.