
Perils of Meta-Analysis

An enormous amount of research is published these days, and one way people try to cope with it is meta-analysis. A meta-analysis takes a large number of studies of the same topic, converts their findings to a standard form (usually an odds ratio), and then analyzes the trend in these converted findings.
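To make that standardization concrete, here is a minimal sketch in Python of one conventional approach: each study's 2x2 table is reduced to a log odds ratio, and the studies are pooled with inverse-variance (fixed-effect) weights. The counts are invented for illustration, and this is only one of several pooling methods a meta-analyst might use.

    import math

    def log_odds_ratio(a, b, c, d):
        # a, b = events and non-events in the treatment group;
        # c, d = events and non-events in the control group.
        log_or = math.log((a * d) / (b * c))
        variance = 1/a + 1/b + 1/c + 1/d   # Woolf's approximation
        return log_or, variance

    def pooled_odds_ratio(tables):
        # Fixed-effect pooling: weight each study by 1/variance.
        num = den = 0.0
        for a, b, c, d in tables:
            y, v = log_odds_ratio(a, b, c, d)
            num += y / v
            den += 1.0 / v
        return math.exp(num / den)

    # Invented counts for three hypothetical studies:
    studies = [(12, 88, 20, 80), (30, 170, 45, 155), (8, 42, 14, 36)]
    print(pooled_odds_ratio(studies))   # pooled odds ratio across studies

Everything that follows in this article is about what that tidy pooled number conceals: the judgments made in choosing and weighing the studies that go into it.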

While meta-analysis is formally an evaluative technique (since you can formulate null hypotheses for meta-analyses), I believe you are better off treating it as an exploratory method. The main reason is that the consumer of a meta-analysis has to put too much faith in the judgment of the meta-analyst.

First of all, unless you're willing to put in a lot of work to verify that the selection of findings in a meta-analysis is comprehensive and representative of the literature, you have to take the meta-analysts' word that it is. Many years ago, a study of the effects of class size on which I worked was included in one of the first modern meta-analyses. However, the report from which the data were taken was a conference paper, and it provided data mainly for comparisons in which significant differences between classes of different sizes had been observed. Data on the great majority of non-significant comparisons appeared only in the project report, which the meta-analysts failed to include. They also failed to include a large number of other findings reported in a conventional literature review completed only a couple of years before.

You are also dependent on the meta-analysts' judgment about the adequacy of the research studies from which the findings are taken. Recently I was reviewing articles about rehabilitation of the sequelae of brain injury. One article reported that two different methods of rehabilitation did not differ in effectiveness. A few months later a letter appeared in the same journal from the originator of one of the methods, who pointed out that it had been used at a stage in recovery for which it was not recommended. If that study were considered for inclusion in a meta-analysis (and the medical research literature is full of meta-analyses), you would have to hope that the meta-analysts noticed the letter.

Originally, meta-analysts argued that poorly designed studies still provided some information and should therefore be included in meta-analyses. The problem is that poorly designed studies can incorporate two types of error: unnecessarily large random error and systematic error. Aggregating studies helps with random error, because independent random errors tend to cancel out as more studies are averaged, but it does nothing about systematic error, which pushes every affected study in the same direction. And as the last example shows, a poorly designed study can be completely uninformative. At the very least, a comparison of well designed and poorly designed studies should be provided.
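A toy simulation, with made-up numbers, illustrates the asymmetry: averaging many noisy but unbiased estimates converges on the true effect, while averaging estimates that share a systematic bias converges, just as confidently, on the wrong value.

    import random

    random.seed(1)
    TRUE_EFFECT = 0.5   # the effect we are trying to estimate
    BIAS = 0.3          # systematic error shared by flawed studies
    NOISE = 0.4         # random error in any single study's estimate

    def pooled_estimate(n_studies, bias):
        # Simple average of n_studies independent study estimates.
        draws = [TRUE_EFFECT + bias + random.gauss(0, NOISE)
                 for _ in range(n_studies)]
        return sum(draws) / n_studies

    for n in (5, 50, 500):
        unbiased = pooled_estimate(n, 0.0)
        biased = pooled_estimate(n, BIAS)
        print(f"{n:3d} studies: unbiased {unbiased:.3f}, biased {biased:.3f}")
    # As n grows, the unbiased pool converges to 0.5 while the biased
    # pool converges to 0.8: aggregation averages away the random error
    # but leaves the systematic error untouched.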

Meta-analysis is an effective way to reach tentative conclusions about a topic, conclusions which can then be assessed by further research or by a more conventional literature review. In my experience, the most informative literature reviews are those written by someone who started by consulting recent review articles and then worked backwards through the articles they cite.

Perils of Meta-Analysis © 2001, 2006 John FitzGerald