The Meta-Analysis of Research Studies

 

Overviews

Meta-Analysis in Educational Research [1991] - Robert L. Bangert-Drowns & Lawrence M. Rudner


Online Reports

Meta-Analysis at 25 [1999] - Gene V Glass [Conference Address]

Meta-Analysis: Methods of Accumulating Results Across Research Domains [1998] - Larry C. Lyons [Online Book]

Meta-Analysis Gaining Status in Science and Policymaking [1997] - Stephen P. Hoffert [Journal Article; from The Scientist, Vol. 11, No. 18, pp. 1, 5]

In Praise of Failure: Failed Meta-analysis [1996] - Bandolier Publishers [Editorial policy statement]

Meta-analysis: potential and promise [1998] - Matthias Egger & George Davey Smith [British Medical Journal]

Meta-analysis: principles and procedures [1998] - Matthias Egger, George Davey Smith & Andrew N Phillips [British Medical Journal]

Meta-analysis: beyond the grand mean [1998] - George Davey Smith, Matthias Egger & Andrew N Phillips [British Medical Journal]

Meta-analysis: bias in location [1998] - Matthias Egger & George Davey Smith [British Medical Journal]

Meta-analysis: spurious precision [1998] - Matthias Egger, Martin Schneider & George Davey Smith [British Medical Journal]

Meta-analysis: unresolved issues and future developments [1998] - George Davey Smith & Matthias Egger [British Medical Journal]


Software

Meta-Stat (free; Rudner, Glass, Evartt & Emery)

Meta-Analysis (free; Schwarzer)

Meta-Analysis Easy to Answer (free; Kenny)

MetaWin

Comprehensive Meta-Analysis

Meta-Analysis Software [comparative software review from British Medical Journal, 1998]


Bibliographies

NIH (static)
ERIC (static)
ERIC (dynamic)


Links

Will Shadish's meta-analysis page
 

 

Meta-analysis refers to the analysis of analyses ... the statistical analysis of a large collection of analysis results from individual studies for the purpose of integrating the findings. (Glass, 1976, p. 3)

This page is intended to help you learn more about meta-analysis by providing you with an overview; links to relevant documents and resources; and free, high-quality software.

 

 

Research literature, it is often pointed out, is growing at an exponential rate. One study estimated that there are 40,000 journals for the sciences, and that researchers are filling those journals at the rate of one article every 30 seconds, 24 hours a day, seven days a week (Mahoney, 1985). No matter what the topic—from computer-aided instruction to sex differences to the effects of medication on hyperactivity—researchers can, in just a few years, add dozens and even hundreds of studies to the literature.

As research results accumulate, it becomes increasingly difficult to understand what they tell us. It becomes increasingly difficult to find the knowledge in this flood of information.

In 1976, Gene Glass proposed a method to integrate and summarize the findings from a body of research. He called the method meta-analysis. Meta-analysis is the statistical analysis of a collection of individual studies.

Meta-analysis refers to the analysis of analyses. I use it to refer to the statistical analysis of a large collection of results from individual studies for the purpose of integrating the findings. It connotes a rigorous alternative to the casual, narrative discussions of research studies which typify our attempts to make sense of the rapidly expanding research literature. (Glass, 1976)

Using the traditional method of integrating research studies, a reviewer provides a narrative, chronological discourse on previous findings. Yet this method is flawed and inexact:

Unable to deal with the large number of studies on a topic, reviewers focus on a small subset of studies, often without describing how the subset was selected.
Reviewers often cite the conclusions of previous reviews without examining those reviews critically.
Reviewers are usually active and prominent in the field under review. Therefore, they might not be inclined to give full weight to evidence that is contrary to their own positions.

In a meta-analysis, research studies are collected, coded, and interpreted using statistical methods similar to those used in primary data analysis. The result is an integrated review of findings that is more objective and exact than a narrative review.

The human mind is not equipped to consider simultaneously a large number of alternatives. (This is true even for bright, energetic researchers.) Confronted with the results of 20 similar studies, the mind copes only with great difficulty. Confronted with 200, the mind reels. Yet that is exactly the scope of the problem faced by a researcher attempting to integrate the results from a large number of studies. As a result,

The typical review concludes that the research is in horrible shape; sometimes one gets results, sometimes one doesn't. Then the call is sounded for better research designs, better measures, better statistical methods—in short, a plaintive wish that things were not so complicated as they are. (Glass, 1976)

When performed on a computer, meta-analysis helps the reviewer surmount the complexity problem. The reviewer can code hundreds of studies into a data set. The data set can then be manipulated, measured, and displayed by the computer in a variety of ways.
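To make this concrete, here is a minimal sketch of what such a coded data set might look like and how a computer can summarize it. The study features, effect sizes, and category names below are entirely hypothetical, invented for illustration; a real review would define its own coding scheme.

```python
from collections import defaultdict

# A sketch of a coded meta-analysis data set: each record holds study
# features (year, design, grade level) alongside the outcome expressed
# as an effect size. All values are hypothetical.
coded_studies = [
    {"year": 1984, "design": "randomized", "grade": "elementary", "effect": 0.41},
    {"year": 1987, "design": "matched",    "grade": "secondary",  "effect": 0.18},
    {"year": 1990, "design": "randomized", "grade": "secondary",  "effect": 0.35},
    {"year": 1992, "design": "randomized", "grade": "elementary", "effect": 0.52},
]

# Once coded, the data set can be sliced and displayed many ways --
# for example, the mean effect size grouped by study design.
by_design = defaultdict(list)
for s in coded_studies:
    by_design[s["design"]].append(s["effect"])

for design, effects in sorted(by_design.items()):
    print(design, round(sum(effects) / len(effects), 2))
```

The same data set could just as easily be grouped by grade level, by decade, or by any other coded feature, which is precisely the flexibility the reviewer gains over a narrative summary.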

Researchers can tolerate ambiguity well. Policy makers, however, particularly elected policy makers, have a limited time in which to act. They look to research to provide information that will help them choose among policy options.

Unfortunately, original research, and narrative reviews of the research, often do not provide clear options to policy makers. Senator Walter Mondale expressed his frustration to the American Psychological Association in 1970:

What I have not learned is what we should do about these [educational] problems. . . . For every study, statistical or theoretical, that contains a proposed solution or recommendation, there is always another equally well-documented study, challenging the assumptions or conclusions of the first. No one seems to agree with anyone else's approach. But more distressing: no one seems to know what works.

A scientific study should be designed and reported in such a way that it can be replicated by other researchers. However, researchers seldom attempt to replicate previous findings. Instead, they pursue funding for the new, the novel, or—at the very least—they attempt to extend what is considered to be the current state of knowledge in their field. The result can be an overwhelming number of studies on a given topic, with no two studies exactly alike.

In such circumstances, it is difficult to determine whether the differences between study outcomes are due to chance, to inadequate study methods, or to systematic differences in the characteristics of the studies.

Meta-analysis can help you investigate the relationship between study features and study outcomes. You code the study features according to the objectives of the review. You transform the study outcomes to a common metric so that you can compare the outcomes. Last, you use statistical methods to show the relationships between study features and outcomes.
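The "common metric" step and the combination step can be sketched in a few lines. The example below uses one standard choice, the standardized mean difference with Hedges' small-sample correction, combined by inverse-variance weighting, and computes the Q statistic that asks whether the studies vary more than chance alone would predict. The summary statistics are hypothetical, and real meta-analyses involve further choices (random-effects models, moderator analyses) not shown here.

```python
import math

# Hypothetical treatment/control summary statistics from three studies.
studies = [
    {"m_t": 52.0, "m_c": 47.0, "sd_t": 10.0, "sd_c": 9.0,  "n_t": 30, "n_c": 30},
    {"m_t": 55.0, "m_c": 50.0, "sd_t": 12.0, "sd_c": 11.0, "n_t": 45, "n_c": 40},
    {"m_t": 49.0, "m_c": 48.0, "sd_t": 8.0,  "sd_c": 8.5,  "n_t": 25, "n_c": 28},
]

def hedges_g(s):
    """Standardized mean difference with Hedges' small-sample correction."""
    n_t, n_c = s["n_t"], s["n_c"]
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n_t - 1) * s["sd_t"] ** 2 + (n_c - 1) * s["sd_c"] ** 2)
                   / (n_t + n_c - 2))
    d = (s["m_t"] - s["m_c"]) / sp
    j = 1 - 3 / (4 * (n_t + n_c) - 9)  # small-sample correction factor
    return j * d

def variance_g(g, n_t, n_c):
    """Approximate sampling variance of Hedges' g."""
    return (n_t + n_c) / (n_t * n_c) + g ** 2 / (2 * (n_t + n_c))

gs = [hedges_g(s) for s in studies]
ws = [1 / variance_g(g, s["n_t"], s["n_c"]) for g, s in zip(gs, studies)]

# Fixed-effect summary: inverse-variance weighted mean effect size
g_bar = sum(w * g for w, g in zip(ws, gs)) / sum(ws)

# Q statistic: do study outcomes differ more than chance would predict?
q = sum(w * (g - g_bar) ** 2 for w, g in zip(ws, gs))
print(f"mean effect size = {g_bar:.2f}, Q = {q:.2f} (df = {len(studies) - 1})")
```

Once every outcome is on the same scale, coded study features (design, grade level, publication year) can be related to the effect sizes to find out not just whether a treatment works, but under what conditions.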

from Rudner, Glass, Evartt, & Emery (2002). A user's guide to the meta-analysis of research studies         

Regardless of the software package (if any) you use to meta-analyze research findings, I encourage you to look at the manual for Meta-Stat - A user's guide to the meta-analysis of research studies, by Lawrence Rudner, Gene Glass, David Evartt, and Patrick Emery. This on-line manual provides step-by-step instructions on the design, coding, and analysis of meta-analytic studies.

This web site, Meta-Stat, and on-line manual are made available through the auspices of the ERIC Clearinghouse on Assessment and Evaluation, Department of Measurement, Statistics and Evaluation, University of Maryland, College Park.