The Meta Analysis of Research Studies
Research literature, it is often pointed out, is growing at an exponential rate. One study estimated that there are 40,000 journals for the sciences, and that researchers are filling those journals at the rate of one article every 30 seconds, 24 hours a day, seven days a week (Mahoney, 1985). No matter what the topic—from computer-aided instruction to sex differences to the effects of medication on hyperactivity—researchers can, in just a few years, add dozens and even hundreds of studies to the literature. As research results accumulate, it becomes increasingly difficult to understand what they tell us. It becomes increasingly difficult to find the knowledge in this flood of information. In 1976, Gene Glass proposed a method to integrate and summarize the findings from a body of research. He called the method meta-analysis. Meta-analysis is the statistical analysis of a collection of individual studies.
Using the traditional method of integrating research studies, a reviewer provides a narrative, chronological discourse on previous findings. Yet this method is flawed and inexact.
In a meta-analysis, research studies are collected, coded, and interpreted using statistical methods similar to those used in primary data analysis. The result is an integrated review of findings that is more objective and exact than a narrative review.
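To make "statistical methods similar to those used in primary data analysis" concrete, here is a minimal sketch of the most common first step: converting each study's reported statistics to a standardized mean difference and pooling the results. The study numbers, the choice of Hedges' g, and the fixed-effect weighting are illustrative assumptions, not a prescription from the authors.

```python
# Minimal sketch: put each study's outcome on a common metric (Hedges' g)
# and pool the studies with fixed-effect, inverse-variance weights.
# The three example studies below are invented purely for illustration.
import math

studies = [
    # (treatment mean, treatment SD, n_t, control mean, control SD, n_c)
    (52.0, 10.0, 30, 48.0, 11.0, 30),
    (75.0, 14.0, 45, 70.0, 15.0, 40),
    (3.2,  0.8,  25, 3.0,  0.9,  28),
]

def hedges_g(m_t, sd_t, n_t, m_c, sd_c, n_c):
    """Standardized mean difference with the small-sample correction."""
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (m_t - m_c) / sd_pooled
    g = d * (1 - 3 / (4 * (n_t + n_c) - 9))          # Hedges' correction factor
    var_g = (n_t + n_c) / (n_t * n_c) + g**2 / (2 * (n_t + n_c))
    return g, var_g

effects = [hedges_g(*s) for s in studies]

# Fixed-effect pooled estimate: weight each study by the inverse of its variance.
weights = [1 / v for _, v in effects]
pooled = sum(w * g for (g, _), w in zip(effects, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))

for i, (g, v) in enumerate(effects, 1):
    print(f"study {i}: g = {g:.2f} (var {v:.3f})")
print(f"pooled effect: {pooled:.2f} +/- {1.96 * se_pooled:.2f} (95% CI half-width)")
```

A random-effects model, which allows the true effect to vary across studies, is often preferred when studies differ systematically; the inverse-variance weighting logic is the same.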
The human mind is not equipped to consider simultaneously a large number of alternatives. (This is true even for bright, energetic researchers.) Confronted with the results of 20 similar studies, the mind copes only with great difficulty. Confronted with 200, the mind reels. Yet that is exactly the scope of the problem faced by a researcher attempting to integrate the results from a large number of studies. When performed on a computer, meta-analysis helps the reviewer surmount this complexity problem. The reviewer can code hundreds of studies into a data set, and the computer can then manipulate, measure, and display that data set in a variety of ways.
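As a sketch of what such a coded data set might look like, the following uses a small pandas table; the study names, coded features (grade_level, duration_weeks), and effect sizes are hypothetical. Once the studies are in this form, the whole collection can be summarized, grouped, and sorted in a few lines rather than read study by study.

```python
# Minimal sketch of coding studies into a data set the computer can
# manipulate, measure, and display. All rows and column names are invented.
import pandas as pd

coded = pd.DataFrame([
    # one row per study: identifier, coded features, and the effect size
    {"study": "A (1998)", "grade_level": "elementary", "duration_weeks": 6,  "n": 60,  "effect_size": 0.35},
    {"study": "B (2001)", "grade_level": "secondary",  "duration_weeks": 12, "n": 85,  "effect_size": 0.22},
    {"study": "C (2004)", "grade_level": "elementary", "duration_weeks": 8,  "n": 53,  "effect_size": 0.41},
    {"study": "D (2006)", "grade_level": "secondary",  "duration_weeks": 4,  "n": 120, "effect_size": 0.10},
])

print(coded["effect_size"].describe())                     # overall distribution of outcomes
print(coded.groupby("grade_level")["effect_size"].mean())  # outcomes by a coded feature
print(coded.sort_values("effect_size", ascending=False))   # ranked display of the studies
```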
Researchers can tolerate ambiguity well. Policy makers, however, particularly elected policy makers, have a limited time in which to act. They look to research to provide information that will help them choose among policy options. Unfortunately, original research, and narrative reviews of the research, often do not provide clear options to policy makers. Senator Walter Mondale expressed this frustration to the American Psychological Association in 1970.

A scientific study should be designed and reported in such a way that it can be replicated by other researchers. However, researchers seldom attempt to replicate previous findings. Instead, they pursue funding for the new and the novel or, at the very least, attempt to extend what is considered to be the current state of knowledge in their field. The result can be an overwhelming number of studies on a given topic, with no two studies exactly alike. In such circumstances, it is difficult to determine whether the differences between study outcomes are due to chance, to inadequate study methods, or to systematic differences in the characteristics of the studies.

Meta-analysis can help you investigate the relationship between study features and study outcomes. You code the study features according to the objectives of the review. You transform the study outcomes to a common metric so that you can compare the outcomes. Last, you use statistical methods to show the relationships between study features and outcomes. (A small numerical sketch of this workflow appears at the end of this section.)

from Rudner, Glass, Evartt, & Emery (2002), A User's Guide to the Meta-Analysis of Research Studies

Regardless of the software package (if any) you use to meta-analyze research findings, I encourage you to look at the manual for Meta-Stat, A User's Guide to the Meta-Analysis of Research Studies, by Lawrence Rudner, Gene Glass, David Evartt, and Patrick Emery. This on-line manual provides step-by-step instructions on the design, coding, and analysis of meta-analytic studies. This web site, Meta-Stat, and the on-line manual are made available through the auspices of the ERIC Clearinghouse on Assessment and Evaluation, Department of Measurement, Statistics and Evaluation, University of Maryland, College Park.
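As a closing illustration of the workflow quoted above (code a study feature, put every outcome on a common metric, then relate features to outcomes), here is a minimal sketch. The numbers and the simple unweighted regression are assumptions made for illustration; a real review would weight studies by their precision and use purpose-built software such as Meta-Stat.

```python
# Minimal sketch: relate a coded study feature to study outcomes that have
# already been converted to a common effect-size metric. Data are hypothetical.
import numpy as np

duration_weeks = np.array([4, 6, 8, 10, 12, 16])                  # coded study feature
effect_size    = np.array([0.12, 0.30, 0.34, 0.41, 0.45, 0.52])   # common outcome metric

# Ordinary least-squares slope: how the outcome changes per week of treatment.
slope, intercept = np.polyfit(duration_weeks, effect_size, deg=1)
r = np.corrcoef(duration_weeks, effect_size)[0, 1]

print(f"effect size = {intercept:.2f} + {slope:.3f} * duration_weeks (r = {r:.2f})")
```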