Improving Meta-Analysis in Educational Research

Reflections on my Research

With the publication of our research article in Educational Research Review titled “Examining research on the impact of distance and online learning: A second-order meta-analysis study,” I have now officially published eight meta-analyses in educational technology or computer science education on topics related to my research interests, with several more on the way. About seven years ago, I attended Dr. Borenstein’s workshop on meta-analysis and have been running with the method ever since. I strongly encourage you to attend his workshop and read his book if you are interested in this research method. He is a gifted teacher and author, and he still answers my email questions to this day. However, now that I might label myself as more experienced in this methodology, I have learned that the approach has several limitations driven by two factors: 1) the quality of the research methods employed in the primary studies, and 2) the depth of reporting on salient features within a primary research study.

Quality of Research Methods Employed

I cannot remember where I read or heard this, but one of the primary criticisms of the meta-analysis approach is that we are “comparing apples and oranges.” This speaks to the differences in the interventions, measures, implementations, and other facets of the research studies that might qualify for a given meta-analysis. However, if the primary research methods are implemented with attention to various validity concerns, the meta-analysis yields more meaningful and robust findings. Thus, while we might be “comparing apples and oranges,” at the end of the day, they are all “fruit.” One problem I have repeatedly faced is the fear that scrutinizing an author’s primary study through a quality moderator or exclusion criterion may offend the original authors, or may inadvertently discard useful information when building a model. I have concluded that I should not shy away from this reality: not all educational research is created equally.

There are several ways to address this concern in a meta-analysis. One is simply to exclude articles that do not meet the minimal requirements one sets for quality (e.g., controlling for confounding variables, providing sufficient evidence of reliability or validity for dependent measures, or analyzing data with appropriate statistical methods). Another strategy, often recommended to avoid losing potentially useful information, is to treat study quality as a moderator in the model and report effect sizes for the varying levels of study quality identified in the meta-analysis (e.g., high, medium, or low). To illustrate the point with one of my earlier and more frequently cited meta-analyses on the flipped classroom, we intentionally did not treat study quality as a moderator because we feared insulting the primary study authors. Almost all of the studies in the flipped classroom meta-analysis were quasi-experimental in nature, and not all of them equally addressed the validity issues noted above. In hindsight, I now believe that a quality moderator is not an insult to the primary study authors, but rather an honest account of where we stand in the field and of the integrity of the effect sizes used in the meta-analysis. Lesson learned!
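To make the moderator strategy concrete, here is a minimal sketch of subgroup pooling by study quality, using standard inverse-variance weighting with a fixed-effect model for simplicity. The study data and quality ratings below are fabricated for illustration only; a real analysis would use a random-effects model and dedicated software (e.g., Comprehensive Meta-Analysis or the R metafor package).

```python
def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooled estimate and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

# (effect size g, variance, quality rating) -- fabricated example studies
studies = [
    (0.45, 0.02, "high"),
    (0.30, 0.05, "high"),
    (0.60, 0.04, "medium"),
    (0.10, 0.08, "low"),
    (0.25, 0.06, "low"),
]

# Report a pooled effect per quality subgroup instead of excluding studies
for quality in ("high", "medium", "low"):
    subset = [(e, v) for e, v, q in studies if q == quality]
    if subset:
        effects, variances = zip(*subset)
        est, var = pooled_effect(effects, variances)
        print(f"{quality}: g = {est:.2f} (SE = {var ** 0.5:.2f})")
```

Comparing the subgroup estimates (and formally testing their difference) tells readers whether the overall conclusion holds up among the more rigorous studies, which is precisely the honest account described above.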

Depth of Reporting on Salient Features

Perhaps one of the most disturbing parts of conducting a meta-analysis, particularly one that surfaces a large body of potentially qualifying literature, is the reporting practices of the primary study authors. For example, in my first meta-analysis on the use of pair programming in introductory programming courses, we discovered that very few of the qualifying manuscripts reported details of likely relevance to readers, such as which Integrated Development Environment was used in the course, which programming language was used for instruction, or how the researchers ensured accountability between the two members of each pair on the programming assignments. I see this as two related problems: 1) journals and conference proceedings have strict word limits, and many reviewers and editors favor reporting the statistics over describing the pedagogy used in the implementation, and 2) in the case of the pair-programming meta-analysis, many computer science researchers are not prepared to conduct and report education research. In both cases, I believe education is the key to addressing these issues long-term. First, editors and reviewers need to recognize that there are important details that should be shared about the individual studies themselves. Further, those conducting educational research should partner with an educational researcher to ensure these issues are addressed with care. There are some workarounds, such as contacting the authors for details, but this offers no guarantee: some authors may not respond, and others simply may not remember.

Looking Forward

Though I have been using meta-analysis to address the various research questions I am interested in pursuing, and have a track record of successfully disseminating these projects, I still do not consider myself an expert. There is always more to learn from project to project. My primary rationale in writing this entry is to call upon our communities to address these concerns in our work and to improve the potential of meta-analysis in our respective fields.