Newswise — Estimates of treatment outcomes in meta-analyses may differ depending on the analysis strategy used, and those differences can result in major alterations in the conclusions derived from the analysis, according to a study in the August 13 issue of JAMA.

Meta-analyses of randomized clinical trials (RCTs) are generally considered to provide among the best evidence of the efficacy of medical interventions. They should be conducted as part of a systematic review, a scientifically rigorous approach that identifies, selects, and appraises all relevant studies. However, which trials to combine in a meta-analysis remains a persistent dilemma: meta-analysis of all trials may produce a precise but biased estimate, according to background information in the article.

Agnes Dechartres, M.D., Ph.D., of the Centre de Recherche Epidemiologie et Statistique, INSERM U1153, Paris, and colleagues compared treatment outcomes estimated by meta-analysis of all trials and several alternative strategies for analysis: single most precise trial (i.e., trial with the narrowest confidence interval), meta-analysis restricted to the 25 percent largest trials, limit meta-analysis (a meta-analysis model adjusted for small-study effect), and meta-analysis restricted to trials at low overall risk of bias. The researchers included 163 meta-analyses published between 2008 and 2010 in high-impact-factor journals and between 2011 and 2013 in the Cochrane Database of Systematic Reviews: 92 (705 RCTs) with subjective outcomes and 71 (535 RCTs) with objective outcomes.
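To make the comparison concrete, here is a minimal sketch of how such strategies can be applied to one set of trials. The Python code below uses invented trial data and a standard DerSimonian-Laird random-effects model; it is an illustration of the strategies described above, not the authors' analysis code, and it omits the limit meta-analysis and the risk-of-bias restriction for brevity.

```python
import numpy as np

# Illustrative trial-level data (made up for this sketch): each trial has an
# effect estimate y (e.g., a log odds ratio), a standard error se, and a
# sample size n.
trials = [
    (-0.45, 0.30, 120),
    (-0.60, 0.25, 180),
    (-0.20, 0.12, 950),
    (-0.15, 0.10, 1400),
    (-0.50, 0.35, 90),
    (-0.35, 0.28, 150),
    (-0.10, 0.09, 2100),
    (-0.40, 0.22, 260),
]
y = np.array([t[0] for t in trials])
se = np.array([t[1] for t in trials])
n = np.array([t[2] for t in trials])

def random_effects(y, se):
    """DerSimonian-Laird random-effects pooling with inverse-variance weights."""
    w = 1.0 / se**2
    theta_fixed = np.sum(w * y) / np.sum(w)      # fixed-effect estimate
    q = np.sum(w * (y - theta_fixed) ** 2)       # Cochran's Q statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-trial variance
    w_star = 1.0 / (se**2 + tau2)
    pooled = np.sum(w_star * y) / np.sum(w_star)
    return pooled, np.sqrt(1.0 / np.sum(w_star))

# Strategy 1: meta-analysis of all trials.
all_est, all_se = random_effects(y, se)

# Strategy 2: single most precise trial (narrowest confidence interval).
i = int(np.argmin(se))
precise_est, precise_se = y[i], se[i]

# Strategy 3: meta-analysis restricted to the largest 25 percent of trials.
big = n >= np.quantile(n, 0.75)
big_est, big_se = random_effects(y[big], se[big])

for name, est, s in [("all trials", all_est, all_se),
                     ("most precise trial", precise_est, precise_se),
                     ("largest 25% of trials", big_est, big_se)]:
    print(f"{name:>22}: {est:+.2f} "
          f"(95% CI {est - 1.96*s:+.2f} to {est + 1.96*s:+.2f})")
```

With invented data like these, the strategies give visibly different answers: the all-trials estimate is pulled toward the smaller, less precise trials and comes out noticeably larger than the estimate from the most precise trial or from the largest trials, the same direction of disagreement the study reported.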

The researchers found that treatment outcome estimates differed depending on the analytic strategy used, with treatment outcomes frequently being larger with meta-analysis of all trials than with the single most precise trial, meta-analysis of the largest trials, and limit meta-analysis. The difference in treatment outcomes between these strategies was substantial in 47 of 92 (51 percent) meta-analyses of subjective outcomes and in 28 of 71 (39 percent) meta-analyses of objective outcomes. The authors did not find any difference in treatment outcomes by overall risk of bias.

“In this study, we compared meta-analysis of all trials with several ‘best-evidence’ alternative strategies and found that estimated treatment outcomes differed depending on the strategy used. We cannot say which strategy is the best because … we cannot know with 100 percent certainty the truth in any research question. Nevertheless, our results raise important questions about meta-analyses and outline the need to re-think certain principles,” the researchers write.

“We recommend that authors of meta-analyses systematically assess the robustness of their results by performing sensitivity analyses. We suggest the comparison of the meta-analysis result to the result for the single most precise trial or meta-analysis of the largest trials and careful interpretation of the meta-analysis result if they disagree.” (doi:10.1001/jama.2014.8166; Available pre-embargo to the media at http://media.jamanetwork.com)
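Continuing the sketch above, that sensitivity check can be as simple as comparing the pooled result against a reference strategy and flagging disagreement. The non-overlapping-confidence-interval criterion below is an illustrative choice, not one taken from the study.

```python
# Continuation of the earlier sketch (reuses all_est, all_se, precise_est,
# precise_se). Flags disagreement between two strategies when their 95%
# confidence intervals do not overlap -- an illustrative criterion only.
def cis_overlap(est1, se1, est2, se2, z=1.96):
    lo1, hi1 = est1 - z * se1, est1 + z * se1
    lo2, hi2 = est2 - z * se2, est2 + z * se2
    return lo1 <= hi2 and lo2 <= hi1

if not cis_overlap(all_est, all_se, precise_est, precise_se):
    print("All-trials meta-analysis disagrees with the single most precise "
          "trial; interpret the pooled result cautiously.")
```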

Editor’s Note: Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Editorial: Meta-analysis as Evidence: Building a Better Pyramid

Jesse A. Berlin, Sc.D., of Johnson & Johnson, Titusville, N.J., and Robert M. Golub, M.D., Deputy Editor, JAMA, write in an accompanying editorial that “findings such as those in the study by Dechartres et al reinforce concerns that journals and readers have about meta-analysis as a study design. Those findings deserve consideration not only in the planning of the studies but in the journal peer review and evaluation. They also reinforce the need for circumspection in study interpretation.”

“Meta-analysis has the potential to be the best source of evidence to inform decision making. The underlying methods have become much more sophisticated in the last few decades, but achieving this potential will require continued advances in the underlying science, parallel to the advances that have occurred with other biomedical research design and statistics. Until that occurs, an informed reader must approach these studies, as with all other literature, as imperfect information that requires critical appraisal and assessment of applicability of the findings to individual patients. This is not easy, and it requires skill and intelligence. Whatever clinical evidence looks like, and wherever it is placed on a pyramid, there are no shortcuts to truth.” (doi:10.1001/jama.2014.8167; Available pre-embargo to the media at http://media.jamanetwork.com)

Editor’s Note: The authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest and none were reported.

# # #
