Reporting scientific research is an essential component of the research and knowledge translation (KT) process. Knowledge translation is facilitated when research is reported and communicated with sufficient depth and accuracy for readers to interpret, synthesize, and use the study findings. KTDRR conducts activities and offers numerous resources and services to enhance and facilitate the reporting of high-quality research.
Standardized checklists for reporting research have been developed by various—often self-appointed—groups of researchers, clinicians, and editors to improve the clarity and completeness of research reporting in the professional and research literature.
Reporting checklists are not binding on anyone, although there are efforts to convince journal editors that they should require that authors adhere to applicable guidelines. The CONSORT statement has been most successful in this regard, having been adopted by many journals.
The KTDRR has collected a number of reporting checklists, including some still under development, which are presented below. Disability and rehabilitation researchers may want to follow these guidelines even if adherence is not required by the journal in which they plan to publish, because a more complete report will be more informative to readers, and will enhance the chances that the report will be included in systematic reviews.
In addition to the research reporting checklists, other groups and individual researchers have developed assessment criteria and checklists to evaluate the quality of published reports, including systematic reviews.
We acknowledge and thank the NCDDR's Task Force on Systematic Review and Guidelines for help in identifying some of these checklists, including those under development.
CHERRIES (Checklist for Reporting Results of Internet E-Surveys)
Use of the CHERRIES statement will give peer reviewers and readers a better understanding of Web-based surveys.
Eysenbach, G. (2004). Improving the quality of web surveys: The checklist for reporting results of internet E-surveys (CHERRIES). Journal of Medical Internet Research, 6(3), e34. Full-text retrieved July 17, 2007: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1550605
CONSORT (Consolidated Standards for Reporting Trials)
CONSORT offers an evidence-based approach to improving the quality of reports of randomized controlled trials (RCTs). Its 22-item checklist is designed to help authors report trials completely and to help readers evaluate the validity of RCTs.
The Checklist (PDF format):
Begg, C., Cho, M., Eastwood, S., Horton, R., Moher, D., Olkin, I., Pitkin, R., Rennie, D., Schulz, K.F., Simel, D., & Stroup, D.F. (1996). Improving the quality of reporting of randomized controlled trials: The CONSORT statement. JAMA, 276(8), 637-639. Full-text retrieved July 17, 2007: http://jama.ama-assn.org/cgi/content/citation/276/8/637
Endorsement of CONSORT and additional expansions, identified by the NCDDR's Task Force on Systematic Review and Guidelines:
Altman, D. G. (2005). Endorsement of the CONSORT statement by high impact medical journals: Survey of instructions for authors. British Medical Journal, 330, 1056-1057. Full-text retrieved July 17, 2007: http://www.bmj.com/cgi/content/full/330/7499/1056
Altman, D. G., Schulz, K. F., Moher, D., Egger, M., Davidoff, F., Elbourne, D., et al. (2001). The revised CONSORT statement for reporting randomized trials: Explanation and elaboration. Annals of Internal Medicine, 134(8), 663-694. Abstract with link to free full-text retrieved July 17, 2007 from PubMed.
Moher, D., Schulz, K. F., Altman, D., & CONSORT. (2001). The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomized trials. The Journal of the American Medical Association, 285, 1987-1991. Full-text retrieved July 7, 2007: http://www.biomedcentral.com/1471-2288/1/2
Expanded to cluster trials:
Campbell, M. K., Elbourne, D. R., Altman, D. G., & CONSORT group. (2004). CONSORT statement: Extension to cluster randomised trials. British Medical Journal, 328, 702-708. Full-text retrieved July 7, 2007: http://www.bmj.com/cgi/content/full/328/7441/702
Adapted for noninferiority/equivalence trials:
Piaggio, G., Elbourne, D. R., Altman, D. G., Pocock, S. J., Evans, S. J., & CONSORT Group. (2006). Reporting of noninferiority and equivalence randomized trials: An extension of the CONSORT statement. The Journal of the American Medical Association, 295, 1152-1160. Full-text retrieved July 17, 2007: http://jama.ama-assn.org/cgi/content/full/295/10/1152
Expanded for herbal medicine trials:
Gagnier, J.J., Boon, H., Rochon, P., Barnes, J., Bombardier, C., et al. (2006). Recommendations for reporting randomized controlled trials of herbal interventions: Explanation and elaboration. Journal of Clinical Epidemiology, 59, 1134-1149. Abstract with link to fee-based full-text retrieved July 17, 2007: http://www.ncbi.nlm.nih.gov/sites/entrez?db=pubmed&list_uids=17027423
Gagnier, J.J., Boon, H., Rochon, P., Moher, D., Barnes, J., Bombardier, C., et al. (2006). Reporting randomized, controlled trials of herbal interventions: An elaborated CONSORT statement. Annals of Internal Medicine, 144(5), 364-367. Abstract with link to fee-based full-text retrieved July 17, 2007: http://www.annals.org/cgi/content/abstract/144/5/364
Supplemented for homeopathic trials:
Dean, M.E., Coulter, M.K., Fisher, P., Jobst, K., & Walach, H. (2007). Reporting data on homeopathic treatments (RedHot): A supplement to CONSORT. Homeopathy, 96, 42-45. Abstract with link to fee-based full-text retrieved September 25, 2007 from PubMed.
Expanded for occupational therapy:
Moberg-Mogren, E., & Nelson, D.L. (2006). Evaluating the quality of reporting occupational therapy randomized controlled trials by expanding the CONSORT criteria. American Journal of Occupational Therapy, 60(2), 226-235. Abstract retrieved July 17, 2007 from PubMed.
Expanded for reporting on side effects/harms:
Ioannidis, J.P.A., Evans, S.J.W., Gøtzsche, P.C., O'Neill, R.T., Altman, D.G., Schulz, K., et al. (2004). Better reporting of harms in randomized trials: An extension of the CONSORT statement. Annals of Internal Medicine, 141(10), 781-788. Full-text retrieved July 17, 2007: http://www.annals.org/cgi/content/full/141/10/781
MOOSE (Meta-Analysis of Observational Studies in Epidemiology)
This checklist was developed following a workshop convened to address the diversity and variability in the reporting of meta-analyses of observational studies (Stroup et al., 2000).
Stroup, D.F., Berlin, J.A., Morton, S.C., et al. for the MOOSE Group. (2000). Meta-analysis of observational studies in epidemiology: A proposal for reporting. JAMA, 283(15), 2008-2012. Full-text retrieved July 17, 2007: http://jama.ama-assn.org/cgi/content/full/283/15/2008
PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses)
PRISMA is an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses. The PRISMA Statement is an update and expansion of the now-outdated QUOROM Statement, and its aim is to help authors improve the reporting of systematic reviews and meta-analyses. PRISMA focuses on reviews of randomized trials, but it can also be used as a basis for reporting systematic reviews of other types of research, particularly evaluations of interventions. PRISMA may also be useful for critical appraisal of published systematic reviews, although it is not a quality assessment instrument for gauging the quality of a systematic review.
The PRISMA Statement is an evolving document that is subject to change periodically as new evidence emerges.
QUOROM (Quality of Reporting of Meta-Analyses)
The QUOROM conference produced the QUOROM statement, a checklist, and a flow diagram for reporting meta-analyses and systematic reviews. QUOROM was replaced by PRISMA in 2009.
Moher, D., Cook, D.J., Eastwood, S., Olkin, I., Rennie, D., & Stroup, D.F. (1999). Improving the quality of reports of meta-analyses of randomised controlled trials: The QUOROM statement. The Lancet, 354, 1896-1900. Abstract retrieved September 25, 2007 from PubMed.
REMARK (REporting recommendations for tumor MARKer prognostic studies)
The guideline's purpose is to encourage transparent and complete reporting so that relevant information is available to others to judge the usefulness of the data and understand the context in which the conclusions apply.
McShane, L.M., Altman, D.G., Sauerbrei, W., Taube, S.E., Gion, M., & Clark, G.M. (2005). Reporting recommendations for tumor marker prognostic studies (REMARK). Journal of the National Cancer Institute, 97(16), 1180-1184. Abstract retrieved September 25, 2007 from PubMed. Full-text retrieved July 17, 2007: http://jnci.oxfordjournals.org/cgi/content/full/97/16/1180
STARD (Standards for Reporting of Diagnostic Accuracy)
A 25-item checklist was developed to improve the quality of reporting of studies of diagnostic accuracy.
Bossuyt, P.M., Reitsma, J.B., Bruns, D.E., Gatsonis, C. A., Glasziou, P.P., Irwig, L.M., et al. (2003). The STARD statement for reporting studies of diagnostic accuracy: Explanation and elaboration. Clinical Chemistry, 49, 7-18. Abstract retrieved September 25, 2007 from PubMed. Full-text retrieved July 17, 2007: www.clinchem.org/content/49/1/7.full.pdf
STARLITE (Sampling strategy, Type of study, Approaches, Range of years, Limits, Inclusion and exclusions, Terms used, Electronic sources)
A recommendation for reporting of literature searches in qualitative systematic reviews.
Booth, A. (2006). "Brimful of STARLITE": Toward standards for reporting literature searches. Journal of the Medical Library Association, 94(4), 421-9, e205. Full-text retrieved July 17, 2007: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1629442
STRICTA (STandards for Reporting Interventions in Controlled Trials of Acupuncture)
STRICTA is a set of recommendations for better reporting of acupuncture trial interventions, designed to be a supplement to CONSORT.
MacPherson, H., White, A., Cummings, M., et al. (2001). Standards for reporting interventions in controlled trials of acupuncture: The STRICTA recommendations. Complementary Therapies in Medicine, 9, 246-249. Full-text PDF retrieved July 17, 2007: http://www.stricta.info/STRICTA CTM 2001 9 246-9.pdf
STROBE (STrengthening the Reporting of OBservational studies in Epidemiology)
STROBE is an international, collaborative initiative of epidemiologists, methodologists, statisticians, researchers, and editors involved in the conduct and dissemination of observational studies.
von Elm, E., Altman, D. G., Egger, M., Pocock, S. J., Gøtzsche, P. C., & Vandenbroucke, J. P., for the STROBE Initiative. (2007). The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: Guidelines for reporting observational studies. PLoS Medicine, 4(10): e296.
Vandenbroucke J. P., von Elm, E., Altman, D. G., Gøtzsche, P. C., Mulrow, C. D., Pocock, S. J., et al. (2007). Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): Explanation and elaboration. PLoS Medicine, 4(10): e297.
TREND (Transparent Reporting of Evaluations with Nonrandomized Designs)
The mission of the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) group is to improve the reporting standards of nonrandomized evaluations of behavioral and public health interventions.
Des Jarlais, D.C., Lyles, C., Crepaz, N., & the TREND Group. (2004). Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND statement. American Journal of Public Health, 94, 361-366. Full-text PDF retrieved July 17, 2007.
Assessment of quality of life in clinical trials:
Staquet, M., Berzon, R., Osoba, D., & Machin, D. (1996). Guidelines for reporting results of quality of life assessments in clinical trials. Quality of Life Research, 5(5), 496-502. Abstract retrieved July 17, 2007 from PubMed.
Bayesian analysis of clinical studies:
Sung, L., Hayden, J., Greenberg, M.L., Koren, G., Feldman, B.M., & Tomlinson, G.A. (2005). Seven items were identified for inclusion when reporting a Bayesian analysis of a clinical study. Journal of Clinical Epidemiology, 58(3), 261-268. Abstract with link to fee-based full-text retrieved July 17, 2007 from PubMed.
Momentary assessment self-report data:
Stone, A. A. & Shiffman, S. (2002). Capturing momentary, self-report data: A proposal for reporting guidelines. Annals of Behavioral Medicine, 24(3), 236-243. Abstract retrieved July 17, 2007 from PubMed.
ACP Journal Club
The American College of Physicians publishes ACP Journal Club content, selected from over 100 clinical journals through reliable application of explicit criteria for scientific merit, followed by assessment of relevance to medical practice by clinical specialists (subscription required).
AMSTAR (Assessment of Multiple Systematic Reviews)
Shea et al. (2007) created a new tool for measuring the methodological quality of systematic reviews, by building upon previous tools, empirical evidence, and expert consensus.
Shea, B.J., Grimshaw, J.M., Wells, G.A., Boers, M., Andersson N., Hamel, C., Porter, A.C., Tugwell, P., Moher, D., & Bouter, L.M. (2007). Development of AMSTAR: A measurement tool to assess the methodological quality of systematic reviews. BMC Medical Research Methodology, 7(10). Full-text retrieved July 17, 2007: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1810543
ASSERT (A Standard for the Scientific and Ethical Review of Trials)
ASSERT proposes a structured approach for the review and monitoring of randomized controlled clinical trials.
CHEC (Consensus on Health Economic Criteria)
The aim of the CHEC-project is to develop criteria for economic evaluations, for use when carrying out systematic reviews.
Evers, S., Goossens, M., de Vet, H., van Tulder, M., & Ament, A. (2005). Criteria list for assessment of methodological quality of economic evaluations: Consensus on health economic criteria. International Journal of Technology Assessment in Health Care, 21(2), 240-245. Abstract retrieved July 17, 2007 from PubMed.
Fundamental tools for understanding and applying the medical literature and making clinical diagnoses.
Jadad Scale for Quality of RCTs
The Jadad scale assesses the quality of clinical trials. Its criteria include whether the trial is described as randomized and the randomization is adequately described, whether double blinding is used and adequately described, and whether withdrawals and dropouts are described.
Jadad A.R., Moore, R.A., Carroll, D., Jenkinson, C., Reynolds, D.J., Gavaghan, D.J., et al. (1996). Assessing the quality of reports of randomized clinical trials: Is blinding necessary? Controlled Clinical Trials, 17(1),1–12. Abstract retrieved July 17, 2007 from PubMed.
Newcastle-Ottawa Scale (NOS)
The NOS was developed to assess the quality of nonrandomized studies for the purpose of incorporating quality assessments in the interpretation of meta-analytic results.
OQAQ (Overview Quality Assessment Questionnaire)
This is a validated instrument consisting of 9 questions about the methodological quality of a review. Each question is answered 1 (no), 2 (partially/can't tell), or 3 (yes). Item 10 judges the overall responses to the 9 questions on a scale of 1-7, where 1 indicates extensive flaws and 7 minimal flaws.
Oxman, A.D., & Guyatt, G.H. (1991). Validation of an index of the quality of review articles. Journal of Clinical Epidemiology, 44(11), 1271-1278. Abstract retrieved July 17, 2007 from PubMed.
QUADAS (Quality Assessment Instrument for Diagnostic Studies)
QUADAS is an evidence-based quality assessment tool to be used in systematic reviews of diagnostic accuracy studies.
The QUADAS Tool: http://www.biomedcentral.com/1471-2288/3/25/table/T1
Whiting, P., Rutjes, A.W.S., Reitsma, J.B., Bossuyt, P.M.M., & Kleijnen, J. (2003). The development of QUADAS: A tool for the quality assessment of studies of diagnostic accuracy included in systematic reviews. BMC Medical Research Methodology, 3:25. http://www.biomedcentral.com/1471-2288/3/25
SORT (Strength of Recommendation Taxonomy)
SORT is used to evaluate the strength of a recommendation based on the quality of study design, the quantity of studies included in the review, and the consistency of the reported outcomes. The scale includes a determination of whether the outcomes are patient-oriented or disease-oriented, and provides "walkovers" to other taxonomies.
Ebell, M. H., Siwek, J., Weiss, B. D., Woolf, S. H., Susman, J., Ewigman, B., & Bowman, M. (2004). Strength of recommendation taxonomy (SORT): A patient-centered approach to grading evidence in the medical literature. American Family Physician, 69(3), 548-556. Full-text retrieved May 3, 2007: http://www.aafp.org/afp/20040201/548.pdf
Systems to Rate the Strength of Scientific Evidence Summary
More than 100 sources of information on systems for assessing study quality and strength of evidence for systematic reviews and technology assessments were summarized. After evaluative criteria based on key domains were applied to these systems, 19 study quality and 7 strength of evidence grading systems were identified.
Agency for Healthcare Research and Quality (AHRQ). (2002). Systems to Rate the Strength of Scientific Evidence. Evidence Report/Technology Assessment: Number 47. AHRQ Publication No. 02-E015. Rockville, MD: Author. Full-text retrieved July 17, 2007: http://www.ahcpr.gov/clinic/epcsums/strengthsum.htm
User's Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, 2nd ed. (Gordon Guyatt, Drummond Rennie, Maureen O. Meade, and Deborah J. Cook)