FOCUS
TECHNICAL BRIEF NO. 19
2008

Getting Published and Having an Impact: Turning Rehabilitation Research Results Into Gold

Marcel P.J.M. Dijkers, PhD, FACRM, Margaret Brown, PhD, and Wayne A. Gordon, PhD, FACRM
Mount Sinai School of Medicine, Department of Rehabilitation Medicine, New York, New York

In the rehabilitation research world in which "knowledge translation" and "evidence-based practice" have become ascendant (at least as catch phrases), researchers are asked to take on the challenge—in addition to methodological excellence—of ensuring that their work has an impact beyond its appearance in a peer-reviewed journal. Today's researchers have many techniques at hand to help them succeed in that quest, unlike the medieval alchemists who faced the challenge of turning base metals into gold (and, to the best of our knowledge, all of whom failed). The "gold" that research results should yield is a change in practice by a target audience—more importantly, a change that improves the lives of people with disabilities.

Researchers' efforts at knowledge translation vary widely, both in the methods adopted and in the successes achieved. Only rarely does a single paper directly lead to changes in treatments, new approaches to outcome measurement, or altered paths and strategies within research venues. More common is the paper that adds significantly to a larger base of evidence, which in turn results in changes in practice, policy, or subsequent research. Unfortunately, the most common fate of disseminated research results is that a paper is read by relatively few, with its insights (if any) left to wither on the vine, leading to no useful outcomes. Thus, for utility to be achieved, it clearly is not enough that study results get published. Just as critically, they need to be brought to the attention of potential adopters of innovations, and they must be judged by these adopters to be of high quality and relevance.

The purpose of this issue of FOCUS is to discuss some means that rehabilitation researchers should consider to maximize the impact of their work—to "turn research results into gold," particularly in terms of being effective in reaching and convincing those with potential for adopting study results. Although the primary emphasis is on getting published results used, some comments will also focus on increasing one's chances of getting published, which is the first critical step.

Maximizing impact has taken on special import in the age of "evidence-based practice" or "empirically supported treatment." Evidence-based practice is an approach to professional practice in health care and other service fields that stresses "the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research" (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996, p. 71). Although this definition emphasizes clinical practice, "evidence-based" equally applies to the practices of researchers and policymakers.

One way that evidence is corralled to serve clinical, research, and policymaking practices is through systematic reviews, which aim to bring together and combine, qualitatively or quantitatively in a meta-analysis, all the evidence that is relevant to answering a specific question a clinician or other professional may raise (Schlosser, 2006). Although the questions addressed in systematic reviews often focus on treatments/interventions (e.g., What works in treating problem X in population T?), systematic reviews have also addressed questions relevant to diagnosis, prognosis, and other issues (e.g., To what degree does gender correlate with post-injury outcomes in population T?).

With the number of published research papers ever increasing and the typical professional having limited time to read primary research reports, reviews—especially systematic reviews—have become increasingly important in the dissemination of research findings. Thus, if researchers seek to enhance the chances of their research contributing to the clinician's trove of information on the diagnosis and treatment of patients, or to the scientific enterprise down the road, they need to increase the odds of their papers being included in systematic reviews. This requires (1) understanding the nature of systematic reviews and the standards that systematic reviewers use in deciding which papers to include or exclude and the standards they will apply to rate the quality of evidence contained in research reports, (2) adopting research approaches that adhere as much as possible to such standards, (3) publishing research reports that are clearly written and that include all information that systematic reviewers will need to assess the relevance of the paper in question as well as evaluate its contribution to producing credible evidence, and (4) providing a title, abstract, and keywords that optimize retrieval of the report by systematic reviewers. Each of these opportunities to "turn results into gold" is discussed in turn below.

What Is a Systematic Review?

While there is no standard procedure for doing a systematic review, the following steps are suggested by many authors (Bhandari et al., 2004; Sargeant, Rajic, Read, & Ohlsson, 2006; Macbeth & Overgaard, 2002; Wright, Brand, Dunn, & Spindler, 2007; Feldstein, 2005):

  1. Formulate a focused question of interest. Ideally, particularly with respect to systematic reviews of intervention research, such questions specify (see the sketch following this list)
    • a setting and population (e.g., patients with a spinal cord injury undergoing acute rehabilitation);
    • a condition or deficit of interest (e.g., reduced manual strength);
    • a treatment or test being considered (e.g., splinting or a hand function test); and
    • one or more specific outcomes (e.g., ability to write, or selection of a test with good psychometric qualities).

  2. Develop a protocol for answering the question, including
    • a method of locating relevant evidence, which may include unpublished research and "gray literature" but generally is limited to peer-reviewed published papers;
    • explicit criteria to evaluate for each paper the quality of its research methods and research reporting;
    • methods for abstracting information and for summarizing or synthesizing the information or evidence; and
    • rules for making recommendations based on the quality, quantity, and consistency of the evidence.

  3. Locate the relevant studies and assess their methodological validity or quality.
  4. Abstract and synthesize the relevant information.
  5. Draw conclusions for practice, policy, and/or needs for future research.
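
To make step 1 concrete, the elements of a focused question map naturally onto a small data structure. The following Python sketch is purely illustrative: the class name, fields, and example values are ours and are not part of any published review protocol.

    # A minimal sketch of a focused review question (step 1). All names
    # and values here are illustrative, not drawn from an actual protocol.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ReviewQuestion:
        population: str    # setting and population
        condition: str     # condition or deficit of interest
        intervention: str  # treatment or test being considered
        outcomes: List[str] = field(default_factory=list)  # specific outcomes

        def as_text(self) -> str:
            return (f"In {self.population} with {self.condition}, "
                    f"does {self.intervention} improve "
                    f"{' and '.join(self.outcomes)}?")

    question = ReviewQuestion(
        population="patients with spinal cord injury in acute rehabilitation",
        condition="reduced manual strength",
        intervention="splinting",
        outcomes=["ability to write"],
    )
    print(question.as_text())

Writing the question down in this explicit form forces each of the four elements to be specified before the search protocol (step 2) is developed.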

For any research paper to be included in a systematic review, it must address the issue relevant to the question at hand and satisfy whatever quality standards the systematic reviewers apply.

Quality of Research Design and Implementation

Systematic reviewers generally do not average findings over all the relevant studies that they identify; instead, in reaching their conclusions, they carefully consider each study's quality of research design and implementation. They disregard the weaker studies or accord them less weight in drawing conclusions. For instance, for intervention/treatment studies, the guidelines for systematic reviews promulgated by the American Academy of Neurology (AAN) (Edlund, Gronseth, So, & Franklin, 2004), which have been applied in systematic reviews of rehabilitation topics (Esselman, Thombs, Magyar-Russell, & Fauerbach, 2006; Sipski & Richards, 2006; Gordon et al., 2006), adopt a four-level hierarchy of study designs, ranging from Class I (randomized controlled trials meeting stringent design criteria) down to Class IV (uncontrolled studies, case series, and expert opinion).

In a systematic review using AAN guidelines, a research report that is rated as Class I will be given strongest consideration, while Class IV studies may be excluded or their value downgraded in reaching study conclusions. Similar hierarchies for other study purposes (diagnosis, screening, prognosis, cost-benefit assessment) have been developed (Edlund et al., 2004).

As is the case with all similar grading schemes, the AAN hierarchy is primarily focused on the internal validity of the research design (i.e., maximizing the likelihood that the hypothesis is correct if accepted and false if rejected) but has little relevance to issues of external validity (i.e., the potential to generalize the findings to subgroups of the population not included in the study sample). An additional problem with such hierarchies arises because much of rehabilitation research, even RCTs, may not qualify for the highest grade due to the unique features of rehabilitation: individualized mixtures of interventions that are difficult to mask, delivered simultaneously by a team of professionals, and aimed at medium- and long-term outcomes that may be influenced strongly by factors minimally or not at all under the control of the clinician or researcher. Efforts are underway to develop hierarchies that are more appropriate to the research designs used by rehabilitation researchers. For instance, NCDDR's Task Force on Standards of Evidence and Methods is exploring the grading of "strong designs" other than RCTs and the proper weighting of internal and external validity issues.

Whatever the outcomes of these efforts may be, rehabilitation researchers should always attempt to use the strongest design that is appropriate to their research question. In the case of intervention research, given ethical and logistical issues and the frequent impossibility of blinding subjects and providers, the randomized double-blind clinical trial, which serves as the yardstick in much systematic reviewing (e.g., by the Cochrane groups), may not be possible (Henkel et al., 2006; Altman, 1996; Hernandez, Boersma, Murray, Habbema, & Steyerberg, 2006).

The quality of design needs to be fully and precisely communicated in the published research report. This requires a clear understanding of the features that are expected to be present in a particular type of research. While research quality and research report quality are closely connected, researchers sometimes omit details in their reports that may affect a systematic reviewer's grading of the quality of the research. For instance, Hill, LaValley, and Felson (2002) contacted the authors of 50 papers reporting on RCTs. Of the 40 who responded, the majority provided information suggesting that the research was stronger than they had indicated in the paper. For instance, of the 29 who had not stated how the random sequence assigning subjects to study arms had been generated, 22 (76%) provided details indicating that this had in fact been done using acceptable methods. In a similar investigation, Devereaux et al. (2004) contacted the authors of 105 reports on RCTs, of whom 98 responded. Of the 54 who had not originally reported information indicating that allocation concealment was performed adequately, 52 (96%) provided detail indicating that this indeed had been handled in a manner that satisfied high standards. These data suggest that authors of research reports often fail to convey all the information that would put their research in a positive light. As discussed below, adherence to quality writing guidelines, as well as use of the checklists that have been developed as part of reporting guidelines, may help them do so.

Quality of Writing

High-quality writing—that is, writing that is organized, concise, precise, and clear (at minimum)—helps all reading audiences, including journal reviewers, journal subscribers, and systematic reviewers. Getting published is a prerequisite for becoming part of the evidence base, and polishing both the content and the format of one's manuscript to a fine sheen prior to first submission will improve its chances of acceptance.

A variety of guidelines for reporting on research is available and should be consulted by researchers seeking to avoid writing hurdles that may impede publication of their manuscript or subsequent adoption of their results. First, IMRaD (Introduction, Methods, Results, and Discussion), the traditional format of research reports in peer-reviewed journals, helps the author set forth information systematically and aids any reader (including systematic reviewers) in quickly assimilating the information and finding details if the need arises to refer back to a paper.

Second, a large number of prescriptive publications are available, from Day's classic "How to Write and Publish a Scientific Paper" (1998), which has gone through six editions, to a variety of books that focus on specific areas of academic writing (e.g., in nursing or psychology). In addition, the research and professional literature contains papers focusing on writing the various report elements, from abstracts to reference lists. As of September 2007, MEDLINE-indexed journals had published at least 450 of these how-to papers. Additional help is available in the "bibles" of scientific writing, including the Publication Manual of the American Psychological Association (5th edition, 2001), the American Medical Association Manual of Style (10th edition, 2007), and the Uniform Requirements for Manuscripts Submitted to Biomedical Journals: Writing and Editing for Biomedical Publication by the International Committee of Medical Journal Editors (last modified October 2007).

Third, the instructions for authors that journals publish in their pages or on their Web sites generally contain useful information on the types of articles accepted for consideration, the suggested length for each article type, and many other manuscript requirements imposed by disciplinary tradition, the editor-in-chief, or the publisher. With respect to clinical trials, some journals now publish only reports on trials that have been registered at ClinicalTrials.gov or another official trial registration site; researchers should explore the requirements of the journal(s) in which they might publish and register their clinical trial prior to recruitment of subjects.

Judgments on many other aspects of a manuscript, such as how information is presented or how data are interpreted, are far more subjective than rules such as those for citing prior research; authors should seek informal peer review by colleagues for feedback on these and other stylistic issues.

Inclusion of Information Relevant to Systematic Reviews

Since the initial emergence of systematic reviewing as a research methodology in the 1980s, systematic reviewers have complained about the quality of research reporting (Pocock, Hughes, & Lee, 1987; Meinert, Tonascia, & Higgins, 1984; Emerson, McPeek, & Mosteller, 1984), and the grumbling has not abated (Chan, Man-Son-Hing, Molnar, & Laupacis, 2001; Chan & Bhandari, 2007; Boutron et al., 2007). The major criticism was and is that relevant information on the methods and results of the research is not reported at all, or is reported incompletely or ambiguously. These shortcomings may make it impossible for a systematic reviewer to assess the quality of the research or to abstract information on the findings in a format that is compatible with that used by the other relevant studies.

The dissatisfaction of systematic reviewers (as well as journal editors and others) with the quality of reporting on clinical trials and other study designs led to the creation of a number of statements that specify the information that should be provided in a research report, in what section of the manuscript, in what format, and with what level of detail. The first of these statements was CONSORT (Consolidated Standards of Reporting Trials), first issued in 1996 (Begg et al.) and revised in 2001 (Altman et al.; Moher, Schulz, Altman, & the CONSORT Group). In both versions, CONSORT offers a checklist in which authors can indicate where in the manuscript the required information is provided; in addition, CONSORT requires the preparation of a flowchart that graphically specifies the number of subjects recruited, screened, consented, randomized, and retained at final follow-up in all arms of the study. The CONSORT authors suggest that journal editors require the checklist as part of manuscript submission materials and that the flowchart be included in the manuscript as a figure. Other statements similar to CONSORT that were developed for non-intervention designs have tended to follow the same format: a list of required items and formats, with a checklist to assist (or force) authors to comply with the prescription.
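
To illustrate the bookkeeping behind the required flowchart: participant flow reduces to a count per study arm at each stage. The Python sketch below is hypothetical; every stage label and number is invented purely for illustration.

    # Hypothetical participant-flow counts of the kind the CONSORT flow
    # diagram reports; all numbers here are invented for illustration.
    flow = {
        "recruited": 240,
        "screened": 220,
        "consented": 180,
        "randomized": {"treatment": 90, "control": 90},
        "received intervention": {"treatment": 87, "control": 90},
        "retained at final follow-up": {"treatment": 78, "control": 82},
        "analyzed for primary outcome": {"treatment": 78, "control": 82},
    }

    for stage, count in flow.items():
        if isinstance(count, dict):  # post-randomization: report per arm
            detail = ", ".join(f"{arm}: {n}" for arm, n in count.items())
        else:                        # pre-randomization: a single total
            detail = str(count)
        print(f"{stage:30s} {detail}")

Keeping such counts from the first day of recruitment makes the flow diagram a matter of transcription rather than reconstruction.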

Figure 1 lists the major statements and guidelines published as of November 2007. Although CONSORT and some of the others have been published in many of the journals that have adopted them, only one reference is provided here (if possible, for a journal that is available without charge in an electronic format). Some statements are available in languages other than English; for instance, CONSORT has been translated into ten languages. Also listed, if available, is the Web site where the statement and related materials may be found. These Web sites are a good resource for keeping up with revisions and expansions of the statements. They also typically provide a rationale for the elements of the checklist, along with examples of paragraphs from published papers that are exemplary in providing the information. An additional useful Web site is EQUATOR (Enhancing the QUAlity and Transparency Of health Research), a site of England's National Health Service.

Many researchers with an interest in specific research methodologies (rather than research design in general) have paid attention to opportunities for improved reporting of research results. Figure 2 lists some additional guidelines and checklists that have been published by a single expert or a small group. This list is not comprehensive and presumably could be expanded easily by a systematic search of the various bibliographic databases.

The checklists for the three major statements of most relevance to rehabilitation researchers (CONSORT, STROBE, and STARD) are provided in Figures 3-5. Altman et al. (2001) provide an excellent guide to the use of the CONSORT statement. It should be noted that, while the title of CONSORT suggests that it is applicable to RCTs only, all but a few of the checklist items are just as relevant to non-randomized intervention research designs (e.g., historical controls, case-control studies).

Figure 1. Major Standards for Reporting of Research 1

CONSORT: CONsolidated Standards Of Reporting Trials: focuses on randomized clinical trials

Original statement (Begg et al., 1996)
Revised statement (Altman et al., 2001; Moher et al., 2001)
Expanded to cluster trials (Campbell et al., 2004)
Adapted for noninferiority/equivalence trials (Piaggio et al., 2006)
Expanded for herbal medicine trials (Gagnier et al., 2006a, 2006b)
Supplemented for homeopathic trials (Dean et al., 2007)
Expanded for occupational therapy (Moberg-Mogren & Nelson, 2006)
Expanded for reporting on side effects/harms (Ioannidis et al., 2004)
Web site: www.consort-statement.org/

STRICTA: STandards for Reporting Interventions in Controlled Trials of Acupuncture (MacPherson et al., 2002)
Web site: www.stricta.info/

MOOSE: Meta-analysis Of Observational Studies in Epidemiology (Stroup et al., 2000)
Web site: none

TREND: Transparent Reporting of Evaluations with Nonrandomized Designs (Des Jarlais et al., 2004)
Web site: www.trend-statement.org/asp/trend.asp

QUOROM: QUality Of Reporting Of Meta-analyses (Moher et al., 1999)
Web site: none

STARD: STAndards for Reporting of Diagnostic Accuracy (Bossuyt et al., 2003a, 2003b; Smidt et al., 2006)
Web site: www.stard-statement.org/

STROBE: STrengthening the Reporting of OBservational studies in Epidemiology (Fernandez, 2005)
Web site: www.strobe-statement.org/

CHEC: Consensus on Health Economic Criteria (Evers et al., 2005)
Web site: none

1 The major statements tend to be published in every journal that adopts them. Of the many duplicate publications, the references provided here point to sources that are easily accessible in an open-access journal.

Scrutiny of the checklists in Figures 3-5 suggests that good reporting of study findings consists simply of reporting what was done and not omitting, to one's own detriment, the design and implementation elements that make one's investigation deserving of a high evidence grade. Obviously, however, perfect reporting does not necessarily equal a top research quality score. For example, if investigators truthfully report that they failed to blind the subjects or the providers in an intervention study, they will not get a maximum score from systematic reviewers. This creates an incentive for rehabilitation researchers to do whatever they can (within the constraints of ethics and available resources) to enhance their research design and implementation in line with the standards set out in systematic review guidelines. A major point is that these checklists can be useful before the writing of a manuscript ever starts: at the design stage of a study, they provide a detailed list of elements that should be addressed, one way or another, in planning the highest quality study possible. A future FOCUS issue will address no-cost and low-cost opportunities for doing so.
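
Because each checklist item is numbered and asks for the page on which it is reported, a checklist can double as a simple tracking tool while drafting. The sketch below is a hypothetical illustration using two items from the CONSORT checklist in Figure 3; the page numbers are invented.

    # A minimal sketch of using a reporting checklist as a drafting tool.
    # The two items shown are from the CONSORT checklist (Figure 3); the
    # page numbers are hypothetical.
    checklist = {
        8: {"topic": "Randomization: Sequence generation", "page": None},
        11: {"topic": "Blinding (masking)", "page": None},
    }

    checklist[8]["page"] = 5  # sequence-generation paragraph drafted on page 5

    unreported = [no for no, item in checklist.items() if item["page"] is None]
    print("Checklist items not yet reported:", unreported)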

At the time of this writing, some of the major rehabilitation journals appear to have just begun to explore the usefulness of requiring authors to follow applicable guidelines in developing their manuscripts, as suggested by a scan of several journals' instructions for authors in November 2007.

Because these requirements may change, authors are encouraged to check their target journal's instructions for authors before they start writing. If the journal has no specific requirement, authors should use the information in Figure 1 (and possibly Figure 2) to select and follow the guideline(s) most applicable to their research design and topic area.

Generally, following the instructions of the various reporting standards promulgated by the CONSORT, STARD, and other groups will result in longer manuscripts. That may create conflict with the maximum word limits editors often impose, or with their preference for short papers, which enable them to publish more papers by more authors. Although no simple solution for this dilemma presents itself, with electronic publishing available, journals increasingly publish on their Web sites some of the more highly detailed information referenced in the text (e.g., lengthy tables or appendices that in years past might have been published as part of the IMRaD core). Prospective authors should certainly keep this option in mind when writing their manuscripts; it is easier to decide what is primary and what can be published online when one first starts writing than when being forced into such decisions by an editor concerned about an overly long manuscript.

Informativeness of Title, Abstract, and Keywords

Because of the way systematic reviewers go about searching for evidence, three manuscript elements deserve special attention: the title, the abstract, and the keywords. In designing their studies, systematic reviewers specify the text words and keywords (indexing terms, thesaurus terms, etc.) they will use to find potentially relevant publications in each of the bibliographic databases they have identified as likely to include papers that meet criteria (e.g., MEDLINE, CINAHL, PsycINFO, ERIC). Once these words and terms are used in the searches, hundreds, if not thousands, of publications may result. Reading all of these papers to identify those that contain the evidence needed (in terms of minimal quality of research design and implementation, and substantive area of research) is generally impossible, so reviewers use a shortcut to finding relevant papers. First, the title, authors, and abstract of each paper are retrieved from the bibliographic database. Reviewers then scan the title of each paper to determine if it may be relevant. If a paper is not ruled out on this basis, they quickly read the abstract to detect any information that would exclude the paper from consideration. The full paper is obtained only for those abstracts that pass this initial review. Thus, the keywords, the title, and the abstract are critically important in getting a reviewer to obtain the full text and scan it carefully to determine if it contains evidence relevant to the question(s) underlying the review.
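
As a concrete (and deliberately simplified) illustration of this screening sequence, the Python sketch below pulls the titles returned by a text-word search so they can be scanned before any abstract or full text is obtained. It assumes PubMed's public E-utilities interface (esearch and esummary); the search term is a made-up example.

    # A simplified sketch of the reviewers' title-screening shortcut,
    # assuming PubMed's public E-utilities interface. The search term
    # is a made-up example.
    import json
    import urllib.parse
    import urllib.request

    BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

    def pubmed_titles(term, retmax=20):
        """Return (PMID, title) pairs for a PubMed text-word search."""
        query = urllib.parse.urlencode(
            {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"})
        with urllib.request.urlopen(f"{BASE}/esearch.fcgi?{query}") as resp:
            ids = json.load(resp)["esearchresult"]["idlist"]
        if not ids:
            return []
        query = urllib.parse.urlencode(
            {"db": "pubmed", "id": ",".join(ids), "retmode": "json"})
        with urllib.request.urlopen(f"{BASE}/esummary.fcgi?{query}") as resp:
            result = json.load(resp)["result"]
        return [(pmid, result[pmid]["title"]) for pmid in ids]

    # Titles are scanned first; only for plausible hits would the abstract
    # and then the full text be examined, mirroring the sequence above.
    for pmid, title in pubmed_titles("spinal cord injury AND hand strength"):
        print(pmid, "-", title)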

The reviewer's need to quickly whittle down a mountain of "potentials" to a molehill of "certains" makes it absolutely necessary for authors to provide their papers with a title and abstract that are descriptive of the methods and the content of the research in as much detail as possible, given space limitations.

Titles are somewhat subject to fashion, but it is fair to say that they have become more informative over the last century, so that the vagueness of titles such as "Some interesting observations on quadriplegia" has become a thing of the past. Authors interested in seeing their papers included in a systematic review should write a title that includes information relevant to all elements of the question likely to underlie such a review. In the case of an intervention study, that implies specifying (a) a setting and population, (b) a condition of interest, (c) the treatment(s) used, and (d) the type of outcome studied. An example would be "A randomized trial showed that Abruzalan CR does not improve bladder function in tetraplegics." Adding a term indicating the research design used ("a single-subject study"; "an observational study") may be useful. CONSORT specifies that "random" should appear in the title (and abstract) if some form of randomization was used (see Figure 3).

Figure 2. Additional, Non-Official Reporting Guidelines

  • Reporting of momentary assessment self-report data (Stone & Shiffman, 2002)
  • STARLITE: Sampling strategy, Type of study, Approaches, Range of years, Limits, Inclusion and exclusions, Terms used, Electronic sources: a recommendation for reporting of (literature searches in) qualitative systematic reviews (Booth, 2006)
  • Reporting of Bayesian analysis of clinical studies (Sung et al., 2005)
  • CHERRIES: CHEcklist for Reporting Results of Internet E-Surveys (Eysenbach, 2004)
  • Reporting assessment of quality of life in clinical trials (Staquet et al., 1996)
  • Reporting of observational longitudinal research (Tooth et al., 2005)

The abstract offers an opportunity to provide additional detail on study design and implementation, as well as a summary of the key findings and the author's conclusions. Most journals now allow an abstract of at least 200 words, which should be sufficient to get into print (and into the bibliographic databases) all the information a systematic reviewer needs to decide that, at a minimum, the full paper should be read.

It is suggested that rehabilitation researchers use a structured abstract format even if it is not required by the journal in which they hope to publish, if for no other reason than that the telegraph style it permits allows one to squeeze more information into the same number of words. If a section on "Objectives" is allowed, it becomes possible to repeat (as necessary, with alternative terms) some of the major elements of the title, including (for intervention research) the problem, setting, population, intervention, and nature of outcomes. The "Methods" or "Design" section gives an opportunity to again indicate the overall design; details such as blinding, number of follow-up assessments, and whatever else is key to indicating the strengths of one's research can be added. In "Participants" the author should specify the number of study subjects (some systematic reviews require a minimal number of cases for a study to be included in the evidence), as well as the setting and the key inclusion/exclusion criteria that were used. "Main outcome measures" is the place to indicate the methods and instruments used to quantify the outcome(s) of interest. (The latter two sections sometimes are not allowed, in which case the information can be put into "Methods.")

"Interventions" offers space for describing the treatment(s) administered to the research participants, something that in rehabilitation research often needs more space than in the typical drug trial. The "Results" and "Conclusions" sections presumably are irrelevant to systematic reviewers looking for evidence because by the time they reach this point of the abstract they should have already decided whether the research described is deserving of full-text scanning.

As of November 2007, the CONSORT group had announced on its Web site that it is developing a CONSORT extension addressing abstracts. Rehabilitation researchers writing a report on a clinical trial may want to check the Web site for publication of these guidelines. The other statement groups (Figure 1) may follow suit.

Many journals require manuscript authors to supply keywords describing their work, which are printed as part of or immediately following the abstract. Now that journals themselves no longer publish annual and decennial indexes to what they have published (the bibliographic databases offer a faster and better means of finding papers), the main reasons for keywords are to help journal staff assign papers to the most knowledgeable editors and peer reviewers, and to assist the indexers working for MEDLINE and the other databases. Authors who do not want their papers misclassified (decreasing the chance that they will be found by systematic reviewers) should give careful thought to supplying the most appropriate keywords to describe their investigation in terms of substantive area (population, problem, measures, interventions) and methodology.

If a specific keyword set is mandated, it should be used; many health sciences journals use MEDLINE's MeSH (Medical Subject Headings). If none is prescribed, MeSH may still be used, as well as other terms and phrases that are standard in the author's discipline. Terms should be selected that are as specific as possible. For example, given the hierarchic nature of MeSH (and similar sets), "brainstem infarction" is categorized under "stroke," which is part of the category "cerebrovascular disorders," which in turn is part of "diseases." Because of the way information in most bibliographic databases is organized, anyone looking for evidence on life expectancy in cerebrovascular disorders will, in this example, find a paper on "female life expectancy after brainstem infarction" without a problem, even if the text words included in the paper's abstract would not steer one in the right direction.
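
The hierarchical retrieval described here can be sketched in a few lines of Python. The toy tree below contains only the MeSH fragment named in the text; real MeSH trees are far larger, and actual database search logic is considerably more sophisticated.

    # A toy illustration of hierarchical ("exploded") keyword retrieval.
    # Only the MeSH fragment named in the text is included; real MeSH is
    # far larger and database search logic is more sophisticated.
    MESH_CHILDREN = {
        "diseases": ["cerebrovascular disorders"],
        "cerebrovascular disorders": ["stroke"],
        "stroke": ["brainstem infarction"],
    }

    def explode(term):
        """Return a term plus all of its descendants in the toy tree."""
        terms = [term]
        for child in MESH_CHILDREN.get(term, []):
            terms.extend(explode(child))
        return terms

    # A paper indexed with the most specific applicable term:
    papers = {
        "Female life expectancy after brainstem infarction":
            {"brainstem infarction"},
    }

    # A search that "explodes" the broader heading still retrieves it:
    query_terms = set(explode("cerebrovascular disorders"))
    print([t for t, kw in papers.items() if kw & query_terms])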


Figure 3. CONSORT Checklist for Reporting of (Randomized) Trials
Paper Section and Topic Item No. Descriptor Reported on Page No.
TITLE AND ABSTRACT 1 How participants were allocated to interventions (e.g., random allocation, randomized, or randomly assigned).  
INTRODUCTION
Background 2 Scientific background and explanation of rationale.  
METHODS
Participants 3 Eligibility criteria for participants and the settings and locations where the data were collected.  
Interventions 4 Precise details of the interventions intended for each group and how and when they were actually administered.  
Objectives 5 Specific objectives and hypotheses.  
Outcomes 6 Clearly defined primary and secondary outcome measures and, when applicable, any methods used to enhance the quality of measurements (e.g., multiple observations, training of assessors).  
Sample size 7 How sample size was determined and, when applicable, explanation of any interim analyses and stopping rules.  
Randomization: Sequence generation 8 Method used to generate the random allocation sequence, including details of any restrictions (e.g., blocking, stratification).  
Randomization: Allocation concealment 9 Method used to implement the random allocation sequence (e.g., numbered containers or central telephone), clarifying whether the sequence was concealed until interventions were assigned.  
Randomization: Implementation 10 Who generated the allocation sequence, who enrolled participants, and who assigned participants to their groups.  
Blinding (masking) 11 Whether or not participants, those administering the interventions, and those assessing the outcomes were blinded to group assignment. If done, how the success of blinding was evaluated.  
Statistical methods 12 Statistical methods used to compare groups for primary outcome(s); methods for additional analyses, such as subgroup analyses and adjusted analyses.  
RESULTS
Participant flow 13 Flow of participants through each stage (a diagram is strongly recommended). Specifically, for each group report the numbers of participants randomly assigned, receiving intended treatment, completing the study protocol, and analyzed for the primary outcome. Describe protocol deviations from study as planned, together with reasons.  
Recruitment 14 Dates defining the periods of recruitment and follow-up.  
Baseline data 15 Baseline demographic and clinical characteristics of each group.  
Numbers analyzed 16 Number of participants (denominator) in each group included in each analysis and whether the analysis was by "intention-to-treat." State the results in absolute numbers when feasible (e.g., 10/20, not 50%).  
Outcomes and estimation 17 For each primary and secondary outcome, a summary of results for each group, and the estimated effect size and its precision (e.g., 95% confidence interval).
Ancillary analyses 18 Address multiplicity by reporting any other analyses performed, including subgroup analyses and adjusted analyses, indicating those pre-specified and those exploratory.  
Adverse events 19 All important adverse events or side effects in each intervention group.  
DISCUSSION
Interpretation 20 Interpretation of the results, taking into account study hypotheses, sources of potential bias or imprecision, and the dangers associated with multiplicity of analyses and outcomes.  
Generalizability 21 Generalizability (external validity) of the trial findings.  
Overall evidence 22 General interpretation of the results in the context of current evidence.  

Gauging the Impact of a Study's Research Report

The goal of writing clearly, designing a study appropriately, and getting incorporated into systematic reviews is to have one's research bear fruit: having the knowledge contained in the paper used by its target audience to improve the lives of people with a disability. How can we assess the degree to which we are successful? We can do so in several ways, but not very precisely. First, the author may receive formal or informal, oral or written feedback that someone is using the new piece of knowledge, innovative intervention, method of assessment, or decision-making process. Alternatively, to some degree, reprint requests indicate that a paper may be read and evaluated and its "implications" possibly adopted. In the past, reprint requests constituted a fairly good measure of interest in a paper by an audience wider than the journal's subscribers; however, such requests have become a rarity due to copy machines and the availability of publications online. Download counts may become a suitable replacement for reprint requests as an indicator of the utility of a publication, although an imprecise one (Coats, 2005, 2007; Loke & Derry, 2003). Finally, use by other researchers may appear in the form of citations in their papers; the ISI Web of Science database (http://scientific.thomson.com/products/wos/) allows one to track the frequency with which a paper is referenced in the years following publication. However, thorough perusal of the papers citing one's study would be needed to document its impact more precisely.
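
Web of Science requires a subscription, so no example of its use is shown here. As a hedged stand-in, the sketch below queries the free Crossref REST API, which reports a citation count for a given DOI; the DOI shown is only an example and would be replaced with the DOI of one's own paper.

    # A minimal sketch of retrieving a citation count, assuming the free
    # Crossref REST API (not Web of Science, which requires a license).
    # The DOI below is only an example.
    import json
    import urllib.request

    def crossref_citation_count(doi):
        """Return Crossref's "is-referenced-by-count" for a DOI."""
        url = f"https://api.crossref.org/works/{doi}"
        with urllib.request.urlopen(url) as resp:
            message = json.load(resp)["message"]
        return message.get("is-referenced-by-count", 0)

    print(crossref_citation_count("10.1136/bmj.313.7057.570"))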

Conclusion

Every rehabilitation researcher wants his or her investigation to have maximum impact—that is, widespread use of one's findings by practitioners, administrators, educators, patients, or other researchers. Although ideally these potential users would read the paper itself, that is an unrealistic expectation. So much information is being published that even someone willing and able to read 24 hours a day, 365 days a year, would not be able to keep up with even the papers indexed under "rehabilitation" in MEDLINE (which is known to omit many of the journals that focus on allied health research). Consequently, the average practitioner or researcher has come to rely on review papers, including systematic reviews, to keep up with research in his or her areas of interest. Systematic reviews find, evaluate, and synthesize the literature, delineating the evidence base of published (and sometimes unpublished) studies.

If rehabilitation researchers want their work to become part of the evidence base, they have to go beyond getting their manuscripts into print. In addition to adopting the strongest design possible and adhering to methodological excellence, they must make sure that (a) their publication can be easily found in the literature; (b) the abstract contains a good summary of objectives and methods; and (c) the paper contains the type of information that systematic reviewers look for when deciding to include or exclude studies from a review, when deciding how much to rely on each of the studies they include, and when abstracting information into evidence tables.

Satisfying the demands of anonymous systematic reviewers, who in the future may or may not include one's research findings in a synthesis, may be insufficient incentive for researchers to "jump through the hoops" of CONSORT or the other checklists. However, it should be kept in mind that, with minor exceptions, what is of benefit to systematic reviewers also is of benefit to other readers, who are equally entitled to a clear, unambiguous, and easily digested report of one's findings. As Altman et al. (2001) wrote, "Readers should not have to speculate; the methods used should be transparent, so that readers can readily differentiate trials with unbiased results from those with questionable results. Sound science encompasses adequate reporting, and the conduct of ethical trials rests on the footing of sound science" (p. 686). Getting a paper included in a systematic review and making the paper an exemplar of scientific communication add to its impact on the intended target audience—part of the alchemy of turning research results into gold.


Figure 4. STARD Checklist for the Reporting of Studies of Diagnostic Accuracy
Paper Section and Topic Item No. Descriptor Page No.
TITLE/ABSTRACT/KEYWORDS 1 Identify the article as a study of diagnostic accuracy (recommend MeSH heading "sensitivity and specificity").
INTRODUCTION 2 State the research questions or study aims, such as estimating diagnostic accuracy or comparing accuracy between tests or across participant groups.  
METHODS
Participants 3 Describe the study population: the inclusion and exclusion criteria, setting, and locations where the data were collected.  
4 Describe participant recruitment: was recruitment based on presenting symptoms, results from previous tests, or the fact that the participants had received the index tests or the reference standard?  
5 Describe participant sampling: was the study population a consecutive series of participants defined by the selection criteria in items 3 and 4? If not, specify how participants were further selected.  
6 Describe data collection: was data collection planned before the index test and reference standard were performed (prospective study) or after (retrospective study)?  
Test methods 7 Describe the reference standard and its rationale.  
8 Describe technical specifications of material and methods involved including how and when measurements were taken, and/or cite references for index tests and reference standard.  
9 Describe definition of and rationale for the units, cutoffs, and/or categories of the results of the index tests and the reference standard.  
10 Describe the number, training, and expertise of the persons executing and reading the index tests and the reference standard.  
11 Describe whether or not the readers of the index tests and reference standard were blind (masked) to the results of the other test and describe any other clinical information available to the readers.  
Statistical methods 12 Describe methods for calculating or comparing measures of diagnostic accuracy and the statistical methods used to quantify uncertainty (e.g., 95% confidence intervals).  
13 Describe methods for calculating test reproducibility, if done.  
RESULTS
Participants 14 Report when study was done, including beginning and ending dates of recruitment.  
15 Report clinical and demographic characteristics of the study population (e.g., age, sex, spectrum of presenting symptoms, co-morbidity, current treatments, recruitment centers).  
16 Report the number of participants satisfying the criteria for inclusion that did or did not undergo the index tests and/or the reference standard; describe why participants failed to receive either test (a flow diagram is strongly recommended).  
Test results 17 Report time interval from the index tests to the reference standard and any treatment administered between.  
18 Report distribution of severity of disease (define criteria) in those with the target condition; other diagnoses in participants without the target condition.  
19 Report a cross-tabulation of the results of the index tests (including indeterminate and missing results) by the results of the reference standard; for continuous results, the distribution of the test results by the results of the reference standard.  
20 Report any adverse events from performing the index tests or the reference standard.  
Estimates 21 Report estimates of diagnostic accuracy and measures of statistical uncertainty (e.g., 95% confidence intervals).  
22 Report how indeterminate results, missing responses, and outliers of the index tests were handled.  
23 Report estimates of variability of diagnostic accuracy between subgroups of participants, readers, or centers, if done.  
24 Report estimates of test reproducibility, if done.  
DISCUSSION 25 Discuss the clinical applicability of the study findings.  


Figure 5. STROBE Statement—Checklist of Items That Should Be Included in Reports of Observational Studies 1
Section and Topic Item No. Recommendations Page No.
TITLE AND ABSTRACT 1 (a) Indicate the study's design with a commonly used term in the title or the abstract.
(b) Provide in the abstract an informative and balanced summary of what was done and what was found.
 
INTRODUCTION
Background/rationale 2 Explain the scientific background and rationale for the investigation being reported.  
Objectives 3 State specific objectives, including any prespecified hypotheses.  
METHODS
Study design 4 Present key elements of study design early in the paper.  
Setting 5 Describe the setting, locations, and relevant dates, including periods of recruitment, exposure, follow-up, and data collection.  
Participants 6 (a) Cohort study: Give the eligibility criteria, and the sources and methods of selection of participants. Describe methods of follow-up.

Case-control study: Give the eligibility criteria, and the sources and methods of case ascertainment and control selection. Give the rationale for the choice of cases and controls.

Cross-sectional study: Give the eligibility criteria, and the sources and methods of selection of participants.


(b) Cohort study: For matched studies, give matching criteria and number of exposed and unexposed.

Case-control study: For matched studies, give matching criteria and the number of controls per case.
 
Variables 7 Clearly define all outcomes, exposures, predictors, potential confounders, and effect modifiers. Give diagnostic criteria, if applicable.  
Data sources/measurement 8* For each variable of interest, give sources of data and details of methods of assessment (measurement). Describe comparability of assessment methods if there is more than one group.  
Bias 9 Describe any efforts to address potential sources of bias.  
Study size 10 Explain how the study size was arrived at.  
Quantitative variables 11 Explain how quantitative variables were handled in the analyses. If applicable, describe which groupings were chosen and why.  
Statistical methods 12 (a) Describe all statistical methods, including those used to control for confounding.
(b) Describe any methods used to examine subgroups and interactions.

(c) Explain how missing data were addressed.
(d) Cohort study: If applicable, explain how loss to follow-up was addressed.

Case-control study: If applicable, explain how matching of cases and controls was addressed.
Cross-sectional study: If applicable, describe analytical methods taking account of sampling strategy.
(e) Describe any sensitivity analyses.
 
RESULTS
Participants 13* (a) Report numbers of individuals at each stage of study (e.g., numbers potentially eligible, examined for eligibility, confirmed eligible, included in the study, completing follow-up, and analyzed).
(b) Give reasons for non-participation at each stage.
(c) Consider use of a flow diagram.
 
Descriptive data 14* (a) Give characteristics of study participants (e.g., demographic, clinical, social) and information on exposures and potential confounders.
(b) Indicate number of participants with missing data for each variable of interest.
(c) Cohort study: Summarize follow-up time (e.g., average and total amount).
 
Outcome data 15* (a) Cohort study: Report numbers of outcome events or summary measures over time.

Case-control study: Report numbers in each exposure category, or summary measures of exposure.

Cross-sectional study: Report numbers of outcome events or summary measures.
 
Main results 16 (a) Give unadjusted estimates and, if applicable, confounder-adjusted estimates and their precision (e.g., 95% confidence interval). Make clear which confounders were adjusted for and why they were included.
(b) Report category boundaries when continuous variables were categorized.
(c) If relevant, consider translating estimates of relative risk into absolute risk for a meaningful time period.
 
Other analyses 17 Report other analyses done (e.g., analyses of subgroups and interactions, and sensitivity analyses).
DISCUSSION
Key results 18 Summarize key results with reference to study objectives.  
Limitations 19 Discuss limitations of the study, taking into account sources of potential bias or imprecision.

Discuss both direction and magnitude of any potential bias.
 
Interpretation 20 Give a cautious overall interpretation of results considering objectives, limitations, multiplicity of analyses, results from similar studies, and other relevant evidence.  
Generalizability 21 Discuss the generalizability (external validity) of the study results.  
OTHER INFORMATION
Funding 22 Give the source of funding and the role of the funders for the present study and, if applicable, for the original study on which the present article is based.  

1 This checklist combines instructions for cohort, case-control, and cross-sectional studies. The STROBE Web site provides separate checklists for these three designs that may be easier to use.

* Give information separately for cases and controls in case-control studies and, if applicable, for exposed and unexposed groups in cohort and cross-sectional studies.

References

Ad Hoc Working Group for Critical Appraisal of the Medical Literature (1987). A proposal for more informative abstracts of clinical articles. Annals of Internal Medicine, 106(4), 598-604.

Altman, D. G. (1996). Better reporting of randomised controlled trials: The CONSORT statement. BMJ, 313(7057), 570-571.

Altman, D. G., Schulz, K. F., Moher, D., Egger, M., Davidoff, F., Elbourne, D., et al. (2001). The revised CONSORT statement for reporting randomized trials: Explanation and elaboration. Annals of Internal Medicine, 134(8), 663-694.

American Psychological Association. (2001). Publication manual of the American Psychological Association (5th ed.). Washington, DC: American Psychological Association.

Begg, C., Cho, M., Eastwood, S., Horton, R., Moher, D., Olkin, I., et al. (1996). Improving the quality of reporting of randomized controlled trials. The CONSORT statement. JAMA: The Journal of the American Medical Association, 276(8), 637-639.

Bhandari, M., Montori, V. M., Devereaux, P. J., Wilczynski, N. L., Morgan, D., Haynes, R. B., et al. (2004). Doubling the impact: Publication of systematic review articles in orthopaedic journals. The Journal of Bone and Joint Surgery, 86-A(5), 1012-1016.

Booth, A. (2006). "Brimful of STARLITE": Toward standards for reporting literature searches. Journal of the Medical Library Association: JMLA, 94(4), 421-429, e205.

Bossuyt, P. M., Reitsma, J. B., Bruns, D. E., Gatsonis, C. A., Glasziou, P. P., Irwig, L. M., et al. (2003a). Towards complete and accurate reporting of studies of diagnostic accuracy: The STARD initiative. Standards for Reporting of Diagnostic Accuracy. Clinical Chemistry, 49(1), 1-6.

Bossuyt, P. M., Reitsma, J. B., Bruns, D. E., Gatsonis, C. A., Glasziou, P. P., Irwig, L. M., et al. (2003b). The STARD statement for reporting studies of diagnostic accuracy: Explanation and elaboration. Clinical Chemistry, 49(1), 7-18.

Boutron, I., Guittet, L., Estellat, C., Moher, D., Hrobjartsson, A., & Ravaud, P. (2007). Reporting methods of blinding in randomized trials assessing nonpharmacological treatments. PLoS Medicine, 4(2), e61.

Campbell, M. K., Elbourne, D. R., Altman, D. G., & CONSORT group. (2004). CONSORT statement: Extension to cluster randomised trials. BMJ, 328(7441), 702-708.

Chan, K. B., Man-Son-Hing, M., Molnar, F. J., & Laupacis, A. (2001). How well is the clinical importance of study results reported? An assessment of randomized controlled trials. CMAJ: Canadian Medical Association Journal, 165(9), 1197-1202.

Chan, S., & Bhandari, M. (2007). The quality of reporting of orthopedic randomized trials with use of a checklist for nonpharmacological therapies. The Journal of Bone and Joint Surgery, 89(9), 1970-1978.

Coats, A. J. (2005). Top of the charts: Download versus citations in the International Journal of Cardiology. International Journal of Cardiology, 105(2), 123-125.

Coats, A. J. (2007). Most frequently cited and downloaded papers from Volume 98 (2005). International Journal of Cardiology, 122(3), e16-7.

Day, R. A. (1998). How to write & publish a scientific paper (5th ed.). Phoenix, AZ: Oryx Press.

Dean, M. E., Coulter, M. K., Fisher, P., Jobst, K., & Walach, H. (2007). Reporting data on homeopathic treatments (RedHot): A supplement to CONSORT. Homeopathy: The Journal of the Faculty of Homeopathy, 96(1), 42-45.

Des Jarlais, D. C., Lyles, C., Crepaz, N., & TREND Group. (2004). Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND statement. American Journal of Public Health, 94(3), 361-366.

Devereaux, P. J., Choi, P. T., El-Dika, S., Bhandari, M., Montori, V. M., Schunemann, H. J., et al. (2004). An observational study found that authors of randomized controlled trials frequently use concealment of randomization and blinding, despite the failure to report these methods. Journal of Clinical Epidemiology, 57(12), 1232-1236.

Dijkers, M. P. (2003). Searching the literature for information on traumatic spinal cord injury: The usefulness of abstracts. Spinal Cord, 41(2), 76-84.

Edlund, W., Gronseth, G., So, Y., & Franklin, G. (2004). Clinical practice guidelines process manual - 2004 edition. St. Paul, MN: American Academy of Neurology.

Emerson, J. D., McPeek, B., & Mosteller, F. (1984). Reporting clinical trials in general surgical journals. Surgery, 95(5), 572-579.

Esselman, P. C., Thombs, B. D., Magyar-Russell, G., & Fauerbach, J. A. (2006). Burn rehabilitation: State of the science. American Journal of Physical Medicine & Rehabilitation, 85(4), 383-413.

Evers, S., Goossens, M., de Vet, H., van Tulder, M., & Ament, A. (2005). Criteria list for assessment of methodological quality of economic evaluations: Consensus on Health Economic Criteria. International Journal of Technology Assessment in Health Care, 21(2), 240-245.

Eysenbach, G. (2004). Improving the quality of Web surveys: The Checklist for Reporting Results of Internet E-Surveys (CHERRIES). Journal of Medical Internet Research, 6(3), e34.

Feldstein, D. A. (2005). Clinician's guide to systematic reviews and meta-analyses. WMJ: Official Publication of the State Medical Society of Wisconsin, 104(3), 25-29.

Fernandez, E. (2005). Observational studies in epidemiology (STROBE). [Estudios epidemiologicos (STROBE)] Medicina Clinica, 125(Supl.1), 43-48.

Froom, P., & Froom, J. (1993). Deficiencies in structured medical abstracts. Journal of Clinical Epidemiology, 46(7), 591-594.

Gagnier, J. J., Boon, H., Rochon, P., Moher, D., Barnes, J., Bombardier, C., et al. (2006). Recommendations for reporting randomized controlled trials of herbal interventions: Explanation and elaboration. Journal of Clinical Epidemiology, 59(11), 1134-1149.

Gagnier, J. J., Boon, H., Rochon, P., Moher, D., Barnes, J., Bombardier, C., et al. (2006). Reporting randomized, controlled trials of herbal interventions: An elaborated CONSORT statement. Annals of Internal Medicine, 144(5), 364-367.

Gordon, W. A., Zafonte, R., Cicerone, K., Cantor, J., Brown, M., Lombard, L., et al. (2006). Traumatic brain injury rehabilitation: State of the science. American Journal of Physical Medicine & Rehabilitation, 85(4), 343-382.

Harbourt, A. M., Knecht, L. S., & Humphreys, B. L. (1995). Structured abstracts in MEDLINE, 1989-1991. Bulletin of the Medical Library Association, 83(2), 190-195.

Hartley, J., & Benjamin, M. (1998). An evaluation of structured abstracts in journals published by the British Psychological Society. British Journal of Educational Psychology, 68(3), 443-456.

Haynes, R. B., Mulrow, C. D., Huth, E. J., Altman, D. G., & Gardner, M. J. (1990). More informative abstracts revisited. Annals of Internal Medicine, 113(1), 69-76.

Henkel, V., Mergl, R., Allgaier, A. K., Kohnen, R., Moller, H. J., & Hegerl, U. (2006). Treatment of depression with atypical features: A meta-analytic approach. Psychiatry Research, 141(1), 89-101.

Hernandez, A. V., Boersma, E., Murray, G. D., Habbema, J. D., & Steyerberg, E. W. (2006). Subgroup analyses in therapeutic cardiovascular clinical trials: Are most of them misleading? American Heart Journal, 151(2), 257-264.

Hill, C. L., LaValley, M. P., & Felson, D. T. (2002). Discrepancy between published report and actual conduct of randomized clinical trials. Journal of Clinical Epidemiology, 55(8), 783-786.

International Committee of Medical Journal Editors. (2006). Uniform Requirements for Manuscripts Submitted to Biomedical Journals: Writing and Editing for Biomedical Publication. Retrieved September 14, 2007, from http://www.icmje.org/

Ioannidis, J. P., Evans, S. J., Gotzsche, P. C., O'Neill, R. T., Altman, D. G., Schulz, K., et al. (2004). Better reporting of harms in randomized trials: An extension of the CONSORT statement. Annals of Internal Medicine, 141(10), 781-788.

Iverson, C. (2007). AMA manual of style: A guide for authors and editors (10th ed.). New York, NY: Oxford University Press.

Loke, Y. K., & Derry, S. (2003). Does anybody read "evidence-based" articles? BMC Medical Research Methodology, 3, 14.

Macbeth, F., & Overgaard, J. (2002). Expert reviews, systematic reviews and meta-analyses. Radiotherapy and Oncology: Journal of the European Society for Therapeutic Radiology and Oncology, 64(3), 233-234.

MacPherson, H., White, A., Cummings, M., Jobst, K. A., Rose, K., Niemtzow, R. C., et al. (2002). Standards for Reporting Interventions in Controlled Trials of Acupuncture: The STRICTA recommendations. Journal of Alternative and Complementary Medicine, 8(1), 85-89.

Meinert, C. L., Tonascia, S., & Higgins, K. (1984). Content of reports on clinical trials: A critical review. Controlled Clinical Trials, 5(4), 328-347.

Moberg-Mogren, E., & Nelson, D. L. (2006). Evaluating the quality of reporting occupational therapy randomized controlled trials by expanding the CONSORT criteria. The American Journal of Occupational Therapy, 60(2), 226-235.

Moher, D., Cook, D. J., Eastwood, S., Olkin, I., Rennie, D., & Stroup, D. F. (1999). Improving the quality of reports of meta-analyses of randomised controlled trials: The QUOROM statement. Quality of Reporting of Meta-analyses. Lancet, 354(9193), 1896-1900.

Moher, D., Schulz, K. F., Altman, D., & CONSORT Group (Consolidated Standards of Reporting Trials). (2001). The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomized trials. JAMA: The Journal of the American Medical Association, 285(15), 1987-1991.

Piaggio, G., Elbourne, D. R., Altman, D. G., Pocock, S. J., Evans, S. J., & CONSORT Group. (2006). Reporting of noninferiority and equivalence randomized trials: An extension of the CONSORT statement. JAMA: The Journal of the American Medical Association, 295(10), 1152-1160.

Pocock, S. J., Hughes, M. D., & Lee, R. J. (1987). Statistical problems in the reporting of clinical trials. A survey of three medical journals. The New England Journal of Medicine, 317(7), 426-432.

Sackett, D. L., Rosenberg, W. M., Gray, J. A., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn't. BMJ, 312(7023), 71-72.

Sargeant, J. M., Rajic, A., Read, S., & Ohlsson, A. (2006). The process of systematic review and its application in agri-food public-health. Preventive Veterinary Medicine, 75(3-4), 141-151.

Schlosser, R. W. (2006). The role of systematic reviews in evidence-based practice, research, and development. FOCUS Technical Brief, (15).

Sipski, M. L., & Richards, J. S. (2006). Spinal cord injury rehabilitation: State of the science. American Journal of Physical Medicine & Rehabilitation, 85(4), 310-342.

Smidt, N., Rutjes, A. W., van der Windt, D. A., Ostelo, R. W., Bossuyt, P. M., Reitsma, J. B., et al. (2006). Reproducibility of the STARD checklist: An instrument to assess the quality of reporting of diagnostic accuracy studies. BMC Medical Research Methodology, 6, 12.

Staquet, M., Berzon, R., Osoba, D., & Machin, D. (1996). Guidelines for reporting results of quality of life assessments in clinical trials. Quality of Life Research: An International Journal of Quality of Life Aspects of Treatment, Care and Rehabilitation, 5(5), 496-502.

Stone, A. A., & Shiffman, S. (2002). Capturing momentary, self report data: A proposal for reporting guidelines. Annals of Behavioral Medicine, 24(3), 236-243.

Stroup, D. F., Berlin, J. A., Morton, S. C., Olkin, I., Williamson, G. D., Rennie, D., et al. (2000). Meta-analysis of observational studies in epidemiology: A proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA: The Journal of the American Medical Association, 283(15), 2008-2012.

Sung, L., Hayden, J., Greenberg, M. L., Koren, G., Feldman, B. M., & Tomlinson, G. A. (2005). Seven items were identified for inclusion when reporting a Bayesian analysis of a clinical study. Journal of Clinical Epidemiology, 58(3), 261-268.

Taddio, A., Pain, T., Fassos, F. F., Boon, H., Ilersich, A. L., & Einarson, T. R. (1994). Quality of nonstructured and structured abstracts of original research articles in the British Medical Journal, the Canadian Medical Association Journal and the Journal of the American Medical Association. CMAJ: Canadian Medical Association Journal, 150(10), 1611-1615.

Tooth, L., Ware, R., Bain, C., Purdie, D. M., & Dobson, A. (2005). Quality of reporting of observational longitudinal research. American Journal of Epidemiology, 161(3), 280-288.

Wright, R. W., Brand, R. A., Dunn, W., & Spindler, K. P. (2007). How to write a systematic review. Clinical Orthopaedics and Related Research, 455, 23-29.
