

The Challenge of Evidence in Disability and Rehabilitation Research and Practice

A Position Paper

Mark V. Johnston, PhD
Gregg C. Vanderheiden, PhD
Marianne D. Farkas, ScD
E. Sally Rogers, ScD
Jean Ann Summers, PhD
John D. Westbrook, PhD

For the NCDDR Task Force on Standards of Evidence and Methods



National Center for the Dissemination of Disability Research - Advancing Research, Improving Education







The Challenge of Evidence in Disability and Rehabilitation Research and Practice

A Position Paper

Introduction to the NCDDR Task Force Papers

The National Center for the Dissemination of Disability Research (NCDDR) has established three task forces to assist the project in analyzing, understanding, and commenting on features of the evidence production process within the disability and rehabilitation research context.

  • Task Force on Standards of Evidence and Methods
  • Task Force on Systematic Review and Guidelines
  • Task Force on Knowledge Translation/Knowledge Value Mapping

Each task force comprises senior researchers with current or recent experience in conducting NIDILRR-sponsored research activities. Each task force is charged with developing positions and statements relevant to current circumstances.

This paper was developed as a group effort of the National Center for the Dissemination of Disability Research (NCDDR) Task Force on Standards of Evidence and Methods (TFSE). Concerns and suggestions of all members have been taken into consideration in the development of this paper. All issues, however, could not be fully addressed in this initial paper. Subsequent papers will address key issues related to this topic area. Task Force members contributing to this paper include:

Mark V. Johnston, PhD, Gregg C. Vanderheiden, PhD, Marianne D. Farkas, ScD, E. Sally Rogers, ScD, Jean Ann Summers, PhD, and John D. Westbrook, PhD


TASK FORCE FACILITATOR:
Mark V. Johnston, PhD

Affiliation:
University of Wisconsin-Milwaukee, WI

Correspondence:
National Center for the Dissemination of Disability Research (NCDDR)
SEDL
4700 Mueller Blvd.
Austin, TX 78723-3081
ncddr@sedl.org

Disclosure:
The Task Force on Standards of Evidence and Methods is sponsored by the National Center for the Dissemination of Disability Research (NCDDR) and funded by the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR).

RECOMMENDED CITATION:
Johnston, M. V., Vanderheiden, G. C., Farkas, M. D., Rogers, E. S., Summers, J. A., & Westbrook, J. D., for the NCDDR Task Force on Standards of Evidence and Methods. (2009). The challenge of evidence in disability and rehabilitation research and practice: A position paper. Austin, TX: SEDL.


Can evidence beneficially shape information resources and services for people with disabilities? What constitutes "best evidence" on interventions for people with disabilities? Do we need to develop specific evidence standards to identify best evidence on interventions for people with disabilities?

This paper states the position of the NCDDR Task Force on Standards of Evidence and Methods (TFSE) regarding the need for (a) the thoughtful determination of research evidence on the basis of both the rigor of the research and its relevance to the lives of people with disabilities; and (b) systems that enable us to identify, in a timely manner, the best available evidence in response to specific topical questions in disability and rehabilitation.

The primary focus of this paper is on evidence for interventions in the field of disability and rehabilitation (D&R). Evidence issues related to D&R interventions concern all people with disabilities and involve both research and development, each of which is essential to the field. The specific objectives of this paper are the following:

  • To clarify what is meant by the term evidence and to describe the nature of the contemporary systems used to identify and evaluate evidence in intervention research
  • To identify the challenges in meeting contemporary standards of evidence in the field of D&R interventions
  • To propose next steps for examining related issues and for taking action to promote the availability of evidence-based services and information in the field of D&R interventions

The Challenge of Evidence

Evidence and Contemporary Systems for Grading Evidence

Few researchers would disagree with the proposition that D&R policies and practices should be grounded in evidence. The issue is how that evidence should be identified, evaluated, and synthesized. What standards and methods should be applied to evaluate the strength of scientific evidence used to inform practices and policies for people with disabilities?

Evidence, for the purpose of this paper, refers to the knowledge that connects research to practice. Over the years, an increasing emphasis on evidence has led to a movement for evidence-based practice (EBP). Emerging first in the health care industry, EBP has since swept into a number of other professional fields, including D&R.

In the field of D&R, EBP involves using the best available evidence—integrated with clinical expertise and the values and experiences of people with disabilities and other stakeholders—to guide decisions about clinical and community practices. In this paper, we define D&R practices, or interventions, as systematic actions, programs, treatments, devices, or environmental changes designed to benefit, either directly or indirectly, individuals or groups with disabilities. In a clinic or home, D&R interventions usually focus on individuals, although small groups (e.g., family) may be treated as well. D&R interventions may also target larger units such as classrooms, companies, or communities (e.g., to increase physical accessibility, to alter attitudes, to effect universal design, or to improve policies). Primary domains of concern for D&R interventions include participation and community living, employment, health and function, and technology for access and function (Federal Register, 2006).

To support EBP, many professional organizations have developed detailed evidence grading systems for use mainly in evaluating and synthesizing intervention studies (e.g., Edlund, Gronseth, So, & Franklin, 2004; Guyatt et al., 2008; Higgins & Green, 2006; Institute of Medicine, 2008; Sackett, Straus, Richardson, Rosenberg, & Haynes, 2000). While a variety of rankings are used, evidence is commonly graded on a scale from Level 1, the strongest evidence, to Levels 4 or 5, the weakest evidence.

Such evidence grading systems are increasingly being used in the field of D&R to evaluate studies of clinical and community practices. For example, several grading systems are available for use in selecting the best evidence to answer clinical questions in systematic reviews and meta-analyses. These include the systems of the Cochrane Collaboration, the Campbell Collaboration, the Agency for Healthcare Research and Quality (AHRQ), and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group, as well as grading systems from medical societies such as the American Academy of Neurology, whose system is also recommended by the American Congress of Rehabilitation Medicine (Atkins et al., 2004; Edlund et al., 2004; Guyatt et al., 2008; Higgins & Green, 2006; Institute of Medicine, 2008; West et al., 2002).

Evidence grading systems assess and rank the quality of research studies on the basis of pre-established criteria, or standards of evidence, which go beyond dichotomies such as good or bad and rigorous or nonrigorous. Virtually all evidence grading systems for studies of intervention efficacy address the following aspects of research quality (West et al., 2002):

  • Randomization, and in some cases other methods of evaluating the comparability of control groups
  • Blinding, and in some cases other methods of avoiding measurement biases, attrition, or losses
  • Statistical conclusion validity, including the size of statistical confidence intervals

An overarching purpose of using evidence grading systems is to avoid or minimize bias, including not only technical biases in research procedures but also biases associated with self-interest, financial interest, or social pressure to express certain opinions regardless of the scientific data.

In addition to issues of research quality, evidence grading systems (e.g., GRADE) in the field of D&R increasingly consider the strength of recommendation and the relevance of evidence to individuals' needs and values. Such issues of practical application assess the external validity of the evidence and are just as important for evidence standards and methods to address as issues of research quality, which assess the internal validity of the evidence. For this reason, future Task Force papers will further examine both the research quality and application sides of the evidence bridge.
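To make the idea of pre-established criteria concrete, the following is a minimal, hypothetical sketch (in Python) of how design features such as randomization, blinding, attrition, and sample size might be mapped to an evidence level. The level names and thresholds are invented for illustration and do not reproduce any of the grading systems discussed above.

```python
# Hypothetical illustration of pre-established grading criteria.
# Level names and thresholds are invented for this sketch; they do not
# reproduce GRADE, AAN, AHRQ, or any other published system.
from dataclasses import dataclass


@dataclass
class Study:
    randomized: bool           # participants randomly allocated to groups
    blinded_assessment: bool   # outcome assessors unaware of group assignment
    attrition: float           # proportion of participants lost to follow-up
    n: int                     # total sample size


def evidence_level(study: Study) -> str:
    """Assign an illustrative evidence level based on design features."""
    if (study.randomized and study.blinded_assessment
            and study.attrition < 0.2 and study.n >= 100):
        return "Level 1: well-conducted randomized controlled trial"
    if study.randomized:
        return "Level 2: randomized trial with methodological limitations"
    if study.n >= 30:
        return "Level 3: controlled but non-randomized study"
    return "Level 4: uncontrolled or small-sample study"


print(evidence_level(Study(randomized=True, blinded_assessment=True,
                           attrition=0.1, n=150)))
```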

The Challenge of Evidence in Disability and Rehabilitation

EBP is quickly becoming the preferred approach for guiding D&R professionals in rendering services to individuals with disabilities. However, contemporary evidence standards and methods pose a number of challenges for the field of D&R. For one, the evidence standards and methods used in many systematic reviews and meta-analyses identify few Level 1 studies of D&R interventions or programs (Johnston, Sherer, & Whyte, 2006). Many systematic reviews and meta-analyses include only randomized controlled trials (RCTs), widely recognized as the most rigorous method of testing intervention efficacy. For this reason, some recently published reviews have reported finding very little or no evidence. Although such results may reflect a scarcity of well-controlled D&R trials rather than a lack of intervention effectiveness, findings of an absence of evidence pose a serious and ongoing challenge to the field of D&R.

The shortage of Level 1 clinical trials in D&R is due in large part to the nature and scope of the field. In both research and practice, D&R is an exceptionally wide, multidisciplinary field involving biological, psychological, social, economic, legal, and environmental factors related to disability. The field's mission entails the commitment to help people with disabilities "perform activities of their choice" and "to expand society's capacity to provide full opportunities and accommodations for its citizens with disabilities" (Federal Register, 2006). This vast scope of concern includes social integration, employment, independent living, health, and enabling technology. Although basic scientific standards and methods can be applied to D&R, multiple standards and methods are needed to discern the best evidence for the wide and heterogeneous problems and interventions addressed in D&R research and practice.

The nature of D&R presents significant challenges to knowledge development and evidence identification, including the following:

  • Great breadth and complexity. Conceptually, disability involves the interaction of a person with a wide range of complex factors in the environment (World Health Organization, 2001). In both research and practice, some D&R interventions target health or biological functions; others target skills, feelings, or behaviors; and still others target aspects of the social or physical environment that limit people with disabilities (e.g., attitudes of employers or physical accessibility).
  • Emphasis on empowering people with disabilities. D&R research involves a commitment to a participatory approach that includes people with disabilities as decision makers throughout the process. This approach requires research designs and methodologies that appropriately and effectively allow for such participation. Although critical to ensuring that the research is relevant to the lives and values of people with disabilities, these designs and methodologies may be considered less rigorous under most current evidence grading methods.
  • Small sample sizes. Although disability is common, affecting the majority of people at some point, it is also extremely diverse. Interventions typically must be highly individualized, or client centered, and tailored to particular configurations of impairment or to personal and contextual factors. This diversity and need for customization often result in small samples for studies at any one local site.
  • Difficulty or impossibility of complete blinding and placebo control. For many personalized therapies, the client and therapist need to be aware of the intervention involved. For example, researchers cannot hide from clients the presence of an assistive device or a guide dog.
  • Difficulty in defining an ethical and practical control group. RCTs are comparatively new to D&R and a departure from its research tradition. Practitioners and people with disabilities are apprehensive about the denial of services for control groups in RCTs.
  • Need for enabling technology, including assistive devices and environmental modifications, to improve the chosen activities or quality of life of people with disabilities. Existing evidence grading systems do not address all of the research methods used to evaluate assistive technology or universal design for accessibility and successful use.
  • Funding levels that are adequate for pilot studies, intervention development, or early stage clinical trials but not for truly rigorous effectiveness studies. D&R is widely perceived as involving issues related to clinical service delivery and advocacy (Field, Jette, & Martin, 2007) rather than issues related to research and development. As a result, funding levels for research are often inadequate for rigorous scientific inquiries using a large, multi-site RCT design.
  • Need to address issues within large social systems that involve consideration for the social, physical, and/or economic environment. Many of the major issues in D&R concern large social systems that cannot be manipulated experimentally (e.g., universal design, accessibility, public attitudes, legal rights, effects of culture, economic factors). These contextual effects are not readily incorporated into current evidence grading systems.

Current EBP standards and methods for D&R were derived from evidence-based medicine (EBM) and optimized for well-funded studies of well-defined, individual-level clinical interventions, such as pharmaceutical agents, which should be tested using blinded RCTs. For many of the current research problems in D&R, however, the usual or optimal solution will not be a large RCT. The best research design is not always the largest or most rigorous one possible; rather, it is the one that will most advance knowledge given the state of prior research and development and the available resources.

To identify the best evidence for many D&R practices, EBP standards and methods for systematic reviews need to be sensitive to non-RCT evidence and to recognize classes of interventions for which RCTs, though occasionally possible and worthwhile, are not expected to be the usual or standard source of evidence (e.g., most assistive or enabling technology). For example, methods other than randomization exist for controlling differences between experimental and comparison groups, and D&R evidence grading systems should incorporate those methods (Institute of Medicine, 2008; Johnston, Ottenbacher, & Reichardt, 1995; Schneider, Carnoy, Kilpatrick, Schmidt, & Shavelson, 2007; Victora, Habicht, & Bryce, 2004; West et al., 2008). At the same time, evidence standards and methods in D&R should continue to support the need to develop and test new interventions using the most rigorous methods, including RCTs, whenever appropriate (Johnston & Case-Smith, 2009).
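As an illustration of one such non-randomized approach, the sketch below shows a simple propensity-score matching analysis on simulated data. The dataset and variable names are hypothetical; the sketch only illustrates the general idea of balancing treated and comparison groups on measured characteristics when randomization is not feasible.

```python
# Hypothetical sketch: propensity-score matching as one non-randomized way to
# balance treated and comparison groups (data and variable names are illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated observational data: age and baseline function influence both
# who receives the intervention and the outcome.
n = 500
age = rng.normal(50, 12, n)
baseline = rng.normal(0, 1, n)
p_treat = 1 / (1 + np.exp(-(-0.02 * (age - 50) + 0.8 * baseline)))
treated = rng.binomial(1, p_treat)
outcome = 2.0 * treated + 1.5 * baseline - 0.03 * age + rng.normal(0, 1, n)

# Step 1: estimate each person's propensity (probability of treatment)
X = np.column_stack([age, baseline])
propensity = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated person to the nearest-propensity comparison person
treated_idx = np.where(treated == 1)[0]
control_idx = np.where(treated == 0)[0]
matches = [control_idx[np.argmin(np.abs(propensity[control_idx] - propensity[i]))]
           for i in treated_idx]

# Step 3: compare outcomes within matched pairs
effect = outcome[treated_idx].mean() - outcome[matches].mean()
print(f"Matched estimate of intervention effect: {effect:.2f}")
```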

D&R can also benefit from studying best practices in other fields facing similar challenges with EBP and then adopting or adapting those practices that apply. For instance, researchers can overcome problems of small sample size by using multicenter collaborative networks, or avoid the problem by focusing on issues common to a large number of people. Complex or loosely specified intervention strategies can be clearly delineated, and procedures developed to ensure intervention fidelity. In public health, nursing, psychology, and other fields, hundreds of RCTs have been mounted to study multifaceted community and behavioral interventions (e.g., chronic disease self-management). In other studies, mixed methods have been applied to understand problems of qualitative complexity and context. Ethical clinical trials are mounted in many fields (e.g., by comparing a promising new or improved but unproven intervention to treatment as usual), and sophisticated correlational and quasi-experimental research designs can be used when randomized control is not feasible (Johnston et al., 1995; Schneider et al., 2007; West et al., 2008). Fields as diverse as psychology and public health have employed participatory research strategies to enhance study relevance and success (Viswanathan et al., 2004).
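One way multicenter collaboration can offset small local samples is by pooling site-level results, for example with a random-effects meta-analysis. The sketch below applies the DerSimonian-Laird estimator to made-up effect sizes and standard errors from hypothetical sites; the numbers are illustrative only, not drawn from any study cited here.

```python
# Hypothetical sketch: pooling small site-level studies with a
# DerSimonian-Laird random-effects meta-analysis. All numbers are made up.
import numpy as np

# Effect estimates (e.g., mean outcome differences) and standard errors
# from five small hypothetical sites.
effects = np.array([0.40, 0.25, 0.55, 0.10, 0.35])
se = np.array([0.20, 0.25, 0.30, 0.22, 0.18])

# Fixed-effect (inverse-variance) weights and pooled estimate
w = 1.0 / se**2
pooled_fixed = np.sum(w * effects) / np.sum(w)

# Between-site heterogeneity (tau^2) via DerSimonian-Laird
q = np.sum(w * (effects - pooled_fixed) ** 2)
df = len(effects) - 1
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights, pooled estimate, and 95% confidence interval
w_re = 1.0 / (se**2 + tau2)
pooled_re = np.sum(w_re * effects) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"Pooled effect (random effects): {pooled_re:.2f} "
      f"[{pooled_re - 1.96 * se_re:.2f}, {pooled_re + 1.96 * se_re:.2f}]")
```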

Next Steps

To advance EBP in D&R, we need to redefine the field's evidence standards and methods for intervention research. Guidelines for evidence development and application should address scientific research quality, relevance to the needs and values of people with disabilities, and applicability to practice. Many of the evidence grading systems currently available focus primarily on research quality (internal validity); they need to be expanded to address relevance to the needs of people with disabilities and the practical applicability of evidence (external validity). One of the next steps related to relevance should be refining how D&R professionals measure the true needs, views, and desires of people with disabilities.

In addition, because of the breadth of D&R interventions, the field needs to consider developing several evidence grading systems tailored to specific types of interventions. Examples include interventions involving assistive or enabling technologies and devices; behavioral and activity-based interventions; and interventions addressing environmental factors such as the social, attitudinal, and physical environments of a larger community or societal context. Similarly, a need exists to develop evidence grading systems for nonintervention research and development in D&R, such as measurement, prognosis, and technology development processes.

The challenges identified in this paper need to be addressed in further detail, and possible solutions proposed based on reviews of current best knowledge. This process should involve consensus development among research and evidence experts both from within and outside of D&R, including representatives of people with disabilities and other stakeholders. D&R research encompasses widely varying professional traditions, and considerable work is required to reach consensus on quality indicators and useful educational materials for the various problems and subfields within D&R. Papers proposing solutions should be circulated widely to advance discussion among all stakeholders in the field regarding methods of determining best evidence.

Conclusions

The field of D&R faces the challenge of identifying and applying evidence to its practices. Guidelines and recommendations regarding clinical and community practices in D&R should be based on the best available evidence. The standards and methods used to select that evidence should address research quality, the needs and values of people with disabilities, and applicability to practice. These factors complement one another, and each of them must be considered when using research evidence to guide decisions affecting people with disabilities and the many issues they face in society.


Acknowledgments and References

Acknowledgments

Members of the Task Force on Standards of Evidence and Methods at the time this position paper was prepared included Matthew H. Bakke, PhD, Marianne D. Farkas, ScD, Mark V. Johnston, PhD, Dennis C. Lezotte, PhD, Kathleen M. Murphy, PhD, E. Sally Rogers, ScD, Katherine G. Schomer, MA, Jean Ann Summers, PhD, Gregg C. Vanderheiden, PhD, John D. Westbrook, PhD, and Kathryn M. Yorkston, PhD, all of whom contributed to the development and critical review of the ideas presented in this communication.

Careful review and suggestions by Mark V. Johnston, PhD, contributed to this paper's final format. John D. Westbrook, PhD of the National Center for the Dissemination of Disability Research (NCDDR) assisted in writing and served as liaison for the Task Force. The Task Force thanks the following for their useful feedback during the development of this paper:

Margaret L. Campbell, PhD
Senior Scientist for Planning and Policy Support
National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR)

Arthur M. Sherwood, P.E., PhD
Science and Technology Advisor
National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR)

Pimjai Sudsawad, ScD
Knowledge Translation Program Coordinator
National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR)


References

Atkins, D., Eccles, M., Flottorp, S., Guyatt, G. H., Henry, D., Hill, S., et al. (2004). Systems for grading the quality of evidence and the strength of recommendations 1: Critical appraisal of existing approaches: The GRADE Working Group. BMC Health Services Research, 4(1), 38.

Edlund, W., Gronseth, G., So, Y., & Franklin, G. (2004). Clinical practice guideline process manual. St. Paul, MN: American Academy of Neurology.

Federal Register. (2006, February 15). Notice of final long-range plan for fiscal years 2005–2009, 71(31), 8165–8200 (71 FR 8166). Retrieved from the Federal Register Online via GPO Access: http://edocket.access.gpo.gov/2006/pdf/06-1255.pdf

Field, M., Jette, A. M., & Martin, L. (Eds). (2007). The future of disability in America (pp. 315–317). Washington, DC: Institute of Medicine, The National Academies Press.

GRADE Working Group. (2007). GRADEprofiler (Version 3.2.2) [Software]. Available from http://ims.cochrane.org/gradepro

Guyatt, G. H., Oxman, A. D., Vist, G. E., Kunz, R., Falck-Ytter, Y., Alonso-Coello, P., et al. (2008). GRADE: An emerging consensus on rating quality of evidence and strength of recommendations. BMJ, 336(7650), 924–926.

Higgins, J. P. T., & Green, S. (Eds.). (2006). Cochrane handbook for systematic reviews of interventions (Version 4.2.6). Chichester, UK: John Wiley & Sons, Ltd.

Institute of Medicine, Committee on Reviewing Evidence to Identify Highly Effective Clinical Services. (2008). Knowing what works in health care: A roadmap for the nation. J. Eden, B. Wheatley, M. McClellan, & H. E. Sox (Eds.). Washington, DC: The National Academies Press.

Johnston, M. V., Ottenbacher, K. J., & Reichardt, C. S. (1995). Strong quasi-experimental designs for research on the effectiveness of rehabilitation. American Journal of Physical Medicine and Rehabilitation, 74(5), 383–392.

Johnston, M. V., Sherer, M., & Whyte, J. (2006). Applying evidence standards to rehabilitation research: An overview. American Journal of Physical Medicine and Rehabilitation, 85(4), 292–309.

Johnston, M. V., & Case-Smith, J. (2009). Development and testing of interventions in occupational therapy: Towards a new generation of research in occupational therapy. Occupational Therapy Journal of Research: Occupation, Participation, and Health, 29(1), 13.

Sackett, D. L., Straus, S. E., Richardson, W. S., Rosenberg, W., & Haynes, R. B. (2000). Evidence-based medicine: How to practice and teach EBM (2nd ed.). Edinburgh, UK: Churchill Livingstone.

Schneider, B., Carnoy, M., Kilpatrick, J., Schmidt, W. H., & Shavelson, R. J. (2007). Estimating causal effects using experimental and observational designs. Washington, DC: American Educational Research Association, Governing Board of the American Educational Research Association Grants Program.

Victora, C. G., Habicht, J. P., & Bryce, J. (2004). Evidence-based public health: Moving beyond randomized trials. American Journal of Public Health, 94(3), 400–405.

Viswanathan, M., Ammerman, A., Eng, E., Gartlehner, G., Lohr, K. N., Griffith, D., et al. (2004). Community-based participatory research: Assessing the evidence (AHRQ Publication No. 04–E022-1). Retrieved from U.S. Department of Health & Human Services, Agency for Healthcare Research and Quality: http://www.ahrq.gov/clinic/tp/cbprtp.htm

West, S., King, V., Carey, T. S., Lohr, K. N., McKoy, N., Sutton, S. F., et al. (2002). Systems to rate the strength of scientific evidence (AHRQ Publication No. 02-E015, Evidence Report/Technology Assessment No. 47). Rockville, MD: U.S. Department of Health & Human Services, Agency for Healthcare Research and Quality.

West, S. G., Duan, N., Pequegnat, W., Gaist, P., Des Jarlais, D. C., Holtgrave, D., et al. (2008). Alternatives to the randomized controlled trial. American Journal of Public Health, 98(8), 1359–1366.

World Health Organization. (2001). International classification of functioning, disability and health (ICF). Geneva, Switzerland: World Health Organization.


Members of the Task Force on Standards of Evidence and Methods

Matthew H. Bakke, PhD, is associate professor of Audiology and Ph.D. program director in the Hearing Speech and Language Sciences Department at Gallaudet University in Washington, DC. He is director of the Rehabilitation Engineering Research Center (RERC) on Hearing Enhancement, and principal investigator on a NIDILRR Field Initiated Programs Research Project, "An Automatic Fitting Algorithm for Cochlear Implants."

Marianne D. Farkas, ScD, is a research associate professor, Sargent College of Allied Health and Rehabilitation Sciences. She is the co-principal investigator of the NIDILRR funded Innovative Knowledge Dissemination and Utilization for Stakeholders and Professional Associations project as well as the RRTC on Improved Employment Outcomes at the Center for Psychiatric Rehabilitation, Boston University. She is also the Center's director of Training, Dissemination and Technical Assistance.

Mark V. Johnston, PhD, is the facilitator for the Task Force on Standards of Evidence and Methods. He is a professor in the College of Health Sciences, Department of Occupational Therapy, at the University of Wisconsin-Milwaukee. He is an experienced senior researcher in disability and rehabilitation, with over 90 publications and reviews focusing on questions of intervention effectiveness and the outcomes experienced by people with disabilities in clinic and community, funded by NIDILRR, NIH, VA, and other agencies. He is also chair of the Clinical Practice Committee of the American Congress of Rehabilitation Medicine, which deals with evidence and guidelines in rehabilitation and where he advocates for evidence standards that are both rigorous and sensitive to the problems faced in disability and rehabilitation.

Dennis C. Lezotte, PhD, has been a professor with the University of Colorado Denver since 1981. He earned his degree at SUNY at Buffalo in 1976 when he completed his doctoral studies in Statistics under the direction of Dr. Willard Clatworthy. Before coming to Colorado, he worked for 6 years at the University of Florida in the Medical Systems Division of the Department of Pathology. His teaching and research interests are in the area of Health Information Technology, Public Health Informatics and the use of clinical databases and information systems for decision support.

Kathleen M. Murphy, PhD, is the project director for SEDL's research partnership with the Disability and Business Technical Assistance Center (DBTAC) Southwest Americans with Disabilities Act (ADA) Center. She is also a program associate for SEDL's NCDDR.

E. Sally Rogers, ScD, is director of research at the Center for Psychiatric Rehabilitation at Boston University. She is the co-principal investigator of a NIDILRR-funded Rehabilitation Research and Training Center to improve vocational outcomes for individuals with psychiatric disabilities, a Knowledge Translation grant, and a field initiated grant.

Katherine G. Schomer, MA, is project manager for the Center for Technology and Disability Studies at the University of Washington. She manages a project to conduct systematic reviews of evidence on topics related to spinal cord injury, brain injury, and burn injuries. She also coordinates authorship of reviews with authors across the nation and develops and maintains data extraction tools (databases) for retrieving evidence.

Gregg C. Vanderheiden, PhD, is director of the Trace Research and Development Center, University of Wisconsin-Madison. His interests cover a wide range of research areas in technology, human disability, and aging. Current research includes development of new interface technologies, models for information transfer across sensory modalities, network-based services, techniques for augmenting human performance, enhancing the usability of the environment, and matching enhanced abilities to environmental demands. He also studies and develops standards for access to Web-based technologies, operating systems and telecommunication systems.

John D. Westbrook, PhD, is program manager of the Disability Research to Practice (DRP) program at SEDL. The DRP program currently includes the National Center for the Dissemination of Disability Research (NCDDR), which focuses on developing systems for applying rigorous standards of evidence in describing, assessing, and disseminating outcomes from research sponsored by NIDILRR, and the Vocational Rehabilitation Service Models for Individuals with Autism Spectrum Disorders project, which addresses the growing need for improving vocational rehabilitation and transition-to-employment services for people with ASD. Westbrook has extensive experience in disability, dissemination/utilization, and knowledge translation. Moving evidence from disability research into mainstream evidence collections, such as the Campbell Collaboration, is a major focus of his current activities. He has authored multiple resources addressing knowledge translation of rehabilitation and disability research findings.

Kathryn M. Yorkston, PhD, teaches Evidence-Based Rehabilitation, Foundations of Rehabilitation Science, and Communication Disorders in Rehabilitation in the Department of Rehabilitation Medicine at the University of Washington. Her clinical interests center on the Neuromuscular Clinic for Speech and Swallowing Disorders, and her research interests include motor speech disorders in adults.


The Task Force on Standards of Evidence and Methods is sponsored by the National Center for the Dissemination of Disability Research (NCDDR). NCDDR's Task Force Papers are published by SEDL and the NCDDR under grant H133A060028 from the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) in the U.S. Department of Education's Office of Special Education and Rehabilitative Services (OSERS).

The NCDDR's scope of work focuses on knowledge translation (KT) of NIDILRR-sponsored research and development results into evidence-based instruments and systematic reviews. This paper is published to further discussions regarding the application of rigorous standards of evidence in identifying, assessing, and using high-quality outcomes.

Available in alternate formats upon request.
Available online: https://ktdrr.org/ktlibrary/articles_pubs/ncddrwork/tfse_challenge


National Center for the Dissemination of Disability Research
SEDL
4700 Mueller Blvd.
Austin, Texas 78723
(800) 266-1832 or (512) 476-6861
Fax (512) 476-2286
Web site: http://www.ncddr.org
E-mail: NCDDR@sedl.org

SEDL operates the NCDDR, which is funded 100% by NIDILRR at $750,000 per project year. However, these contents do not necessarily represent the policy of the U.S. Department of Education, and you should not assume endorsement by the federal government.

SEDL is an Equal Employment Opportunity/Affirmative Action Employer and is committed to affording equal employment opportunities for all individuals in all employment matters. Neither SEDL nor the NCDDR discriminates on the basis of age, sex, race, color, creed, religion, national origin, sexual orientation, marital or veteran status, or the presence of a disability.

Copyright © 2009 by SEDL