The Role of Evidence-Based Therapy Programs in the Determination of Treatment Effectiveness
Paper based on a program presented at the 2009 American Counseling Association Annual Conference and Exposition, March 19-23, Charlotte, North Carolina.
The call for greater accountability in counseling has resulted in attempts to include outcomes research as a component of clinical treatment. “Evidence-Based Practice” has become an accepted term used to describe the integration of research and practice. The term was first used by a Canadian medical group to describe “evidence-based medicine” (Evidence-Based Medicine Working Group, 1992); a widely accepted definition for the human services was later developed by Gibbs (2003), who stated: “evidence based practitioners adopt a process of lifelong learning that involves continually posing specific questions of direct practical importance to clients, searching objectively and efficiently for the current best evidence relative to each question, and taking appropriate action guided by evidence” (p. 60). This definition has been used by the American Psychological Association to develop a list of empirically validated treatments that “have been referenced by a number of local, state, and federal funding agencies, which are beginning to restrict reimbursement to these treatments” (Levant, 2005). The consequences of rushing to adopt empirically supported treatments without careful consideration of the clinical utility of the treatment, the professional expertise of the counselor, and the unique characteristics of the client are serious and will impact the future of the profession. Although practitioners may be well-meaning, not all interventions are effective and some may be harmful (Rubin & Babbie, 2005).
Greater emphasis on accountability in the human services has encouraged counselors to consider methods to support their clinical decisions with research initiatives. The American Counseling Association (ACA) Code of Ethics (2005) stops short of requiring professional counselors to actively engage in formal outcomes research activities to support their clinical services, but specific codes are present that promote greater accountability. Sections A.1.c and C.2.d specifically call for professional counselors to pay attention to the issue of counseling effectiveness; section C.2.a requires counselors to practice within the boundaries of their competence; and section C.2.f requires counselors to “acquire and maintain a reasonable level of awareness of current scientific and professional information in their fields of activity.” These codes lay the foundation for the evolution of evidence-based counseling practices among professionals with appropriate training and experience.
The transition from laboratory to practice is not without controversy. The relationship between treatment efficacy established in the laboratory and clinical effectiveness with actual clients, who often present with a broad range of co-occurring disorders, has yet to be established. Borckardt et al. (2008) suggest that a case-based time-series research approach has many benefits over the use of group research initiatives, the predominant feature of randomized clinical trials (RCTs). Messer (2004) questions whether RCTs and experimental, single-case studies yield more useful information than the philosophical outlook, theory, other research sources, and practical experience on which most practitioners rely. Sharpley (2007) cites numerous studies suggesting that the use of RCTs to guide the development of preferred counseling approaches is inappropriate and leads to inaccurate results when applied to practice.
Controversy over effectiveness measures in the human services has led to the emergence and promotion of evidence-based practices that attempt to merge research with practice and demonstrate some level of accountability to the public. However, are counselors to provide services that are strictly objective and data based or purely subjective and experience based (Messer, 2004)?
Rubin (2007) defines evidence-based practitioners as “those who use scientific evidence to guide their own practice and who conduct or participate in evaluations of their own practice or programs” (p. 290). However, in practice the use of the term “scientific evidence” seems to cover a broad range of research approaches, some of which represent more rigorous application of research methodologies than others.
The Continuum of Evidence-Based Practices
It is tenable to assume that all counseling is evidence-based. Some counselors may pursue a course of treatment based on intuition anchored by their experiences with clients with similar characteristics while others may choose a treatment approach based on research found in the literature that recommends certain clinical approaches with clients with certain disorders. Still others may choose to make clinical decisions based on formal, site-based outcomes research activities. This range in the rationale supporting clinical decision making forms a continuum based on research rigor.
At one end of the scale, counselors depend largely on intuition which is often influenced by their training and experience. This includes the “clinical impressions” often cited in discharge summaries providing an evaluation of a client’s progress. Such impressions are often supported by references to in-session observations, client self-reports, and feedback from clients regarding the benefits of therapy.
Further along this continuum are counselors who depend on their training and experience and incorporate programs of study that have been developed to increase a client’s knowledge base or awareness level of a particular clinical problem. Gains in knowledge or awareness are subsequently considered evidence of counseling effectiveness.
Other counselors depend on their training and experience and incorporate outcomes research using global indicators reported in professional journals regarding specific client populations (Stewart & Chambless, 2007). Some of these indicators include psychological, medical, or social characteristics that appear in the literature relative to certain client populations. Examples of this approach include a review of services provided to inmates with co-occurring disorders (Chandler, Peters, Field, & Juliano-Bult, 2004) or the use of a family- based, behaviorally oriented, multimodal, multisystemic approach for children with attention deficit disorder with hyperactivity (ADHD; Edwards, 2002).
Toward the scientific end of this continuum, counselors’ professional training and experience may lead them to expand their reliance on research and incorporate “best practice” treatment approaches supported by literature references. Often these references to “best practices” represent the collective opinion of professionals regarding a treatment strategy for a particular client population with a particular problem and may include some global data regarding client progress. An example of this would be the use of Dialectic Behavioral Therapy as a predominant approach for clients with borderline personality disorder (Linehan, 1993) or the use of Cognitive Behavioral Therapy for individuals experiencing phobic or anxiety disorders (Rubin & Babbie, 2005).
At the scientific end of the scale, counselors use their professional training and experience and engage in formal research activities. These activities include an assessment of client progress, adaptation of treatment to account for the individual characteristics of the client, review of initial assessment information at the end of therapy to identify changes in qualitative indicators, and incorporation of pre- and post-treatment quantitative test data to confirm the changes that have been identified.
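As a minimal illustration of how such a site-based review might be organized, the sketch below pairs qualitative indicators recorded at intake and discharge with pre- and post-treatment test scores. All field names, indicators, and scores here are hypothetical and are not drawn from the article:

```python
# Hypothetical sketch of a site-based pre/post outcome record.
# Field names, indicators, and scores are invented for illustration only.
from dataclasses import dataclass, field


@dataclass
class OutcomeRecord:
    client_id: str
    qualitative_pre: set = field(default_factory=set)   # indicators noted at intake
    qualitative_post: set = field(default_factory=set)  # indicators noted at discharge
    score_pre: float = 0.0                              # pre-treatment test score
    score_post: float = 0.0                             # post-treatment test score

    def resolved_indicators(self):
        """Qualitative indicators present at intake but absent at discharge."""
        return self.qualitative_pre - self.qualitative_post

    def score_change(self):
        """Change in the quantitative measure over the course of treatment."""
        return self.score_post - self.score_pre


record = OutcomeRecord(
    client_id="A-01",
    qualitative_pre={"sleep disturbance", "social withdrawal", "panic episodes"},
    qualitative_post={"sleep disturbance"},
    score_pre=68.0,
    score_post=51.0,
)
print(sorted(record.resolved_indicators()))
print(record.score_change())
```

Reviewing the record in this form mirrors the two steps the paragraph describes: the qualitative comparison identifies which intake indicators have changed, and the pre/post scores confirm those changes quantitatively.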
Challenges to Evidence-Based Research Approaches
Messer (2004) provides a comprehensive discussion of empirically supported treatments (ESTs) and randomized clinical trials (RCTs) and describes limitations relevant to both approaches. This discussion leads to the conclusion that practitioners need to follow “a model of evidence-based psychotherapy practice, such as the disciplined inquiry or local clinical scientist model, that encompasses a theoretical formulation, empirically supported treatments, empirically supported therapy relationships, clinicians’ accumulated practical experience, and their clinical judgment about the case at hand” (p. 580). Client characteristics (personality, culture, socioeconomic status, developmental level, stressors, and personal preferences) need to be considered because treatments are most likely to be effective when tailored to fit the individual needs of the client (Norcross, 2002).
The continuum of evidence-based practices presented above appears to follow Messer’s (2004) recommendations. Questions emerge, however, regarding the rigor of the research methodology used to support evidence-based counseling approaches and the application of these approaches in a counseling setting.
Intuition and Clinical Impressions
Intuition and clinical impressions are often anchored in a therapist’s training and experience. Such impressions may be offered by paraprofessionals who possess little or no formal counseling training or by licensed professionals with graduate or advanced graduate degrees including exposure to research strategies and techniques.
Intuition and clinical impressions introduce the potential for bias, especially when the clinician is also the researcher. A counselor’s devotion to a particular treatment theory may influence objectivity and create situations where clinicians simply seek to confirm their clinical hypotheses, possibly ignoring indicators that do not fit their treatment schema.
Programmed Studies
Clinicians who use programs of study to help define clinical effectiveness must assume there is a connection between knowledge gain and behavior change. The connection between these two variables has never been established.
Global Indicators
Global indicators supported by research often provide a checklist of psychological, medical, or social characteristics generally determined through clinical studies or gleaned from state or national databases. While global indicators might signify common characteristics among a particular client population, the connection between changes in the levels of these indicators and treatment effectiveness, represented by behavior change, has not always been established.
Best Practices
The concept of “best practices” has been promoted to direct the treatment activities of counselors toward interventions that groups of professionals consider to be most appropriate for particular client populations or for clients with particular clinical issues. Following a medical model, best practices research aims to identify specific techniques or treatments that are “best” for particular categories of client problems. Thus, the mechanisms of change in counseling, according to the “best practices” point of view, are specific techniques, not features common to all counseling orientations (Hansen, 2006).
Two fundamental issues challenge the efficacy of best practices. The first focuses on the nature of the evidence supporting the use of best practices and the second raises concerns about the application of best practices.
In a comprehensive review of decades of outcomes research, Wampold (2001) found that specific ingredients or techniques in the counseling approach played an insignificant role in overall client progress. Messer (2004) appears to confirm this position, citing a meta-analysis that found “a very substantial association between the researchers’ preferred therapy model and the therapy that was more successful. It emerged despite the fact that differences in efficacy between the therapies were rather small and clinically insignificant to begin with” (p. 581).
Coupled with the challenge of the research supporting “best practices” are questions regarding the application of best practices in a clinical setting. To what degree do theoretical applications compare among counselors with different levels of formal education? Is it reasonable to assume that a paraprofessional with no graduate education would be able to apply the principles of Cognitive Behavioral Therapy at the same professional level as a licensed professional therapist with an advanced graduate degree?
Instrument Selection
Professional counselors have access to a broad range of instruments that can be used for data collection during the counseling experience. Some of these instruments are designed for diagnostic purposes only (e.g., MMPI-2). Others, such as interest inventories and opinion surveys, are designed to open avenues for discussion.
Not all instruments have the psychometric properties appropriate for use in research designs that require parametric data for analysis. Therefore, counselors who generate quantitative data from test instruments need to attend to the scale of measurement of the data being collected and select the corresponding test of significance.
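To illustrate the point about matching the test of significance to the scale of measurement, the sketch below computes a paired-samples t statistic by hand for interval-level pre- and post-treatment scores. The scores, and the assumption that the instrument yields interval-level data, are invented for illustration:

```python
# Hypothetical illustration: a paired (pre/post) t statistic computed by hand.
# The scores below are invented; a real analysis would first verify the
# instrument's reliability and that its scores are interval-level, since
# the t test assumes parametric data.
import math
import statistics


def paired_t(pre, post):
    """Return (t, df) for a paired-samples t test on interval-level scores."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation of the differences
    t = mean_d / (sd_d / math.sqrt(n))
    return t, n - 1


# Invented pre- and post-treatment anxiety scores for eight clients
pre = [62, 58, 71, 65, 60, 68, 55, 63]
post = [54, 52, 60, 61, 50, 59, 49, 57]

t, df = paired_t(pre, post)
print(f"t({df}) = {t:.2f}")
# Ordinal data (e.g., Likert-type ratings) would instead call for a
# nonparametric alternative such as the Wilcoxon signed-rank test.
```

The design choice here is the one the paragraph names: the scale of measurement of the collected data, not convenience, determines whether a parametric or nonparametric test of significance is appropriate.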
Understanding Evidence-Based Therapy
There may be some discussion regarding the placement of the various evidence-based approaches along the research continuum presented above. The research continuum provides the opportunity for counselors to select from many evidence-based practices, some that offer more rigorous approaches to research than others, but all documented, to some degree, in the literature.
The merits of evidence-based therapy hinge on two critical issues that may not be easily recognized or understood by the general public. First, the rigor of the “evidence” associated with evidence-based therapy spans a broad range of possibilities, from personal opinion to formal repeated-measures designs using valid and reliable instruments. Currently, there is no method for the public to distinguish among the various research approaches implied by the term “evidence-based therapy.” It is reasonable to assume the use of the term “evidence-based practice” might encourage potential clients to select a particular counseling program. It is unreasonable to assume, however, that potential clients would take the time, or have the expertise, to evaluate the research used to support claims that a particular counseling practice was evidence-based.
Second, research to support clinical initiative might be generated by professional practitioners, those with advanced specialized graduate degrees (Gladding, 2009), or paraprofessional practitioners, those practicing without the benefit of a formal graduate education. Graduate education exposes individuals to research methodologies and proper statistical procedures, necessary components for conducting formal outcomes research. Without restricting the use of the term “evidence-based practice” to research generated by qualified professionals, the general public has no way to readily determine if the evidence has merit.
It might be time for professional counselors to consider establishing a classification system to provide the public with a quick reference to evidence-based practices conducted by qualified researchers and based on rigorous research approaches. Such a system could include two tiers: the first representing the researcher’s qualifications and the second representing a ranking of the rigor of the research approach. A multidisciplinary, non-profit organization could be developed to establish and monitor the research classification system and provide the public with accurate information regarding outcomes research activities.
Conclusions
The demand for greater economic efficiency in the delivery of counseling services is likely to continue into the foreseeable future (Norcross, Hedges, & Prochaska, 2002). The integration of research and counseling services is gaining the attention of state legislatures and third-party payers in an attempt to determine “what is appropriate to do in practice, what is to be reimbursed, and what the rates of reimbursement will be” (Kazdin, 2008, p. 156).
It is tenable to assume that counseling entities will explore integrating “evidence-based therapy” into day-to-day practices. One major concern is whether all of these research approaches satisfy the definition of “evidence-based practices” or offer sufficient evidence of effective therapy; another is the integrity and rigor of the evidence being generated.
It is unreasonable to assume that the public will become familiar enough with formal research concepts to be able to explore the differences in research used as support for evidence-based practices. Without rigorous research being conducted by qualified researchers, the question is whether some “evidence-based therapy” is actually supported by qualified research or simply a marketing ploy to attract new clients.
As the role of outcomes research in the human services continues to be debated, other questions emerge regarding the responsibility of professional counselors to verify the effectiveness of the clinical services they provide. To what extent are professional counselors ethically bound to produce valid, empirical evidence to support their clinical services? What role should clinical impressions and client satisfaction surveys play in the overall evaluation of the effectiveness of clinical services? How should valid and reliable test instruments be utilized in measuring treatment effectiveness? To what extent can current qualitative and quantitative research principles and practices contribute to a measure of client behavior change noting common limitations regarding subject sampling and research designs that do not necessarily have control groups for comparison? As these questions continue to be debated, individuals seeking counseling are faced with a major challenge in trying to identify effective treatment sources.
References
American Counseling Association. (2005). Code of ethics. Alexandria, VA: Author.
Borckardt, J. J., Nash, M. R., Murphy, M. D., Moore, M., Shaw, D., & O’Neil, P. (2008). Clinical practice as natural laboratory for psychotherapy research: A guide to case-based time-series analysis. American Psychologist, 63, 77-95.
Chandler, R. K., Peters, R. H., Field, G., & Juliano-Bult, D. (2004). Challenges in implementing evidence-based treatment practices for co-occurring disorders in the criminal justice system. Behavioral Sciences and the Law, 22, 431-448.
Edwards, J. H. (2002). Evidence-based treatment for child ADHD: “Real-world” practice implications. Journal of Mental Health Counseling, 24, 126-139.
Evidence-Based Medicine Working Group. (1992). Evidence-based medicine. A new approach to teaching the practice of medicine. Journal of the American Medical Association, 268, 2420-2425.
Gibbs, L. E. (2003). Evidence-based practice for the helping professions: A practical guide with integrated multimedia. Pacific Grove, CA: Brooks/Cole-Thomson Learning.
Gladding, S. T. (2009). Counseling: A comprehensive profession (6th ed.). Upper Saddle River, NJ: Prentice Hall.
Hansen, J. T. (2006). Is the best practices movement consistent with the values of the counseling profession? A critical analysis of best practices ideology. Counseling and Values, 50, 154-160.
Kazdin, A. E. (2008). Evidence-based treatment and practice: New opportunities to bridge clinical research and practice, enhance the knowledge base, and improve patient care. American Psychologist, 63, 146-159.
Levant, R. F. (2005, February). Evidence-based practice in psychology. Monitor on Psychology, 36 (2), 5.
Linehan, M. M. (1993). Cognitive-behavioral treatment of borderline personality disorder. New York: Guilford Press.
Messer, S. B. (2004). Evidence-based practice: Beyond empirically supported treatments. Professional Psychology: Research and Practice, 35, 580-588.
Norcross, J. C. (Ed.). (2002). Psychotherapy relationships that work: Therapist contributions and responsiveness to patient needs. New York: Oxford University Press.
Norcross, J. C., Hedges, M., & Prochaska, J. O. (2002). The face of 2010: A Delphi poll on the future of psychotherapy. Professional Psychology: Research and Practice, 33, 316-322.
Rubin, A. (2007). Statistics for evidence-based practice and evaluation. Belmont, CA: Thomson Brooks/Cole.
Rubin, A., & Babbie, E. (2005). Research methods for social work. Belmont, CA: Thomson Brooks/Cole.
Sharpley, C. F. (2007). So why aren’t counselors reporting n=1 research designs? Journal of Counseling and Development, 85, 349-356.
Stewart, R. E., & Chambless, D. L. (2007). Does psychotherapy research inform treatment decisions in private practice? Journal of Clinical Psychology, 63, 267-281.
Wampold, B. (2001). The great psychotherapy debate: Models, methods, and findings. Mahwah, NJ: Erlbaum.