Feasibility & Acceptability of Providers as Co-Reviewers for ACT Team Fidelity Assessments
PROVIDERS AS CO-REVIEWERS OF ACT TEAM FIDELITY ASSESSMENTS: ESTABLISHING FEASIBILITY & ACCEPTABILITY

Maria Monroe-DeVita, PhD; Lorna Moser, PhD; Sarah Kopelovich, PhD; MacKenzie Hughes, BA; Bryan Stiles, BA; Roselyn Peterson, BA; Stacy Smith, MEd

Society for Implementation Research Collaboration Conference, September 8, 2017
THE FIDELITY DILEMMA
- Fidelity is an important implementation outcome, shown to predict better clinical outcomes across EBPs/EBTs
- Increasingly used for quality improvement, but can be costly and burdensome
- Several methods have been used to reduce costs and/or improve feasibility
- We present a pragmatic approach utilizing providers as co-reviewers within the context of ACT fidelity assessments
- Primary goal: establish feasibility and acceptability
ACT: A BRIEF OVERVIEW
- An EBP for adults with SMI
- Multidisciplinary team shares the caseload; no brokering of services
Typical ACT Team Staffing

Position                             | Full Team (serves 80-100) | Half Team (serves 42-50)
Team Leader                          | 1 FTE                     | 1 FTE
Psychiatric Care Provider/Prescriber | 16 hours per 50 consumers | 16 hours per 50 consumers
Registered Nurses                    | 3 FTE                     | 1.5-2 FTE
Peer Specialist                      | 1 FTE                     | 1 FTE
Masters-level Clinicians*            | 4 FTE                     | 2 FTE
BA-level CMs*                        | 1-3 FTE                   | 1.5-2.5 FTE
  *Substance Abuse Specialist        | 1 FTE                     | 1 FTE
  *Vocational Specialist             | 1 FTE                     | 1 FTE
Program Assistant (non-clinical)     | 1-1.5 FTE                 | 1 FTE
A Snapshot of ACT Services
(Diagram: ACT at the center, surrounded by the services it integrates)
- Integrated Dual Disorders Treatment
- Pharmacological treatment
- Case management
- Wellness Management Services
- Psychiatric Rehabilitation
- Motivational Interviewing
- Cognitive Behavioral Therapy
- EBTs for MH/COD
- Crisis Services
- Supported Employment
ACT: A BRIEF OVERVIEW
- An EBP for adults with SMI
- Multidisciplinary team shares the caseload; no brokering of services
- Services primarily provided in vivo
- Multiple contacts & intensive services, 24/7
- Integrates other evidence-based practices; not just case management
- Strengths-based and person-centered, while balancing assertive engagement
TOOL FOR MEASUREMENT OF ACT (TMACT)
- 47 items; 5-point anchored scales
- 6 subscales:
  1. Operations & Structure (OS): 12 items
  2. Core Team (CT): 7 items
  3. Specialist Team (ST): 8 items
  4. Core Practices (CP): 8 items
  5. Evidence-Based Practices (EP): 8 items
  6. Person-Centered Planning Practices (PP): 4 items
(Monroe-DeVita, Moser, & Teague, 2013)
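The subscale structure above is a rollup of item-level ratings. As a minimal illustration of that structure in Python: the item counts per subscale come from the slide, but the simple-average aggregation, the dictionary layout, and the function names are assumptions for illustration only; the published TMACT protocol (Monroe-DeVita, Moser, & Teague, 2013) defines the actual scoring rules.

```python
# Illustrative sketch only -- NOT the official TMACT scoring protocol.
# Item counts per subscale are from the slide; the averaging scheme is assumed.
SUBSCALE_ITEMS = {
    "OS": 12,  # Operations & Structure
    "CT": 7,   # Core Team
    "ST": 8,   # Specialist Team
    "CP": 8,   # Core Practices
    "EP": 8,   # Evidence-Based Practices
    "PP": 4,   # Person-Centered Planning Practices
}

def subscale_means(ratings):
    """ratings: dict mapping subscale code -> list of item ratings (1-5)."""
    means = {}
    for code, n_items in SUBSCALE_ITEMS.items():
        items = ratings[code]
        assert len(items) == n_items, f"{code} expects {n_items} items"
        assert all(1 <= r <= 5 for r in items), "item ratings are 1-5"
        means[code] = sum(items) / n_items
    return means

def overall_mean(ratings):
    """Average across all 47 item ratings."""
    all_items = [r for items in ratings.values() for r in items]
    return sum(all_items) / len(all_items)

# Example: a hypothetical team rated 4 on every item.
example = {code: [4] * n for code, n in SUBSCALE_ITEMS.items()}
print(subscale_means(example)["OS"], overall_mean(example))  # -> 4.0 4.0
```

Under this sketch, a team whose overall average reached 3.7 or above would meet the certification cutoff used in the North Carolina results later in the deck.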
EXAMPLE TMACT ITEMS & RATING SCALE
ACT FIDELITY REVIEW PROCESS
- Two independent reviewers
- Team completes survey & spreadsheet beforehand
- Onsite review (1+ days):
  - Review randomly selected charts (~20%)
  - Observe daily team meeting, treatment planning, and community-based services
  - Conduct interviews with team members & clients
- Reviewers rate independently, then reach consensus ratings
- Write feedback report focused on performance-improvement recommendations
OVERVIEW OF APPROACH
- Piloting a co-reviewer process for fidelity reviews of 91 ACT teams in Washington and North Carolina
- NC began in 2013; WA in 2014
16 ACT TEAMS IN WASHINGTON
(State map showing team locations by BHO region; legend: full team, half team, between full & half team)
Teams/agencies: Compass (Skagit), Compass (Snohomish), Lake Whatcom Center, DESC, Kitsap Mental Health Services, Catholic Family & Child Services, Frontier Behavioral Health, Pioneer Human Services, Navos, Southeast, Behavioral Health Resources, New Great Rivers Team, Community Services Northwest, Comprehensive, Multicare, Lourdes
OVERVIEW OF APPROACH
- Piloting a provider co-reviewer process for fidelity reviews of 91 ACT teams in Washington and North Carolina
- Lead fidelity reviewer based at a university (UNC, UW) or state mental health authority (NC only)
- Co-reviewers:
  - Program managers/clinical supervisors
  - Team leaders
  - Psychiatrists, nurse practitioners
  - Other ACT team members (WA only, starting in 2015)
- Co-reviewers unpaid; travel reimbursement provided
Fidelity Review Training

TMACT Walk-Through (1 day)
  - Washington: Yes (group training)
  - North Carolina: Yes (group training)
Training Review of ACT Team 1
  - Washington: Trainee group joins a TMACT trainer to review an ACT team; all participate in data collection, interviewing, ratings, and consensus calls
  - North Carolina: A single trainee or small group of trainees joins a pair of seasoned TMACT evaluators; trainees collect data, make ratings, and participate in consensus
Training Review of ACT Team 2
  - Washington: A single trainee joins a pair of seasoned TMACT evaluators; trainees collect data, assist with interviews, make ratings, and participate in consensus
  - North Carolina: n/a
Booster Trainings
  - Washington: n/a
  - North Carolina: Annual TMACTer Summit
TRAINED CO-REVIEWERS BY TEAM ROLE (N=53)
Washington co-reviewers (n=33):
- Team Leader 40%, MHP 18%, Nurse 15%, Program Manager 6%, Assistant Team Leader 6%, CD Specialist 6%, Vocational Specialist 3%, Case Manager 3%, ARNP 3%
North Carolina co-reviewers (n=20):
- Program Manager/Supervisor 50%, Team Leader 45%, Psychiatrist 5%
CURRENT CO-REVIEWERS (N=32)
Washington co-reviewers (n=18):
- Team Leader 33% (n=6), MHP 28% (n=5), Nurse 17% (n=3), ARNP 6% (n=1), CD Specialist 6% (n=1), Case Manager 5% (n=1), Assistant Team Leader 5% (n=1)
North Carolina co-reviewers (n=14):
- Program Manager/Supervisor 57% (n=8), Team Leader 36% (n=5), Psychiatrist 7% (n=1)
SURVEY METHODS
- Distributed a 14-item REDCap survey to all current WA and NC ACT fidelity co-reviewers (N=32)
- Participants were assigned a unique identifier to maintain confidentiality
- Response rates: 30/32 overall (94%); Washington 88% (15/17); North Carolina 100% (15/15)
ESTIMATED FIDELITY REVIEW BURDEN
Expectations (when signing up):
- 1 review/year in addition to the training review
- May be asked to complete up to 4 reviews/year (2 the first year, 1 in subsequent years)
Number of fidelity assessments conducted per reviewer:
- Washington (n=15): mean = 2.33, median = 3, range = 1-4
- North Carolina (n=15): mean = 7.8, median = 9, range = 1-16
Estimated hours spent per review:
- Washington: mean = 37.2, median = 33, range = 20-72
- North Carolina: mean = 31.2, median = 30, range = 20-50
Are you interested in doing more or fewer than the expected number of fidelity reviews each year?
- More: WA 73% (n=11); NC 33% (n=5)
- Fewer: WA 7% (n=1); NC 0% (n=0)
- Same: WA 20% (n=3); NC 67% (n=10)
WHY DID YOU DECIDE TO BECOME A CO-REVIEWER?
- Learning about/networking with other teams: 25% (n=13)
- Increase knowledge of ACT fidelity: 21% (n=11)
- Improving within-team outcomes: 15% (n=8) -- e.g., "gain new ideas for my team, so I can improve my team's fidelity and overall outcomes for our clients"
- Improve personal leadership/augment job performance: 11% (n=5)
- Facilitate content mastery of ACT: 10% (n=5)
- Quality improvement: 10% (n=5)
- Personal interest/investment: 4% (n=2)
- Agency directed: 4% (n=2)
ATTITUDES TOWARDS CONDUCTING FIDELITY REVIEWS (N=25)
To what extent has serving as a co-reviewer provided an opportunity to better learn about ACT?
- Mean rating: 4.7 (on a 1-5 scale)
Themes:
- Experiencing novel ways of working within ACT & incorporating them into own team: 42%
- Familiarization with TMACT: 19%
- Provides holistic view of teams: 15%
- Other: 12%
- Provides opportunities to collaborate: 8%
- Clarifies existing ACT standards: 4%
ATTITUDES TOWARDS CONDUCTING FIDELITY REVIEWS (N=24)
To what extent has serving as a fidelity co-reviewer provided an opportunity to better learn about ACT fidelity or the ACT fidelity tool (TMACT)?
- Mean rating: 4.67 (on a 1-5 scale)
Themes:
- Understanding fidelity vis-à-vis TMACT: 48%
- Peer consultation: 20%
- Exposure to high-fidelity teams: 12%
- Other: 12%
- Learning more about the reviewer role: 8%
ATTITUDES TOWARDS CONDUCTING FIDELITY REVIEWS (N=18)
To what extent have you enjoyed your experience as a fidelity co-reviewer?
- Mean rating: 4.53 (on a 1-5 scale)
Themes:
- Networking with other teams/relationship building: 48%
- Other: 28%
- Continued learning: 19%
- Having an impact on reviewer's own team: 5%
DO YOU FEEL YOUR ACT TEAM HAS BEEN POSITIVELY AFFECTED?
100% Yes. Examples:
- Learning tricks & tools from other teams to bring back & implement within own team
- Seeing how well other teams function has been a motivator to achieve the same
- Allows the reviewer a more objective view of items his/her own team needs to improve on
- Good to know what reviewers are looking for during the fidelity review process
RELATIONSHIP BETWEEN ACT TEAMS WITH A FIDELITY CO-REVIEWER AND FIDELITY RATINGS (NC TEAMS ONLY)

ACT Team Overall Average Fidelity Rating*       | Not a Team Member or Supervisor | Team Member or Supervisor
Provisional Certification (rating 3.0-3.69)     | 90% (n=18)                      | 10% (n=2)
Met Certification Cutoff (rating 3.7 or above)  | 56.6% (n=30)                    | 43.4% (n=23)
Total                                           | 65.8% (n=48)                    | 34.2% (n=25)

*TMACT fidelity ratings range from 1 to 5. χ² = 7.192, p = .006
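As a sanity check, the reported chi-square can be reproduced directly from the cell counts above. A minimal sketch in plain Python (no statistics library), computing the Pearson chi-square for the 2×2 table without Yates' continuity correction, which is what matches the slide's reported value:

```python
# 2x2 table from the slide: rows = certification status,
# columns = (no team-member/supervisor co-reviewer, has one).
observed = [
    [18, 2],   # Provisional certification (rating 3.0-3.69)
    [30, 23],  # Met certification cutoff (rating 3.7 or above)
]

row_totals = [sum(row) for row in observed]            # [20, 53]
col_totals = [sum(col) for col in zip(*observed)]      # [48, 25]
n = sum(row_totals)                                    # 73

# Pearson chi-square: sum over cells of (O - E)^2 / E,
# with expected counts E_ij = row_i * col_j / N under independence.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        e = row_totals[i] * col_totals[j] / n
        chi2 += (o - e) ** 2 / e

print(round(chi2, 3))  # -> 7.192, matching the reported statistic
```

With 1 degree of freedom, a chi-square this large is significant at conventional levels, consistent with the reported p value.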
ACT TEAMS WITH HIGHER TMACT RATINGS
- Further examined teams that met certification cutoffs (rating of 3.7 or higher)
- TMACT fidelity scores were higher among ACT teams that had at least one team member or supervisor conducting fidelity reviews of other teams (r_s = .314, p = .007)
CO-REVIEWER TURNOVER
Washington: 45% (15/33) of trained co-reviewers are no longer conducting fidelity reviews
- 67% (10/15) left their teams
- 33% (5/15) were promoted
North Carolina: 15% (3/20) of trained co-reviewers are no longer conducting fidelity reviews
- 66% (2/3) left their teams
- 33% (1/3) were promoted
In NC, 15% (3/20) of reviewers remain as reviewers, but now work for the state or at UNC
DO YOU FEEL YOUR ACT TEAM HAS BEEN NEGATIVELY AFFECTED?
- No: 93% (n=28)
- Yes: 7% (n=2)
Negative effects cited:
- Scheduling difficulties to ensure coverage while the co-reviewer was out of the office
- Team was short-staffed at the time of the review
- High workload when the co-reviewer returned to the office
DID YOU ENCOUNTER BARRIERS/OBSTACLES AS PART OF THE FIDELITY REVIEW PROCESS?
- Yes: 53% (n=16)
- No: 47% (n=14)
- Roughly equal distribution between states in experiencing barriers related to the process
DID YOU ENCOUNTER BARRIERS/OBSTACLES AS PART OF THE FIDELITY REVIEW PROCESS?
Barriers reported:
- Travel reimbursement process: 40% (n=8)
- Work burden returning to agency after a review: 25% (n=5)
- Burden placed upon other members in reviewer's home agency: 10% (n=2)
- Travel logistics: 10% (n=2)
- Review logistics: 10% (n=2)
- Work burden associated with review: 5% (n=1)
PRACTICE & POLICY IMPLICATIONS
- Fidelity reviews of ACT teams can be resource- and time-intensive, but are needed to ensure adherence to the model and, ultimately, better clinical outcomes
- Our data suggest that this potentially less resource-intensive fidelity review model is feasible, acceptable to program stakeholders, and beneficial to their home teams
- The significant relationship between having a team member conduct fidelity reviews and fidelity scores is encouraging; it suggests that reviewers are learning about best practices by conducting these reviews
- Offers a potentially less costly option for reinforcing best practices (and improving outcomes) within ACT & other EBPs
FUTURE DIRECTIONS
- Are ACT team members' fidelity assessments as valid and reliable?
- What are the costs of different methods of fidelity assessment? How much cheaper is this approach?
- What factors should be considered in implementing this method for conducting fidelity reviews in other states, especially when fidelity is tied to certification or funding?
- Need to further assess other teams' perceptions of a co-reviewer who is also an ACT team member
THANK YOU! mmdv@uw.edu