Published in Vol 8, No 6 (2019): June

This is a member publication of Florida State University

Testing a Motivational Interviewing Implementation Intervention in Adolescent HIV Clinics: Protocol for a Type 3, Hybrid Implementation-Effectiveness Trial


1College of Medicine, Florida State University, Tallahassee, FL, United States

2Department of Family Medicine and Public Health Sciences, Wayne State University, Detroit, MI, United States

3Oregon Social Learning Center, Eugene, OR, United States

4Center for HIV Educational Studies and Training, Hunter College, City University of New York, New York, NY, United States

5Health Psychology and Clinical Science Doctoral Program, Graduate Center, City University of New York, New York, NY, United States

Corresponding Author:

Sylvie Naar, PhD

College of Medicine

Florida State University

Main Campus

1115 West Call Street

Tallahassee, FL, 32306

United States

Phone: 1 248 207 2903


Background: Motivational interviewing (MI) has been shown to effectively improve self-management among youth living with HIV (YLH) and is currently the only behavioral intervention to demonstrate success across the youth HIV care cascade. Substantial barriers, however, prevent the effective implementation of MI in real-world settings. Thus, there is a critical need to understand how to implement evidence-based practices (EBPs) such as MI to promote behavior change in youth HIV treatment settings, as risk-taking behaviors peak during adolescence and young adulthood.

Objective: This study aims to describe the Adolescent Medicine Trials Network for HIV/AIDS Interventions (ATN) protocol of a tailored MI (TMI) implementation-effectiveness trial (ATN 146 TMI) to scale up an EBP in multidisciplinary adolescent HIV settings while balancing flexibility and fidelity. This protocol is part of the Scale It Up program described in this issue.

Methods: This study is a type 3, hybrid implementation-effectiveness trial that tests the effect of TMI on fidelity (MI competency and adherence to program requirements) while integrating findings from two other ATN protocols described in this issue: ATN 153 Exploration, Preparation, Implementation, Sustainment and ATN 154 Cascade Monitoring. ATN 153 guides the mixed methods investigation of barriers and facilitators of implementation, while ATN 154 provides effectiveness outcomes. The TMI study population consists of providers at 10 adolescent HIV care sites around the United States. The 10 clinics are randomly assigned to 5 blocks to receive the TMI implementation intervention (workshop and trigger-based coaching guided by local implementation teams) using a dynamic wait-listed controlled design. After 12 months of implementation, a second randomization compares a combination of internal facilitator coaching and the encouragement of communities of practice (CoPs) with CoPs alone. Participants complete MI competency assessments quarterly during preimplementation, the 12 months of implementation, and the sustainment period, for a total of 36 months. We hypothesize that MI competency ratings will be higher among providers during the TMI implementation phase than during the standard care phase and that successful implementation will be associated with improved cascade-related outcomes, namely undetectable viral load and a greater number of clinic visits among YLH.

Results: Participant recruitment began in August 2017 and is ongoing. As of mid-May 2018, TMI has 150 active participants.

Conclusions: This protocol describes the underlying theoretical framework, study design, measures, and lessons learned for TMI, a type 3, hybrid implementation-effectiveness trial, which has the potential to scale up MI and improve patient outcomes in adolescent HIV settings.

Trial Registration: ClinicalTrials.gov NCT03681912

International Registered Report Identifier (IRRID): DERR1-10.2196/11200

JMIR Res Protoc 2019;8(6):e11200




The National Institutes of Health Office of AIDS Research called for implementation science (IS) to address the behavioral research-practice gap [1]. IS is the scientific study of methods to promote the uptake of research findings and evidence-based practices (EBPs) to improve the quality of behavior change approaches in health care settings [2]. A primary challenge of scaling up EBPs is the balance of flexibility (adaptation to context) and fidelity (provider adherence and competence) [3]. Despite the success of the Centers for Disease Control and Prevention's dissemination program of HIV-related EBPs, there are substantial barriers to the effective implementation of these interventions in real-world settings [4]. To date, considerably less attention has been paid to IS in HIV care settings [5], and even less in adolescent and young adult HIV care settings, the age group hardest hit by new infections [6]. Youth aged 16-24 years have the highest rates of new HIV infections of all age groups [7]. Rates of new and existing infections continue to be disproportionately higher among racial and ethnic minorities, particularly African American and Latino adolescents and young adults [8]. Under current clinical guidelines, youth living with HIV (YLH) will increasingly be initiating antiretroviral treatment, yet rates of adherence are notoriously poor [9]. Racial and ethnic minority youth are at particular risk of poor adherence to antiretroviral therapy and, therefore, of having a detectable viral load [10,11]. Thus, understanding how to implement EBPs to promote behavior change in HIV treatment settings is critical and timely, particularly in youth treatment settings, as adolescence and young adulthood are the developmental periods when risk behaviors, including nonadherence, peak. Yet, to the best of our knowledge, there have been no IS studies of behavioral EBPs in adolescent HIV treatment settings.

Motivational Interviewing

Motivational interviewing (MI) is a collaborative, goal-oriented method of communication designed to strengthen intrinsic motivation in an atmosphere of acceptance, compassion, and autonomy support [12]. MI was adapted by the protocol chair for adolescents and young adults [13] and chosen as the EBP of the study because (1) MI-consistent behaviors promote behavior change and treatment engagement across multiple behaviors, in multiple formats, and by multiple disciplines [14]; (2) MI is the only EBP to demonstrate success across the youth HIV prevention and care cascades [15-18], and a recent meta-analysis found that MI was the only effective EBP for behavior change in YLH [19]; (3) MI is already embedded in the clinical guidelines for HIV care [20-23] and HIV risk reduction [24]; (4) MI may provide a foundation for patient-provider communication in the delivery of other EBPs; and (5) MI has been found to have even larger effect sizes in minority populations [14].

Balancing Flexibility and Fidelity

A key tension in IS lies between strict fidelity to EBP program requirements and flexibility in adapting to the community context [25]. Fidelity refers to adherence to the program requirements as well as the EBP competence of implementers. Adaptation is the process of making a new program "fit" the targeted inner context (organization) and outer context (service system). Aarons et al [26] developed the Dynamic Adaptation Process for adapting an EBP to a new context while maintaining fidelity to core elements during the 4 phases of the Exploration, Preparation, Implementation, and Sustainment (EPIS) model [27]. The process involves identifying core elements and adaptable characteristics of EBP implementation, then supporting implementation by guiding allowable adaptations to the model, providing fidelity monitoring and support, and identifying the need for, and solutions to, system and organizational adaptations. This guidance occurs in collaboration with local stakeholders who meet regularly as an implementation team (iTeam).

Promoting Sustainability

An EBP is considered sustained if core elements are maintained with fidelity—typically 1 year postimplementation [12]. Fidelity-maintenance strategies such as ongoing audit and feedback and booster training are particularly important for sustainability [28]. While it is clear that ongoing coaching is necessary to sustain MI fidelity, it remains unclear whether this facilitation is best delivered by facilitators who are internal to the organization or by outside experts. Our pilot work suggests that at least 6 months are needed in HIV care settings for even a subset of providers to achieve expert competency sufficient to provide coaching [29]. Furthermore, in these multidisciplinary medical settings, one provider is not typically providing supervision to other providers. Preselecting internal facilitators may be counter to the structure of the team, and preselected staff may not have set aside time to provide such supervision, particularly in an era of shrinking resources. Alternatively, a more feasible model could use the Dynamic Adaptation Process to guide internal facilitation (IF) after a year of external facilitation with data collection on staff competency, time, and interest.

Communities of practice (CoPs) are another strategy to promote the uptake and sustainability of EBPs. A CoP is a group of people who learn together and create common practices based on (1) a shared domain of knowledge, tools, language, and stories that creates a sense of identity and trust to promote learning and member participation; (2) a community of people who create the social fabric for learning, sharing, inquiry, and trust; and (3) a shared practice made up of frameworks, tools, references, language, stories, and documents that community members share. CoPs can vary in level of formality, membership (within or across disciplines), and method of communication (eg, face-to-face or Web-based). They are intended to be nonhierarchical and can adapt their agenda to suit the needs of members. While the study of CoPs to promote fidelity in the implementation of EBPs is in its infancy, preliminary findings are promising [30].

Efficient fidelity measurement can aid sustainability by providing supervisors with easy-to-use tools for ongoing quality assurance [31]. A fidelity instrument with strong established psychometric properties will not be used in real-world clinics if it is too costly or difficult to integrate into routine practice. Therefore, an important component of a successful implementation strategy is developing fidelity measures that internal or external facilitators can feasibly use to provide rapid, accurate feedback and that have a high likelihood of being sustained to support ongoing implementation. We have tested the efficiency and validity of a trainer or coach rating scale for fidelity monitoring, feedback, and systematic coaching. In addition, we have learned in our preliminary studies that recording actual patient-provider interactions is not feasible in some HIV clinic settings. As a result, we have developed a standardized patient interaction model of fidelity monitoring using our trainer or coach rating scale as an alternative for implementation [32].

Linking Cost-Effectiveness Research With Implementation Science

In the face of competing demands for health care resources, the importance of establishing not just the efficacy of EBPs but also their relative economic value has increased. A recent editorial noted that despite the prevalence of economic evaluation in health services research, there is a dearth of studies on the cost-effectiveness of implementing EBPs [33]. The authors note that the small number of economic evaluations contrasts sharply with the number of studies on implementation strategies that assess only their effect on behavior change and health outcomes. To further emphasize this, the National Institutes of Health established cost-effectiveness analysis as a key priority in 2016 [34].


The aim of this paper is to describe Adolescent Medicine Trials Network for HIV/AIDS Interventions (ATN) 146 Tailored Motivational Interviewing (TMI), a study of the scale-up of an EBP in multidisciplinary adolescent HIV care settings while balancing flexibility and fidelity. The protocol is part of the Scale It Up research program, which focuses on the implementation of self-management interventions to impact the adolescent HIV prevention and care cascades [35]. The study seeks primarily to determine the effect of the TMI implementation intervention (a set of strategies) on provider fidelity (adherence plus competence) and secondarily its effect on HIV care continuum outcomes (collected as part of ATN 154 Cascade Monitoring described in this issue). Another objective is to compare IF plus CoPs with CoPs alone in sustaining fidelity and to explore how barriers and facilitators of implementation (see the ATN 153 EPIS protocol paper in this issue) impact fidelity at study sites. Finally, this study seeks to determine the cost-effectiveness of TMI with or without IF sustainment by combining fidelity and cascade outcomes with the money spent on implementation strategies.


ATN 146 TMI is part of the Scale It Up program described in the overview paper in this issue [35]. TMI is a type 3, hybrid implementation-effectiveness trial [36] that tests the effect of the implementation intervention on fidelity to MI, using a dynamic wait-listed design [37] with 150 providers nested within 10 HIV clinical sites (subject recruitment venues, with an average of 15 providers and 100 patients each) in the United States. A type 3, hybrid implementation design focuses primarily on the effect of the implementation intervention strategies on implementation outcomes, such as fidelity, and secondarily on patient outcomes and the effect of these outcomes on adaptation and fidelity. This design allows all clinics to receive the implementation intervention (set of implementation strategies), but randomization and the implementation intervention phase occur in staggered blocks (pairs of clinics). Although fidelity assessments occur throughout the study period at each site, a new block enters the implementation phase every 3 months (Figure 1).

Figure 1. Tailored motivational interviewing (TMI) schedule of assessments.
View this figure

Participants and Recruitment

Eligible participants include all youth HIV care providers (eg, physicians, nurses, mental health clinicians, and paraprofessional staff) who have at least 4 hours of contact with youth for HIV prevention or care. Study coordinators at each clinic work with the research team to introduce the project and recruit participants by scheduling and conducting introductory meetings. After the introductory meetings, the study coordinator at each site sends provider contact information (email and phone number) to the research team, which contacts potential participants to provide information and schedule quarterly assessments. A participant is considered enrolled once he or she reviews the information sheet and completes a research element (ie, at least one fidelity assessment). A central institutional review board (IRB) is used to establish a master reliance agreement via the Streamlined, Multisite, Accelerated Resources for Trials ("SMART") IRB Reliance platform, which is designed to harmonize and streamline the IRB review process for multisite studies while ensuring a high level of protection for research participants across sites. Participants (medical providers) at each site provide informed consent before any study activities. This study has been approved as an expedited protocol at the central IRB site. HIV care and prevention providers may opt out of the study without penalty. A participant meets the criteria for premature discontinuation if he or she withdraws consent before the project's completion or stops working in the clinic during the study.

Implementation Intervention

The implementation intervention strategies follow the phases of the EPIS model [38].

Exploration Phase

The exploration phase involves a multilevel assessment of system, organization, provider, and client characteristics using qualitative and quantitative assessments. ATN 153 EPIS [39] is utilized for this purpose, as providers complete qualitative interviews and quantitative surveys related to the following: (1) anticipated barriers and facilitators of the adoption and use of MI and the proposed implementation intervention strategies within the inner (provider, clinic, and organization) and outer (system) contexts; (2) ideas to promote sustainability in terms of integration into program and clinic policies; and (3) identification of key stakeholders for the iTeam. In addition to these data, baseline quantitative data on provider competency are collected in this phase.

Preparation Phase

In the preparation phase, a continuous information feedback loop is created such that information gathered during the assessments is used by the iTeam to adjust the implementation strategies while maintaining fidelity to the EBP and the mandatory implementation intervention components. The iTeam holds monthly conference calls during this period to member-check the barrier and facilitator data and iteratively draft locally customized implementation strategies. Figure 2 shows the mandatory and adaptable components of the implementation intervention.

Figure 2. Dynamic Adaptation Process to balance fidelity and flexibility using monthly implementation team meetings. MI: motivational interviewing.
View this figure
Implementation Phase

Implementation begins with a 12-hour skills workshop [40] delivered by members of the Motivational Interviewing Network of Trainers. The workshop was tailored for adolescent HIV in our prior studies [29,41]. MI training relies on experiential activities developed by the network while minimizing didactic presentations. Cooperative learning methods [42] allow staff members to coach each other in small groups to promote experiential learning and group cohesion. Group MI methods are included to increase intrinsic motivation for implementation strategies [43]. A recent review of 10 studies in health care settings [44] suggested that MI workshops markedly improved MI skills compared with controls; however, as in mental health settings [40,45,46], workshops were not sufficient for trainees to achieve MI competency. There are two mandatory coaching sessions in the 3 months following training. Subsequently, providers complete a quarterly competency assessment (see the schedule of assessments below). Coaching feedback is triggered by a provider falling below the intermediate competency threshold on this measure. Providers receive an autogenerated report based on their scores, with recommendations for mandatory coaching for scores below intermediate competency and optional maintenance coaching for scores in the intermediate or advanced range. Coaching sessions last 45-60 minutes and are delivered by a member of the Motivational Interviewing Network of Trainers. The standardized coaching includes a brief interaction to elicit change talk around MI implementation, feedback on the two highest and two lowest ratings, and review of the audiorecording and coaching activities (eg, fidelity assessments) targeting the lowest ratings.

The iTeam continues to monitor adaptations at the provider and inner and outer organizational contexts as well as any fidelity drift and plan for sustainability.

Sustainment Phase

In the sustainment phase, the iTeam is encouraged to meet without external facilitation to review client and system data and address barriers and facilitators to ongoing EBP fidelity. The iTeam guides the site to develop a CoP and is given a manual of possible group activities to support MI fidelity. Sites randomized to IF receive 0.1 full-time equivalent for the facilitator, who must achieve advanced competency by the end of the implementation period and complete a 5-session facilitator training.

Site Randomization

The research design requires randomization of sites to the MI implementation intervention in blocks. The 10 clinics are randomly assigned to 5 blocks to receive the TMI implementation intervention. Every 2 months, 2 clinics are randomized to begin the implementation intervention while the others remain in the wait-listed condition; this continues until the last block is randomized. To allow sufficient time for scheduling and planning the initial workshop component, each wave of randomization occurs 6 months prior to the initiation of implementation. After 1 year of implementation (1-year postworkshop), regardless of block, sites are rerandomized to IF plus CoP or CoP alone.
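As an illustration, the block assignment described above can be sketched in a few lines of code. This is a hypothetical sketch, not the study's actual allocation procedure (a real trial would use a pre-specified, audited randomization sequence); the clinic labels and function name are placeholders.

```python
import random

def randomize_blocks(clinics, block_size=2, seed=None):
    """Randomly partition clinics into blocks for a staggered rollout.

    Sketch of the dynamic wait-listed assignment: 10 clinics are
    shuffled and split into 5 blocks of 2. Each block enters the
    implementation phase in turn while later blocks remain wait-listed.
    """
    rng = random.Random(seed)
    shuffled = clinics[:]          # copy so the input list is untouched
    rng.shuffle(shuffled)
    return [shuffled[i:i + block_size]
            for i in range(0, len(shuffled), block_size)]

# Hypothetical site labels standing in for the 10 recruitment venues
clinics = [f"Clinic {c}" for c in "ABCDEFGHIJ"]
blocks = randomize_blocks(clinics, seed=42)
# blocks is a list of 5 two-clinic blocks; list order = rollout order
```

In the dynamic wait-listed design, the resulting block order determines when each pair of clinics crosses from the wait-listed condition into the implementation phase, so all sites eventually receive the intervention.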

Schedule of Assessments and Compensation

Fidelity is assessed quarterly throughout the 36 months of the study (preimplementation, 12 months of implementation, and sustainment). Provider competence ratings (the primary outcome) are collected quarterly during preimplementation; weekly for the first 2 weeks of implementation (to support the coaching process); quarterly during the rest of the implementation year; and then quarterly during sustainment. Across clinics, providers assessed before the implementation intervention form the control or comparison group, and providers assessed after the start of the implementation intervention form the intervention group. After 1 year of implementation, regardless of block, sites are rerandomized to either IF monitoring and coaching plus the encouragement of CoPs or CoPs alone.

Each site receives the same incentive budget (the equivalent of US $50 per staff member, or approximately US $3000 in total) and determines whether incentives will be provided episodically or after program completion. The iTeam decides whether to deliver incentives directly to individuals for completing program requirements, use a lottery system, or provide a group reward when all site providers adhere to program requirements.

Assessment Scheduling

Appointy, a Web-based scheduling system, is used to schedule fidelity assessments and coaching sessions. Providers are sent an invitation link through Appointy to create an account and can view the hours the research team has available to schedule their role-play and coaching sessions; a confirmation email is sent to confirm each booking. Providers can also reschedule or cancel appointments if needed. Canceled or "no show" appointments are tracked along with completed appointments in REDCap, a Web-based database management program. If providers fail to schedule through Appointy, the research team uses direct contact (phone or email) to schedule their role-play or coaching session.

Primary Outcome: Competency Ratings

Every 3 months over the 36 months of the study, providers complete a 15-minute, phone-based standardized patient interaction developed in our previous studies [32]. There is a growing body of literature supporting the educational use of standardized patients in teaching and learning [47,48], including teaching MI skills and practice [49,50]. Standardized patient profiles were developed from actual clinical encounters and are delivered by trained actors. In addition to a specified target behavior (eg, medication adherence, appointment attendance, and risk behavior), a detailed patient history is provided to the actor, including living situation, pregnancy status, relationship status, drug use, willingness to take medications, talkativeness, and mental health symptoms such as depression. Each scenario also includes 3 unique "must say" statements or questions (eg, "I hate that I have to deal with this [HIV]. That's why I don't date, or get close to people or anything.") to be included in the acting session. The supervisor listens to randomly selected recordings on a monthly basis to provide feedback on accuracy and consistency. Standardized profiles are delivered on a schedule, meaning that only 1 profile is used for all interactions conducted in a given quarter. We attempt to keep actors and coders blind to condition by assigning each participant a unique 9-digit identification number that does not reflect the participant's location or randomization status.

A trained independent rater codes the interactions with the MI Coach Rating Scale [51,52], developed using Item Response Theory item development and evaluation methods [51,53,54]. The scale includes 12 items (Figure 3) assessing MI competence on a 4-point Likert scale (1=Poor, 2=Fair, 3=Good, and 4=Excellent). Overall, 20% of interactions are cocoded to confirm interrater reliability. In addition, coders attend a monthly coding lab to discuss discrepancies in a randomly selected recording. Competency thresholds were defined using a Rasch-based objective standard setting procedure [55]. Fifteen MI content experts used the instrument's 4-point scale to select the minimum rating scale category reflecting beginner, intermediate, and advanced competence. The selected categories were combined with the results of a Many-Facet Rasch Model [56], including item estimates, SEs, and rating scale thresholds. From this information, the average item "difficulty" was computed across raters and items, with separate scores for the beginner and solid competency thresholds. These values were then adjusted for the experts' ratings of the overall competency, from 0% to 100%, required for "somewhat acceptable" and "acceptable" competency. The resulting logit-based criterion scores were then converted to raw scores (using information from the Many-Facet Rasch Model) corresponding to the instrument's 4-point scale. When these thresholds were applied to datasets from previous studies, including ATN 128, a large proportion of ratings fell in the Beginner category; based on (1) expert review and (2) the wide range from Beginner to Advanced, the Beginner category was divided into 2 parts, "Beginner" and "Novice." Thus, the final categories and associated threshold scores were as follows: Beginner (<2.0); Novice (2.0-2.6); Intermediate (2.7-3.3); and Advanced (>3.3).
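For illustration, the final thresholds can be expressed as a simple scoring rule. This sketch assumes the score being categorized is the provider's mean rating across the 12 items, reported to one decimal place (so scores cannot fall in the 2.6-2.7 or 3.3-3.4 gaps); the function name is ours, not from the protocol.

```python
def competency_category(mean_rating):
    """Map a mean MI Coach Rating Scale score (1-4 scale) to one of the
    four competency categories reported above. Assumes one-decimal
    scores, so the published threshold boundaries are unambiguous."""
    if mean_rating < 2.0:
        return "Beginner"
    if mean_rating <= 2.6:
        return "Novice"
    if mean_rating <= 3.3:
        return "Intermediate"
    return "Advanced"
```

Under this rule, the trigger-based coaching described earlier corresponds to a score categorized below Intermediate, that is, `competency_category(score) in ("Beginner", "Novice")`.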

Figure 3. Motivational Interviewing Coach Rating Scale.
View this figure

Secondary Outcome: HIV Cascade Variables

ATN 154 Cascade Monitoring [57] examines trends in the treatment cascade, including whether patients are receiving antiretroviral treatment, adhering to regimens, attending care appointments, and maintaining suppressed viral loads, to guide new protocol development and to facilitate community engagement.

Measures of the Context of Implementation

ATN 153 EPIS [39] assesses the barriers and facilitators of implementation with qualitative interviews and quantitative surveys that address the following: (1) Why were some providers, and not others, able to integrate competent use of MI into their practice with adolescent patients? (2) Why did some providers sustain MI over time? (3) Why were some sites good host settings for an initiative designed to promote the use of MI in routine clinical practice? Distinct factors position an organization well for success in implementing a new practice, and distinct provider and organizational influences can impede or facilitate the successful integration of a new practice into providers' daily routines [58].

Analysis Plan

Aim 1: Effect of Tailored Motivational Interviewing on Provider Motivational Interviewing Competence and Cascade Outcomes

We will confirm the distribution for outcome modeling using graphical or descriptive procedures. The descriptive trajectory for each provider on each outcome will be plotted using "spaghetti plots" [59]. The plots will illustrate the patterns of change over time, including the specific patterns during the preimplementation, implementation, and sustainment phases, which will inform the specification of the growth models.

Analyses will be conducted using mixed-effects regression models (eg, Raudenbush and Bryk [60]). For the MI competence outcome, aims 1 and 2 will be evaluated using the same base model. The slope term for the preimplementation phase is expected to be nonsignificant; that is, MI competence is expected to be relatively low and stable prior to the implementation intervention. Upon entering the implementation phase, the competence slope is expected to shift markedly, becoming more positive. Likewise, the implementation phase indicator should reflect a marked increase in the overall level of competence from the preimplementation phase to the implementation phase. Furthermore, follow-up models will be conducted to determine whether MI competence is higher for clinics in the implementation phase relative to clinics that, at the same time, are still in the preimplementation phase.

The cascade outcomes will be analyzed using a similar approach. For the viral load and appointment adherence outcomes, the model will be specified as described for the provider competence outcome, testing for changes in the viral load and appointment adherence slopes from the preimplementation to implementation to sustainment phases. For the outcomes that are cross-sectional within phases—new diagnosis and receipt of counseling and testing (C&T) services—phase-level indicators will test for changes in the rate of new diagnoses and receipt of C&T. Furthermore, planned comparisons will be specified to compare the rates between the implementation and sustainment phases.

Because there are multiple phases over time for each provider and clinic, the primary question is whether provider competence slopes change from phase to phase. The approach used to estimate statistical power is recommended by Hox [61] and Hedges and Rhoads [62]. Specifically, there are 3 steps:

  1. Estimate power for a single-level regression model as the targeted sample size. In this case, power is 0.80 to detect a small-to-medium effect of R2=0.10 with 75 single-level, independent observations.
  2. Compute the actual sample size for the proposed study. For the primary outcome of provider competence, focusing on the implementation phase only, with 10 clinics that have 15 providers each and 6 measurements of competence, there are 900 nonindependent observations.
  3. Penalize the actual sample size for nesting effects using the design effect formula (ie, neff = n / (1 + [nclus − 1]ρ), where neff is the effective sample size, n is the total number of observations, nclus is the cluster size, and ρ is the intraclass correlation), yielding the effective sample size. Adjusting for nesting within providers, the 900 observations provide the statistical power of 225 independent observations; adjusting further for nesting within clinics, they provide the statistical power of 71 independent observations. As such, the proposed sample is sufficient for detecting a small-to-medium effect of R2=0.11.
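The design-effect adjustment in step 3 can be reproduced directly. The protocol does not report the intraclass correlation it assumed, so the value below is our assumption: ρ = 0.6 is chosen because it reproduces the 225-observation figure in the text (900 / (1 + (6 − 1) × 0.6) = 225).

```python
def effective_sample_size(n_obs, cluster_size, icc):
    """Design-effect formula: n_eff = n / (1 + (n_clus - 1) * icc)."""
    return n_obs / (1 + (cluster_size - 1) * icc)

# Step 2: 10 clinics x 15 providers x 6 competence measurements
n_obs = 10 * 15 * 6  # 900 nonindependent observations

# Step 3: adjust for nesting within providers (6 repeated measures).
# icc = 0.6 is an assumed value, not stated in the protocol; it
# reproduces the 225 effective observations reported in the text
# (and 214 for aim 2's 600 observations with 4 measurements each).
n_eff = effective_sample_size(n_obs, cluster_size=6, icc=0.6)  # 225.0
```

The further adjustment for nesting within clinics (yielding 71 effective observations) applies the same formula a second time with the clinic-level cluster size and intraclass correlation, neither of which is reported in the text.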


Aim 2: To Compare Internal Facilitation Plus Communities of Practice to Communities of Practice Alone in Sustaining Competence

For the provider competence outcome, the data structure is identical to that described for aim 1. For the adherence to program requirements outcome, the data are from the sustainment phase only, with repeated measurements of adherence to fidelity assessments and coaching sessions (level 1) nested within providers (level 2) nested within clinics (level 3).

To evaluate the outcomes for aim 2, including provider competence, completion of fidelity assessments, and completion of coaching sessions, a dichotomous indicator will be added at the clinic level to differentiate clinics randomized to CoP plus IF from those randomized to CoP alone. For the provider competence outcome, in the model detailed for aim 1, cross-level interactions will be specified between this condition indicator and the level-2 sustainment phase indicator, along with the level-1 growth term for the sustainment phase. This will test the extent to which changes in provider competence during the sustainment phase differ between clinics receiving CoP plus IF and those receiving CoP alone. Likewise, the model can be simplified to test for a difference in the average level of provider competence, rather than change over time, during this phase. For the adherence to program requirements outcomes, the data are dichotomous, reflecting each provider’s completion of planned fidelity assessments and coaching sessions, and as such will be analyzed according to a binomial outcome distribution. The clinic-level condition indicator will test for a difference between CoP plus IF and CoP alone in the average rate of adherence to program requirements during the sustainment phase.
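To make the cross-level interaction concrete, the sketch below builds the relevant design columns for one hypothetical provider measured on 4 occasions. The variable names (`condition`, `sustain`, `time`) are illustrative assumptions, not taken from the study's analysis code.

```python
def interaction_rows(condition, phases, times):
    """Design columns for the condition x sustainment-phase x growth interaction.

    condition: 1 if the provider's clinic was randomized to CoP plus IF, else 0.
    phases:    level-2 sustainment phase indicators (1 = sustainment occasion).
    times:     level-1 growth term (time score for each occasion).
    """
    return [
        {
            "condition": condition,
            "sustain": p,
            "time": t,
            # Cross-level interaction: nonzero only for sustainment-phase
            # occasions in CoP plus IF clinics, growing with time.
            "cond_x_sustain_x_time": condition * p * t,
        }
        for p, t in zip(phases, times)
    ]

# One provider in a CoP plus IF clinic, 4 measurement occasions,
# with the last 2 occasions falling in the sustainment phase.
rows = interaction_rows(1, [0, 0, 1, 1], [0, 1, 2, 3])
print([r["cond_x_sustain_x_time"] for r in rows])  # [0, 0, 2, 3]
```

The coefficient on this interaction column in the multilevel model captures whether the sustainment-phase competence slope differs between conditions; for CoP-alone clinics the column is uniformly zero.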

For aim 2, the power estimate reflects the ability to detect a difference in the overall level of the primary outcome of provider competence between groups. Power was estimated as detailed for aim 1. With 10 clinics that have 15 providers each and 4 measurements of competence, there are 600 nonindependent observations. These observations provide the statistical power of 214 independent observations, and adjusting for nesting within clinics, they provide the statistical power of 70 independent observations. As such, the proposed sample is sufficient for detecting a small-to-medium effect of f2=0.11.

Aim 3: To Understand Barriers and Facilitators to Implementation

Our research questions for this component of the project are as follows: (1) Why were some providers, and not others, able to integrate the competent use of MI into their practice with adolescent patients? (2) Why did some providers sustain MI over time? (3) Why were some sites good host settings for an initiative designed to promote the use of MI in routine clinical practice? To address these questions, data coding and analysis will proceed in a 3-phase process. First, consistent with Morgan’s [63] recommendations for qualitative content analyses and Hsieh and Shannon’s [64] directed qualitative content analytic approach, standard definitions of the concepts to be coded in the text will initially be developed on the basis of the EPIS model. We will systematically review each interview at each time point for all thematic mentions of (1) features of the inner and outer context per EPIS that have the potential to influence the implementation of MI; (2) all mentions of people; and (3) all mentions of personal perceptions of MI and other behavioral EBPs that have the potential to improve patient outcomes. Within these longer thematic lists, we will then separate specific categories of work setting characteristics, participants’ roles, and perceptions of evidence-based interventions, initially using existing theory to guide categorization but also allowing themes to emerge from the data through open coding procedures [65,66]. This combined inductive and deductive coding approach will allow us to both validate and extend the EPIS framework through our analysis. In addition to identifying categories within the data, we will also note whether providers’ mentions of particular categories of persons, organizational characteristics, and perceptions are positive or negative.

All coding will be conducted using NVivo Version 10. For reliability, a random selection of 30% of the interviews will be independently coded. Coding will be monitored to maintain a kappa coefficient of ≥0.90 [67,68]. In our third step, we will engage in comparative analyses both within and across time so that we may examine differences at the setting and provider levels in the quality and extent of MI implementation. Once all data are coded across all time points, we will adapt the innovation profile approach by Leithwood and Montgomery [69], originally developed for classroom research. The approach results in a multidimensional rubric to classify where a site is in the process of developing its capacity to engage in the integration of EBPs into routine patient care. These data will be integrated with quantitative fidelity data and EPIS surveys using a sequential mixed-method design [70,71] with equal weight given to qualitative and quantitative data sources [72]. We will develop an intervention profile and implementation resources for replication and sustainment of the intervention. The profile will synthesize intervention components and implementation analyses into intervention-specific practical guidance for further scale up.
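Interrater reliability against the ≥0.90 kappa threshold can be monitored with a standard Cohen's kappa computation. The sketch below, with two hypothetical coders labeling passages with made-up EPIS-style codes, is illustrative and not part of the study's NVivo workflow.

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    freq1, freq2 = Counter(coder1), Counter(coder2)
    # Expected agreement if both coders labeled at random according to
    # their observed marginal frequencies.
    expected = sum(freq1[c] * freq2[c] for c in freq1) / (n * n)
    return (observed - expected) / (1 - expected)

# Two coders labeling 5 interview passages (hypothetical codes).
c1 = ["inner", "inner", "outer", "outer", "inner"]
c2 = ["inner", "inner", "outer", "inner", "inner"]
print(round(cohens_kappa(c1, c2), 3))  # 0.545
```

A value of 0.545 would fall well below the 0.90 threshold, triggering recalibration of the coders before further coding proceeds.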

Cost-Effectiveness Analysis

We will specify the costs of implementation for budgeting further scale-up, as well as the incremental benefit of TMI and of adding an internal facilitator on provider TMI competence and cascade outcomes over time. The cost-effectiveness analysis is designed to measure the costs and consequences of changes in implementation over the 36 months of study follow-up, informing the investigators of the economic consequences of the varying amounts of resources used in the EPIS components of the study. Data on resource use and costs will be collected using a modification of the Drug Abuse Treatment Cost Analysis Program method [73], based on the approach described by Kim et al [74], to estimate the standard costs of personnel, training, and clinic space; time logs from the workshops, coaching, and fidelity monitoring processes will capture the resources used. The units of measurement specified in the analysis will be used to assess cost-effectiveness. We will calculate the cost per provider trained to competency in TMI and the incremental cost-effectiveness of different coaching approaches, and we will estimate the cost per provider trained at each site to explore potential efficiencies relevant to further dissemination of the interventions. Furthermore, we will use a previously developed cost utility model to estimate the cost per quality-adjusted life year over a 10-year time horizon expected from the cascade outcomes of viral suppression and retention in care.
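As a minimal sketch of the incremental comparison described above, an incremental cost-effectiveness ratio (ICER) divides the extra cost of one strategy over another by its extra effect. All figures below are hypothetical, not study estimates.

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical figures: suppose CoP plus IF costs $12,000 per clinic and trains
# 12 providers to competency, while CoP alone costs $8,000 and trains 8.
print(icer(12_000, 12, 8_000, 8))  # 1000.0 dollars per additional competent provider
```

The same ratio applies to the cost-utility analysis, with effects measured in quality-adjusted life years instead of providers trained.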

TMI was launched in August 2017 and is ongoing. Currently, blocks 1-3 (see Table 1 for the list of randomization blocks) are participating in the implementation phase of TMI, while blocks 4 and 5 are still in their baseline period. (The clinic in New Orleans, LA, decided to withdraw from the study prior to randomization to TMI and will not be collecting follow-up data.) From the current 10 sites, a total of 172 providers were invited to participate (excluding those who declined participation or left the clinic); of these 172 potential participants, 146 had consented as of mid-May 2018. Consented participants have completed at least one quarterly assessment in the preparation phase. This protocol allows for the addition of more participants until a site receives the TMI workshop, so the number of consented participants may continue to increase.

Table 1. Clinic site block numbers, target enrollment, and consenting participants.
Clinical site | Block number | Target enrollment (N=165) | Consenting participants (N=146)
San Diego | 3 | 15 | 12
Los Angeles | 5 | 15 | 14
Washington DC | 5 | 15 | 20

Principal Findings

ATN 146 tests the effect of an MI implementation intervention on fidelity (the primary outcome) and on patient appointment adherence and viral suppression. The proposed design not only has the potential to expand MI to multidisciplinary adolescent HIV settings but may also provide a cost-effective implementation schematic for improving the implementation of other EBPs. It is true that some, if not most, care providers have already received some exposure to MI; however, adequate competence is essential for successful implementation. The study also tests 2 approaches to sustainability. Finally, using mixed methods from ATN 156 (the EPIS protocol paper) [39], we will be able to understand variability in implementation success.

Lessons learned thus far include the following:

  1. Although the sites have a strong history of research participation, IS studies are new to the network. Sites required significant education prior to study initiation to ensure a complete understanding of the protocol and delineation of site staff responsibilities while avoiding coercion to participate in what are optional IS studies.
  2. There appears to be marked variability in adherence to program requirements across sites, which we hypothesize will be explained by data collected on implementation factors guided by the EPIS model [39].
  3. Sufficient resources must be allocated to provider recruitment and retention as would be done in a traditional efficacy trial with patients.
  4. iTeams need significant guidance from protocol staff (external facilitators) throughout the phases of implementation.
  5. It is difficult to obtain patient perspectives in an expedited protocol without resources to obtain patient consent. However, we are supporting sites to collect deidentified client satisfaction ratings from all youth who attend clinic during the course of the study.

Limitations

The real-world clinical context of TMI presents a number of challenges to be addressed by the research design, including the small number of available sites, budget limitations for travel for site training, and the inability to randomize providers within sites because of the risk of contamination. As such, traditional randomized and cluster randomized designs are not viable options. Utilizing a dynamic wait-list controlled design addresses these barriers, while a second randomization provides a targeted test of the implementation and sustainment interventions.

Conclusions

In conclusion, the TMI study addresses the gap between behavioral research and clinical practice with a type 3 hybrid implementation-effectiveness trial. This protocol describes the study’s underlying theoretical framework, design, measures, and lessons learned. If successful, TMI will have a considerable impact on provider MI competence and positive outcomes along the youth HIV care cascade. Although the intervention is being implemented with MI in multidisciplinary adolescent HIV settings, it can be adapted for the delivery of other EBPs in this setting as well as for MI implementation in other health care contexts.

Acknowledgments

This work was supported by the National Institutes of Health (NIH) Adolescent Medicine Trials Network for HIV/AIDS Interventions (ATN 146; PI: KM) as part of the FSU/CUNY Scale It Up Program (U19HD089875; MPI: SN and JTP). The content is solely the responsibility of the authors and does not represent the official views of the funding agencies. The authors would like to thank Amy Pennar, Sarah Martinez, Monique Green-Jones, Jessica De Leon, Lindsey McCracken, Liz Kelman, Xiaoming Li, Kit Simpson, Julia Sheffler, Scott Jones, and Sonia Lee.

Conflicts of Interest

None declared.

  1. Pangaea Global AIDS Foundation. Report from the Expert Consultation on Implementation Science Research: A Requirement for Effective HIV/AIDS Prevention and Treatment Scale-Up. Cape Town, South Africa: Pangaea Global AIDS Foundation; 2009.
  2. Eccles MP, Armstrong D, Baker R, Cleary K, Davies H, Davies S, et al. An implementation research agenda. Implement Sci 2009 Apr 07;4:18 [FREE Full text] [CrossRef] [Medline]
  3. Cross WF, West JC. Examining implementer fidelity: Conceptualizing and measuring adherence and competence. J Child Serv 2011;6(1):18-33 [FREE Full text] [CrossRef] [Medline]
  4. Norton WE, Amico KR, Cornman DH, Fisher WA, Fisher JD. An agenda for advancing the science of implementation of evidence-based HIV prevention interventions. AIDS Behav 2009 Jun;13(3):424-429 [FREE Full text] [CrossRef] [Medline]
  5. Schackman BR. Implementation science for the prevention and treatment of HIV/AIDS. J Acquir Immune Defic Syndr 2010 Dec;55 Suppl 1:S27-S31 [FREE Full text] [CrossRef] [Medline]
  6. Prejean J, Song R, Hernandez A, Ziebell R, Green T, Walker F, HIV Incidence Surveillance Group. Estimated HIV incidence in the United States, 2006-2009. PLoS One 2011;6(8):e17502 [FREE Full text] [CrossRef] [Medline]
  7. Centers for Disease Control and Prevention. 2012. CDC Fact Sheet: New HIV Infections in the United States   URL:
  8. Centers for Disease Control and Prevention. 2008. CDC Fact Sheet: HIV Incidence: Estimated Annual Infections in the U.S., 2008-2014 Overall and by Transmission Route   URL:
  9. MacDonell KK, Jacques-Tiura AJ, Naar S, Fernandez MI, ATN 086/106 Protocol Team. Predictors of Self-Reported Adherence to Antiretroviral Medication in a Multisite Study of Ethnic and Racial Minority HIV-Positive Youth. J Pediatr Psychol 2016 May;41(4):419-428 [FREE Full text] [CrossRef] [Medline]
  10. Simoni JM, Huh D, Wilson IB, Shen J, Goggin K, Reynolds NR, et al. Racial/Ethnic disparities in ART adherence in the United States: findings from the MACH14 study. J Acquir Immune Defic Syndr 2012 Aug 15;60(5):466-472 [FREE Full text] [CrossRef] [Medline]
  11. MacDonell K, Naar-King S, Huszti H, Belzer M. Barriers to medication adherence in behaviorally and perinatally infected youth living with HIV. AIDS Behav 2013 Jan;17(1):86-93 [FREE Full text] [CrossRef] [Medline]
  12. Miller W, Rollnick S. The atmosphere of change. In: Miller WR, Rollnick S, editors. Motivational interviewing: Preparing people to change addictive behavior. NY, USA: The Guilford Press; 1991:3-13.
  13. Naar-King S, Suarez M. In: Rollnick S, Miller WR, Moyers TB, editors. Motivational Interviewing with Adolescents and Young Adults. NY, USA: The Guilford Press; 2011. ISBN: 9781609184735.
  14. Lundahl BW, Kunz C, Brownell C, Tollefson D, Burke BL. A Meta-Analysis of Motivational Interviewing: Twenty-Five Years of Empirical Studies. Research on Social Work Practice 2010 Jan 11;20(2):137-160. [CrossRef]
  15. Chen X, Murphy DA, Naar-King S, Parsons JT, Adolescent Medicine Trials Network for HIV/AIDS Interventions. A clinic-based motivational intervention improves condom use among subgroups of youth living with HIV. J Adolesc Health 2011 Aug;49(2):193-198 [FREE Full text] [CrossRef] [Medline]
  16. Naar-King S, Outlaw A, Green-Jones M, Wright K, Parsons JT. Motivational interviewing by peer outreach workers: a pilot randomized clinical trial to retain adolescents and young adults in HIV care. AIDS Care 2009 Jul;21(7):868-873. [CrossRef] [Medline]
  17. Naar-King S, Parsons JT, Murphy DA, Chen X, Harris DR, Belzer ME. Improving health outcomes for youth living with the human immunodeficiency virus: a multisite randomized trial of a motivational intervention targeting multiple risk behaviors. Arch Pediatr Adolesc Med 2009 Dec;163(12):1092-1098 [FREE Full text] [CrossRef] [Medline]
  18. Outlaw AY, Naar-King S, Parsons JT, Green-Jones M, Janisse H, Secord E. Using motivational interviewing in HIV field outreach with young African American men who have sex with men: a randomized clinical trial. Am J Public Health 2010 Apr 01;100 Suppl 1:S146-S151. [CrossRef] [Medline]
  19. Mbuagbaw L, Ye C, Thabane L. Motivational interviewing for improving outcomes in youth living with HIV. Cochrane Database Syst Rev 2012 Sep 12(9):CD009748. [CrossRef] [Medline]
  20. Bartlett J, Cheever L, Johnson M, Paauw D. A guide to primary care of people with HIV/AIDS. Rockville, MD: U.S. Department of Health and Human Services; 2004.
  21. Kahn J. Predictors of papanicolaou smear return in a hospital-based adolescent and young adult clinic. Obstetrics & Gynecology 2003 Mar;101(3):490-499. [CrossRef]
  22. Magnus M, Jones K, Phillips G, Binson D, Hightow-Weidman LB, Richards-Clarke C, YMSM of color Special Projects of National Significance Initiative Study Group. Characteristics associated with retention among African American and Latino adolescent HIV-positive men: results from the outreach, care, and prevention to engage HIV-seropositive young MSM of color special project of national significance initiative. J Acquir Immune Defic Syndr 2010 Apr 01;53(4):529-536. [CrossRef] [Medline]
  23. Information for Practice. NY, USA: New York State Department of Health; 2010 Apr 11. Substance use and dependence among HIV-infected adolescents and young adults   URL: [accessed 2018-12-31] [WebCite Cache]
  24. CDC’s HIV/AIDS Prevention Research Synthesis Project. Centers for Disease Control and Prevention. Atlanta, GA; 1999. Compendium of HIV Prevention Interventions with Evidence of Effectiveness   URL:
  25. Hamilton JD, Kendall PC, Gosch E, Furr JM, Sood E. Flexibility Within Fidelity. Journal of the American Academy of Child & Adolescent Psychiatry 2008 Sep;47(9):987-993. [CrossRef]
  26. Aarons GA, Green AE, Palinkas LA, Self-Brown S, Whitaker DJ, Lutzker JR, et al. Dynamic adaptation process to implement an evidence-based child maltreatment intervention. Implement Sci 2012 Apr 18;7:32 [FREE Full text] [CrossRef] [Medline]
  27. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health 2011 Jan;38(1):4-23 [FREE Full text] [CrossRef] [Medline]
  28. Sterman J. Sustaining Sustainability: Creating a Systems Science in a Fragmented Academy and Polarized World. In: Weinstein MP, Turner RE, editors. Sustainability Science: The Emerging Paradigm and the Urban Environment. New York, NY: Springer; 2012:21-58.
  29. Pennar A, Wang B, Naar S, Fortenberry J, Brogan HK, Adolescent Medicine Trials Network for HIV/AIDS Interventions. A Mixed Methods Study of Motivational Interviewing Implementation to Improve Linkage to Care for Youth Living with HIV: The Minority AIDS Initiative. Presented at: 39th Annual Meeting and Scientific Sessions of the Society of Behavioral Medicine; 2018 April; New Orleans, LA.
  30. Barwick MA, Peters J, Boydell K. Getting to uptake: do communities of practice support the implementation of evidence-based practice? J Can Acad Child Adolesc Psychiatry 2009 Feb;18(1):16-29 [FREE Full text] [Medline]
  31. Schoenwald SK, Garland AF, Chapman JE, Frazier SL, Sheidow AJ, Southam-Gerow MA. Toward the effective and efficient measurement of implementation fidelity. Adm Policy Ment Health 2011 Jan;38(1):32-43 [FREE Full text] [CrossRef] [Medline]
  32. Fortenberry JD, Koenig LJ, Kapogiannis BG, Jeffries CL, Ellen JM, Wilson CM. Implementation of an Integrated Approach to the National HIV/AIDS Strategy for Improving Human Immunodeficiency Virus Care for Youths. JAMA Pediatr 2017 Jul 01;171(7):687-693. [CrossRef] [Medline]
  33. Severens J, Hoomans T, Adang E, Wensing M. Economic evaluation of implementation strategies. In: Grol R, Wensing M, Eccles M, Davis D, editors. Improving Patient Care: The Implementation of Change in Health Care. 2nd ed. Chichester, UK: John Wiley & Sons, Ltd; 2013:350-364.
  34. National Center for Complementary and Integrative Health. 2015 Nov 25. Clarifying NIH Priorities for Health Economics Research   URL:
  35. Naar S, Parsons JT, Stanton BF. Adolescent Trials Network for HIV-AIDS Scale It Up Program: Protocol for a Rational and Overview. JMIR Res Protoc 2019 Feb 01;8(2):e11204 [FREE Full text] [CrossRef] [Medline]
  36. Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care 2012 Mar;50(3):217-226 [FREE Full text] [CrossRef] [Medline]
  37. Brown CH, Wyman PA, Guo J, Peña J. Dynamic wait-listed designs for randomized trials: new designs for prevention of youth suicide. Clin Trials 2006;3(3):259-271. [CrossRef] [Medline]
  38. Aarons GA, Cafri G, Lugo L, Sawitzky A. Expanding the domains of attitudes towards evidence-based practice: the evidence based practice attitude scale-50. Adm Policy Ment Health 2012 Sep;39(5):331-340 [FREE Full text] [CrossRef] [Medline]
  39. Idalski Carcone A, Coyle K, Gurung S, Cain D, Dilones RE, Jadwin-Cakmak L, et al. Implementation Science Research Examining the Integration of Evidence-Based Practices Into HIV Prevention and Clinical Care: Protocol for a Mixed-Methods Study Using the Exploration, Preparation, Implementation, and Sustainment (EPIS) Model. JMIR Res Protoc 2019 May 23;8(5):e11202 [FREE Full text] [CrossRef] [Medline]
  40. Miller WR, Yahne CE, Moyers TB, Martinez J, Pirritano M. A randomized trial of methods to help clinicians learn motivational interviewing. J Consult Clin Psychol 2004 Dec;72(6):1050-1062. [CrossRef] [Medline]
  41. Carcone AI, Naar-King S, Brogan KE, Albrecht T, Barton E, Foster T, et al. Provider communication behaviors that predict motivation to change in black adolescents with obesity. J Dev Behav Pediatr 2013 Oct;34(8):599-608 [FREE Full text] [CrossRef] [Medline]
  42. Kyndt E, Raes E, Lismont B, Timmers F, Cascallar E, Dochy F. A meta-analysis of the effects of face-to-face cooperative learning. Do recent studies falsify or verify earlier findings? Educational Research Review 2013 Dec;10:133-149. [CrossRef]
  43. Wagner C, Ingersoll K. Motivational Interviewing in Groups. New York, NY: Guilford Publications, Inc; 2012.
  44. Moyers TB, Martin T, Houck JM, Christopher PJ, Tonigan JS. From in-session behaviors to drinking outcomes: a causal chain for motivational interviewing. J Consult Clin Psychol 2009 Dec;77(6):1113-1124 [FREE Full text] [CrossRef] [Medline]
  45. Mitcheson L, Bhavsar K, McCambridge J. Randomized trial of training and supervision in motivational interviewing with adolescent drug treatment practitioners. J Subst Abuse Treat 2009 Jul;37(1):73-78. [CrossRef] [Medline]
  46. Moyers TB, Manuel JK, Wilson PG, Hendrickson SML, Talcott W, Durand P. A Randomized Trial Investigating Training in Motivational Interviewing for Behavioral Health Providers. Behav Cognit Psychother 2007 Nov 22;36(02). [CrossRef]
  47. Marken PA, Zimmerman C, Kennedy C, Schremmer R, Smith KV. Human simulators and standardized patients to teach difficult conversations to interprofessional health care teams. Am J Pharm Educ 2010 Sep 10;74(7):120 [FREE Full text] [Medline]
  48. May W, Park JH, Lee JP. A ten-year review of the literature on the use of standardized patients in teaching and learning: 1996-2005. Med Teach 2009 Jun;31(6):487-492. [Medline]
  49. Haeseler F, Fortin AH, Pfeiffer C, Walters C, Martino S. Assessment of a motivational interviewing curriculum for year 3 medical students using a standardized patient case. Patient Educ Couns 2011 Jul;84(1):27-30 [FREE Full text] [CrossRef] [Medline]
  50. Imel ZE, Steyvers M, Atkins DC. Computational psychotherapy research: scaling up the evaluation of patient-provider interactions. Psychotherapy (Chic) 2015 Mar;52(1):19-30 [FREE Full text] [CrossRef] [Medline]
  51. Naar S, Flynn H. Motivational Interviewing and the Treatment of Depression. In: Arkowitz H, Miller WR, Rollnick S, editors. Motivational Interviewing in the Treatment of Psychological Problems, Second Edition. New York, NY: The Guilford Press; 2015.
  52. Naar S, Safren S. In: Rollnick S, Miller WR, Moyers TB, editors. Motivational Interviewing and CBT: Combining Strategies for Maximum Effectiveness. New York, NY: The Guilford Press; 2017. ISBN: 9781462531547.
  53. American Educational Research Association, American Psychological Association, National Council on Measurement in Education. Standards for Educational & Psychological Testing. Washington, DC: American Educational Research Association; 1999.
  54. Wilson M. Constructing measures: An item response modeling approach. Mahwah, NJ: Lawrence Erlbaum Associates; 2005.
  55. Stone GE. Objective standard setting (or truth in advertising). J Appl Meas 2001;2(2):187-201. [Medline]
  56. Linacre J. Many-facet Rasch measurement. Chicago, IL: Mesa Press; 1994.
  57. Pennar AL, Dark T, Simpson KN, Gurung S, Cain D, Fan C, et al. Cascade Monitoring in Multidisciplinary Adolescent HIV Care Settings: Protocol for Utilizing Electronic Health Records. JMIR Res Protoc 2019 May 30;8(5):e11185. [CrossRef]
  58. Stetler CB, Damschroder LJ, Helfrich CD, Hagedorn HJ. A Guide for applying a revised version of the PARIHS framework for implementation. Implement Sci 2011 Aug 30;6:99 [FREE Full text] [CrossRef] [Medline]
  59. Hedeker D, Gibbons RD. Longitudinal Data Analysis. Hoboken, NJ: Wiley-Interscience; 2006.
  60. Raudenbush S, Bryk A. Hierarchical Linear Models: Applications and Data Analysis Methods. Thousand Oaks, CA: SAGE Publications; 2002.
  61. Hox J. In: Marcoulides GA, editor. Multilevel Analysis: Techniques and Applications. 2nd ed. New York, NY: Routledge/Taylor & Francis Group; 2010.
  62. Hedges L, Rhoads C. Statistical Power Analysis in Education Research (NCSER 2010-3006). Washington, D.C: National Center for Special Education Research; 2009.
  63. Morgan DL. Qualitative content analysis: a guide to paths not taken. Qual Health Res 1993 Feb 01;3(1):112-121. [CrossRef] [Medline]
  64. Hsieh H, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res 2005 Nov;15(9):1277-1288. [CrossRef] [Medline]
  65. Corbin J, Strauss A. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. Thousand Oaks, CA: SAGE Publications; 2008.
  66. Miles M, Huberman A. Qualitative data analysis: An expanded sourcebook. Thousand Oaks, CA: SAGE Publications, Inc; 1994.
  67. Bernard H, Ryan G. Analyzing Qualitative Data: Systematic Approaches. Thousand Oaks, CA: SAGE Publications; 2010.
  68. Pett MA. Nonparametric statistics for health care research: Statistics for small samples and unusual distributions. Thousand Oaks, CA: SAGE Publications, Inc; 1997.
  69. Leithwood K, Montgomery D. Improving Classroom Practice Using Innovation Profiles. Toronto, ON: The Ontario Institute for Studies in Education; 1987.
  70. Creswell JW, Klassen AC, Plano Clark VL, Smith KC. Best Practices for Mixed Methods Research in the Health Sciences. Bethesda: National Institutes of Health; 2018.   URL:
  71. Ivankova NV, Creswell J, Stick S. Using Mixed-Methods Sequential Explanatory Design: From Theory to Practice. Field Methods 2016 Jul 21;18(1):3-20. [CrossRef]
  72. Leech NL, Onwuegbuzie AJ. A typology of mixed methods research designs. Qual Quant 2007 Mar 27;43(2):265-275. [CrossRef]
  73. French MT, Dunlap LJ, Zarkin GA, McGeary KA, McLellan AT. A structured instrument for estimating the economic cost of drug abuse treatment. The Drug Abuse Treatment Cost Analysis Program (DATCAP). J Subst Abuse Treat 1997;14(5):445-455. [Medline]
  74. Kim JJ, Maulsby C, Zulliger R, Jain K, Positive Charge Intervention Team, Charles V, et al. Cost and threshold analysis of positive charge, a multi-site linkage to HIV care program in the United States. AIDS Behav 2015 Oct;19(10):1735-1741. [CrossRef] [Medline]

ATN: Adolescent Medicine Trials Network for HIV/AIDS Interventions
CoP: communities of practice
EBP: evidence-based practice
EPIS: Exploration, Preparation, Implementation, and Sustainment model
IF: internal facilitation
IRB: institutional review board
IS: implementation science
iTeam: implementation team
MI: motivational interviewing
NIH: National Institutes of Health
TMI: Tailored Motivational Interviewing Implementation Intervention
YLH: youth living with HIV

Edited by F Drozd; submitted 31.05.18; peer-reviewed by S Martino, S Comulada, F Wagnew; comments to author 01.08.18; revised version received 26.09.18; accepted 29.10.18; published 07.06.19


©Sylvie Naar, Karen MacDonell, Jason E Chapman, Lisa Todd, Sitaji Gurung, Demetria Cain, Rafael E Dilones, Jeffrey T Parsons. Originally published in JMIR Research Protocols, 07.06.2019.

This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Research Protocols, is properly cited. The complete bibliographic information, a link to the original publication, as well as this copyright and license information must be included.