Published in Vol 9, No 7 (2020): July

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/15113.
Feasibility of a Mobile Health App for Routine Outcome Monitoring and Feedback in Mutual Support Groups Coordinated by SMART Recovery Australia: Protocol for a Pilot Study


Original Paper

1Faculty of Social Sciences, School of Psychology, University of Wollongong, Wollongong, Australia

2Illawarra Health and Medical Research Institute, Wollongong, Australia

3School of Medicine and Public Health, University of Newcastle, Callaghan, Australia

4Centre for Youth Substance Abuse Research, Lives Lived Well Group, School of Psychology, University of Queensland, Brisbane St Lucia, Australia

5Faculty of Medicine, Nursing and Health Sciences, Eastern Health Clinical School, Monash University, Melbourne, Australia

6Faculty of Arts and Social Sciences, Centre for Social Research in Health, University of New South Wales, Sydney, Australia

7Institute of Psychiatry, Psychology and Neuroscience, King’s College London, London, United Kingdom

8Centre for Addiction Medicine, Harvard Medical School, Harvard University, Boston, MA, United States

9Clinical Research Design, IT and Statistical Support Unit, Hunter Medical Research Institute, New Lambton, Australia

10Health Research Economics Unit, Hunter Medical Research Institute, New Lambton, Australia

11SMART Recovery Australia, Sydney, Australia

Corresponding Author:

Peter J Kelly, BSc, GradDipAppPsych, PhD

Faculty of Social Sciences

School of Psychology

University of Wollongong

Northfields Ave

Wollongong, 2522

Australia

Phone: 61 (02) 4239 2382

Email: pkelly@uow.edu.au


Background: Despite the importance and popularity of mutual support groups, there have been no systematic attempts to implement and evaluate routine outcome monitoring (ROM) in these settings. Unlike other mutual support groups for addiction, all Self-Management and Recovery Training (SMART Recovery) groups are led by trained facilitators, providing an opportunity to implement ROM as a routine component of SMART Recovery groups.

Objective: This study protocol aims to describe a stage 1 pilot study designed to explore the feasibility and acceptability of a novel, purpose-built mobile health (mHealth) ROM and feedback app (Smart Track) in SMART Recovery groups coordinated by SMART Recovery Australia (SRAU). The secondary objectives are to describe Smart Track usage patterns, explore psychometric properties of the ROM items (ie, internal reliability and convergent and divergent validity), and provide preliminary evidence for participant-reported outcomes (such as alcohol and other drug use, self-reported recovery, and mental health).

Methods: Participants (n=100) from the SMART Recovery groups across New South Wales, Australia, will be recruited to a nonrandomized, prospective, single-arm trial of the Smart Track app. There are 4 modes of data collection: (1) ROM data collected from group participants via the Smart Track app, (2) data analytics summarizing user interactions with Smart Track, (3) quantitative interview and survey data of group participants (baseline, 2-week follow-up, and 2-month follow-up), and (4) qualitative interviews with group participants (n=20) and facilitators (n=10). Feasibility and acceptability (primary objectives) will be analyzed using descriptive statistics, a cost analysis, and a qualitative evaluation.

Results: At the time of submission, 13 sites (25 groups per week) had agreed to be involved. Funding was awarded on August 14, 2017, and ethics approval was granted on April 26, 2018 (HREC/18/WGONG/34; 2018/099). Enrollment is due to commence in July 2019. Data collection is due to be finalized in October 2019.

Conclusions: To the best of our knowledge, this study is the first to use ROM and tailored feedback within a mutual support group setting for addictive behaviors. Our study design will provide an opportunity to identify the acceptability of a novel mHealth ROM and feedback app within this setting and provide detailed information on what factors promote or hinder ROM usage within this context. This project aims to offer a new tool, should Smart Track prove feasible and acceptable, that service providers, policy makers, and researchers could use in the future to understand the impact of SMART Recovery groups.

Trial Registration: Australian New Zealand Clinical Trials Registry (ANZCTR): ACTRN12619000686101; https://anzctr.org.au/Trial/Registration/TrialReview.aspx?id=377336.

International Registered Report Identifier (IRRID): PRR1-10.2196/15113

JMIR Res Protoc 2020;9(7):e15113

doi:10.2196/15113

Keywords



Background

Using standardized outcome measures to regularly monitor client progress in alcohol and other drug (AOD) settings is an important mechanism for monitoring the effectiveness of service provision [1-3]. Routine outcome monitoring (ROM) provides clinicians with timely feedback about client progress and allows clinicians to tailor treatment to the individual needs of clients and guide treatment decisions [4]. This may be of particular importance when a client is not on track (ie, not improving in line with clinical norms [4]). ROM has been specifically recommended for use in AOD treatment settings as the provision of tailored feedback to clients has been found to improve treatment outcomes across a range of AOD treatment settings (eg, acute, community, veterans) [3,5,6]. Evidence also supports clinician use of ROM and tailored feedback to enhance outcomes and/or prevent further deterioration for those clients identified as not on track early in addiction and/or mental health treatment [7,8].

Despite the importance of ROM and tailored feedback, ongoing variability in the implementation, sustainability, and use of ROM data has been noted [9]. The time associated with the completion, scoring, interpretation, and feedback of outcome assessments represents a key barrier to systematic implementation [6,9]. Using a digital platform to administer ROM and feedback may help address these concerns. Mobile health (mHealth) [10] apps can provide quick, easy, interactive, and engaging platforms for tracking and accessing information about health and health-related behaviors [11]. Evidence from the United Kingdom suggests that almost 60% of individuals who access AOD treatment own a smartphone [12]. This figure is likely higher in Australia as it is the leading global adopter of smartphones (88% ownership [13]). Given the ubiquity of smartphones, smartphone apps have the added benefit of engaging individuals in real time, in their natural environment, and offering moment-to-moment support as needed [14]. Moreover, a key benefit highlighted in a recent systematic review of mHealth apps is their ability to provide timely, individualized feedback [15]. Accordingly, an opportunity exists to utilize mHealth to enhance engagement, streamline administration, and put the client at the center of the ROM and feedback process.

To date, much of the literature on ROM and feedback has focused on the provision and use of feedback by clinicians [16,17]. Improving client involvement in the feedback process represents an important clinical and research priority [18]. It is not only consistent with the principles of recovery-oriented service provision and strengths-based care [19] but is also therapeutically useful. Within mental health settings, the benefits of providing clients with outcome feedback include improved client insight; enhanced knowledge, skill, and confidence to effectively self-manage their condition(s); and greater satisfaction, engagement, and involvement in treatment [16]. Evidence from related approaches (eg, therapeutic assessment [20]) has also shown that providing clients with assessment feedback during psychotherapy promotes client self-verification, self-discovery, and self-enhancement [18]. Moreover, delivering feedback directly to the client seems to further enhance the positive impact of ROM on treatment outcome(s), particularly among group-based treatment settings [6]. Therefore, to meet this important need of putting the client at the center of the ROM and feedback process, during the first phase of this study, we developed a purpose-built mHealth app.

Mutual Support Groups

Mutual support refers to the reciprocal provision of social, emotional, and informational support by group members undergoing recovery from addiction [21]. Mutual support groups are widely available [22], commonly accessed [23], and play an extremely important role in the treatment of AOD use disorders [2,24]. Two approaches recommended by clinical guidelines include 12-step models (eg, Alcoholics Anonymous) and Self-Management and Recovery Training (SMART Recovery [2,24]). Although the 12-step models are traditionally the most well-known and accessed models for mutual support [22], other approaches (eg, SMART Recovery) are gaining momentum. For example, SMART Recovery Australia has seen an almost 40% increase in groups over the last 4 years, with over 300 groups currently running nationwide [25]. Although accumulating evidence points to the benefit of participating in mutual support groups [26,27], much of the research is derived from the 12-step models. In light of the growth of SMART Recovery groups, expanding the evidence base beyond the 12-step models is a priority. A major limitation in developing a strong evidence base is the lack of outcome data evaluating service delivery. Accordingly, the purpose-built mHealth app developed during the first phase of the study provides a mechanism for not only improving service provision but also providing unique insights into the outcome(s) demonstrated by SMART Recovery participants.

Although many AOD services provided by public health and nongovernment organizations are contracted to monitor client outcomes routinely [28], we are unaware of any research describing the use of ROM in mutual support groups for addictive behaviors. As a trained facilitator leads all SMART Recovery groups, a unique opportunity exists to work with SMART Recovery facilitators to embed ROM and tailored feedback as a standard component of the groups. Investigating the use of ROM and feedback within a mutual support setting helps address the need for improved participant involvement in the assessment and feedback process [18], in addition to building a platform for improving the evidence base for SMART Recovery, a clinical and research priority [27].

This Study

In this paper, we detail the study protocol for a nonrandomized, prospective, single-arm pilot study, with a concurrent cost evaluation and nested qualitative evaluation designed to explore the feasibility and acceptability of a novel mHealth ROM and feedback app (Smart Track) in SMART Recovery groups coordinated by SRAU. The secondary objectives are to describe Smart Track usage patterns, psychometric properties of the ROM items, and participant-reported outcomes.


Methods

Approval, Registration, and Reporting

This study was approved by the University of Wollongong and Illawarra Shoalhaven Local Health District (ISLHD) Health and Medical Human Research Ethics Committee (HREC; 2018/099; HREC/18/WGONG/34). The trial has been registered with the Australian New Zealand Clinical Trials Registry (ACTRN12619000686101). Any amendments will be submitted to the ISLHD HREC before implementation, as per HREC guidelines. Any important protocol modifications will be reported in the outcomes paper. To enhance the quality, completeness, and transparency of the proposed study, this study protocol follows the Standard Protocol Items: Recommendations for Interventional Trials [29] and the Consolidated Standards of Reporting Trials-eHealth checklist [30] (Multimedia Appendix 1).

Participants

Eligibility

Participants must be at least 18 years of age, currently participating in one or more SMART Recovery groups located within New South Wales (NSW), have a current email address or be willing to obtain an email address, and be able to comprehend English at a level sufficient to complete study requirements.

Participants will be eligible irrespective of their self-reported computer and/or smartphone literacy. No restrictions will be placed on concomitant care, including the frequency or duration of SMART Recovery group participation or participation in other forms of AOD treatment. Participants will only be excluded if they are unable or unwilling to provide informed consent. Exclusion criteria were kept to a minimum to ensure that the study sample is representative of people attending SMART Recovery groups.

Smartphone Ownership

Although we expect that most participants will own a smartphone [12,13], potential participants do not need to own a smartphone to participate. The research team will provide tablets to study sites (locations where regular SMART Recovery groups are held) so that participants can use the tablet before and/or after attending a group to complete the ROM questions and receive feedback.

Study Setting

Potential participants will be sourced from the SMART Recovery groups held in NSW, Australia. A full list of study sites will be reported in the outcomes paper. SMART Recovery groups are held in the community as well as in inpatient, outpatient, and clinical health organizations, including private, public, and not-for-profit mental health, AOD, and general health services. Online SMART Recovery groups are also available. A detailed description of the SMART Recovery groups has been provided elsewhere [31]. Briefly, SMART Recovery focuses on self-empowerment and utilizes evidence-based techniques (eg, cognitive behavioral therapy and motivational interviewing) [32]. To ensure that our sample adequately reflects SMART Recovery participants, study sites with established SMART Recovery groups were selected to reflect a range of geographical locations and service providers. At the time of manuscript submission, 147 groups were conducted throughout NSW.

Enrollment

Group facilitators will use a script to introduce the study to potential participants and invite expressions of interest. The following strategies will be adopted to maximize adequate enrollment. Group facilitators will be asked to check in with participants regarding the completion and/or return of expression of interest forms (across a maximum of 3 meetings). A member of the research team will also visit SMART Recovery groups throughout the recruitment period to directly provide group members with information about the study and collect the expression of interest forms. Depending on accrual, the study may also be advertised (eg, online, local media, flyers/pamphlets, and study website) to extend participant recruitment beyond this study’s sites.

Informed Consent

A member of the research team (AKB) will collect informed consent from all study participants (written or verbal according to the participant’s preference). A copy of consent (audio recording and/or signed consent form) will be retained for all study participants and securely stored according to the HREC–approved methods.

Overview of the mHealth Routine Outcome Monitoring and Feedback App (Smart Track)

The Smart Track app was designed for participants attending SMART Recovery groups. ROM items are intended for weekly completion. However, it is up to the individual to decide whether, when, and how they engage with Smart Track. Given the low-risk, low-burden nature of this study, there are no contingencies for discontinuing access to the Smart Track app. Owing to app store regulations, Smart Track will not be restricted to study participants. It will be freely available for download through the Android and iOS app stores (only data from those who provide consent will be included in the study).

Publications detailing the methods and findings from the qualitative work [33] and app development process (including theoretical foundations) will be reported separately. To provide context, a brief summary is presented here. The Smart Track app was developed during the first phase of this study using participatory design workshops and an iterative development process informed by the (1) consideration of existing ROM tools, (2) qualitative feedback [33] and usability testing sessions with SMART Recovery participants and facilitators, (3) clinical and research expertise from the members of the expert advisory and steering committees, and (4) technological and creative expertise of the development team employed to work on this study (GHO, Sydney) [34].

The functionality of Smart Track was initially tested with 3 members of the research team. Several bugs were identified and fixed before the amended beta version was released to a convenience sample (n=40) for further testing. This convenience sample of beta testers included members of the expert advisory committee, steering committee, SMART Recovery board members, and SMART Recovery facilitators. Further refinements were made in line with the feedback received (bug fixes, minor amendments to functionality, and content).

Smart Track Routine Outcome Monitoring Domains and Items

Consistent with clinical guidelines [3,35] and informed by recommendations arising from systematic reviews evaluating ROM in both mental health [18] and addiction [6] settings, we sought to create a tool that provided multidimensional assessment and feedback. Utilizing an iterative process, we generated a list of candidate outcomes (group attendance; goal setting and attainment; values; self-efficacy; quality of life; self-care; mental health; quantity, frequency, and impact of addictive behavior(s); social support; financial stability; optimism; and frequency, strength, and duration of urges). Corresponding assessment items and/or instruments were then identified to measure these domains. Where possible, validated free-to-access measurement instruments were selected from the published literature. Some changes to the structure, wording, and/or response format were required to improve the clarity and appropriateness of some items. The final item set included in the tool is detailed in Multimedia Appendix 2 [36-47] as a function of the target domain and assessment frequency.

Smart Track Tailored Feedback

The same iterative process was used to inform the content, format, and frequency of the Smart Track feedback. On the basis of the participants’ responses to individual ROM items and/or subscale scores, tailored visual cues (eg, colors and arrows) are used to provide a snapshot of progress within each domain (eg, current score range and/or direction of progress). Participants can also select one or more domains to receive further detailed feedback (written and visual).

The written feedback comprises encouraging statements, self-reflection questions, and/or self-management suggestions. As health messages are more effective when they are tailored to the individual [48] and tailored feedback is central to both the popularity [49] and effectiveness [14] of mHealth apps for alcohol use, written feedback and visual cues are tailored. Visual feedback comprises a graph illustrating the participant’s progress over time (with the option of viewing data for the week, month, and year). Guided by the literature highlighting the utility of providing feedback according to the level of observed progress (eg, on-track or off-track [16]) and informed by previous stop-light–style ROM feedback systems (see the study by Kendrick et al [7] for a review), written feedback and visual cues are tailored according to three pathways, based on whether individual domain scores suggest (1) a good score range or improvement (green), (2) an ok score range or stability (yellow), or (3) a poorer score range or deterioration (red). The chosen logic allows feedback content and visual cues to be tailored such that all progress is encouraged and reinforced, with a specific focus on (1) maintaining change (green), (2) highlighting additional change(s) that may be of benefit (yellow), or (3) troubleshooting difficulties and/or seeking support (red).
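To make the three-pathway logic concrete, a minimal sketch is shown below. It is illustrative only: the actual Smart Track feedback rules, thresholds, and message wording are not published in this protocol, so the cut-offs, field names, and messages are hypothetical placeholders, as is the precedence given to score range versus direction of change.

```python
# Illustrative sketch only: cut-offs, field names, and messages are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DomainResult:
    name: str                  # eg, "Quality of life"
    current: float             # latest ROM score for the domain
    previous: Optional[float]  # score from the prior completion (None if first entry)
    good_cutoff: float         # assumed threshold for a "good" score range
    ok_cutoff: float           # assumed threshold for an "ok" score range

def feedback_pathway(d: DomainResult) -> str:
    """Map a domain score (and its direction of change) to a feedback pathway."""
    improving = d.previous is not None and d.current > d.previous
    deteriorating = d.previous is not None and d.current < d.previous
    # Precedence between score range and direction of change is an assumption here.
    if d.current >= d.good_cutoff or improving:
        return "green"   # encourage maintaining change
    if d.current >= d.ok_cutoff or not deteriorating:
        return "yellow"  # highlight additional change(s) that may be of benefit
    return "red"         # troubleshoot difficulties and/or seek support

MESSAGES = {
    "green": "Things are on track for {name}. What has helped you keep this going?",
    "yellow": "{name} is holding steady. Is there one small change worth trying this week?",
    "red": "{name} looks harder right now. What support could you draw on before your next group?",
}

result = DomainResult(name="Quality of life", current=22, previous=25,
                      good_cutoff=30, ok_cutoff=20)
pathway = feedback_pathway(result)
print(pathway, MESSAGES[pathway].format(name=result.name))
```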

Additional Features of Smart Track

Inconsistent engagement (client and/or clinician) has long been identified as a challenge for both ROM and feedback [50] and, more broadly, in the mHealth literature [51]. Therefore, in addition to the core ROM and feedback functionality, several additional features (Table 1) have been included. Aside from the hints, tips, and motivational statements (which appear automatically once per day that the app is used), it is up to the individual how frequently they access these features.

Table 1. Additional Smart Track features (feature: description).
Customizable support(s) and personal motivation(s): Participants have the option of tailoring the app content by uploading key contact number(s), support services, and/or personal motivation(s) for change (photo, audio, video, and/or written).
Resources: Information about self-management strategies (including SMART Recovery resources) and motivational stories from people with lived experience of addictive behavior(s).
Hints, tips, and motivational statements: A self-management tip, motivational statement, or inspirational quote is included as pop-up content. These brief messages comprise direct and adapted quotes from the transcripts of the qualitative interviews.
Journal: A free-text box on each feedback page allows participants to reflect on their progress and/or the tailored feedback provided.
Interactive urge log: In addition to tracking the number, frequency, and strength of urges, this interactive tool prompts participants, when they report an urge, to manage the urge, log triggers, and reflect on how to maintain and/or improve effective urge self-management.

SMART: Self-Management and Recovery Training.

Implementation Strategies

The app contains an in-built walk-through to orient new users to the features of the app. We also intend to develop a brief tutorial to assist participants with app download and setup. Consistent with the recommendations for improving ROM uptake [9], SMART Recovery facilitators at each study site will assume the role of local champions of Smart Track. The research team will work with the facilitators to orient them to the app so that they are confident in responding to the participants’ questions about ROM completion and troubleshooting any difficulties that may arise. Participants and facilitators may also contact the research team directly for support. SMART Recovery facilitators will also prompt, encourage, and support study participants to regularly complete the ROM questions before and/or after attending a SMART Recovery group.

In-app push notifications will also be used to prompt participants to complete the ROM items. In-app notifications (red marker) will also be used to highlight section(s) that require participants’ attention (eg, outstanding 7-day plan).

Data Collection Procedures

The participant timeline is outlined in Figure 1, and the corresponding schedule of participant assessments is summarized in Table 2. There are 4 main modes of data collection in this study: (1) participant-completed ROM data collected via Smart Track (Multimedia Appendix 2), (2) broader app-generated data analytics summarizing interactions with Smart Track, (3) quantitative baseline and follow-up assessments with study participants, and (4) qualitative interviews with study participants and group facilitators.

Baseline and follow-up assessments will primarily be conducted over the telephone by a member of the research team (a trained clinical psychologist). To promote follow-up, appointments will be scheduled at a time convenient to the participants, with options to accommodate participants’ preferences for video link, face-to-face, and/or self-report (where feasible). Telephone, text, letter, and/or facilitator prompting will be utilized (as needed) to remind the participants of upcoming and/or missed appointments.

Figure 1. Participant flow chart. SMART: Self-Management and Recovery Training.
Table 2. Schedule of data collection. Assessment timepoints are baseline, daily, weekly, 2-week follow-up, and 2-month follow-up; the data sources and instruments administered to each group are listed below (see Multimedia Appendix 2 for a detailed description of the routine outcome monitoring [ROM] items as a function of assessment domain and frequency of administration).

SMART Recovery participants
Smart Track app: data analytics; ROM items.
Telephone interview: demographics; Network of Alcohol and other Drug Agencies Client Outcome Management System (Severity of Dependence Scale; drug and alcohol use; Kessler 10+ scale; World Health Organization Quality of Life–8; New South Wales minimum data set items on living arrangements and income; Brief Treatment Outcome Measure–Concise [BTOM-C] items on arrests; BTOM-C items on risky drug using practices); Substance Use Recovery Evaluator; Client Services Receipt Inventory; Mobile App Rating Scale–user version; Digital Working Alliance Inventory.
Qualitative interview (n=20).

SMART Recovery facilitators
Demographics; Mobile App Rating Scale–user version; Qualitative interview (n=10).

SMART: Self-Management and Recovery Training.

Data Handling and Storage

Initially, the ROM data entered by participants into Smart Track will be stored locally on the participants’ phones. When the participants connect to the internet, ROM data will be transmitted to a secure mobile and web app development platform (managed by SRAU), before transmitting to a secure server hosted by the University of Wollongong.

The participants’ responses to the baseline and follow-up research assessment instruments will be entered at the time of interview directly into REDCap, a secure web application for building and managing online surveys and databases. Further information about data management, monitoring, and dissemination is provided in Multimedia Appendix 3.

Key Measures and Assessment Instruments

Primary Objectives: Feasibility and Acceptability
Mobile App Data Analytics

Mobile app data analytics will be captured daily throughout the study period using a mobile and web app development platform. Mobile app data analytics provide insight into how and when participants interact with an app (including participants’ interactions with the on-site tablets provided by the research team). In this study, analytics will be used both to inform feasibility (primary objective) and to describe usage patterns (secondary objective).

Qualitative Feedback

A qualitative evaluation (described under Nested Qualitative Evaluation) will be conducted 2 months postbaseline to collect detailed feedback from study participants and facilitators regarding their experience of and satisfaction with Smart Track.

App Quality Assessment

The Mobile App Rating Scale (MARS) [52] is designed to assess the quality of mHealth apps. The original version of this rating tool is designed to be completed by researchers, professionals, and/or clinicians [52]. The Mobile App Rating Scale–user version (uMARS [53]) is a simplified, end-user version. A total of 16 items are used to assess app quality across 4 domains (engagement, functionality, aesthetics, and information quality). Items are rated on a scale from 1 to 5 (1=inadequate and 5=excellent). Means are calculated for each quality domain and then averaged to produce an overall app quality mean score. Each instrument also contains 4 additional items to assess subjective quality and a further 6 items to assess the perceived impact of the app. Both the MARS [52] and uMARS [53] have excellent internal consistency and sound test-retest reliability.
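As a concrete illustration of this scoring procedure, a minimal sketch is shown below; the per-domain item counts (engagement 5, functionality 4, aesthetics 3, information 4) and the ratings themselves are assumptions for illustration, not data from this study.

```python
# Minimal sketch of the uMARS objective quality scoring described above;
# item counts per domain and the rating values are assumed for illustration.
from statistics import mean

ratings = {
    "engagement":    [4, 3, 4, 5, 3],
    "functionality": [5, 4, 4, 4],
    "aesthetics":    [4, 4, 3],
    "information":   [3, 4, 4, 5],
}

domain_means = {domain: mean(items) for domain, items in ratings.items()}
overall_quality = mean(domain_means.values())  # overall app quality mean score

for domain, m in domain_means.items():
    print(f"{domain}: {m:.2f}")
print(f"Overall app quality: {overall_quality:.2f}")
```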

The Digital Working Alliance Inventory (D-WAI) [54] is a brief, simple scale designed to assess therapeutic alliances within the context of app usage. It was recently developed to address the need for improved assessment of working alliances when evaluating the quality of digital health apps [54]. The D-WAI comprises 6 items and is derived from the short form of the Working Alliance Inventory [55], a commonly implemented and validated index of working alliances [56-58].

Cost Analysis
Health Services and Medication: Usage and Cost

An adapted version of the Client Service Receipt Inventory (CSRI)—generic UK mental health [59] will be used to assess health services and medication usage. The content of this inventory has been updated to reflect key sources of mental health expenditure in Australia [60]. These data will allow us to explore clinical and treatment characteristics that may be associated with app usage and will provide some insight into costing.

Time and Resource Utilization

Time and resource utilization will be captured in Microsoft Excel using a cost capture template [61-63]. This template will be used by the research team to maintain a record of cost data associated with the conduct of the study and the development and implementation of Smart Track. Only costs required to develop and implement the Smart Track app will be included in the cost analysis.

Secondary Objectives
Usage Patterns, Psychometric Properties, and Participant-Reported Outcomes

Secondary objectives will be informed by (1) participant-entered (and missing) data for Smart Track ROM items (detailed in Multimedia Appendix 2), (2) app-generated data analytics, and (3) data (eg, sociodemographic characteristics, drug and alcohol use, health and social functioning, and recovery) collected by the research team at baseline and follow-up assessments.

Demographic Characteristics

Collection of sociodemographic characteristics (referral source, date of birth, gender, marital status, indigenous status, education/training, accommodation, and income) will be guided by items from the CSRI [59] and/or NSW minimum data set (MDS) for drug and alcohol treatment services [64].

The Network of Alcohol and other Drug Agencies (NADA) Client Outcome Management System (COMS) was developed by NADA to address the need for greater consistency in how outcomes are assessed across the drug and alcohol treatment sector [46]. We have chosen to use the COMS in this study to ensure that our data are directly comparable with the broader drug and alcohol treatment sectors. The COMS comprises a battery of items designed to assess 4 key domains: (1) drug and alcohol use, (2) psychological health, (3) health and social functioning, and (4) blood-borne virus (BBV) risk. The instruments used to assess each domain are outlined below. The COMS will be administered in full at baseline and at 2-month follow-up. A subset of items (Multimedia Appendix 2) will also be administered via Smart Track.

Client Outcome Management System: Drug and Alcohol Use
Severity of Dependence Scale

The Severity of Dependence Scale [65] is a 5-item screening measure of the psychological aspects of dependence that takes less than 1 min to complete. The items assess feelings of impaired control over drug taking, together with preoccupations and anxieties about drug taking. Participants will be asked to respond based on the substance that was causing them the greatest concern (1) over the preceding 2 months and (2) when they began attending SMART Recovery. The items are rated on a 4-point Likert scale, and total scores range from 0 to 15. Higher scores indicate a higher level of dependence. It is widely validated for use across a range of drug types, including heroin, cannabis, cocaine, amphetamine, and benzodiazepines [65-67].

Substance Use

Alcohol and tobacco use is measured by assessing both frequency (number of days) and quantity used during the preceding 4 weeks. Furthermore, 2 separate measurements for alcohol are included: number of days the person drank alcohol (and average number of drinks per day) and number of days of heavier drinking than usual (and average number of drinks on those days). For benzodiazepines and any illicit drugs, only the number of days of use is assessed.

Client Outcome Management System: Psychological Health

The Kessler 10 scale (K10) [68] is a widely used self-report measure of psychological distress. It comprises 10 questions that assess the level of nervousness, agitation, psychological fatigue, and depression in the past 4 weeks [68]. Each item is scored from 1 to 5, from none of the time to all of the time. Scores are then totaled, resulting in a K10 score between 10 and 50, with higher scores indicating greater distress [69]. The Kessler 10+ scale includes 4 additional questions to provide a context for interpretation (number of days where work, study, and/or management of daily activities were stopped and/or reduced because of these feelings; number of times professional help was sought; and perceived contribution of physical health problems to reported distress) [46]. The K10 has been successfully used in a range of populations, including a range of different Australian settings [69] and specifically with users of AOD in Australian settings [70].

Client Outcome Management System: Health and Social Functioning
World Health Organization Quality of Life Scale–8

The World Health Organization Quality of Life–8 [71] questionnaire (also known as the EUROHIS QoL-8) is a very brief adaptation of the WHOQOL-100 and the WHOQOL-BREF. Each item is scored from 1 (eg, not at all/very poor/very dissatisfied) to 5 (eg, completely/very good/very satisfied). Items are totaled (range 8-40), with higher scores reflecting greater perceived quality of life over the preceding 2 weeks. Domain scores can also be calculated for overall perception of the quality of life, overall perception of health, physical quality of life, psychological quality of life, satisfaction with social relationships, and satisfaction with the environment. It has been used and validated across a range of populations and settings [45,71], including AOD [72] and mental health [73].

New South Wales Minimum Data Set Items

To provide a differing and more objective assessment of the changes in the perceived quality of life [46], the COMS also includes 2 items on living arrangements (“Who do you live with?” and “Usual accommodation?”) and 1 item on income status (“What is your main source of income?”) taken from the NSW MDS [64]. Each item is answered by selecting one option from the response categories provided.

Two items on crime from the Brief Treatment Outcome Measure–Concise (BTOM-C) [47] are also included. These items were developed by the NSW Ministry of Health and assess the number of times the individual has been arrested in the last 2 months and how many arrests were for offenses committed in the preceding 2 months.

Client Outcome Management System: Blood-Borne Virus Risk

The BBV exposure risk-taking domain of the COMS comprises 4 items from the BTOM-C [47] on injecting drug use and overdose. These items are part of the validated BTOM measurement tool developed by NSW Health [47]. They are designed to measure changes and outcomes in relation to injecting and other risky drug use practices.

Client Perspectives of Recovery

The Substance Use Recovery Evaluator (SURE) [43] is designed to measure recovery from drug and alcohol dependence. It was developed in close consultation with people in recovery and comprises 21 items across 5 domains (drinking and drug use, self-care, relationships, material resources, and outlook on life). Items are summed to generate domain scores and an overall recovery score, with higher scores indicating greater progress toward recovery. Evidence supports face and content validity, acceptability, and usability for people in recovery [43]. We have selected the SURE as it is the first patient-reported outcome measure to provide a multidomain assessment of recovery, as defined by adults with experience of addiction [19]. Holistic assessment across a range of domains is consistent with both service user needs [74] and recovery-oriented service provision [19], thereby increasing the relevance of our findings to both service users and service providers. The SURE will be administered in full at baseline and at 2-month follow-up, with items also administered via Smart Track.

Nested Qualitative Evaluation

Qualitative interviews will be conducted 2 months postbaseline to explore the experience and opinions of participants with diverse engagement with Smart Track. Participants will be purposively sampled according to their baseline characteristics, pattern of Smart Track usage, and responses to the 2-month follow-up assessment. Specifically, we wish to explore the experience and opinions of 2 groups of SMART Recovery participants: (1) those who attended SMART groups regularly and completed the ROM regularly (n=10) and (2) those who attended SMART groups regularly but did not complete the ROM regularly (n=10). A qualitative researcher independent of the research team will use a topic guide to ask additional open-ended questions of the selected participants (n=20) until the nominated sample size in both groups is reached. The research team will monitor recruitment to ensure that there is an adequate distribution of gender, main behavior of concern, geographical location, and group setting. Two corresponding groups of SMART Recovery facilitators (n=5 for each group) will also be recruited, namely (1) one group with members who regularly used Smart Track and (2) another group with members who did not regularly use Smart Track.

All interviews will be audio-recorded. A professional transcriber working under a confidentiality agreement will transcribe the recordings. The transcripts will be checked against the recordings for accuracy and deidentified (by removal of identifying information).

Study Outcomes

Primary and secondary endpoints are presented in Tables 3 and 4, respectively.

Table 3. Primary endpoints.
Primary objective and corresponding endpoints
To explore the feasibility of using Smart Track as part of SMARTa Recovery groups for the purposes of ROMb and tailored feedback
  • Proportion of eligible participants who consent to the study
  • Proportion of missing data for each of the ROM items/instruments at each week of administration, across the 2-month period of Smart Track usage
  • Costs associated with developing Smart Track and maintaining the app until the completion of data collection
  • Participant engagement with Smart Track, as indexed by data analytics captured daily across the data collection period
To explore the acceptability of using Smart Track as part of SMART Recovery groups for the purposes of ROM and tailored feedback
  • Detailed qualitative feedback from SMART Recovery group members and facilitators to explore their experience of and satisfaction with Smart Track (2-month follow-up)
  • Quality ratings as assessed by participant and facilitator ratings of the user version of the Mobile App Rating Scale (2-week follow-up) and Mobile App Rating Scale (2-month follow-up), respectively
  • Digital therapeutic alliance ratings as assessed by participant ratings of the Digital Working Alliance Inventory (2-week and 2-month follow-up)

aSMART: Self-Management and Recovery Training.

bROM: routine outcome monitoring.

Table 4. Secondary endpoints.
Secondary objective and corresponding endpoints
To explore how participants engage with the app and describe usage patterns
  • Demographic, clinical, and treatment factors (as measured by Client Service Receipt Inventory, COMSa, and SUREb at baseline and 2-month follow-up) associated with (in)completion of Smart Track ROMc items
  • Data analytics captured daily across the data collection period
To provide preliminary evidence for the psychometric properties of the ROM items administered by Smart Track
  • Internal reliability and convergent and divergent validity of COMS and SURE items administered by Smart Track (relative to the complete versions administered at baseline and 2-month follow-up)
To provide preliminary evidence for participant-reported outcomes in behaviors of concern, recovery, and mental health
  Participant-reported progress across the 2-month period of app usage in the following:
  • Addictive behaviors (COMS [41] and item adapted from the Screener for Substance and Behavioral Addictions [42])
  • Addiction recovery (SURE [43])
  • Mental health (Kessler [44,68])

aCOMS: Client Outcome Management System.

bSURE: Substance Use Recovery Evaluator.

cROM: routine outcome monitoring.

Participant Reimbursement

Consistent with the Australian guidelines for acknowledging the time and value of consumer participation [75], participants will be offered modest reimbursement for any time, travel, and inconvenience associated with participation in the study assessments (supermarket voucher to the value of AUD $19.60 for baseline and 2-month follow-up assessments; AUD $1=US $0.65).

Statistical Analysis

Primary Objectives

Given the primary objectives of exploring feasibility and acceptability, outcome data will primarily utilize descriptive statistics (eg, summarizing the recruitment rate, proportion of missing data, data analytics, MARS and uMARS quality ratings, and D-WAI alliance ratings). Descriptive statistics will be supplemented by the following cost and qualitative analyses.

Cost Analysis

A cost analysis will be conducted with the assistance of the health economics unit at the Hunter Medical Research Institute, Australia. The analysis will adopt a health provider perspective; it will measure and report the cost associated with developing and maintaining Smart Track. This is policy-relevant information as it estimates the resources required to translate the model of care to another location. Cost modeling will be conducted to report the direct costs of the additional resources required to develop and maintain Smart Track. The perspective adopted will be limited in the base case analysis to that of the health provider. Costs and resource use will be prospectively collected for the duration of the feasibility study and will be valued using a combination of hospital data from NSW Health, Medicare Benefits Schedule tariffs, and market rates. Downstream cost savings associated with hospitalization will also be explored.

Qualitative Evaluation

Qualitative data will be examined to inform the acceptability of Smart Track by exploring the participants’ and facilitators’ experiences of the perceived usefulness of ROM and any reason(s) for nonadherence. The analysis will proceed in 2 ways. First, we will identify key concepts and experiences to inform future development and/or refinement of Smart Track content, features, and/or procedures. Second, if there is sufficient data, an inductive approach [76] will be used to shed light on the ways in which individuals understand themselves and their actions (such as in the SMART Recovery group). The methodology also allows for individual beliefs and experiences to be positioned within broader social, service, and policy contexts, including factors such as drug treatment policy and service availability and social attitudes toward drugs and people who use them [77].

Secondary Objectives
App Engagement and Usage Patterns

The relationship between app usage (as indexed by frequency of ROM completion, number of missing items, and time to disengagement) and participant characteristics (demographic, clinical, and treatment variables) will be explored using linear regression. Furthermore, we intend to explore the ROM data graphically to see (1) whether particular patterns of usage can be characterized and (2) whether these patterns appear to be influenced by various participant characteristics. Potential patterns that emerge during this exploratory phase will be followed up using latent trajectory analysis. This will clarify whether app use increases, decreases, or follows some other pattern over time. Descriptive statistics will also be used to summarize app-generated data analytics.
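As an illustration of the planned regression of usage on participant characteristics, a minimal sketch is given below; the outcome and predictor names, and the simulated data, are hypothetical placeholders rather than prespecified analysis choices.

```python
# Sketch only: rom_completions and the predictors below are hypothetical placeholders
# for the usage indices and participant characteristics described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 100  # target sample size of the pilot
df = pd.DataFrame({
    "rom_completions": rng.poisson(4, n),      # eg, weekly ROM completions over 2 months
    "age": rng.integers(18, 70, n),
    "gender": rng.choice(["female", "male", "other"], n),
    "k10_baseline": rng.integers(10, 51, n),   # baseline psychological distress
    "prior_treatment": rng.choice(["yes", "no"], n),
})

model = smf.ols(
    "rom_completions ~ age + C(gender) + k10_baseline + C(prior_treatment)",
    data=df,
).fit()
print(model.summary())
```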

Preliminary Psychometrics of Smart Track ROM Items

Preliminary psychometrics for Smart Track items will be explored via sensitivity to change, internal consistency, test-retest reliability, convergent validity, and exploratory factor analysis. Internal consistency of Smart Track ROM items will be evaluated using the Cronbach alpha coefficient. Pearson correlation analysis will be used to examine the test-retest reliability of ROM scores for the first and second completion of each item set. Convergent validity will be examined using Pearson correlation analysis to explore the associations of initial ROM scores with the standardized measures (COMS and SURE) administered at baseline. Sensitivity to change will be examined via effect sizes, the reliable change index (RCI), and growth curve modeling. As appropriate, internal consistency, test-retest reliability, convergent validity, and the RCI will be further examined as a function of age group, gender, and primary behavior of concern.
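For concreteness, the sketch below shows how the internal consistency, test-retest, and convergent validity indices could be computed from item-level data; the item names and the simulated values are hypothetical and do not correspond to the actual Smart Track data structure.

```python
# Sketch of the planned reliability and validity indices, computed on simulated data;
# item names (sure1-sure5, etc) are hypothetical stand-ins for Smart Track ROM items.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach alpha for a respondents x items matrix (complete cases)."""
    items = items.dropna()
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=100)
items = pd.DataFrame(
    {f"sure{i}": latent + rng.normal(scale=0.8, size=100) for i in range(1, 6)}
)

alpha = cronbach_alpha(items)                                  # internal consistency
week1 = items.sum(axis=1)
week2 = week1 + rng.normal(scale=1.0, size=100)                # second completion
r_retest, _ = pearsonr(week1, week2)                           # test-retest reliability
baseline_total = latent * 5 + rng.normal(scale=2.0, size=100)  # eg, full SURE at baseline
r_conv, _ = pearsonr(week1, baseline_total)                    # convergent validity

print(f"alpha={alpha:.2f}, test-retest r={r_retest:.2f}, convergent r={r_conv:.2f}")
```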

Effect sizes will be estimated for participants’ average ROM change scores between the first and last sessions. To explore the minimum reliable amount of change in scores (while accounting for measurement error [78]), we will calculate the RCI between participants’ first and last ROM scores. The Jacobson and Truax criteria [78] will be applied to describe the proportion of participants who improved, did not change, or deteriorated. RCIs will also be calculated for the standardized measures administered at baseline and follow-up to allow comparisons with the ROM items. Growth curve modeling will be used to estimate the average rates of change in ROM scores across the 2-month period of ROM usage.
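For reference, the Jacobson and Truax RCI is conventionally computed as follows (a standard formulation stated here for clarity; the protocol does not prescribe the exact variance and reliability estimates to be used):

\[ \mathrm{RCI} = \frac{x_{2} - x_{1}}{S_{\mathrm{diff}}}, \qquad S_{\mathrm{diff}} = \sqrt{2\,S_{E}^{2}}, \qquad S_{E} = s_{1}\sqrt{1 - r_{xx}} \]

where \(x_{1}\) and \(x_{2}\) are a participant’s first and last scores, \(s_{1}\) is the baseline standard deviation, and \(r_{xx}\) is the reliability of the measure; \(|\mathrm{RCI}| > 1.96\) is conventionally taken to indicate change unlikely to be attributable to measurement error.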

Method of Dealing With Missing Data

Descriptive analyses will use all available data. Inferential analyses (eg, linear regression and growth curve modeling) will use multiple imputation (with chained regression equations) as the primary method of dealing with missing data. The number of imputed data sets will depend on the fraction of missing data, but the stability of results will be assessed over a range of imputation numbers.
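One possible implementation of multiple imputation with chained equations is sketched below, using scikit-learn's IterativeImputer on simulated data; the protocol does not name a software package, and the variables, missingness pattern, and model formula are hypothetical placeholders.

```python
# Sketch of multiple imputation with chained equations; variables are hypothetical.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 100
df = pd.DataFrame({
    "age": rng.integers(18, 70, n).astype(float),
    "k10_baseline": rng.integers(10, 51, n).astype(float),
    "rom_completions": rng.poisson(4, n).astype(float),
})
df.loc[rng.random(n) < 0.2, "k10_baseline"] = np.nan  # induce ~20% missingness

n_imputations = 20  # chosen in light of the fraction of missing data
estimates = []
for seed in range(n_imputations):
    imputer = IterativeImputer(sample_posterior=True, random_state=seed)
    completed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
    fit = smf.ols("rom_completions ~ age + k10_baseline", data=completed).fit()
    estimates.append(fit.params)

# Pool point estimates across imputed data sets (Rubin's rules would also combine
# within- and between-imputation variances; omitted here for brevity).
pooled = pd.concat(estimates, axis=1).mean(axis=1)
print(pooled)
```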

Participant Outcomes

Participants’ responses to the COMS and SURE (as captured via interview and ROM items across the 2-month data collection period) will be summarized using descriptive statistics.

Power

We aim to recruit participants from 13 sites that conduct a combined total of 25 groups per week. Assuming an average of 6 eligible participants per group per week and a target sample size of 100 participants, we anticipate recruitment to take between 4 and 6 weeks (for an estimated recruitment rate between 11% and 17%). A sample of this size will enable the estimation of the recruitment rate and 95% CI, with a margin of error of no more than 7%.
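For context, the half-width (margin of error) of a 95% CI around an estimated proportion such as the recruitment rate is commonly approximated by the normal-approximation expression below, where \(\hat{p}\) is the observed rate and \(n\) is the number of eligible people approached; this is stated as a general formula rather than the specific calculation underpinning the quoted 7% figure.

\[ \mathrm{MoE} \approx 1.96 \sqrt{\frac{\hat{p}\,(1-\hat{p})}{n}} \]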


Results

At the time of submission, 13 sites (25 groups per week) had agreed to be involved. Funding was awarded on August 14, 2017, and ethics approval was granted on April 26, 2018 (HREC/18/WGONG/34; 2018/099). Enrollment is due to commence in July 2019. Data collection is due to be finalized in October 2019.


Discussion

Principal Findings

Integrating ROM and tailored feedback into SMART Recovery groups is an important step toward improving client care [5,6,79]. Given the dearth of published research specifically examining the effectiveness of SMART Recovery [27], ROM provides the opportunity to establish an evidence base for SMART Recovery. Improved engagement with ROM and feedback requires innovative solutions [9]. To overcome the current limitations [9,18,80], in this project, we have harnessed technology to develop a fee-free mHealth app that provides interactive, client-centered, multidimensional progress monitoring. Written and visual feedback is generated automatically and is available almost instantly. Consistent with the need to improve the quality of mHealth solutions [81], Smart Track is grounded in theory, informed by an in-depth understanding of the needs and opinions of SMART Recovery participants and facilitators, and will undergo methodologically rigorous evaluation.

Conclusions

To the best of our knowledge, this study will be the first to use ROM and tailored feedback within a mutual support group setting. Our study design will provide an opportunity to identify the acceptability of a novel mHealth ROM and feedback app within this setting and provide detailed information on the factors that help to promote or hinder the use of ROM within this context. The study will provide important contextual information to inform the development of future intervention studies focused on the effectiveness of adding ROM plus feedback to SMART Recovery. Further, should Smart Track prove feasible and acceptable, this project offers a new tool that service providers, policy makers, and researchers could one day use to understand the impact of SMART Recovery.

Acknowledgments

We gratefully acknowledge the creative and technical expertise of GHO (Customer Experience Agency, Sydney). Ryan Chao (Executive Creative Director) provided overall creative direction and led the user experience design. James Legge (Executive Strategy Director) led the strategy and facilitated our workshops. Marcos Martini (BBA Technical Lead) was responsible for the overall development of the app across iOS and Android. Sharon Peng (UX/UI Designer) designed the user experience and user interface and conducted the usability tests. Phoebe Calcutt (Project Manager) managed the overall delivery of the app. We also wish to acknowledge the time and expert insights from the members of our steering committee and the valuable support and contributions made by SMART Recovery participants and facilitators to the development of Smart Track and the conduct of this research. Funding for this research was provided by the NSW Ministry of Health under the NSW Health Alcohol and Other Drugs Early Intervention Innovation Grant Scheme. The funding body did not directly contribute to the design, conduct, analysis, write-up, and submission of this research for publication and does not have ultimate authority over any of these activities.

Conflicts of Interest

RM is the executive director of SMART Recovery Australia. AA is employed by SMART Recovery as the national program manager and trainer. PK, FD, ALB, AS, LH, VM, BL, AKB, JK, and AA volunteer as members of the SMART Recovery Australia research advisory committee. The potential and/or perceived conflict of interest is negligible. The role of study investigators on the research advisory committee and/or as an employee of SMART Recovery is freely available on the SMART Recovery Australia website (and study participants can be directed to this information as required). Furthermore, the team responsible for informing the study design and overseeing the conduct of the study and data analysis also comprises researchers, clinicians, and statisticians independent of SMART Recovery. An independent qualitative researcher will collect and analyze the qualitative data, and an independent statistical team will conduct the quantitative and economic analyses. No financial conflicts of interest exist. Author contributions [82] and the composition of the expert advisory and steering committees are described in Multimedia Appendix 4.

Multimedia Appendix 1

Study reporting and registration.

PDF File (Adobe PDF File), 1361 KB

Multimedia Appendix 2

Content and administration of routine outcome monitoring items.

DOCX File , 27 KB

Multimedia Appendix 3

Data management, monitoring and dissemination.

DOCX File , 23 KB

Multimedia Appendix 4

Roles and responsibilities.

DOCX File , 23 KB

Multimedia Appendix 5

Peer Review Summary by NSW Health.

PDF File (Adobe PDF File), 284 KB

  1. Burgess P, Pirkis J, Coombs T. Routine outcome measurement in Australia. Int Rev Psychiatry 2015;27(4):264-275. [CrossRef] [Medline]
  2. The National Institute for Health and Care Excellence. Drug Use Disorders in Adults   URL: https://www.nice.org.uk/guidance/qs23 [accessed 2020-05-18]
  3. The National Institute for Health and Care Excellence. 2011. Alcohol-use Disorders: Diagnosis, Assessment and Management of Harmful Drinking (High-Risk Drinking) and Alcohol Dependence   URL: https://www.nice.org.uk/guidance/cg115 [accessed 2020-05-26]
  4. Lambert MJ, Harmon C, Slade K, Whipple JL, Hawkins EJ. Providing feedback to psychotherapists on their patients' progress: clinical results and practice suggestions. J Clin Psychol 2005 Feb;61(2):165-174. [CrossRef] [Medline]
  5. Goodman JD, McKay JR, DePhilippis D. Progress monitoring in mental health and addiction treatment: a means of improving care. Prof Psychol Res Pr 2013 Aug;44(4):231-246. [CrossRef]
  6. Carlier IV, Eeden WA. Routine outcome monitoring in mental health care and particularly in addiction treatment: evidence-based clinical and research recommendations. J Addict Res Ther 2017;8(4):-. [CrossRef]
  7. Kendrick T, El-Gohary M, Stuart B, Gilbody S, Churchill R, Aiken L, et al. Routine use of patient reported outcome measures (PROMs) for improving treatment of common mental health disorders in adults. Cochrane Database Syst Rev 2016 Jul 13;7:CD011119 [FREE Full text] [CrossRef] [Medline]
  8. Shimokawa K, Lambert MJ, Smart DW. Enhancing treatment outcome of patients at risk of treatment failure: meta-analytic and mega-analytic review of a psychotherapy quality assurance system. J Consult Clin Psychol 2010 Jun;78(3):298-311. [CrossRef] [Medline]
  9. Boswell JF, Kraus DR, Miller SD, Lambert MJ. Implementing routine outcome monitoring in clinical practice: benefits, challenges, and solutions. Psychother Res 2015;25(1):6-19. [CrossRef] [Medline]
  10. Agarwal S, LeFevre AE, Lee J, L'Engle K, Mehl G, Sinha C, WHO mHealth Technical Evidence Review Group. Guidelines for reporting of health interventions using mobile phones: mobile health (mhealth) evidence reporting and assessment (mERA) checklist. Br Med J 2016 Mar 17;352:i1174. [CrossRef] [Medline]
  11. Boudreaux ED, Waring ME, Hayes RB, Sadasivam RS, Mullen S, Pagoto S. Evaluating and selecting mobile health apps: strategies for healthcare providers and healthcare organizations. Transl Behav Med 2014 Dec;4(4):363-371 [FREE Full text] [CrossRef] [Medline]
  12. Milward J, Day E, Wadsworth E, Strang J, Lynskey M. Mobile phone ownership, usage and readiness to use by patients in drug treatment. Drug Alcohol Depend 2015 Jan 1;146:111-115. [CrossRef] [Medline]
  13. Drumm JW, Morne S, Davey M. Deloitte US. 2017. Smart Everything, Everywhere: Mobile Consumer Survey 2017 - The Australian Cut   URL: https://www2.deloitte.com/au/mobile-consumer-survey [accessed 2020-05-26]
  14. Garnett C, Crane D, West R, Brown J, Michie S. Identification of behavior change techniques and engagement strategies to design a smartphone app to reduce alcohol consumption using a formal consensus method. JMIR Mhealth Uhealth 2015 Jun 29;3(2):e73 [FREE Full text] [CrossRef] [Medline]
  15. Han M, Lee E. Effectiveness of mobile health application use to improve health behavior changes: a systematic review of randomized controlled trials. Healthc Inform Res 2018 Jul;24(3):207-226 [FREE Full text] [CrossRef] [Medline]
  16. Gondek D, Edbrooke-Childs J, Fink E, Deighton J, Wolpert M. Feedback from outcome measures and treatment effectiveness, treatment efficiency, and collaborative practice: a systematic review. Adm Policy Ment Health 2016 May;43(3):325-343 [FREE Full text] [CrossRef] [Medline]
  17. Davidson K, Perry A, Bell L. Would continuous feedback of patient's clinical outcomes to practitioners improve NHS psychological therapy services? Critical analysis and assessment of quality of existing studies. Psychol Psychother 2015 Mar;88(1):21-37. [CrossRef] [Medline]
  18. Carlier I, Meuldijk D, van Vliet IM, van Fenema E, van der Wee NJ, Zitman F. Routine outcome monitoring and feedback on physical or mental health status: evidence and theory. J Eval Clin Pract 2012 Feb;18(1):104-110. [CrossRef] [Medline]
  19. NSW Mental Health Commission. 2014. Living Well: A Strategic Plan for Mental Health in NSW 2014-2024   URL: https:/​/nswmentalhealthcommission.​com.au/​resources/​living-well-strategic-plan-for-mental-health-in-nsw-2014-2024 [accessed 2020-05-18]
  20. Finn SE, Tonsager ME. Therapeutic effects of providing MMPI-2 test feedback to college students awaiting therapy. Psychol Assess 1992;4(3):278-287. [CrossRef]
  21. Public Health England. 2015. Improving Mutual Aid Engagement: A Professional Development Resource   URL: http:/​/webarchive.​nationalarchives.gov.uk/​20170807160728/​http:/​/www.​nta.nhs.uk/​r-Evidence%20and%20Guidance.​aspx [accessed 2020-05-18]
  22. Kelly J, Yeterian J. The role of mutual-help groups in extending the framework of treatment. Alcohol Res Health 2011;33(4):350-355 [FREE Full text] [Medline]
  23. Kaskutas LA, Borkman TJ, Laudet A, Ritter LA, Witbrodt J, Subbaraman MS, et al. Elements that define recovery: the experiential perspective. J Stud Alcohol Drugs 2014 Nov;75(6):999-1010 [FREE Full text] [CrossRef] [Medline]
  24. The National Institute for Health and Care Excellence. 2011. Alcohol-Use Disorders: Diagnosis, Assessment and Management of Harmful Drinking (High-Risk Drinking) and Alcohol Dependence   URL: https://www.nice.org.uk/guidance/cg115 [accessed 2020-05-18]
  25. SMART Recovery Australia. 2019. Our Reach   URL: https://smartrecoveryaustralia.com.au/training-register-for-training/ [accessed 2020-05-18]
  26. Ferri M, Amato L, Davoli M. Alcoholics Anonymous and other 12-step programmes for alcohol dependence. Cochrane Database Syst Rev 2006 Jul 19(3):CD005032. [CrossRef] [Medline]
  27. Beck AK, Forbes E, Baker AL, Kelly PJ, Deane FP, Shakeshaft A, et al. Systematic review of SMART recovery: outcomes, process variables, and implications for research. Psychol Addict Behav 2017 Feb;31(1):1-20. [CrossRef] [Medline]
  28. Kelly PJ, Robinson LD, Baker AL, Deane FP, McKetin R, Hudson S, et al. Polysubstance use in treatment seekers who inject amphetamine: drug use profiles, injecting practices and quality of life. Addict Behav 2017 Aug;71:25-30. [CrossRef] [Medline]
  29. Chan A, Tetzlaff J, Gøtzsche PC, Altman DG, Mann H, Berlin JA, et al. SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials. Br Med J 2013 Jan 8;346:e7586 [FREE Full text] [CrossRef] [Medline]
  30. Eysenbach G, CONSORT-EHEALTH Group. CONSORT-EHEALTH: improving and standardizing evaluation reports of web-based and mobile health interventions. J Med Internet Res 2011 Dec 31;13(4):e126 [FREE Full text] [CrossRef] [Medline]
  31. Kelly PJ, Raftery D, Deane FP, Baker AL, Hunt D, Shakeshaft A. From both sides: participant and facilitator perceptions of SMART recovery groups. Drug Alcohol Rev 2017 May;36(3):325-332. [CrossRef] [Medline]
  32. Horvath AT, Yeterian J. SMART recovery: self-empowering, science-based addiction recovery support. J Groups Addict Recover 2012 Apr;7(2-4):102-117. [CrossRef]
  33. Gray RM, Kelly PJ, Beck AK, Baker AL, Deane FP, Neale J, et al. A qualitative exploration of SMART recovery meetings in Australia and the role of a digital platform to support routine outcome monitoring. Addict Behav 2020 Feb;101:106144. [CrossRef] [Medline]
  34. GHO Sydney.   URL: http://www.ghosydney.com [accessed 2020-05-26]
  35. The National Institute for Health and Care Excellence. 2008. Drug Misuse: Psychosocial Interventions   URL: https://www.nice.org.uk/guidance/cg51/evidence/drug-misuse-psychosocial-interventions-full-guideline-pdf-195261805 [accessed 2020-05-18]
  36. SMART Recovery Australia. 2015. SMART Recovery Facilitator Training Manual: Practical Information and Tools to Help You Facilitate a SMART Recovery Group   URL: http://www.anzctr.org.au/Steps11and12/377336-(Uploaded-17-04-2019-09-10-15)-Study-related%20document.docx [accessed 2020-05-26]
  37. SMART Recovery Australia. 2016. SMART Recovery Australia Participants' Manual: Tools and Strategies to Help You Manage Addictive Behaviours   URL: http://www.anzctr.org.au/Steps11and12/377336-(Uploaded-17-04-2019-09-10-15)-Study-related%20document.docx [accessed 2020-05-26]
  38. Wilson KG, Sandoz EK, Kitchens J, Roberts M. The valued living questionnaire: defining and measuring valued action within a behavioral framework. Psychol Rec 2010;60(2):249-272. [CrossRef]
  39. Miller W, Rollnick S. Motivational Interviewing: Helping People Change. Third Edition. New York, USA: Guilford Press; 2013.
  40. Nikolaos KF. Assessment of homework completion. In: Kazantzis N, Deane FP, Ronan KR, L'Abate L, editors. Using Homework Assignments in Cognitive Behavior Therapy. New York, USA: Routledge Taylor & Francis Group; 2005:50-60.
  41. NADA. 2009. NGO Drug and Alcohol and Mental Health Information Management Project: Determining the Treatment Outcomes Data Collection Set   URL: https://www.nada.org.au/wp-content/uploads/2018/06/determining_the_data_collection_set_dec09.pdf [accessed 2020-05-15]
  42. Schluter MG, Hodgins DC, Wolfe J, Wild TC. Can one simple questionnaire assess substance-related and behavioural addiction problems? Results of a proposed new screener for community epidemiology. Addiction 2018 Aug;113(8):1528-1537. [CrossRef] [Medline]
  43. Neale J, Vitoratou S, Finch E, Lennon P, Mitcheson L, Panebianco D, et al. Development and validation of 'SURE': a patient reported outcome measure (PROM) for recovery from drug and alcohol dependence. Drug Alcohol Depend 2016 Aug 1;165:159-167 [FREE Full text] [CrossRef] [Medline]
  44. Kessler RC, Andrews G, Colpe LJ, Hiripi E, Mroczek DK, Normand SL, et al. Short screening scales to monitor population prevalences and trends in non-specific psychological distress. Psychol Med 2002 Aug;32(6):959-976. [CrossRef] [Medline]
  45. da Rocha NS, Power MJ, Bushnell DM, Fleck MP. The EUROHIS-QOL 8-item index: comparative psychometric properties to its parent WHOQOL-BREF. Value Health 2012 May;15(3):449-457 [FREE Full text] [CrossRef] [Medline]
  46. Network of Alcohol and other Drugs Agencies. 2012. Using the Client Outcomes Management System (COMS)   URL: https://www.nada.org.au/wp-content/uploads/2019/03/Technical-Report_COMS-Data-Report-2013-Final.pdf [accessed 2020-05-26]
  47. Lawrinson P, Copeland J, Indig D. National Drug and Alcohol Research Centre (NDARC). 2003. The Brief Treatment Outcome Measure: Opioid Maintenance Pharmacotherapy (BTOM) Manual   URL: https://ndarc.med.unsw.edu.au/resource/brief-treatment-outcome-measure-opioid-maintenance-pharmacotherapy-btom-manual [accessed 2020-05-26]
  48. Noar SM, Benac CN, Harris MS. Does tailoring matter? Meta-analytic review of tailored print health behavior change interventions. Psychol Bull 2007 Jul;133(4):673-693. [CrossRef] [Medline]
  49. Hoeppner BB, Schick MR, Kelly LM, Hoeppner SS, Bergman B, Kelly JF. There is an app for that - or is there? A content analysis of publicly available smartphone apps for managing alcohol use. J Subst Abuse Treat 2017 Nov;82:67-73. [CrossRef] [Medline]
  50. Duncan EA, Murray J. The barriers and facilitators to routine outcome measurement by allied health professionals in practice: a systematic review. BMC Health Serv Res 2012 May 22;12:96 [FREE Full text] [CrossRef] [Medline]
  51. Berrouiguet S, Baca-García E, Brandt S, Walter M, Courtet P. Fundamentals for future mobile-health (mHealth): a systematic review of mobile phone and web-based text messaging in mental health. J Med Internet Res 2016 Jun 10;18(6):e135 [FREE Full text] [CrossRef] [Medline]
  52. Stoyanov SR, Hides L, Kavanagh DJ, Zelenko O, Tjondronegoro D, Mani M. Mobile app rating scale: a new tool for assessing the quality of health mobile apps. JMIR Mhealth Uhealth 2015 Mar 11;3(1):e27 [FREE Full text] [CrossRef] [Medline]
  53. Stoyanov SR, Hides L, Kavanagh DJ, Wilson H. Development and validation of the user version of the mobile application rating scale (uMARS). JMIR Mhealth Uhealth 2016 Jun 10;4(2):e72 [FREE Full text] [CrossRef] [Medline]
  54. Henson P, Wisniewski H, Hollis C, Keshavan M, Torous J. Digital mental health apps and the therapeutic alliance: initial review. BJPsych Open 2019 Jan;5(1):e15 [FREE Full text] [CrossRef] [Medline]
  55. Hatcher RL, Gillaspy JA. Development and validation of a revised short version of the working alliance inventory. Psychother Res 2006 Jan;16(1):12-25. [CrossRef]
  56. Munder T, Wilmers F, Leonhart R, Linster HW, Barth J. Working alliance inventory-short revised (WAI-SR): psychometric properties in outpatients and inpatients. Clin Psychol Psychother 2010;17(3):231-239. [CrossRef] [Medline]
  57. Falkenström F, Hatcher RL, Holmqvist R. Confirmatory factor analysis of the patient version of the working alliance inventory-short form revised. Assessment 2015 Oct;22(5):581-593. [CrossRef] [Medline]
  58. Smits D, Luyckx K, Smits D, Stinckens N, Claes L. Structural characteristics and external correlates of the working alliance inventory-short form. Psychol Assess 2015 Jun;27(2):545-551. [CrossRef] [Medline]
  59. Beecham J, Knapp M. Costing psychiatric interventions. In: Thornicroft G, Brewin CR, Wing J, editors. Measuring Mental Health Needs. London, UK: Gaskell/Royal College of Psychiatrists; 1992.
  60. Medibank Private Health Insurance. 2013. The Case for Mental Health Reform in Australia: A Review of Expenditure and System Design   URL: https://www.medibank.com.au/Client/Documents/Pdfs/The_Case_for_Mental_Health_Reform_in_Australia.pdf [accessed 2020-05-26]
  61. Major G, Ling R, Searles A, Niddrie F, Kelly A, Holliday E, et al. The costs of confronting osteoporosis: cost study of an Australian fracture liaison service. JBMR Plus 2019 Jan;3(1):56-63 [FREE Full text] [CrossRef] [Medline]
  62. Ling R, Rush A, Carter C, Carpenter J, Watson PH, Byrne JA, et al. An Australian biobank certification scheme: a study of economic costs to participating biobanks. Biopreserv Biobank 2018 Feb;16(1):53-58. [CrossRef] [Medline]
  63. Yoong SL, Grady A, Wiggers J, Flood V, Rissel C, Finch M, et al. A randomised controlled trial of an online menu planning intervention to improve childcare service adherence to dietary guidelines: a study protocol. BMJ Open 2017 Sep 11;7(9):e017498 [FREE Full text] [CrossRef] [Medline]
  64. NSW Health - NSW Government. 2015. NSW Minimum Data Set (MDS) for Drug and Alcohol Treatment Services   URL: https://www.health.nsw.gov.au/aod/Pages/minimum-data-set.aspx [accessed 2020-05-26]
  65. Gossop M, Darke S, Griffiths P, Hando J, Powis B, Hall W, et al. The severity of dependence scale (SDS): psychometric properties of the SDS in English and Australian samples of heroin, cocaine and amphetamine users. Addiction 1995 May;90(5):607-614. [CrossRef] [Medline]
  66. Hides L, Dawe S, Young R, Kavanagh D. The reliability and validity of the severity of dependence scale for detecting cannabis dependence in psychosis. Addiction 2007 Jan;102(1):35-40. [CrossRef] [Medline]
  67. Kaye S, Darke S. Determining a diagnostic cut-off on the severity of dependence scale (SDS) for cocaine dependence. Addiction 2002 Jun;97(6):727-731. [CrossRef] [Medline]
  68. Kessler RC, Barker PR, Colpe LJ, Epstein JF, Gfroerer JC, Hiripi E, et al. Screening for serious mental illness in the general population. Arch Gen Psychiatry 2003 Feb;60(2):184-189. [CrossRef] [Medline]
  69. Andrews G, Slade T. Interpreting scores on the Kessler psychological distress scale (K10). Aust N Z J Public Health 2001 Dec;25(6):494-497. [CrossRef] [Medline]
  70. Hides L, Lubman DI, Devlin H, Cotton S, Aitken C, Gibbie T, et al. Reliability and validity of the Kessler 10 and patient health questionnaire among injecting drug users. Aust N Z J Psychiatry 2007 Feb;41(2):166-168. [CrossRef] [Medline]
  71. Schmidt S, Mühlan H, Power M. The EUROHIS-QOL 8-item index: psychometric results of a cross-cultural field study. Eur J Public Health 2006 Aug;16(4):420-428. [CrossRef] [Medline]
  72. Schmidt S, Power M, Bullinger M, Nosikov A. The conceptual relationship between health indicators and quality of life: results from the cross-cultural analysis of the EUROHIS field study. Clin Psychol Psychother 2005 Jan;12(1):28-49. [CrossRef]
  73. Schmidt S, Power M. Cross-cultural analyses of determinants of quality of life and mental health: results from the Eurohis study. Soc Indic Res 2006 May;77(1):95-138 [FREE Full text] [CrossRef]
  74. Neale J, Tompkins C, Wheeler C, Finch E, Marsden J, Mitcheson L, et al. 'You're all going to hate the word "recovery" by the end of this': service users' views of measuring addiction recovery. Drugs 2014 Aug 4;22(1):26-34. [CrossRef]
  75. Health Issues Centre. 2015. Paying and Reimbursing Consumers: Position Statement   URL: https://www.healthissuescentre.org.au [accessed 2020-05-15]
  76. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol 2006 Jan;3(2):77-101. [CrossRef]
  77. Kellehear A. The Unobtrusive Researcher: A Guide to Methods. Sydney, Australia: Allen & Unwin; 1993.
  78. Jacobson NS, Truax P. Clinical significance: a statistical approach to defining meaningful change in psychotherapy research. J Consult Clin Psychol 1991 Feb;59(1):12-19. [CrossRef] [Medline]
  79. Lambert MJ, Whipple JL, Kleinstäuber M. Collecting and delivering progress feedback: a meta-analysis of routine outcome monitoring. Psychotherapy (Chic) 2018 Dec;55(4):520-537. [CrossRef] [Medline]
  80. Jensen-Doss A, Haimes EM, Smith AM, Lyon AR, Lewis CC, Stanick CF, et al. Monitoring treatment progress and providing feedback is viewed favorably but rarely used in practice. Adm Policy Ment Health 2018 Jan;45(1):48-61 [FREE Full text] [CrossRef] [Medline]
  81. Michie S, Yardley L, West R, Patrick K, Greaves F. Developing and evaluating digital interventions to promote behavior change in health and health care: recommendations resulting from an international workshop. J Med Internet Res 2017 Jun 29;19(6):e232 [FREE Full text] [CrossRef] [Medline]
  82. International Committee of Medical Journal Editors. 2019. Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals   URL: http://www.icmje.org/icmje-recommendations.pdf [accessed 2020-05-18]


AOD: alcohol and other drug
BBV: blood-borne virus
BTOM-C: Brief Treatment Outcome Measure-Concise
COMS: Client Outcomes Management System
CSRI: Client Service Receipt Inventory
D-WAI: Digital Working Alliance Inventory
HREC: Human Research Ethics Committee
ISLHD: Illawarra Shoalhaven Local Health District
K10: Kessler 10 scale
MARS: Mobile App Rating Scale
MDS: minimum data set
mHealth: mobile health
NADA: Network of Alcohol and other Drugs Agencies
NSW: New South Wales
RCI: reliable change index
ROM: routine outcome monitoring
SMART: Self-Management and Recovery Training
SURE: Substance Use Recovery Evaluator
uMARS: Mobile Application Rating Scale–User Version


Edited by G Eysenbach; submitted 19.06.19; peer-reviewed by F Mckay, A Najm; comments to author 24.02.20; revised version received 25.02.20; accepted 26.02.20; published 09.07.20

Copyright

©Peter J Kelly, Alison K Beck, Amanda L Baker, Frank P Deane, Leanne Hides, Victoria Manning, Anthony Shakeshaft, Briony Larance, Joanne Neale, John Kelly, Christopher Oldmeadow, Andrew Searles, Carla Treloar, Rebecca M Gray, Angela Argent, Ryan McGlaughlin. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 09.07.2020.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Research Protocols, is properly cited. The complete bibliographic information, a link to the original publication on http://www.researchprotocols.org, as well as this copyright and license information must be included.