Published on 3.11.2023 in Vol 12 (2023)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/44469.
Cultural Adaptation of the Actionable Health App Evaluation in Japan: Protocol for a Web-Based Modified Delphi Expert Consensus Study


Protocol

1School of Psychological Sciences, University of Human Environments, Ehime, Japan

2Research Center for Child Mental Development, Chiba University, Chiba, Japan

3School of Medicine, Fukushima Medical University, Fukushima, Japan

4Graduate School of Human Sciences, Osaka University, Osaka, Japan

Corresponding Author:

Kengo Yokomitsu, PhD

School of Psychological Sciences

University of Human Environments

9-12, Dogo-himata, Matsuyama

Ehime, 790-0825

Japan

Phone: 81 89 926 7007

Email: yokomitsuken5@gmail.com


Background: With the rise in the number of people experiencing mental health disorders and the growing difficulty of accessing mental health care, the demand for accessible mental health care services has increased. Mobile devices allow people to receive care in their daily lives without restrictions on time or location. However, the majority of publicly available mobile health apps are not evidence-based, and the top-rated apps are not always safe or user-friendly and may not offer clinically beneficial results.

Objective: This study aims to create a cultural adaptation of the American Psychiatric Association’s comprehensive app evaluation framework in Japan using a web-based modified Delphi expert consensus.

Methods: This web-based modified Delphi study comprises the development of the Japanese version of the comprehensive app evaluation framework and 3 Delphi rounds. In the first round, our working group will send a questionnaire to the panelists, who will then complete it. In the second and third rounds, the working group will send a questionnaire together with a summary of the panelists’ answers from the previous round, and the panelists will answer the questionnaire with reference to this summary. The summarization procedure is automated to reduce the biases that can arise when panelists’ answers are summarized and fed back to them; the working group will send only the result of the summarization with the next round’s questionnaire. All interactions between the working group and the panelists will be conducted on Qualtrics (Qualtrics Japan LLC), a web-based questionnaire platform. To culturally validate the comprehensive mental health app evaluation framework, participants from the following three categories will be recruited in Japan: (1) researchers, (2) practitioners, and (3) app developers.

Results: This study received funding from a crowdfunding campaign in Japan (April 2023). Translation of the 105 original app evaluation item questions was completed in December 2022. The Delphi study began in January 2023 and will be completed in December 2023.

Conclusions: While the need for treatment using mental health apps is increasing, no framework that can be used to develop a centralized database for health apps is available or accessible, and no consensus has been reached among stakeholders in Japan about an appropriate framework. The results of the web-based modified Delphi method presented in this paper may provide direction for the development and use of mental health apps in the future among the relevant stakeholders. Furthermore, this study will enhance recognition of the framework among researchers, clinicians, mental health app developers, and users, in addition to devising new instruments to help users or practitioners efficiently choose the right app for their situations.

International Registered Report Identifier (IRRID): PRR1-10.2196/44469

JMIR Res Protoc 2023;12:e44469

doi:10.2196/44469

Keywords



Introduction

Problems with mental health, such as mental health disorders, affect people internationally. According to the Global Burden of Diseases, Injuries, and Risk Factors Study, the 2 most disabling mental disorders, depression and anxiety disorders, ranked among the top 25 leading causes of burden worldwide in 2019 [1]. In addition, the COVID-19 pandemic has had an enormous impact on mental health. For example, the World Health Organization [2] indicated that there was a remarkable increase in mental health problems in the general population; loneliness and a positive COVID-19 diagnosis increased the risk of suicidal thoughts. The COVID-19 pandemic led to a 27.6% increase in cases of major depressive disorders and a 25.6% increase in cases of anxiety disorders in 2020 [3]. Furthermore, COVID-19 has made it more difficult for people facing mental health problems to access outpatient mental health services [2].

With the rise of mental health problems and the increasing difficulty of accessing in-person mental health care services, the demand for accessible mental health care services has increased significantly. Mobile devices, which most people now own, make it possible for individuals to access care in their daily lives without being restricted by place or time [4-6]. As of 2017, more than 318,000 health apps existed worldwide, twice as many as were available in 2015; of these, 490 apps targeted mental health and behavioral disorders [7]. In Japan, a systematic search of the Google Play Store and Apple App Store between June 4 and June 11, 2021, found 172 available mental health apps [8]. Moreover, a previous study of Japanese university students found that the mobile health (mHealth) app usage rate was 32.1% [9]. Considering that 36% of adults who owned smartphones or tablets in a national sample of the United States had mHealth apps on their devices [10], the usage rate of mental health apps in Japan is estimated to be about the same as in other countries.

Recent reviews have found that mHealth apps could be effective in reducing symptoms of depression and anxiety in adults and youth [11,12]. Unfortunately, some of the apps available are unreliable or harmful. Most publicly available mHealth apps are not evidence-based, and top-rated apps on the app store are not always safe or user-friendly and may not offer clinically beneficial results [13,14]. Similar problems have been reported in Japan [15].

It is difficult for service users (eg, clinical therapists, patients, the general public, and those without mental health concerns) to verify that apps are safe, evidence-based, usable, and clinically beneficial [16]. Multiple scales and frameworks have been developed to evaluate mHealth apps [17,18], including scales for evaluating the quality, safety, usability, and importance of apps [19]. However, the majority of these scales were developed for app developers to evaluate their own apps [19], and more inclusive tools are needed to help users select the right app based on their conditions and preferences.

The App Evaluation Model framework proposed by the American Psychiatric Association (APA) is useful for addressing this issue [20]. The framework was constructed via a 6-step process that involved harmonizing the 961 app evaluation questions included in 45 existing app evaluation frameworks; removing duplicate, redundant, and nonrelevant questions; and grouping the remaining 357 questions into 5 priority levels: background information, privacy and security, evidence-based, ease of use, and data integration [21]. Even then, there was no centralized database where users could see at a glance how various health apps perform when assessed with the framework. Therefore, Lagan et al [16] translated the APA framework into a set of objective questions that could be published on the internet; the questions from the APA model were operationalized into 105 objective questions that are either binary or numeric. The 105 objective questions are aligned with the levels of the APA framework, with the 5 levels arranged in a pyramid format to reinforce the need to first consider access, safety, and privacy. Several questions were added or altered to reflect ongoing feedback from stakeholders after a 2-day summit in December 2019 [16]. The main difference between the user version of the Mobile App Rating Scale, the most widely used mHealth app evaluation scale, and the app evaluation model is that the app evaluation model does not score questions or produce summary scores for app safety, usability, and so on; instead, it allows users to judge what is important and a good match for them.
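
To make the contrast with score-based scales concrete, the minimal Python sketch below filters apps by whichever objective questions a user marks as important instead of computing a summary score. The question IDs and the data structure are illustrative assumptions, not the actual 105 questions or any software associated with the framework.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class AppRecord:
    # One app's answers to objective questions; keys are illustrative
    # question IDs and values are the binary or numeric answers.
    name: str
    answers: dict

def matches(app: AppRecord, required: dict) -> bool:
    # An app "matches" if it satisfies every criterion the user marked as
    # important; no summary score is ever computed.
    return all(app.answers.get(q) == v for q, v in required.items())

apps = [
    AppRecord("App A", {"has_privacy_policy": True, "cost_usd": 0}),
    AppRecord("App B", {"has_privacy_policy": False, "cost_usd": 5}),
]

# A user who prioritizes privacy and cost filters on those questions only.
user_criteria = {"has_privacy_policy": True, "cost_usd": 0}
print([a.name for a in apps if matches(a, user_criteria)])  # -> ['App A']
```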

Currently, no framework that can be used to develop a centralized database for health apps is available and accessible, and no consensus has been reached among experts and clinicians in Japan about an appropriate framework. The aim of this paper is to describe the study protocol that will be used to validate the adaptation of the 105 objective questions in Japan and to develop a Japanese version of the framework among stakeholders involved in mental health apps. In the absence of empirical evidence with which to reach such a consensus, various methods can be used to synthesize opinions based on expertise and scientific background. In this study, we will use the Delphi method, which ensures anonymity and allows opinions to be expressed without group pressure; the Delphi method has been used to formulate various guidelines and models [22,23]. Our Delphi study will examine whether the 105 objective questions are comprehensive, content-appropriate, and relevant to this domain as criteria for evaluating mental health apps. The target audience for this framework includes people involved in research, clinical practice, and app development concerning health apps throughout Japan. Targeting panelists whose native language is Japanese will also help identify difficulties in understanding items that were not noticed during the Japanese translation process.


Methods

Ethical Considerations

Participation in this study is voluntary. The emailed Delphi survey invitation includes the following information for potential panelists: an introduction to the researcher responsible for the study, a description of the study, the reasons for the expert’s selection, the procedures for participation, an estimate of the time required, expectations of the expert (including the importance of participating in all rounds of the consultation), assurances of anonymity and acknowledgment of participation, and the reward for participation (20,000 JPY [approximately US $143] paid at the end of the study). Informed consent was obtained from all panelists before participation in this study.

Design

This research procedure is planned with reference to the Conducting and Reporting Delphi Studies (CREDES) guidelines [24] and reviews of the Delphi method [25,26]. The classic Delphi technique has the following characteristics [27]: (1) a working group organizes the Delphi study; (2) the working group recruits a panel of individuals with some expertise on the topic; (3) the group compiles a questionnaire with a list of statements that the experts rate for agreement; (4) the group collects responses from the panelists using the questionnaire; (5) the group gives anonymous feedback to each panelist about how their responses compare to those of the rest of the panel; (6) each panelist is able to revise their responses to the questionnaire after receiving the feedback; and (7) responses converge across rounds of questionnaires, with a statistical criterion used to define consensus. Because developing a Japanese version of the comprehensive app evaluation framework serves as the starting point of this process, we refer to our study as a modified Delphi study. The expected duration of the study is 12 months, beginning in January 2023. To culturally validate the comprehensive mental health app framework in Japan, Japanese participants will be recruited from three categories: (1) researchers, (2) practitioners, and (3) app developers.
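
As a rough illustration of how these characteristics fit together in this protocol’s 3-round design, the following Python sketch simulates the loop with random ratings. The function names, the placeholder ratings, and the simple retirement rule are assumptions for illustration only, not the study’s actual software or final consensus criteria.

```python
import random
import statistics

def delphi_rounds(items, n_panelists=51, max_rounds=3):
    # Simulated walk-through of steps 3-7: collect ratings, summarize them
    # anonymously, retire items whose ratings have converged, and repeat.
    feedback = {}
    for _ in range(max_rounds):
        # Steps 3-4: circulate the questionnaire and collect 1-9 ratings
        # (random placeholders here instead of real panelist input).
        responses = {i: [random.randint(1, 9) for _ in range(n_panelists)] for i in items}
        # Step 5: automated, anonymous summary fed back to every panelist.
        feedback = {}
        for i, r in responses.items():
            q1, _, q3 = statistics.quantiles(r, n=4)
            feedback[i] = {"median": statistics.median(r), "iqr": q3 - q1}
        # Step 7: items that have reached consensus are not re-rated in later rounds.
        items = [i for i in items if feedback[i]["iqr"] > 2 or min(responses[i]) <= 5]
        if not items:
            break
    return feedback

print(delphi_rounds(["Q001", "Q002", "Q003"]))
```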

Procedure of the Web-Based Modified Delphi Study

Table 1 shows the procedure for developing the Japanese version of the comprehensive app evaluation framework. The Delphi method includes 3 rounds, with panelists answering anonymously in each round. In each round of the Delphi survey, the working group will pilot the round with 1 researcher, 1 practitioner, and 1 app developer prior to initiation; the individuals who participate in the pilot will not participate in the Delphi process. In the first round, our working group will send a questionnaire to the panelists, who will complete it within 4 weeks. If a panelist does not complete the evaluation within 4 weeks and does not respond to communications from the working group, the panelist will be considered to have dropped out of the study and will be excluded from the analysis. In the second and third rounds, the working group will send a questionnaire together with a summary of the panelists’ answers from the previous round, and the panelists will answer the questionnaire with reference to this summary. The summarization procedure is automated to reduce the biases that may occur when panelists’ answers are summarized and fed back to the panelists; the working group will send only the result of the summarization with the next round’s questionnaire. Each round’s summary includes the quantitative result and explanation for each item’s question, the qualitative comments and explanation for each evaluation, and suggestions for modifying (revising, removing, or adding) items. When a panelist suggests the revision or addition of items, the working group members will review the proposals within 3 weeks of the panelists’ response and will discuss whether the wording of each revised or added item is adequate. An independent liaison from the working group will contact the panelists, and information about the panelists will be masked from the working group members who participate in the discussion. These discussions will be conducted by 2 or more members.

Table 1. Steps of the study process.

Preparation

Step 1 (completed)
  • Working group: developing the Japanese version of the app evaluation items
  • Panelist: N/Aa

Step 2 (completed)
  • Working group: defining the inclusion and exclusion criteria for panelists
  • Panelist: N/A

Step 3
  • Working group: sampling the Delphi rounds’ panelists
  • Panelist: determining whether or not to participate in this study

Step 4
  • Working group: developing the Delphi questionnaire
  • Panelist: N/A

Delphi rounds

First round
  • Working group: conducting the pilot study; analysis
  • Panelist: evaluating the appropriateness of items in the final version of the 105 objective questions and suggesting notations for each item

Second round
  • Working group: conducting the pilot study; providing feedback to panelists on the summary of the first round; analysis
  • Panelist: judging the appropriateness of items in the final version of the 105 objective questions and suggesting notations for each item

Final round
  • Working group: conducting the pilot study; providing feedback on the summary of the second round; final analysis and publication
  • Panelist: judging the appropriateness of items in the final version of the 105 objective questions and suggesting notations for each item

aN/A: not applicable.
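
The automated summarization step described above could, for example, be implemented along the following lines. This is a minimal Python/pandas sketch; the column names and response layout are assumptions rather than the actual Qualtrics export used in the study.

```python
import pandas as pd

def summarize_round(responses: pd.DataFrame) -> pd.DataFrame:
    # `responses` holds one row per panelist rating with columns
    # 'item_id', 'rating' (1-9), and 'comment' (free text, possibly empty).
    def collate(comments: pd.Series) -> str:
        # Pool the anonymous free-text comments for an item into one string.
        return " / ".join(c for c in comments.dropna() if str(c).strip())

    summary = responses.groupby("item_id").agg(
        n=("rating", "count"),
        median=("rating", "median"),
        q1=("rating", lambda s: s.quantile(0.25)),
        q3=("rating", lambda s: s.quantile(0.75)),
        comments=("comment", collate),
    )
    summary["iqr"] = summary["q3"] - summary["q1"]
    return summary.reset_index()

# Example with 3 ratings for a single item.
df = pd.DataFrame({
    "item_id": ["Q001", "Q001", "Q001"],
    "rating": [7, 8, 6],
    "comment": [None, "Wording is unclear in Japanese", None],
})
print(summarize_round(df))
```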

Develop Questionnaires

The 105 original evaluation items were translated in accordance with the guidelines provided by the International Society for Pharmacoeconomics and Outcomes Research task force [28]. First, forward translation from English to Japanese was performed independently by 2 of the authors (KY and HNT). Then, a professional English translator who was English-Japanese bilingual and blinded to the 105 objective questions translated the provisional app evaluation items back into English. The 2 English versions of the app evaluation item questions (ie, the original and back-translated versions) were reconciled by KY, HNT, and John Torous (author of the original 105 objective questions), and only minor discrepancies were found. These discrepancies were discussed until a consensus was reached. The original author (John Torous) evaluated the finalized English version of the app evaluation item questions and confirmed that the original meanings of the items, instructions, and responses had been maintained throughout the translation procedure.

Administer Questionnaires

Panelists will complete the evaluations in a 2-stage structure (the 105 items and the 11 key levels). The 105 objective questions are organized into the following key levels: app origin, functionality, app store attributes, accessibility, privacy and security, inputs, outputs, clinical foundation, features, engagement style, and interoperability and data sharing. At the beginning of each key level, the questionnaire explains the key level, and panelists will be asked to assess the appropriateness of the items that comprise that key level; at that time, we will ask panelists to suggest additional items if needed. After evaluating each key level, panelists will assess the 105 objective questions and their descriptions. Panelists will be asked to respond on a 9-point scale (1=not at all appropriate to 9=quite appropriate) regarding whether each item is appropriate for evaluating apps. When panelists assign a score of 1-5, they will be asked to provide free-text feedback that includes suggestions for changing the wording.
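
The response rules above could be enforced with logic of roughly the following shape. This is a Python sketch: the function and the exact validation behavior are illustrative assumptions, not the survey logic implemented in Qualtrics, although the key-level list reproduces the 11 levels named above.

```python
KEY_LEVELS = [
    "app origin", "functionality", "app store attributes", "accessibility",
    "privacy and security", "inputs", "outputs", "clinical foundation",
    "features", "engagement style", "interoperability and data sharing",
]

def validate_response(rating: int, comment: str = "") -> dict:
    # Enforce the 9-point scale and the rule that scores of 1-5 must be
    # accompanied by free-text feedback (eg, a suggested rewording).
    if not 1 <= rating <= 9:
        raise ValueError("Rating must be on the 9-point scale (1-9).")
    if rating <= 5 and not comment.strip():
        raise ValueError("Scores of 1-5 require free-text feedback.")
    return {"rating": rating, "comment": comment}

print(validate_response(8))                                   # accepted; no comment needed
print(validate_response(4, "Reword the Japanese item text"))  # low score with required comment
```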

Synthesize Answers

In keeping with the described methods, satisfactory agreement for each round will require both (1) all responses >5 (ie, no participant disagreement) and (2) at least 60% (n=31) of panelists voting in each round. These criteria were conservatively set with reference to several previous studies [29,30]. In addition, consensus for each round will be assessed using the IQR, that is, the difference between the 75th percentile (third quartile) and the 25th percentile (first quartile): the smaller the IQR, the greater the consensus, with an IQR of 0 or 1 indicating very strong consensus and an IQR of 2 indicating strong consensus [31]. Items that meet these criteria will be removed from the next Delphi round. In addition, for free-text feedback, 2 or more members of the working group will discuss the panelists’ suggestions regarding the addition or modification of items, whether to adopt the suggestions in the next round, and, if adopted, how the items should be added or modified. If no conclusion is reached through discussion, the working group members who were unable to participate on the discussion day will be asked for their opinions. For this reason, the discussions will be recorded on video and in the minutes of the meeting.
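
Concretely, the agreement and consensus checks for a single item could look like the following Python sketch. The percentile method and the decision to treat an IQR of 2 or less as sufficient for removal are assumptions made for illustration, not prespecified analysis code.

```python
import statistics

TOTAL_PANELISTS = 51

def item_meets_criteria(ratings: list) -> bool:
    # `ratings` holds the 1-9 scores one item received in one round.
    # Agreement: at least 60% of the 51 panelists (n=31) voted, and every
    # response is >5 (ie, no participant disagreement).
    if len(ratings) < 0.6 * TOTAL_PANELISTS:
        return False
    if min(ratings) <= 5:
        return False
    # Consensus: read from the IQR (0-1 very strong, 2 strong); treating an
    # IQR of 2 or less as sufficient is an assumption of this sketch.
    q1, _, q3 = statistics.quantiles(ratings, n=4)
    return (q3 - q1) <= 2

print(item_meets_criteria([7, 8, 9, 8] * 10))  # 40 votes, all >5, small IQR -> True
```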

Elaborate Selection Criteria for Evaluation Panelists

Overview

The panel will consist of 51 researchers, practitioners, and app developers. The inclusion and exclusion criteria for panelists in each category are as follows.

Researchers who have contributed to at least 1 study of psychological assessment or treatment with mHealth apps will be included. There are no requirements regarding the number of relevant publications, the manner of contribution, or the degree of contribution; researchers involved in assessment, treatment, development, usability or feasibility studies, protocol studies, reviews, preprints, conference papers, and so on will be considered candidate panelists.

Any practitioner who has experience using at least 1 app to assist in the treatment of a patient or client facing mental health issues will be considered a candidate. Practitioners include psychiatrists and clinical psychologists. Clinical experience related to mental health care is not required. Given the state of app development in Japan, we anticipate that the pool of eligible panelists would otherwise be limited, which would undermine inclusiveness. Therefore, we will include practitioners who have used an app with patients for treatment or assessment purposes, regardless of whether it was specifically a mental health app.

For app developers, candidates must have experience developing at least 1 app related to mental health care. There is no minimum requirement for years of experience in mental health app development. However, panelists who participated in the development of the mental health app evaluation items will be excluded.

Make a List of Panelists

A primary list of candidate panelists will be compiled based on the inclusion and exclusion criteria. Subsequently, snowball sampling will be conducted to expand the pool of panelists.

Dissemination

The involvement of various stakeholders throughout the study will make it possible to disseminate the results and progress of the study while it is being conducted. In addition, diverse knowledge-transfer activities will take place. For example, scientific papers will be published, and conferences will be held to share the results with researchers. These activities could include working with the media to produce commentary papers, features, and stakeholder interviews on the research findings. Workshops will be organized with professionals. A podcast will also be available for people interested in delving deeper into the results and related topics and listening to interviews and discussions with experts. Public lectures are also planned to disseminate the results of the study and explain how to use the app evaluation items to the general public. These activities will be conducted both on-site and on the internet, and records of the activities will be made available through the website and social networking sites developed for the study.


Results

We completed the translation of the 105 original app evaluation item questions in December 2022 and compiled a list of panelists willing to participate in the Delphi study. The first Delphi round began in January 2023, and all Delphi rounds are expected to be completed by December 2023. Finally, the results of each Delphi round will be reported anonymously, and the report will be distributed among all panelists who participated in the Delphi study. This report will include the distribution of panelists’ ratings as well as their comments and suggestions. The results will be presented both quantitatively and qualitatively. We expect this study to be published in a peer-reviewed journal and presented at national and international conferences.


Discussion

Principal Considerations

While the need for treatment using mental health apps and the development of mental health apps to meet this need are increasing [32], no framework that can be used to develop a centralized database for health apps is available and accessible, and no consensus has been reached among stakeholders in Japan about an appropriate framework. The web-based modified Delphi method presented in this paper aims to reach a consensus on an app evaluation framework and thus provide direction for the use and development of mental health apps in the future.

We expect that this web-based modified Delphi method will incorporate many different stakeholders involved in mental health apps (researchers, clinicians, and mental health app developers), facilitate the application of the comprehensive app evaluation framework in Japan, and provide valuable insights that will contribute to future developments in this field.

Furthermore, we expect that this work will raise awareness of the framework among stakeholders involved in mental health apps, in addition to leading to the development of tools that will enable users and practitioners to efficiently choose the right app for their situations.

Strengths and Limitations

The majority of studies aiming at the cultural adaptation of a survey have thus far failed to follow the CREDES guidance; this study is the second CREDES-based cultural adaptation of a scale in Japan. Furthermore, the panelist group in this Delphi study includes clinical experts, researchers, and developers so as to draw on diverse perspectives. A limitation relates to the diversity of the panel, as there will be only 51 panelists in total, that is, 17 participants for each domain. CREDES regards approximately 50 participants as preferable for Delphi studies; therefore, the planned number of participants is acceptable. On the other hand, because this study plans to begin the first round with 51 panelists, it is possible that the number of panelists will decrease over the course of the study.

Conclusions

This study will develop a Japanese version of the comprehensive app evaluation framework that has reached consensus among stakeholders involved in mental health apps. It will generate useful insights for people involved in research, clinical practice, and app development related to mental health apps throughout Japan and suggest the direction of use and development of mental health apps in the future. Furthermore, this study will enhance recognition of the framework among researchers, clinicians, mental health app developers, and users, in addition to supporting the development of new instruments to help users or practitioners efficiently choose the right app for their situations.

Acknowledgments

The authors would like to acknowledge the financial contributions of over 150 individuals via crowdfunding, which provided us with the opportunity to conduct this study. We also thank Editage for the English language review.

Data Availability

Data sharing is not applicable to this paper as no data sets were generated or analyzed during this study.

Conflicts of Interest

KY received personal fees from a for-profit company promoting integrated resorts in Japan and abroad, domestic tobacco companies, and CureApp Inc. YT received personal fees from CureApp Inc, emol Inc, and Kaijyu Campany Inc. SM received personal fees from CureApp Inc.

References

  1. GBD 2019 Diseases and Injuries Collaborators. Global burden of 369 diseases and injuries in 204 countries and territories, 1990-2019: a systematic analysis for the Global Burden of Disease Study 2019. Lancet. 2020;396(10258):1204-1222. [FREE Full text] [CrossRef] [Medline]
  2. Mental health and COVID-19: early evidence of the pandemic's impact: scientific brief. World Health Organization. 2022. URL: https://apps.who.int/iris/handle/10665/352189 [accessed 2023-06-02]
  3. COVID-19 Mental Disorders Collaborators. Global prevalence and burden of depressive and anxiety disorders in 204 countries and territories in 2020 due to the COVID-19 pandemic. Lancet. 2021;398(10312):1700-1712. [FREE Full text] [CrossRef] [Medline]
  4. Byambasuren O, Sanders S, Beller E, Glasziou P. Prescribable mHealth apps identified from an overview of systematic reviews. NPJ Digit Med. 2018;1:12. [FREE Full text] [CrossRef] [Medline]
  5. Huguet A, Rao S, McGrath PJ, Wozney L, Wheaton M, Conrod J, et al. A systematic review of cognitive behavioral therapy and behavioral activation apps for depression. PLoS One. 2016;11(5):e0154248. [FREE Full text] [CrossRef] [Medline]
  6. Shen N, Levitan MJ, Johnson A, Bender JL, Hamilton-Page M, Jadad AAR, et al. Finding a depression app: a review and content analysis of the depression app marketplace. JMIR Mhealth Uhealth. 2015;3(1):e16. [FREE Full text] [CrossRef] [Medline]
  7. The growing value of digital health: evidence and impact on human health and the healthcare system institute report. IQVIA. 2017. URL: https://www.iqvia.com/insights/the-iqvia-institute/reports/the-growing-value-of-digital-health [accessed 2023-06-02]
  8. Yamamoto K, Ito M, Sakata M, Koizumi S, Hashisako M, Sato M, et al. Japanese version of the Mobile App Rating Scale (MARS): development and validation. JMIR Mhealth Uhealth. 2022;10(4):e33725. [FREE Full text] [CrossRef] [Medline]
  9. Cao J, Kurata K, Lim Y, Sengoku S, Kodama K. Social acceptance of mobile health among young adults in Japan: an extension of the UTAUT model. Int J Environ Res Public Health. 2022;19(22):1-16. [FREE Full text] [CrossRef] [Medline]
  10. Bhuyan SS, Lu N, Chandak A, Kim H, Wyant D, Bhatt J, et al. Use of mobile health applications for health-seeking behavior among US adults. J Med Syst. 2016;40(6):153. [CrossRef] [Medline]
  11. Firth J, Torous J, Nicholas J, Carney R, Pratap A, Rosenbaum S, et al. The efficacy of smartphone-based mental health interventions for depressive symptoms: a meta-analysis of randomized controlled trials. World Psychiatry. 2017;16(3):287-298. [FREE Full text] [CrossRef] [Medline]
  12. Firth J, Torous J, Nicholas J, Carney R, Rosenbaum S, Sarris J. Can smartphone mental health interventions reduce symptoms of anxiety? A meta-analysis of randomized controlled trials. J Affect Disord. 2017;218:15-22. [FREE Full text] [CrossRef] [Medline]
  13. Baumel A, Torous J, Edan S, Kane JM. There is a non-evidence-based app for that: a systematic review and mixed methods analysis of depression- and anxiety-related apps that incorporate unrecognized techniques. J Affect Disord. 2020;273:410-421. [CrossRef] [Medline]
  14. Wisniewski H, Liu G, Henson P, Vaidyam A, Hajratalli NK, Onnela JP, et al. Understanding the quality, effectiveness and attributes of top-rated smartphone health apps. Evid Based Ment Health. 2019;22(1):4-9. [FREE Full text] [CrossRef] [Medline]
  15. Takashina H, Suzuki H, Siratsuka R, Ohashi K, Miyashita T, Yokomitsu K. A review of mobile applications for psychological interventions concerning depressive symptoms in Japan. Jpn Behav Cogn Ther. 2021;47:1-10. [FREE Full text] [CrossRef]
  16. Lagan S, Aquino P, Emerson MR, Fortuna K, Walker R, Torous J. Actionable health app evaluation: translating expert frameworks into objective metrics. NPJ Digit Med. 2020;3:100. [FREE Full text] [CrossRef] [Medline]
  17. Stoyanov SR, Hides L, Kavanagh DJ, Wilson HW. Development and validation of the user version of the Mobile Application Rating Scale (uMARS). JMIR Mhealth Uhealth. 2016;4(2):e72. [FREE Full text] [CrossRef] [Medline]
  18. Sasaki N, Obikane E, Vedanthan R, Imamura K, Cuijpers P, Shimazu T, et al. Implementation Outcome Scales for Digital Mental Health (iOSDMH): scale development and cross-sectional study. JMIR Form Res. 2021;5(11):e24332. [FREE Full text] [CrossRef] [Medline]
  19. Azad-Khaneghah P, Neubauer N, Miguel Cruz A, Liu L. Mobile health app usability and quality rating scales: a systematic review. Disabil Rehabil Assist Technol. 2021;16(7):712-721. [CrossRef] [Medline]
  20. App advisor: an American Psychiatric Association initiative. American Psychiatric Association. URL: https://www.psychiatry.org/psychiatrists/practice/mental-health-apps [accessed 2023-06-02]
  21. Henson P, David G, Albright K, Torous J. Deriving a practical framework for the evaluation of health apps. Lancet Digit Health. 2019;1(2):e52-e54. [FREE Full text] [CrossRef] [Medline]
  22. Lecours A. Scientific, professional and experiential validation of the model of preventive behaviours at work: protocol of a modified Delphi study. BMJ Open. 2020;10(9):e035606. [FREE Full text] [CrossRef] [Medline]
  23. Prinsen CAC, Vohra S, Rose MR, King-Jones S, Ishaque S, Bhaloo Z, et al. Core Outcome Measures in Effectiveness Trials (COMET) initiative: protocol for an international Delphi study to achieve consensus on how to select outcome measurement instruments for outcomes included in a 'core outcome set'. Trials. 2014;15:247. [FREE Full text] [CrossRef] [Medline]
  24. Jünger S, Payne SA, Brine J, Radbruch L, Brearley SG. Guidance on Conducting and Reporting Delphi Studies (CREDES) in palliative care: recommendations based on a methodological systematic review. Palliat Med. 2017;31(8):684-706. [CrossRef] [Medline]
  25. Beiderbeck D, Frevel N, von der Gracht HA, Schmidt SL, Schweitzer VM. Preparing, conducting, and analyzing Delphi surveys: cross-disciplinary practices, new directions, and advancements. MethodsX. 2021;8:101401. [FREE Full text] [CrossRef] [Medline]
  26. Nasa P, Jain R, Juneja D. Delphi methodology in healthcare research: how to decide its appropriateness. World J Methodol. 2021;11(4):116-129. [FREE Full text] [CrossRef] [Medline]
  27. Jorm AF. Using the Delphi expert consensus method in mental health research. Aust N Z J Psychiatry. 2015;49(10):887-897. [CrossRef] [Medline]
  28. Wild D, Grove A, Martin M, Eremenco S, McElroy S, Verjee-Lorenz A, et al. ISPOR Task Force for Translation and Cultural Adaptation. Principles of good practice for the translation and cultural adaptation process for Patient-Reported Outcomes (PRO) measures: report of the ISPOR task force for translation and cultural adaptation. Value Health. 2005;8(2):94-104. [FREE Full text] [CrossRef] [Medline]
  29. Dip F, Boni L, Bouvet M, Carus T, Diana M, Falco J, et al. Consensus conference statement on the general use of near-infrared fluorescence imaging and indocyanine green guided surgery: results of a modified Delphi study. Ann Surg. 2022;275(4):685-691. [FREE Full text] [CrossRef] [Medline]
  30. Reid H, Ridout AJ, Tomaz SA, Kelly P, Jones N, Physical Activity Risk Consensus group. Benefits outweigh the risks: a consensus statement on the risks of physical activity for people living with long-term conditions. Br J Sports Med. 2022;56(8):427-438. [FREE Full text] [CrossRef] [Medline]
  31. Rietjens JAC, Sudore RL, Connolly M, van Delden JJ, Drickamer MA, Droger M, et al. European Association for Palliative Care. Definition and recommendations for advance care planning: an international consensus supported by the European Association for Palliative Care. Lancet Oncol. 2017;18(9):e543-e551. [FREE Full text] [CrossRef] [Medline]
  32. Torous J, Jän Myrick K, Rauseo-Ricupero N, Firth J. Digital mental health and COVID-19: using technology today to accelerate the curve on access and quality tomorrow. JMIR Ment Health. 2020;7(3):e18848. [FREE Full text] [CrossRef] [Medline]


Abbreviations

APA: American Psychiatric Association
CREDES: Conducting and Reporting Delphi Studies
mHealth: mobile health


Edited by A Mavragani; submitted 23.12.22; peer-reviewed by B Chaudhry, C Gibson; comments to author 22.02.23; revised version received 29.03.23; accepted 30.03.23; published 03.11.23.

Copyright

©Kengo Yokomitsu, Hikari N Takashina, Yoshitake Takebayashi, Seiji Muranaka. Originally published in JMIR Research Protocols (https://www.researchprotocols.org), 03.11.2023.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Research Protocols, is properly cited. The complete bibliographic information, a link to the original publication on https://www.researchprotocols.org, as well as this copyright and license information must be included.