Protocol
Abstract
Background: Understanding a student’s depressive symptoms could facilitate significantly more precise diagnosis and treatment. However, few studies have focused on depressive symptom prediction through unobtrusive systems, and these studies are limited by small sample sizes, low performance, and high resource requirements. In addition, research has not explored whether statistically significant rhythms based on different app usage behavioral markers (eg, app usage sessions) exist that could be useful in finding subtle differences and thus predicting with higher accuracy, as models based on rhythms of physiological data do.
Objective: The main objective of this study is to explore whether there exist statistically significant rhythms in resource-insensitive app usage behavioral markers and predict depressive symptoms through these marker-based rhythmic features. Another objective of this study is to understand whether there is a potential link between rhythmic features and depressive symptoms.
Methods: Through a countrywide study, we collected 2952 students’ raw app usage behavioral data and responses to the 9 depressive symptoms in the 9-item Patient Health Questionnaire (PHQ-9). The behavioral data were retrieved through our developed app, which was previously used in our pilot studies in Bangladesh on different research problems. To explore whether there is a rhythm based on app usage data, we will conduct a zero-amplitude test. In addition, we will develop a cosinor model for each participant to extract rhythmic parameters (eg, acrophase). In addition, to obtain a comprehensive picture of the rhythms, we will explore nonparametric rhythmic features (eg, interdaily stability). Furthermore, we will conduct regression analysis to understand the association of rhythmic features with depressive symptoms. Finally, we will develop a personalized multitask learning (MTL) framework to predict symptoms through rhythmic features.
Results: After applying inclusion criteria (eg, having app usage data of at least 2 days to explore rhythmicity), we kept the data of 2902 (98.31%) students for analysis, comprising 24.48 million app usage events; 2849 (98.17%) of these students had 7 days of app usage data. The students are from all 8 divisions of Bangladesh and from both public and private universities (19 different universities and 52 different departments). We are analyzing the data and will publish the findings in a peer-reviewed publication.
Conclusions: Having an in-depth understanding of app usage rhythms and their connection with depressive symptoms through a countrywide study can significantly help health care professionals and researchers better understand depressed students and may create possibilities for using app usage–based rhythms for intervention. In addition, the MTL framework based on app usage rhythmic features may more accurately predict depressive symptoms due to the rhythms’ capability to find subtle differences.
International Registered Report Identifier (IRRID): DERR1-10.2196/51540
doi:10.2196/51540
Keywords
Introduction
Background
Need for Identification of Depressive Symptoms
Every 40 seconds, a person dies by suicide, and there are more than 20 attempts worldwide [
]. Among suicide attempters, major depressive disorder (MDD) is common [ ], and people with MDD are at greater risk of suicidality [ ]. Despite these facts, there is a significant lack of interventions to mitigate depression, and the depression rate is increasing. In fact, it is estimated that depression will rank first as a global burden of disease by 2030 [ ]. In addition, due to the COVID-19 pandemic, a significantly higher number of people have MDD [ ], and such a negative impact may persist for a prolonged period. In Bangladesh, the depression prevalence is higher than the overall rate in South Asia [ ]. The depression rate is higher among university students than among Bangladeshi people of any other profession [ ] (eg, 82.4% of students have at least mild depression [ ]), which is alarming.
To significantly facilitate early interventions to mitigate depression, there is an urgent need for its early identification [
]. A person is identified as depressed if symptoms (eg, hopelessness) appear for a period, such as most days for a minimum of 14 days [ , ]. However, it is difficult to precisely assess depressive symptoms [ ]. In fact, primary care providers fail to identify depression in more than 50% of cases [ , ]. Understanding a person’s depressive symptoms, such as those in the 9-item Patient Health Questionnaire (PHQ-9) [ ], in real time can significantly help mental health care professionals understand the illness more precisely, identify depression early, and take steps accordingly for intervention.
Pervasive Health Research in Low- and Middle-Income Countries
Although over 80% of the burden of depression is found in low- and middle-income countries (LMICs) [
], there remains a severe scarcity of mental health care professionals in LMICs. In Bangladesh, there are only 565 psychologists [ ], although the population is over 165 million [ ], with 1.234 million university students [ ]. In these cases, a pervasive technology, such as a smartphone-based monitoring system, which is available to a large number of people in LMICs, can play a significant role [ ]. In addition, to minimize the barriers to health care facilities in low-resource settings, artificial intelligence (AI)–based mobile apps can be useful [ ]. However, almost all the studies that have demonstrated a pervasive technology–based automated system to identify depression have been conducted in the context of high-income countries. For instance, all the studies included in a recent systematic review were from countries other than LMICs [ ], which indicates how little pervasive health research has been conducted in the context of LMICs. As a result, the models developed based on participants from high-income countries may not be applicable in LMICs, since behavior (eg, app usage [ ]) varies among countries and is affected by various factors, such as socioeconomic status [ ] and culture [ ].
App Usage Rhythm That May Resemble Biobehavioral Rhythm
Zeitgebers (social and environmental cues) help a person’s rhythms synchronize well [
], which can impact their daily activities. Rhythms based on pervasive device–sensed physiological data change depending on external cues, such as light exposure, eating time, and physical activity [ ]. Similarly, smartphone usage behavior is linked with factors such as eating behavior [ ]. In addition, there is a relation of app usage with alertness, chronotype [ ], and physiological data, such as sleep [ - ]. Like physiological data, app usage behavioral markers vary depending on the hour of the day [ - ] and exercise [ ]. These facts show app usage behavior may also have a rhythmic pattern with reproducible waveforms similar to the rhythms based on physiological data.
Although a recent study extracted parametric rhythmic features (dominant periods) from smartphone usage data [
], the study was limited by not exploring rhythmic features, such as the acrophase, interdaily stability (IS), intradaily variability (IV), and relative amplitude (RA). In addition, the study explored mere screen usage, without any exploration of more informative features [ ], such as entropy-based features. In other previous app usage data–based studies, researchers used descriptive statistical methods [ - ] to determine whether there is any variation over the day and inferential statistical methods [ , ] to find differences in data aggregated over 4 time periods, namely morning, afternoon, evening, and night. These approaches have some limitations. First, there is a lack of statistical evidence showing whether any rhythm exists, a gap that could be resolved with the zero-amplitude test [ , ], which is used to detect rhythms in the field of chronobiology. In addition, merely presenting the difference between aggregated morning and evening data cannot show whether there is a cycle that repeats over days. Averaged data may also miss the microscopic view of the data [ ]. However, an analysis fitting mathematical models (eg, the cosinor model) to time series data can present a microscopic view of the data and inferential statistical estimates of the rhythmic properties [ ]. In addition, many informative behavioral markers, such as the dominant period, stability in behavior, and the peak time of oscillations in the rhythm, cannot be extracted from just finding the differences between periods (eg, morning vs night).
Potential of App Usage–Based Rhythms in Identifying Depressive Symptoms in LMICs
In human life, physiological changes reappear in a cyclical waveform [
]. Rhythm features based on physiological data have been explored in both the chronobiology [ ] and pervasive health [ ] areas. Researchers have found a relation between physiological data–based rhythmic markers and health status [ ]. These markers can capture subtle differences, which enables marker-based features to predict hospital readmission [ ] and to identify loneliness and the depression class [ ]. This shows the possibility of improving the performance of models that predict the symptoms of depression by incorporating app usage rhythmic features. However, previous studies [ , ] have mostly relied on wearables to extract rhythmic features, which are costly and may not be affordable [ ] for people with a low income. In contrast, smartphones are economically attainable [ ], and app usage data are resource insensitive [ ]. As a result, app usage data–based systems may be feasible in LMICs.
Predicting Depression and Depressive Symptoms Through an Unobtrusive Method
Classification of the Depressed and Nondepressed
Most existing research based on AI for mental health has worked on classification problems [
]. Researchers have classified depressed and nondepressed individuals by developing personalized models [ ] and using contextually filtered features where the rule mining technique was incorporated [ ]. Some other studies (eg, [ , ]) have relied on sensing data (eg, GPS data), along with smartphone call history data, to predict depression. Researchers have also leveraged internet usage data [ ], location data retrieved through the campus WiFi infrastructure [ , ], and GPS data [ - ]. Recently, some researchers have focused on exploring rhythmic features to assess depression. For instance, Yan et al [ ] leveraged rhythm-based features to classify depressed and nondepressed participants. However, classification does not provide precise information about a participant’s depressive status, since the scores of all the symptoms are aggregated to place the participant in a particular group (eg, depressed or moderate depression), resulting in a loss of the complexity of the psychological problem of depression.
Predicting Depression Scores
Compared to classification research problems, there is relatively less research on predicting the depression score (eg, PHQ-9 score=11). Studies in the pervasive health area can be broadly categorized into those that have developed models leveraging data based on both smartphones and wearables [
- ]; only smartphones [ , ]; various sensing devices, along with social media [ ]; and subjective and smartphone data [ ]. There remain mixed findings on whether smartphones have superior performance to wearables. Smartphone-sensed data showed higher performance in a previous study [ ] when models were evaluated after splitting training and testing data based on time. However, in the same study [ ], researchers found smartphones had inferior performance on another evaluation criterion. Regardless of superiority, both wearables and smartphones show promising performance in the automated assessment of depression [ ], which can play a role in real-time remote monitoring of depressed individuals [ ]. Wearable- and smartphone-sensed behavioral markers, such as call duration [ ] and heart rate [ ], as well as inferred markers, such as the circadian rhythm [ ], have a significant correlation with the depression score, which may explain why the sensed data can predict depression. However, as with classification, predicting depression scores still cannot precisely capture depressive symptoms, since a person’s depression score (eg, 11) can result from different combinations of the frequencies with which different symptoms appear.
Predicting Symptoms and Multitask Learning
Researchers have conducted a network analysis of depressive symptoms and presented a possible viable target for intervention [
] and central symptoms for possible focused treatments [ ]. The relation of ecological momentary assessment of depressive symptoms with the PHQ-9 score [ ] and the relation of depressive symptoms with behavioral data [ ] have also been investigated in previous studies. Exploring pervasive device–sensed data, researchers have found a link between students’ higher time spent on devices and higher fatigue, which is a depressive symptom [ ]. Although there are studies predicting symptoms of other psychological problems, such as schizophrenia [ ] and attention-deficit/hyperactivity disorder (ADHD) [ ], few studies [ ] have predicted the appearance of depressive symptoms. In a previous study [ ], the authors predicted depressive symptoms through the data of iPhone and Android users. However, their study has several limitations. First, their models’ performance is low for most depressive symptoms; in particular, the specificity score is around or below 60% in many models developed by leveraging smartphone data. Second, they developed a separate model to predict each symptom, which makes the approach resource sensitive. Third, they considered each symptom-predicting task separately, which could be improved by leveraging the shareable information among symptoms through the multitask learning (MTL) framework, as MTL has superior performance to the single-task learning (STL) technique [ , ]. However, researchers have grouped all the predicting tasks into a single model in previous studies [ , ], which may lower performance since not all tasks help each other improve performance [ ].
Objective
To overcome the above-mentioned limitations, the main objective of this study is to explore app usage data–based rhythmicity detection and develop and validate an MTL framework leveraging app usage data–based parametric and nonparametric rhythmic parameters through a countrywide study in Bangladesh. Another objective of this study is to explore whether app usage rhythmic features are related to depressive symptoms.
Methods
Ethical Considerations
Our study was approved (reference: 2022/OR-NSU/IRB/0704) by the North South University Institutional Review Board/Ethics Review Committee. While collecting data, we presented a short consent form in Bangla (native language) and English (
a), where we provided a summary of the data that we collected. In addition, we provided a detailed consent form ( b) covering different matters, including data safety, privacy, and each data item that we collected. Furthermore, before data collection, we briefly described the research to the participants. All participants provided their consent. After collecting data, we provided an incentive of food tokens equivalent to around US $0.30 to almost all participants. For certain participants (<5%) for whom it was not feasible to give food tokens, we provided an equivalent gift in another form (eg, a book or diary).
All participants’ data are stored in the Google Firebase database, which only 1 of the researchers can access with 2-factor authentication. We did not collect any personal information from the participants, such as name, email, and phone number. In addition, we did not collect the names of the participants’ departments and universities so that they would feel more comfortable in providing data. We reported the number of departments and universities based on our aggregated notes.
Retrieval of App Usage Behavioral Markers
In this study, we will focus on developing our system in such a way that both the users of the system and health care professionals can be informed about depressive symptoms in real time. For instance, a health care professional, with the user’s consent to access the information, can be notified if our system’s automatic prediction on a given day shows that depressive symptoms may worsen. For this research, we used our previously developed app [
], which can retrieve each participant’s past 7 days’ foreground and background app usage event data within 1 second once the participant provides consent. The average time required to retrieve app usage events is 307.94 milliseconds (SD 1.1 seconds) [ ]. For each app usage event, there are data on the app name, package name, and timestamp of the event, which we will use to extract behavioral markers. The app ( ) was used in our previous studies to explore different research problems, including students’ academic results [ - ], depression [ - , ], and loneliness [ , ], showing the app’s reliability and validity.
Sample Size Determination and Data Collection
In Bangladesh, there are around 1.234 million university students [
], and an optimal sample that represents the behavior of this population with a 95% CI and a 5% margin of error is 385 university students, as we found with SurveyMonkey [ ], which uses the formula for the finite sample size [ ]. Using the same formula, we found that the required sample size to represent each of the 0.448 million female and 0.786 million male student groups is 384 [ ]. Since the depression rate differs between students of public and private universities in Bangladesh [ ], we collected data from both types of universities. For the 0.329 million public university students and 0.902 million private university students [ ], we found that the required sample size in each case is 384. However, there is no fixed sample size that can ensure the generalizable performance of machine learning (ML) models. Therefore, to develop an impactful model, we tried to maximize the number of participants by conducting a countrywide study. In addition, we tried to maximize the number of students from public universities because the largest number of students study there [ ].
We collected data from all 8 divisions of Bangladesh using the multistage convenience sampling method. From each division, we collected data from at least 1 university and multiple departments. We tried to maximize the diversity among the participants because socioeconomic status and many other demographic characteristics vary by region of a country, which can have an impact on mental health [
] and smartphone usage behavior [ , ].
We collected data from a total of 2952 participants from September 2022 to March 2023. While collecting data, first, through our app [
], the participants responded to questions about demographic characteristics (eg, gender) after providing consent. Next, using the same app, they responded to the items of the PHQ-9 [ ] based on their experiences of the past 14 days. After saving the responses to all psychological questions, the app automatically retrieved their app usage data, which may take less than 1 second in almost all cases, as shown in our previous estimation [ ]. After retrieval, the app saved the app usage data instantly.
Data Preprocessing and Data Set Description
Our app could not retrieve any app usage data from 14 (0.47%) participants’ phones. One plausible reason for this problem could be system problems in the phones, as shared by the participants; several of these participants shared that their phones were not running the original software version. In addition, for 2 (0.07%) participants, age values were missing, and we did not impute these values, as age information was not required for the primary purpose of the research. Furthermore, data on 82 (2.77%) participants’ professions were missing. We imputed the missing professions based on 2 pieces of information: First, in our study, only 2 (0.07%) participants were not students, and we did not reach out to them for data collection; they provided data by installing the app from Google Play Store based on their own interests, indicating a low probability of having a profession other than “student.” Second, all 82 (2.77%) participants provided data when we visited the universities for data collection, as we found by matching the study dates and the timestamps of when these participants provided data. Thus, we imputed these 82 participants’ profession as “student.”
Although the participants were required to provide data at least once, we encouraged them to provide data as many times as possible. After excluding the 14 (0.47%) participants for whom our app could not collect any app usage events and the 2 (0.07%) participants who were not students, 2936 (99.46%) participants remained. Of them, 71 (2.42%) provided data at least twice. However, since few participants provided data more than once, in this study, we kept only the data provided the first time for the next steps of the analysis.
Since we will estimate the circadian rhythm in this study, we followed previous studies to determine the minimum number of days of data required to estimate it. Studies have suggested having data of at least 1 day for estimating the circadian rhythm [
]. However, sensed data of 2 days can estimate the circadian rhythm sufficiently [ ]. Researchers have also found that rhythmic features extracted based on the data of 2 days can more accurately predict physiological and mental changes [ ]. Therefore, inspired by these previous studies, we excluded 34 (1.15%) participants who had app usage data of less than 2 days.
Finally, after excluding 2 nonstudents, 14 students without any app usage data, and 34 students having app usage data of less than 2 days, we were left with 2902 (98.31%) students’ app usage data (
), which will be analyzed in the next steps. In total, there are 24.48 million foreground and background app usage events spanning 24.84 million minutes. Of the 2902 participants, 2849 (98.17%) have app usage data of 7 days ( ).
Ground Truth Data to Measure Depressive Symptoms
We used the PHQ-9 (
) [ ], which is one of the most commonly used scales to assess depression [ ]. In our study, we used the PHQ-9 translated into Bengali [ ], which has been validated and is widely used in Bangladesh (eg, [ , ]). A PHQ-9 cutoff score of 10 has a sensitivity and specificity of 88% in identifying MDD [ ]. Based on this cutoff score, we will categorize the participants as depressed if the PHQ-9 score is at least 10.
In a previous study [
], a person was categorized as having a depressive symptom if that symptom appeared for several days or more. However, we will use the threshold of more than half of the days, since the National Institute of Mental Health (NIMH) and the World Health Organization (WHO) define a person as depressed if symptoms appear for a specific time frame, such as most days for a minimum of 14 days [ , ]. In this study, if a participant reports being bothered by a depressive symptom (eg, hopelessness) in the PHQ-9 for more than half of the days or nearly every day of the past 14 days, we will categorize that participant as having that depressive symptom (a minimal labeling sketch is shown after the table below).
Symptom number | Symptom in the PHQ-9 |
1 | Little interest or pleasure in doing things |
2 | Feeling down, depressed, or hopeless |
3 | Trouble falling or staying asleep or sleeping too much |
4 | Feeling tired or having little energy |
5 | Poor appetite or overeating |
6 | Feeling bad about yourself or that you are a failure or have let yourself or your family down |
7 | Trouble concentrating on things, such as reading the newspaper or watching television |
8 | Moving or speaking so slowly that other people could have noticed or the opposite—being so fidgety or restless that you have been moving around a lot more than usual |
9 | Thoughts that you would be better off dead or of hurting yourself in some way |
PHQ-9: 9-item Patient Health Questionnaire.
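To make the labeling rule concrete, the following Python code is a minimal sketch (ours, not part of the protocol’s codebase; the function and variable names are illustrative) that converts a participant’s 9 PHQ-9 item responses into the binary symptom labels and the depression class described above, assuming the standard 0-3 item coding (0=not at all, 1=several days, 2=more than half the days, 3=nearly every day).
```python
# Minimal sketch: per-symptom labels and depression class from PHQ-9 items.
from typing import Dict, List

SYMPTOM_THRESHOLD = 2   # "more than half the days" or "nearly every day"
DEPRESSION_CUTOFF = 10  # PHQ-9 total score >= 10 -> categorized as depressed

def label_participant(phq9_items: List[int]) -> Dict[str, object]:
    """Return per-symptom binary labels and the overall depression class."""
    if len(phq9_items) != 9:
        raise ValueError("PHQ-9 requires exactly 9 item responses")
    symptom_labels = [int(item >= SYMPTOM_THRESHOLD) for item in phq9_items]
    total_score = sum(phq9_items)
    return {
        "symptom_labels": symptom_labels,  # targets for the 9 prediction tasks
        "phq9_total": total_score,
        "depressed": int(total_score >= DEPRESSION_CUTOFF),
    }

# Example: a participant bothered by sleep problems and fatigue most days
print(label_participant([1, 1, 2, 3, 0, 1, 1, 0, 0]))
```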
Extraction of Behavioral Markers
As we will test the rhythmicity where the time series data are used [
] and explore the rhythmicity of the app usage behavioral markers instead of exploring raw foreground and background app usage events, we will set a time frame of 15 minutes based on which each app usage behavioral marker will be extracted. However, to check the robustness of the findings, we will follow the same process to explore rhythmicity using 2 other time frames as well: 10 minutes and 20 minutes.
In addition to calculating the usage duration, frequency of launching apps, entropy based on duration, entropy based on frequency of launching apps, and app usage sessions [
], we will calculate the following behavioral markers based on which we will extract parametric and nonparametric rhythmic features: relative importance of app categories, personalized top n apps, and cousage of apps.Relative Importance of App Categories
In pervasive health research, aggregated app usage data (eg, [
- ]) are widely used, where the individual app categories remain unexplored. In our previous studies [ , ], we found that app category–based features are more important than aggregated smartphone usage regardless of category. However, in those studies, features of an app category were extracted independently regardless of the usage behavior of other categories. As a result, the individual category itself may not provide higher information for the app categories that are less used but contain distinguishable markers. For instance, if a participant launches social media apps 100 times and health and fitness apps only 10 times, data from the health and fitness category may get lower importance. However, data from the health and fitness category can be more important since depressed students use apps that contain features for smoking prevention and body weight reduction, which may not be used more frequently but may contain enough information to be significantly different from those used by nondepressed students [ ]. Hence, we will calculate the term frequency–inverse document frequency (TF-IDF), which is a widely used technique in natural language processing [ ] where less frequent terms across documents can get more importance. To adapt TF-IDF in the context of app usage, we will use data from all time frames over all days. In each time frame f, the app usage sessions s will work as the set of documents, and in each session (ie, a document) j, the list of categories of the used apps will act as the words.Let the set of documents of participant i in f be {Dij, Dij+1, . . . Dis}, where , and n represents the number of participants. We will calculate TF-IDFicj for each app category c based on the participant’s data of sessions s.
TF-IDFicj = TFicj × IDFicj,
where TFicj = log[freq(c) + 1], IDFicj = log(s/df), freq(c) is the number of times apps of category c were launched in session j, df is the number of sessions where c was used, and s represents the total number of sessions in f. After calculating TF-IDFicj for each session, the mean TF-IDF value will be calculated for each category over the sessions of a time frame. Finally, using that mean value, we will extract the rhythmic features to understand how the relative importance of participant i’s use of a particular category c varied or remained constant over days and over the periods of a day and whether there was a rhythm in behavioral markers based on the TF-IDF.
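As an illustration of this adaptation (our sketch, not the study’s code; the data structures are assumed), the following Python snippet computes the per-category mean TF-IDF for one participant and one time frame, treating each session as a document whose words are the app categories used in it, with TF = log(freq + 1) and IDF = log(total sessions / sessions containing the category), matching the definitions above.
```python
# Sketch of the session-level TF-IDF for app categories.
import math
from collections import Counter
from typing import Dict, List

def session_tfidf(sessions: List[List[str]]) -> Dict[str, float]:
    """sessions: for one participant and one time frame, each session is the
    list of app categories used in it. Returns mean TF-IDF per category."""
    s = len(sessions)
    # document frequency: number of sessions in which each category appears
    df = Counter(cat for sess in sessions for cat in set(sess))
    totals: Dict[str, float] = Counter()
    for sess in sessions:
        freq = Counter(sess)
        for cat, f in freq.items():
            tf = math.log(f + 1)
            idf = math.log(s / df[cat])
            totals[cat] += tf * idf
    # mean TF-IDF per category over all sessions of the time frame
    return {cat: val / s for cat, val in totals.items()}

# Example: three sessions within one 15-minute time frame
print(session_tfidf([
    ["social", "social", "messaging"],
    ["social", "health_fitness"],
    ["messaging", "social"],
]))
```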
To categorize the apps, we will follow relevant previous studies (eg, [
]) and the process in our previous studies [ - , , ], where we categorized the apps into more than 20 categories after exploring the developer-assigned categories in Google Play Store and other app stores and discussing with graduate students of the computer science and engineering (CSE) department.
Personalized Top n Apps
We will calculate entropy based on the usage duration and the frequency of launching the top n apps, where the top n apps can vary by student. To find the value of n, we will use the probability distribution, plotting the probability of using apps on the y axis and the number of apps on the x axis. The point where the curve falls off will be considered the threshold (ie, the value of n) to find the top n apps. To find the cutoff value, we will follow a previous study [
] that found cutoff values to exclude the participants having missing values.
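The sketch below shows one reasonable way to operationalize “the point where the curve falls” (this is our assumption for illustration only, not the protocol’s definitive procedure): sort the per-app usage probabilities in descending order and take the rank just before the largest drop between consecutive probabilities as the personalized n.
```python
# Illustrative elbow-style cutoff for the personalized top n apps.
from typing import Dict

def top_n_cutoff(usage_counts: Dict[str, int]) -> int:
    total = sum(usage_counts.values())
    probs = sorted((c / total for c in usage_counts.values()), reverse=True)
    if len(probs) < 2:
        return len(probs)
    drops = [probs[i] - probs[i + 1] for i in range(len(probs) - 1)]
    largest_drop_at = max(range(len(drops)), key=drops.__getitem__)
    return largest_drop_at + 1  # number of apps kept as the top n

# Example: a few heavily used apps followed by a long tail
print(top_n_cutoff({"a": 500, "b": 320, "c": 300, "d": 20, "e": 10, "f": 5}))
```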
Cousage of Apps
A significant portion of app usage consists of switching from one app to another [
]. One plausible reason for such behavior can be changing moods during app usage [ ], and users may do it to seek support, overcome negative emotions, and so on. To quantify how participants switch from one app to another, we will calculate the cousage of apps, where the usage of 2 apps will be considered coapp usage if they are used in a single app usage session and also used consecutively. However, in a smartphone, there are many system apps that open automatically to support the function of another app. Users do not need to use those apps intentionally, and as a result, the inclusion of those apps may misrepresent behavior. In addition, to switch from one app to another, the user returns to the home screen of the smartphone, where the launcher app opens automatically. Considering the aforementioned issues, we will exclude these system apps and launcher apps before quantifying the coapp usage.
To find subtle differences in the variation in app usage patterns, we will build a network based on cousage, where each edge will represent the cousage of 2 different apps. The weight of the edge will be calculated using pointwise mutual information (PMI):
PMI(A1, A2) = log[p(A1, A2) / (p(A1) × p(A2))]
where p(A1, A2) represents the probability that apps A1 and A2 appear consecutively in the same app usage session, and p(A1) and p(A2) represent the probabilities that apps A1 and A2, respectively, appear in that session regardless of consecutiveness. Next, we will calculate the centrality and the graph edit distance. The calculation of centrality will help us understand the most influential app in the network, that is, the app connected to the maximum number of nodes. The graph edit distance between 2 different sessions of the same time frame will inform whether the behavior differs. Finally, we will use the average data of each session of a time frame to explore the rhythmicity of the centrality and graph edit distance.
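As a minimal illustration of this construction (our sketch; the app names and helper function are hypothetical), the following Python code builds a PMI-weighted cousage graph for one session with networkx and derives the degree centrality and graph edit distance mentioned above.
```python
# Sketch of the cousage network: consecutive app pairs become PMI-weighted edges.
import math
from collections import Counter
import networkx as nx

def cousage_graph(session: list) -> nx.Graph:
    """session: ordered list of (non-system, non-launcher) apps in one session."""
    app_prob = Counter(session)
    total = len(session)
    pair_counts = Counter(zip(session, session[1:]))  # consecutive pairs
    n_pairs = max(len(session) - 1, 1)
    g = nx.Graph()
    for (a1, a2), c in pair_counts.items():
        if a1 == a2:
            continue
        p12 = c / n_pairs
        p1, p2 = app_prob[a1] / total, app_prob[a2] / total
        g.add_edge(a1, a2, weight=math.log(p12 / (p1 * p2)))
    return g

s1 = cousage_graph(["messenger", "facebook", "messenger", "youtube"])
s2 = cousage_graph(["youtube", "facebook", "messenger"])
print(nx.degree_centrality(s1))        # most connected (influential) app
print(nx.graph_edit_distance(s1, s2))  # dissimilarity between 2 sessions
```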
Rhythm Analysis and Extraction of Rhythmic Features
The parametric cosinor method is one of the most widely used approaches to find out rhythmic parameters. However, cosinor analysis cannot find out the fragmentation in the rest-activity rhythm [
], which can be detected by extracting nonparametric rhythmic parameters, such as the IS. Therefore, to obtain a comprehensive picture of rhythms, we will conduct both parametric and nonparametric tests and extract the respective rhythmic features, as described later. In this study, to extract rhythms, instead of focusing only on global models based on all students’ data, we will also develop individual models and extract the rhythmic parameters for each participant. The main reason is that physiological data–based nonparametric [ , ] and parametric [ ] rhythmic parameters vary with the characteristics of people, and thus, the parameters of rhythms based solely on app usage data may also vary by individual participant.
Dominant Period
In cosinor analysis, a cosine curve is fitted over a given period (eg, 24 hours) using the least squares method, where the model minimizes the difference between observed and estimated values. However, cosinor analysis itself cannot estimate the best-fitting period [
]. Thus, we will investigate empirically through periodogram analysis, setting candidate periods from 1 to 24 hours. Later, the cosinor model will be developed for each given period (eg, 13 hours). The best-fitting period, that is, the one for which the proportion of explained variance is the maximum, will be counted as the dominant period.
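The following Python sketch (illustrative only, not the study’s implementation) shows how such a dominant-period scan can be done: fit a single-component cosinor for each candidate period via linear least squares and keep the period with the highest explained variance (R²).
```python
# Sketch of the dominant period scan over candidate periods of 1-24 hours.
import numpy as np

def cosinor_r2(t_hours: np.ndarray, y: np.ndarray, period: float) -> float:
    """R^2 of y ~ M + A*cos(2*pi*t/period + phi), fit as a linear model."""
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t_hours), np.cos(w * t_hours), np.sin(w * t_hours)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1 - ss_res / ss_tot if ss_tot > 0 else 0.0

def dominant_period(t_hours: np.ndarray, y: np.ndarray, candidates=range(1, 25)) -> int:
    return max(candidates, key=lambda p: cosinor_r2(t_hours, y, p))

# Example: a marker sampled every 15 minutes over 7 days with a 24-hour cycle
t = np.arange(0, 7 * 24, 0.25)
y = 5 + 2 * np.cos(2 * np.pi * t / 24 - 1.0) + np.random.normal(0, 0.5, t.size)
print(dominant_period(t, y))  # expected to be 24
```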
Rhythm Detection
Cosinor analysis is a parametric method, and hence, we will first test normality [
]. Later, we will process the nonnormally distributed data through log transformation. Next, to find out whether a statistically significant rhythm exists for a participant, we will conduct a zero-amplitude test [ ] with the significance level set to .05. To find out whether an individual participant’s rhythm differs from the rhythm based on all participants, we will perform population mean cosinor analysis based on all the participants. In addition, we will develop a cosinor model for each participant and then calculate the average of the cosinor rhythmic parameters [ ].
MESOR, Amplitude, and Acrophase
After developing the cosinor model, we will extract the following parametric rhythmic parameters for each participant: midline estimating statistic of rhythm (MESOR), amplitude, and acrophase. MESOR is the rhythm-adjusted mean [
], the amplitude presents the difference between the equilibrium position and the peak point of the rhythm oscillation [ ], and the acrophase presents the timing of the high values recurring in each cycle of the rhythm [ ]. Since the amplitude can represent the strength of the rhythm [ ], we will compare the amplitudes of the diurnal (12-hour period) and circadian (24-hour period) rhythms to determine which rhythm is stronger. Moreover, we will compare the coefficient of determination, as it indicates how well a model fits a given period.
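Packages such as CosinorPy (cited in the references) implement these fits; the minimal NumPy/SciPy sketch below (ours, for illustration under the single-component cosinor assumption) shows how MESOR, amplitude, and acrophase can be extracted and how the zero-amplitude F test can be applied.
```python
# Sketch of a single-component cosinor fit with the zero-amplitude F test.
import numpy as np
from scipy import stats

def cosinor_fit(t_hours: np.ndarray, y: np.ndarray, period: float = 24.0):
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t_hours), np.cos(w * t_hours), np.sin(w * t_hours)])
    (mesor, beta_c, beta_s), *_ = np.linalg.lstsq(X, y, rcond=None)
    amplitude = float(np.hypot(beta_c, beta_s))
    acrophase = float(np.arctan2(-beta_s, beta_c))  # radians
    # Zero-amplitude test: F test of the cosinor model against a flat (MESOR-only) model
    fitted = X @ np.array([mesor, beta_c, beta_s])
    ss_res = float(((y - fitted) ** 2).sum())
    ss_flat = float(((y - y.mean()) ** 2).sum())
    df1, df2 = 2, len(y) - 3
    f_stat = ((ss_flat - ss_res) / df1) / (ss_res / df2)
    p_value = float(stats.f.sf(f_stat, df1, df2))
    return {"mesor": float(mesor), "amplitude": amplitude,
            "acrophase": acrophase, "p_zero_amplitude": p_value}

t = np.arange(0, 7 * 24, 0.25)
y = 3 + 1.5 * np.cos(2 * np.pi * t / 24 - 2.0) + np.random.normal(0, 0.4, t.size)
print(cosinor_fit(t, y, period=24.0))  # p < .05 indicates a significant rhythm
```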
Interdaily Stability and Intradaily Variability
A person’s mental state has a relation with the IS and the IV. For example, patients with bipolar disorder have less IS and higher IV [
]. Similar patterns may be found in the IS and the IV based on app usage data, and we will calculate the IS and the IV based on a previous study [ ] on actimetry. The IV, which quantifies fragmentation between consecutive time frames, is calculated as
IV = [N × Σi=2..N (xi – xi–1)²] / [(N – 1) × Σi=1..N (xi – xm)²]
Here, xi is the behavioral marker’s value at a specific time frame, xm is the mean value of the same behavioral marker in all time frames, and N is the number of time frames.
The IS, which quantifies the day-to-day stability of the 24-hour profile, is calculated as
IS = [N × Σh=1..p (x̄h – xm)²] / [p × Σi=1..N (xi – xm)²]
Here, x̄h is the behavioral marker’s average value in that time frame over days and p is the number of time frames per day. The range of the IV value is 0-2 and that of the IS value is 0-1 [
]. The higher the IS, the greater the stability, as the name implies. However, the higher the IV, the higher the fragmentation in the rhythm. For example, if someone sleeps during the daytime and keeps waking up at nighttime, their IV will be higher [ ].
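For concreteness, a minimal NumPy sketch of these two nonparametric features is given below (our illustration following the standard actimetry definitions; the array shapes and sampling assumptions are hypothetical).
```python
# Sketch of IS and IV for one marker sampled in fixed time frames over days.
import numpy as np

def interdaily_stability(x: np.ndarray, frames_per_day: int) -> float:
    """IS in [0, 1]: higher = more day-to-day stability of the daily profile."""
    n = x.size
    xm = x.mean()
    daily_profile = x.reshape(-1, frames_per_day).mean(axis=0)  # mean per time of day
    num = n * ((daily_profile - xm) ** 2).sum()
    den = frames_per_day * ((x - xm) ** 2).sum()
    return float(num / den)

def intradaily_variability(x: np.ndarray) -> float:
    """IV in [0, 2]: higher = more fragmentation between consecutive frames."""
    n = x.size
    xm = x.mean()
    num = n * (np.diff(x) ** 2).sum()
    den = (n - 1) * ((x - xm) ** 2).sum()
    return float(num / den)

# Example: 7 days of a marker aggregated into 96 frames/day (15-minute frames)
rng = np.random.default_rng(0)
frames = 96
x = np.tile(np.sin(np.linspace(0, 2 * np.pi, frames)), 7) + rng.normal(0, 0.2, 7 * frames)
print(interdaily_stability(x, frames), intradaily_variability(x))
```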
M10, L5, and Relative Amplitude
M10 presents the mean value of the most active consecutive 10 hours and provides information about the diurnal activity. Active persons have a higher M10 [
], and a lower M10 can be associated with exercise reduction [ ] and also with a negative mental state [ , ]. L5 presents the mean value of the least active 5 consecutive hours of the day. It is a measure of nocturnal activity, and a higher L5 may represent activity during the rest cycle [ ]. After calculating M10 and L5, we will calculate the RA = (M10 – L5)/(M10 + L5). As can be understood from this formula, the RA is the normalized difference between M10 and L5: the larger the difference between M10 and L5, the larger the RA. People with psychological problems can have a lower RA [ , ].
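The following sketch (illustrative; the hourly profile and function name are assumptions) computes M10, L5, and RA from a participant’s day-averaged hourly profile using rolling windows that can wrap around midnight.
```python
# Sketch of M10, L5, and RA from a 24-value hourly activity profile.
import numpy as np

def m10_l5_ra(hourly_profile: np.ndarray) -> dict:
    """hourly_profile: 24 values, the marker averaged per hour of day over days."""
    def best_window_mean(x: np.ndarray, width: int, largest: bool) -> float:
        # circular rolling windows so a window can wrap around midnight
        wrapped = np.concatenate([x, x[:width - 1]])
        means = np.array([wrapped[i:i + width].mean() for i in range(x.size)])
        return float(means.max() if largest else means.min())
    m10 = best_window_mean(hourly_profile, 10, largest=True)   # most active 10 h
    l5 = best_window_mean(hourly_profile, 5, largest=False)    # least active 5 h
    ra = (m10 - l5) / (m10 + l5) if (m10 + l5) != 0 else 0.0
    return {"M10": m10, "L5": l5, "RA": ra}

# Example: low activity at night, high activity during the day
profile = np.array([1, 1, 1, 1, 2, 3, 5, 8, 9, 9, 10, 10,
                    10, 9, 9, 8, 8, 7, 6, 5, 4, 3, 2, 1], dtype=float)
print(m10_l5_ra(profile))
```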
Statistical Analysis
To explore whether there is any relation between depressive symptoms and app usage rhythmic features, we will perform binomial logistic regression. As having a variance inflation factor (VIF) of more than 5 can create a biased regression model [
], we will eliminate any variable whose VIF exceeds 5.
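The sketch below illustrates this analysis plan (our illustration; the feature and outcome names are hypothetical, and synthetic data stand in for the real data set): rhythmic features with VIF > 5 are dropped iteratively, and a binomial logistic regression of a symptom label on the remaining features is fitted with statsmodels.
```python
# Sketch of VIF-based feature elimination followed by binomial logistic regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def drop_high_vif(X: pd.DataFrame, threshold: float = 5.0) -> pd.DataFrame:
    X = X.copy()
    while X.shape[1] > 1:
        vifs = pd.Series(
            [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
            index=X.columns,
        )
        if vifs.max() <= threshold:
            break
        X = X.drop(columns=[vifs.idxmax()])  # drop the worst offender and recheck
    return X

# Synthetic stand-in for the real data set (column names are illustrative)
rng = np.random.default_rng(42)
df = pd.DataFrame(rng.normal(size=(500, 4)), columns=["amplitude_24h", "IS", "IV", "RA"])
df["acrophase_24h"] = df["amplitude_24h"] * 0.95 + rng.normal(0, 0.1, 500)  # collinear on purpose
df["symptom_fatigue"] = rng.integers(0, 2, 500)

X = drop_high_vif(df[["amplitude_24h", "acrophase_24h", "IS", "IV", "RA"]])
model = sm.Logit(df["symptom_fatigue"], sm.add_constant(X)).fit(disp=False)
print(model.summary())
```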
ML Model Development
Participant Similarity and Development of Personalized Models
Most of the existing models (eg, [
, , , ]) to predict depression use training data to predict the outcome regardless of the characteristics of the participant whose class will be predicted. These models may have issues regarding generalizability since all individuals have unique characteristics that may not be captured in one-size-fits-all models (ie, 1 set of training data to predict the depression of all test participants). Indeed, through empirical investigation, it has been found that personalized models perform better than one-size-fits-all models [ , ]. The one-size-fits-all or global model is likely to capture the general or “average” characteristics of the participants, due to which the model may perform well only on “average” participants and may not work for participants whose behavior deviates from the “average” [ ]. However, a personalized model is trained dynamically for each participant, which can facilitate finding the most relevant set of features to predict the outcome for the test participant [ ]. Therefore, a personalized model may perform better in predicting depressive symptoms since each depressed student has a unique app usage signature [ ] and depressed students have statistically significantly different smartphone usage behaviors than nondepressed students [ ], as we found in our previous studies [ , ].
To predict depressive symptoms, we will use each participant i’s rhythmic parameter set mi, based on the app usage behavioral markers, to calculate the cosine similarity with all of the other n – 1 participants:
sij = (Bi · Bj) / (||Bi|| × ||Bj||)
Here, Bi and Bj represent the vectors based on the rhythmic parameter sets mi and mj of participants i and j, respectively. The cosine similarity sij will be close to 1 when there is a higher similarity between participants i and j. After finding the similarities between participant i and all other participants, we will use the most similar N participants to train the MTL framework for participant i. The value of N will be decided empirically through a search over the values 200, 250, 300, and 350.
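A minimal sketch of this personalization step follows (ours, for illustration; the matrix shapes and function name are assumptions): cosine similarity between rhythmic-feature vectors is computed for all participants, and the N most similar participants form the personalized training set.
```python
# Sketch of cosine-similarity-based selection of the N most similar participants.
import numpy as np

def most_similar_participants(features: np.ndarray, target_idx: int, n_neighbors: int) -> np.ndarray:
    """features: (n_participants, n_rhythmic_features) matrix.
    Returns indices of the n_neighbors participants most similar to target_idx."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.clip(norms, 1e-12, None)
    sims = unit @ unit[target_idx]  # cosine similarity with everyone
    sims[target_idx] = -np.inf      # exclude the participant themselves
    return np.argsort(sims)[::-1][:n_neighbors]

# Example: 2902 participants, 20 rhythmic features, pick the 300 most similar
rng = np.random.default_rng(1)
B = rng.normal(size=(2902, 20))
train_idx = most_similar_participants(B, target_idx=0, n_neighbors=300)
print(train_idx.shape)  # (300,)
```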
MTL Framework Development and Validation
Since some of the symptom prediction tasks are similar, developing the model through the MTL framework can facilitate improving the performance of the models through information sharing. In addition, if different models are developed for different tasks, the system can be resource inefficient. In a previous study [
], researchers used MTL to develop systems for predicting the symptoms of schizophrenia. However, in that system, all symptom prediction tasks were treated the same. In reality, not all the tasks included in a model help one another improve performance [ ]. Therefore, we will find the similarities among the symptoms and group similar symptom prediction tasks in one model, while other similar symptom prediction tasks will be in another model. To group the tasks, we will calculate the correlation coefficients among the symptoms. Symptoms that are highly (coefficients>0.7), moderately (0.4≤coefficients≤0.7) [ ], and less than moderately correlated or not correlated will be kept in 3 different groups.
While developing the MTL framework, we will use the hard parameter-sharing technique since, in this approach, the model can find a common representation to capture all the tasks, which can reduce the potential risk of overfitting [
]. Combining multiple loss functions leads to promising performance [ ]. In our study, we will use a weighted combination of the hinge and cross-entropy losses. However, we will not use a fixed weight. Instead, we will tune the weight using Bayesian search optimization, which selects the next parameter based on the performance of the previously selected one.
To validate the framework, we will use the nested cross-validation (CV) method since this approach is found to have more generalizable performance than K-fold CV [
]. In the outer loop, we will use the leave-one-out cross-validation (LOOCV) method, and in the inner loop, we will use a 10-fold CV, where in each iteration, 9 folds will be used for tuning the hyperparameters and the remaining 1 fold will be used for validation. We are aware that using the LOOCV will increase the time complexity since there are 2902 participants and we will be developing a personalized model for each of them. However, we chose to use the LOOCV because it resembles the real-world scenario, where the model will predict a single participant’s depressive symptoms at a time. In addition, this process will help in personalizing the model, where we will use only those participants in model training who are similar (in terms of app usage behavior, without using any information about depressive symptoms) to the student for whom the model will predict the symptoms, as discussed in detail in the Participant Similarity and Development of Personalized Models section. During model development, we will maximize the balanced accuracy as it is based on sensitivity and specificity, and having a higher balanced accuracy can lead to higher precision and F1-score values.
To understand the robustness of the proposed MTL framework (a minimal sketch of the framework is provided after the following list), we will compare its performance with that of the following approaches:
- Comparison with STL-based models: We will compare the performance of the proposed personalized MTL framework with that of STL models. This will resemble the approach presented in a previous study [ ], where each symptom was considered a single task. To develop the STL model, we will use ML algorithms, such as the random forest (RF), support vector machine (SVM), decision tree (DT) [ ], and logistic regression [ ], which are widely used in medical informatics, as shown in systematic reviews [ , ].
- Comparison with the nonpersonalized MTL framework: Since we expect that personalization may provide better performance, as discussed in the MTL Framework Development and Validation section, we will compare the performance of the personalized MTL framework with that of a nonpersonalized MTL framework, where we will use n – 1 participants’ data for training instead of using a personalized subset of data.
- Comparison with the MTL framework without grouping tasks: To understand how grouping tasks based on similarity impacts performance, we will compare the performance of the MTL framework with and without grouping tasks.
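The sketch below illustrates the hard parameter-sharing idea and the weighted hinge plus cross-entropy loss described above for one group of correlated symptom tasks. It is our illustration only: the protocol does not specify a model architecture or library, so the PyTorch modules, layer sizes, and variable names are assumptions.
```python
# Sketch of a hard parameter-sharing MTL model for one group of symptoms,
# trained with a weighted combination of hinge and cross-entropy losses.
import torch
import torch.nn as nn

class SymptomGroupMTL(nn.Module):
    def __init__(self, n_features: int, n_tasks: int, hidden: int = 64):
        super().__init__()
        self.shared = nn.Sequential(           # shared representation (hard sharing)
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_tasks)])

    def forward(self, x):
        z = self.shared(x)
        return torch.cat([head(z) for head in self.heads], dim=1)  # logits per task

def weighted_loss(logits, y, alpha: float):
    """y: binary symptom labels (0/1); alpha weights hinge vs cross-entropy."""
    bce = nn.functional.binary_cross_entropy_with_logits(logits, y)
    hinge = torch.clamp(1 - (2 * y - 1) * logits, min=0).mean()
    return alpha * hinge + (1 - alpha) * bce

# Example: 300 similar participants, 20 rhythmic features, 3 grouped symptoms
x = torch.randn(300, 20)
y = torch.randint(0, 2, (300, 3)).float()
model = SymptomGroupMTL(n_features=20, n_tasks=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):  # alpha would be tuned by Bayesian search in the real pipeline
    opt.zero_grad()
    loss = weighted_loss(model(x), y, alpha=0.5)
    loss.backward()
    opt.step()
```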
Results
As mentioned earlier, after applying inclusion criteria, we kept the data of 2902 (98.31%) of 2952 students for analysis, comprising 24.48 million app usage events; 2849 (98.17%) of these students had 7 days of app usage data.
shows the findings regarding participants’ demographic characteristics. The participants were from the 8 divisions of Bangladesh ( a). The largest share of participants (n=887, 30.56%) was from the Dhaka division, which also reflects the fact that the majority of university students of Bangladesh reside in this division. The participants’ age varied from 18 to 39 years, and 2309 (79.57%) participants were aged 20-23 years ( b). Of the 2902 participants, 1107 (38.15%) and 1783 (61.44%) were female and male, respectively ( c). There were 2430 (83.74%) and 472 (16.26%) students from public and private universities, respectively. The participants belonged to 19 universities ( d), including specialized universities in agriculture, engineering, and textiles. They also belonged to 52 different departments across faculties, including arts (eg, Department of Sculpture), business (eg, Department of Management Studies), engineering (eg, Department of Petroleum & Mining Engineering), science (eg, Department of Botany), textiles (eg, Department of Apparel Engineering), public health, and law.
In the remaining part of the study, we will work on rhythm detection, rhythmic feature extraction, and MTL framework development. We expect to publish our findings by June 2024.
Discussion
Significance
By using the data set constructed through a countrywide study on 2902 students having over 24 million app usage events, we will explore whether there is a statistically significant rhythm based on the different app usage behavioral markers. We hypothesize that app usage behavioral markers, such as the relative importance of an app category, have rhythmic patterns with reproducible waveforms because, like physiological data, the markers vary depending on factors such as the time of day [
- ]. In addition, since rhythmic features based on physiological [ , ] and activity [ ] data have potential applications in problems such as determining which participants have a higher risk of disease [ ], determining sedentary behavior [ ], and finding subtle changes in detecting COVID-19 [ ], an in-depth exploration of app usage marker–based rhythms may provide an alternative source of data to understand the rhythms in human life. App usage marker–based rhythms could be used for different purposes. For example, a statistically significant relation between the rhythmicity of app usage and depressive symptoms can create the possibility of using these rhythmic features for an intervention to mitigate depression.
In addition, by predicting depressive symptoms, our study will extend the findings of previous studies since most studies (details in a recent systematic review [
]) in the pervasive health area have developed classification models (eg, to classify depressed and nondepressed individuals) in which the complexity of the psychological problem of depression may be lost. For instance, a participant with a PHQ-9 score of 10 has moderate depression [ ], and a score of 10 can result from different combinations of the subscores of the 9 symptoms in the PHQ-9. As a result, by classifying participants into a few groups based on the overall score on a scale, it is not possible to precisely determine the depressive symptoms that bother a student. However, this is important to know since each depressive symptom (eg, the symptoms in the PHQ-9 [ ]) presents a unique phenomenon (eg, anhedonia, sleep disturbance, suicidal ideation) [ ]. Therefore, depending on our proposed personalized MTL framework’s performance based on real-time data, the proposed app can contribute to early diagnosis of depressive symptoms and a precise understanding of a depressed student, which, in turn, may contribute to mitigating depression prevalence.
Our previous pilot studies in Bangladesh on the relation of app usage with depression [
, ] and loneliness [ ], and on classifying students as depressed or nondepressed [ ] and as with or without loneliness [ ], showed promising models based solely on resource-insensitive [ ] app usage behavioral markers. Incorporating app usage rhythmic features, as well as an MTL framework that leverages the similarities among the symptom prediction tasks so that tasks do not hurt one another’s performance, may help researchers and developers build more robust models to predict the symptoms of psychological problems solely through app usage data. In addition, our app’s reliance on data retrieved from a smartphone within 1 second [ ] may also make it feasible in LMICs since smartphones are more affordable [ ] than the wearables [ ] that are usually used to obtain physiological data and extract rhythmic features.
Strengths
The median sample size of previous studies that classified or predicted depression was 58, and none of the studies that developed computational models for prediction tasks had a sample size of over 500, as shown in a recent systematic review [
]. However, we constructed a large data set comprising 2902 students. In addition, the participants of our study are from all 8 divisions of Bangladesh, from both public and private universities, and from 52 different departments. To the best of our knowledge, this is the largest data set containing data on both app usage and depressive symptoms. Considering these facts, our findings based on the proposed methods may be generalizable, may be robust enough to be impactful in the real world, and may contribute significantly to advancing knowledge in the mobile and pervasive health research areas.
To the best of our knowledge, this will also be the first study to explore in depth the rhythms based on different app usage behavioral markers, which can create an opportunity to find an alternative source of data for understanding the rhythms of daily life without depending on physiological data–based rhythms, which are usually retrieved by costly wearables.
In our recent work based on app usage [
], our developed app also had higher performance in predicting depression than existing systems based on app usage. Through feature analysis (for details, please see Ref [ ]), we found that our newly explored behavioral markers (eg, ratio of the Hamming distance [ ]) were more important than the features used in previous studies. That being said, performance varies depending on the behavioral markers used in a model. Hence, the novel behavioral markers (eg, relative importance of app categories, cousage of apps) presented in this protocol, which were not explored in previous pervasive health research, are significant. In addition, to predict depressive symptoms, we will develop a personalized MTL framework. Although an MTL framework has been developed in some previous studies (eg, [ ]) to predict a person’s mental state, our study will add to this knowledge by showing the performance of a personalized MTL framework.
Limitations
Following previous studies [
, , ], to analyze rhythms, we have included 2902 participants in this study who had app usage data of at least 2 days, most of whom (n=2849, 98.17%) had app usage data of 7 days. Data of more than 7 days would help us better understand the stability of the rhythms, rhythm disruption over weeks, and its potential effect on the appearance of depressive symptoms. However, we believe our study can work as a precursor for future studies to further explore app usage rhythms.
Although our study includes over 2900 students from different divisions of Bangladesh, our proposed app may not be generalizable to every student since behavior varies depending on many factors, such as season, region [
], and socioeconomic status [ ]. We recommend that future studies include participants based on more factors that can impact behavior. Moreover, although we have included participants from all 8 divisions of Bangladesh, we could not include participants from all 64 districts of the 8 divisions. In addition, including more participants from rural areas could provide a more reliable picture of the students’ behavior, which, in turn, can be useful for developing a better app.
Conclusion
Predicting depressive symptoms accurately could help in better diagnosis of depression and in taking appropriate steps accordingly. However, existing models regarding symptom prediction are limited by various issues, including low performance (eg, specificity is around or below 60% for most symptoms). Our proposed approach to explore rhythmic features from app usage behavioral markers and the development and validation of the MTL framework through our constructed large-scale data set may provide new insights into rhythms and higher performance in predicting depressive symptoms.
Acknowledgments
This research has been funded by North South University (NSU; ID: NonCTRG-22-26) and the Institute of Advanced Research (IAR; ID: UIU-IAR-01-2022-SE-21) of the United International University (UIU). In addition, the NSU and the IAR (ID: UIU-IAR-01-2022-SE-21) supported publication in JMIR Research Protocols. We are grateful to the faculties of universities located in different regions of Bangladesh, without whose cordial support it would not have been possible to conduct this countrywide study. We also thank the participants for their time and willingness to provide data.
Data Availability
Since app usage data are sensitive, making the data publicly available can raise different data privacy and safety concerns (eg, reidentification of the participants [
]). Thus, we do not plan to upload the data to any public data repositories. However, the data can be accessed by sending a reasonable request to the corresponding author.
Conflicts of Interest
None declared.
References
- Suicide prevention. World Health Organization. URL: https://www.who.int/health-topics/suicide [accessed 2023-03-18]
- Dong M, Zeng L, Lu L, Li X, Ungvari GS, Ng CH, et al. Prevalence of suicide attempt in individuals with major depressive disorder: a meta-analysis of observational surveys. Psychol Med. Jul 2019;49(10):1691-1704. [CrossRef] [Medline]
- Cai H, Xie X, Zhang Q, Cui X, Lin J, Sim K, et al. Prevalence of suicidality in major depressive disorder: a systematic review and meta-analysis of comparative studies. Front Psychiatry. 2021;12:690130. [FREE Full text] [CrossRef] [Medline]
- The global burden of disease. World Health Organization. 2008. URL: https://apps.who.int/iris/bitstream/handle/10665/43942/9789241563710_eng.pdf [accessed 2023-03-18]
- COVID-19 Mental Disorders Collaborators. Global prevalence and burden of depressive and anxiety disorders in 204 countries and territories in 2020 due to the COVID-19 pandemic. Lancet. Nov 06, 2021;398(10312):1700-1712. [FREE Full text] [CrossRef] [Medline]
- Bangladesh WHO special initiative for mental health situational assessment. World Health Organization. 2019. URL: https://www.who.int/docs/default-source/mental-health/special-initiative/who-special-initiative-country-report---bangladesh---2020.pdf?sfvrsn=c2122a0e_2 [accessed 2023-03-18]
- Hosen I, Al-Mamun F, Mamun MA. Prevalence and risk factors of the symptoms of depression, anxiety, and stress during the COVID-19 pandemic in Bangladesh: a systematic review and meta-analysis. Glob Ment Health (Camb). 2021;8:e47. [FREE Full text] [CrossRef] [Medline]
- Islam MA, Barna SD, Raihan H, Khan MNA, Hossain MT. Depression and anxiety among university students during the COVID-19 pandemic in Bangladesh: a web-based cross-sectional survey. PLoS One. 2020;15(8):e0238162. [FREE Full text] [CrossRef] [Medline]
- Fusar-Poli P, McGorry PD, Kane JM. Improving outcomes of first-episode psychosis: an overview. World Psychiatry. Oct 2017;16(3):251-265. [FREE Full text] [CrossRef] [Medline]
- Depression. National Institute of Mental Health. 2023. URL: https://www.nimh.nih.gov/health/topics/depression [accessed 2023-04-14]
- Depressive disorder (depression). World Health Organization. 2023. URL: https://www.who.int/news-room/fact-sheets/detail/depression [accessed 2023-04-14]
- Davidsen AS, Fosgerau CF. What is depression? Psychiatrists' and GPs' experiences of diagnosis and the diagnostic process. Int J Qual Stud Health Well-being. Nov 06, 2014;9(1):24866. [FREE Full text] [CrossRef] [Medline]
- Coyne JC, Schwenk TL, Fechner-Bates S. Nondetection of depression by primary care physicians reconsidered. Gen Hosp Psychiatry. Jan 1995;17(1):3-12. [CrossRef] [Medline]
- Cepoiu M, McCusker J, Cole MG, Sewitch M, Ciampi A. Recognition of depression in older medical inpatients. J Gen Intern Med. May 2007;22(5):559-564. [FREE Full text] [CrossRef] [Medline]
- Kroenke K, Spitzer RL, Williams JB. The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med. Sep 2001;16(9):606-613. [FREE Full text] [CrossRef] [Medline]
- Depression and other common mental disorders: global health estimates. World Health Organization. 2017. URL: https://apps.who.int/iris/bitstream/handle/10665/254610/WHO-MSD-MER-2017.2-eng.pdf [accessed 2023-03-18]
- BBS. 2022. URL: http://www.bbs.gov.bd/site/page/47856ad0-7e1c-4aab-bd78-892733bc06eb/Population-&-Housing [accessed 2024-03-02]
- Chapter – five: madrasah education. BANBEIS. 2021. URL: https://tinyurl.com/4sa7k2jt [accessed 2023-03-18]
- Delaporte A, Bahia K. Mobile for development: the state of mobile internet connectivity report 2022. GSMA. 2022. URL: https://www.gsma.com/r/somic/ [accessed 2023-04-15]
- Owoyemi A, Owoyemi J, Osiyemi A, Boyd A. Artificial intelligence for healthcare in Africa. Front Digit Health. 2020;2:6. [FREE Full text] [CrossRef] [Medline]
- De Angel V, Lewis S, White K, Oetzmann C, Leightley D, Oprea E, et al. Digital health tools for the passive monitoring of depression: a systematic review of methods. NPJ Digit Med. Jan 11, 2022;5(1):3. [FREE Full text] [CrossRef] [Medline]
- Sekara V, Alessandretti L, Mones E, Jonsson H. Temporal and cultural limits of privacy in smartphone app usage. Sci Rep. Feb 16, 2021;11(1):3861. [FREE Full text] [CrossRef] [Medline]
- Tu Z, Cao H, Lagerspetz E, Fan Y, Flores H, Tarkoma S, et al. Demographics of mobile app usage: long-term analysis of mobile app usage. CCF Trans Pervasive Comp Interact. Apr 21, 2021;3(3):235-252. [CrossRef]
- Quante M, Mariani S, Weng J, Marinac CR, Kaplan ER, Rueschman M, et al. Zeitgebers and their association with rest-activity patterns. Chronobiol Int. Feb 2019;36(2):203-213. [FREE Full text] [CrossRef] [Medline]
- Ryu S, Oh H. Duration and content type of smartphone use in relation to diet and adiposity in 53,133 adolescents. Curr Dev Nutrition. Jun 2021;5:1088. [CrossRef]
- Murnane E, Abdullah S, Matthews M, Kay M, Kientz J, Choudhury T, et al. Mobile manifestations of alertness: connecting biological rhythms with patterns of smartphone app use. 2016. Presented at: MobileHCI '16: 18th International Conference on Human-Computer Interaction with Mobile Devices and Services; September 6-9, 2016; Florence, Italy. [CrossRef]
- Abdullah S, Matthews M, Murnane E, Gay G, Choudhury T. Towards circadian computing: “early to bed and early to rise” makes some of us unhealthy and sleep deprived. 2014. Presented at: UbiComp '14: 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing; September 13-17, 2014; Seattle, WA. [CrossRef]
- Mehrotra A, Müller SR, Harari GM, Gosling SD, Mascolo C, Musolesi M, et al. Understanding the role of places and activities on mobile phone interaction and usage patterns. Proc ACM Interact Mob Wearable Ubiquitous Technol. Sep 11, 2017;1(3):1-22. [CrossRef]
- Gordon M, Gatys L, Guestrin C, Bigham J, Trister A, Patel K. App usage predicts cognitive ability in older adults. 2019. Presented at: CHI '19: CHI Conference on Human Factors in Computing Systems; May 4-9, 2019; Glasgow, Scotland. [CrossRef]
- Böhmer M, Hecht B, Schöning J, Krüger A, Bauer G. Falling asleep with Angry Birds, Facebook and Kindle: a large scale study on mobile application usage. 2011. Presented at: MobileHCI '11: 13th International Conference on Human Computer Interaction with Mobile Devices and Services; August 30-September 2, 2011; Stockholm, Sweden. [CrossRef]
- Morrison A, Xiong X, Higgs M, Bell M, Chalmers M. A large-scale study of iPhone app launch behaviour. 2018. Presented at: CHI '18: CHI Conference on Human Factors in Computing Systems; April 21-26, 2018; Montreal, QC. [CrossRef]
- Yan R, Liu X, Dutcher J, Tumminia M, Villalba D, Cohen S, et al. A computational framework for modeling biobehavioral rhythms from mobile and wearable data streams. ACM Trans Intell Syst Technol. Mar 03, 2022;13(3):1-27. [CrossRef]
- Ahmed MS, Ahmed N. A fast and minimal system to identify depression using smartphones: explainable machine learning–based approach. JMIR Form Res. Aug 10, 2023;7:e28848. [FREE Full text] [CrossRef] [Medline]
- Ahmed M, Ahmed N. Exploring unique app signature of the depressed and non-depressed through their fingerprints on apps. In: Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering. Cham. Springer International Publishing; 2022;218-239.
- Ahmed M, Rony R, Hasan T, Ahmed N. Smartphone usage behavior between depressed and non-depressed students: an exploratory study in the context of Bangladesh. 2020. Presented at: UbiComp/ISWC '20 Adjunct: 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and 2020 ACM International Symposium on Wearable Computers; September 12-17, 2020; Mexico. [CrossRef]
- Cornelissen G. Cosinor-based rhythmometry. Theor Biol Med Model. Apr 11, 2014;11:16. [FREE Full text] [CrossRef] [Medline]
- Moškon M. CosinorPy: a Python package for cosinor-based rhythmometry. BMC Bioinformatics. Oct 29, 2020;21(1):485. [FREE Full text] [CrossRef] [Medline]
- Bingham C, Arbogast B, Guillaume GC, Lee JK, Halberg F. Inferential statistical methods for estimating and comparing cosinor parameters. Chronobiologia. 1982;9(4):397-439. [Medline]
- Miyagi R, Sasawaki Y, Shiotani H. The influence of short-term sedentary behavior on circadian rhythm of heart rate and heart rate variability. Chronobiol Int. Mar 03, 2019;36(3):374-380. [CrossRef] [Medline]
- Doryab A, Dey AK, Kao G, Low C. Modeling biobehavioral rhythms with passive sensing in the wild: a case study to predict readmission risk after pancreatic surgery. Proc ACM Interact Mob Wearable Ubiquitous Technol. Mar 29, 2019;3(1):1-21. [CrossRef]
- Elias J. Why wearables are out of reach for people who need them most. Forbes. 2015. URL: https://www.forbes.com/sites/jenniferelias/2015/10/27/the-leftovers-part-i-the-cost-of-activity/?sh=a0e5ffc63c9b [accessed 2023-07-28]
- Lohchab H. Affordable smartphone sales shoot up as demand grows due to e-learning needs. Economic Times. 2020. URL: https://economictimes.indiatimes.com/tech/hardware/affordable-smartphone-sales-shoot-up-as-demand-grows/articleshow/78241563.cms [accessed 2023-07-28]
- Abd-Alrazaq A, AlSaad R, Aziz S, Ahmed A, Denecke K, Househ M, et al. Wearable artificial intelligence for anxiety and depression: scoping review. J Med Internet Res. Jan 19, 2023;25:e42672. [FREE Full text] [CrossRef] [Medline]
- Xu X, Chikersal P, Dutcher JM, Sefidgar YS, Seo W, Tumminia MJ, et al. Leveraging collaborative-filtering for personalized behavior modeling: a case study of depression detection among college students. Proc ACM Interact Mob Wearable Ubiquitous Technol. Mar 30, 2021;5(1):1-27. [CrossRef]
- Xu X, Chikersal P, Doryab A, Villalba DK, Dutcher JM, Tumminia MJ, et al. Leveraging routine behavior and contextually-filtered features for depression detection among college students. Proc ACM Interact Mob Wearable Ubiquitous Technol. Sep 09, 2019;3(3):1-33. [CrossRef]
- Chikersal P, Doryab A, Tumminia M, Villalba DK, Dutcher JM, Liu X, et al. Detecting depression and predicting its onset using longitudinal symptoms captured by passive sensing: A machine learning approach with robust feature selection. ACM Trans Comput-Hum Interact. Jan 20, 2021;28(1):1-41. [CrossRef]
- Dogrucu A, Perucic A, Isaro A, Ball D, Toto E, Rundensteiner EA, et al. Moodable: on feasibility of instantaneous depression assessment using machine learning on voice samples with retrospectively harvested smartphone and social media data. Smart Health. Jul 2020;17:100118. [CrossRef]
- Yue C, Ware S, Morillo R, Lu J, Shang C, Bi J, et al. Automatic depression prediction using internet traffic characteristics on smartphones. Smart Health (Amst). Nov 2020;18:100137. [FREE Full text] [CrossRef] [Medline]
- Ware S, Yue C, Morillo R, Lu J, Shang C, Kamath J, et al. Large-scale automatic depression screening using meta-data from WiFi infrastructure. Proc ACM Interact Mob Wearable Ubiquitous Technol. Dec 27, 2018;2(4):1-27. [CrossRef]
- Yue C, Ware S, Morillo R, Lu J, Shang C, Bi J, et al. Fusing location data for depression prediction. IEEE Trans Big Data. Jun 2021;7(2):355-370. [FREE Full text] [CrossRef] [Medline]
- Canzian L, Musolesi M. Trajectories of depression: unobtrusive monitoring of depressive states by means of smartphone mobility traces analysis. 2015. Presented at: UbiComp '15: 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing; September 9-11, 2015;1293-1304; Osaka, Japan. [CrossRef]
- Farhan A, Yue C, Morillo R, Ware S, Lu J, Bi J, et al. Behavior vs. introspection: refining prediction of clinical depression via smartphone sensing data. 2016. Presented at: 2016 IEEE Wireless Health (WH); October 25-27, 2016;1-8; Bethesda, MD. [CrossRef]
- Lu J, Shang C, Yue C, Morillo R, Ware S, Kamath J, et al. Joint modeling of heterogeneous sensing data for depression assessment via multi-task learning. Proc ACM Interact Mob Wearable Ubiquitous Technol. Mar 26, 2018;2(1):1-21. [CrossRef]
- Mullick T, Radovic A, Shaaban S, Doryab A. Predicting depression in adolescents using mobile and wearable sensors: multimodal machine learning-based exploratory study. JMIR Form Res. Jun 24, 2022;6(6):e35807. [FREE Full text] [CrossRef] [Medline]
- Pedrelli P, Fedor S, Ghandeharioun A, Howe E, Ionescu DF, Bhathena D, et al. Monitoring changes in depression severity using wearable and mobile sensors. Front Psychiatry. 2020;11:584711. [FREE Full text] [CrossRef] [Medline]
- Ghandeharioun A, Fedor S, Sangermano L, Ionescu D, Alpert J, Dale C, et al. Objective assessment of depressive symptoms with machine learning and wearable sensors data. 2017. Presented at: 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII); October 23-26, 2017;325; San Antonio, TX. [CrossRef]
- Saha K, Grover T, Mattingly SM, Swain VD, Gupta P, Martinez GJ, et al. Person-centered predictions of psychological constructs with social media contextualized by multimodal sensing. Proc ACM Interact Mob Wearable Ubiquitous Technol. Mar 30, 2021;5(1):1-32. [CrossRef]
- Cao J, Truong AL, Banu S, Shah AA, Sabharwal A, Moukaddam N. Tracking and predicting depressive symptoms of adolescents using smartphone-based self-reports, parental evaluations, and passive phone sensor data: development and usability study. JMIR Ment Health. Jan 24, 2020;7(1):e14045. [FREE Full text] [CrossRef] [Medline]
- Doryab A. Critical thinking of mobile sensing for health. Crossroads. Sep 21, 2021;28(1):34-37. [CrossRef]
- Rykov Y, Thach T, Bojic I, Christopoulos G, Car J. Digital biomarkers for depression screening with wearable devices: cross-sectional study with machine learning modeling. JMIR Mhealth Uhealth. Oct 25, 2021;9(10):e24872. [FREE Full text] [CrossRef] [Medline]
- Briganti G, Scutari M, Linkowski P. Network structures of symptoms from the Zung Depression Scale. Psychol Rep. Aug 2021;124(4):1897-1911. [CrossRef] [Medline]
- Cheung T, Jin Y, Lam S, Su Z, Hall BJ, Xiang Y, et al. International Research Collaboration on COVID-19. Network analysis of depressive symptoms in Hong Kong residents during the COVID-19 pandemic. Transl Psychiatry. Sep 06, 2021;11(1):460. [FREE Full text] [CrossRef] [Medline]
- Baryshnikov I, Aledavood T, Rosenström T, Heikkilä R, Darst R, Riihimäki K, et al. Relationship between daily rated depression symptom severity and the retrospective self-report on PHQ-9: a prospective ecological momentary assessment study on 80 psychiatric outpatients. J Affect Disord. Mar 01, 2023;324:170-174. [FREE Full text] [CrossRef] [Medline]
- Wang R, Wang W, daSilva A, Huckins JF, Kelley WM, Heatherton TF, et al. Tracking depression dynamics in college students using mobile phone and wearable sensing. Proc ACM Interact Mob Wearable Ubiquitous Technol. Mar 26, 2018;2(1):1-26. [CrossRef]
- Tseng VW, Sano A, Ben-Zeev D, Brian R, Campbell AT, Hauser M, et al. Using behavioral rhythms and multi-task learning to predict fine-grained symptoms of schizophrenia. Sci Rep. Sep 15, 2020;10(1):15100. [FREE Full text] [CrossRef] [Medline]
- Ware S, Knouse L, Draz I, Enikeeva A. Predicting ADHD symptoms using smartphone sensing data. 2022. Presented at: UbiComp/ISWC ’22 Adjunct: 2022 ACM International Joint Conference on Pervasive and Ubiquitous Computing; September 11-15, 2022; Cambridge, UK. URL: https://ubicomp-mental-health.github.io/papers/2022/Ware-ADHD.pdf [CrossRef]
- Ware S, Yue C, Morillo R, Lu J, Shang C, Bi J, et al. Predicting depressive symptoms using smartphone data. Smart Health. Mar 2020;15:100093. [CrossRef]
- Standley T, Zamir A, Chen D, Guibas L, Malik J, Savarese S. Which tasks should be learned together in multi-task learning? 2020. Presented at: ICML'20: 37th International Conference on Machine Learning; July 13-18, 2020;9120-9132; Vienna, Austria. URL: http://proceedings.mlr.press/v119/standley20a/standley20a.pdf
- Ahmed M. Mon Majhi. Google Play. 2022. URL: https://play.google.com/store/apps/details/?id=net.mn.u [accessed 2023-03-18]
- Ahmed MS, Rony RJ, Hadi MA, Hossain E, Ahmed N. A minimalistic approach to predict and understand the relation of app usage with students’ academic performances. Proc ACM Hum-Comput Interact. Sep 13, 2023;7(MHCI):1-28. [CrossRef]
- Ahmed M, Jahangir RR, Ahmed N. Identifying high and low academic result holders through smartphone usage data. 2021. Presented at: Asian CHI '21: Asian CHI Symposium 2021; May 8-13, 2021; Yokohama, Japan. [CrossRef]
- Ahmed M, Rony R, Ashhab M, Ahmed N. An empirical study to analyze the impact of Instagram on students’ academic results. 2020. Presented at: 2020 IEEE Region 10 Symposium (TENSYMP); June 5-7, 2020;666-669; Dhaka, Bangladesh. [CrossRef]
- Ahmed M, Hasan T, Rahman M, Ahmed N. A rule mining and Bayesian network analysis to explore the link between depression and digital behavioral markers of games app usage. In: Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering. Cham. Springer Nature Switzerland; 2023;557-569.
- Ahmed M, Ahmed N. Less is more: leveraging digital behavioral markers for real-time identification of loneliness in resource-limited settings. In: Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering. Cham. Springer Nature Switzerland; 2023;460-476.
- Ahmed S, Khan SS, Ahmed N. Is there any relation between smartphone usage and loneliness during the COVID-19 pandemic?: a study by exploring two objective app usage datasets. EAI Endorsed Trans Perv Health Tech. Aug 02, 2023;9:1-15. [CrossRef]
- Sample size calculator: understanding sample sizes. SurveyMonkey. URL: https://www.surveymonkey.com/mp/sample-size-calculator/ [accessed 2023-07-06]
- Sample size calculation when the sampling method is simple random sampling. Uidaho. URL: https://www.webpages.uidaho.edu/ed571/OFFLINE-571-Modules/Offline_Modules/M3-S/Sample_Size_Calculations-Formulas.pdf [accessed 2023-12-22]
- Hossain MJ, Ahmmed F, Khandokar L, Rahman SMA, Hridoy A, Ripa FA, et al. Status of psychological health of students following the extended university closure in Bangladesh: results from a web-based cross-sectional study. PLOS Glob Public Health. 2022;2(3):e0000315. [FREE Full text] [CrossRef] [Medline]
- Tu Z, Li R, Li Y, Wang G, Wu D, Hui P, et al. Your apps give you away: distinguishing mobile users by their app usage fingerprints. Proc ACM Interact Mob Wearable Ubiquitous Technol. Sep 18, 2018;2(3):1-23. [CrossRef]
- Malmi E, Weber I. You are what apps you use: demographic prediction based on user’s apps. ICWSM. Aug 04, 2021;10(1):635-638. [CrossRef]
- Dijk D, Duffy JF. Novel approaches for assessing circadian rhythmicity in humans: a review. J Biol Rhythms. Oct 2020;35(5):421-438. [FREE Full text] [CrossRef] [Medline]
- Thomas KA, Burr RL. Circadian research in mothers and infants: how many days of actigraphy data are needed to fit cosinor parameters? J Nurs Meas. 2008;16(3):201-206. [FREE Full text] [CrossRef] [Medline]
- Chowdhury A, Ghosh S, Sanyal D. Bengali adaptation of Brief Patient Health Questionnaire for screening depression at primary care. J Indian Med Assoc. Oct 2004;102(10):544-547. [CrossRef]
- Chiny M, Chihab M, Bencharef O, Chihab Y. Netflix recommendation system based on TF-IDF and cosine similarity algorithms. 2021. Presented at: BML’21: 2nd International Conference on Big Data, Modelling and Machine Learning; June 5-6, 2021; Kenitra, Morocco. [CrossRef]
- Sarsenbayeva Z, Marini G, van Berkel N, Luo C, Jiang W, Yang K, et al. Does smartphone use drive our emotions or vice versa? A causal analysis. 2020. Presented at: CHI '20: CHI Conference on Human Factors in Computing Systems; April 25-30, 2020; Honolulu, HI. [CrossRef]
- Gonçalves BSB, Adamowicz T, Louzada FM, Moreno CR, Araujo JF. A fresh look at the use of nonparametric analysis in actimetry. Sleep Med Rev. Apr 2015;20:84-91. [CrossRef] [Medline]
- Anderson ST, FitzGerald GA. Sexual dimorphism in body clocks. Science. Sep 04, 2020;369(6508):1164-1165. [CrossRef] [Medline]
- Duarte LL, Menna-Barreto L. Chronotypes and circadian rhythms in university students. Biol Rhythm Res. Mar 23, 2021;53(7):1058-1072. [CrossRef]
- Mitchell JA, Quante M, Godbole S, James P, Hipp JA, Marinac CR, et al. Variation in actigraphy-estimated rest-activity patterns by demographic factors. Chronobiol Int. 2017;34(8):1042-1056. [FREE Full text] [CrossRef] [Medline]
- Mutak A. cosinor2 vignette. R-project. 2018. URL: https://cran.r-project.org/web/packages/cosinor2/vignettes/cosinor2.html [accessed 2023-07-21]
- Mishra P, Pandey CM, Singh U, Gupta A, Sahu C, Keshri A. Descriptive statistics and normality tests for statistical data. Ann Card Anaesth. 2019;22(1):67-72. [FREE Full text] [CrossRef] [Medline]
- Khan A, Tian XL. Circadian amplitude. In: Encyclopedia of Gerontology and Population Aging. Cham. Springer International Publishing; 2021;1003-1012.
- Masubuchi S, Hashimoto S, Endo T, Honma S, Honma K. Amplitude reduction of plasma melatonin rhythm in association with an internal desynchronization in a subject with non-24-hour sleep-wake syndrome. Psychiatry Clin Neurosci. Apr 12, 1999;53(2):249-251. [FREE Full text] [CrossRef] [Medline]
- Jones SH, Hare DJ, Evershed K. Actigraphic assessment of circadian activity and sleep patterns in bipolar disorder. Bipolar Disord. Apr 2005;7(2):176-186. [CrossRef] [Medline]
- Esaki Y, Obayashi K, Saeki K, Fujita K, Iwata N, Kitajima T. Association between circadian activity rhythms and mood episode relapse in bipolar disorder: a 12-month prospective cohort study. Transl Psychiatry. Oct 13, 2021;11(1):525. [FREE Full text] [CrossRef] [Medline]
- Mayeli A, LaGoy AD, Smagula SF, Wilson JD, Zarbo C, Rocchetti M, DiAPAson Consortium, et al. Shared and distinct abnormalities in sleep-wake patterns and their relationship with the negative symptoms of schizophrenia spectrum disorder patients. Mol Psychiatry. May 14, 2023;28(5):2049-2057. [CrossRef] [Medline]
- James G, Witten D, Hastie T, Tibshirani R. An Introduction to Statistical Learning: With Applications in R. New York, NY. Springer; 2021.
- Ng K, Sun J, Hu J, Wang F. Personalized predictive modeling and risk factor identification using patient similarity. AMIA Jt Summits Transl Sci Proc. 2015;2015:132-136. [FREE Full text] [Medline]
- Lee J, Maslove DM, Dubin JA. Personalized mortality prediction driven by electronic medical data and a patient similarity metric. PLoS One. 2015;10(5):e0127428. [FREE Full text] [CrossRef] [Medline]
- Sharafoddini A, Dubin JA, Lee J. Patient similarity in prediction models based on health data: a scoping review. JMIR Med Inform. Mar 03, 2017;5(1):e7. [FREE Full text] [CrossRef] [Medline]
- Durrheim K, Tredoux C. Numbers, Hypotheses & Conclusions: A Course in Statistics for the Social Sciences. 6th ed. Cape Town. Juta and Company; 2004.
- Ruder S. An overview of multi-task learning in deep neural networks. arXiv. Preprint posted online June 15, 2017. [FREE Full text] [CrossRef]
- Mao Y, Wang Z, Liu W, Lin X, Xie P. MetaWeighting: learning to weight tasks in multi-task learning. In: Muresan S, Nakov P, Villavicencio A, editors. Findings of the Association for Computational Linguistics: ACL 2022. Stroudsburg, PA. Association for Computational Linguistics; 2022;3436-3448.
- Vabalas A, Gowen E, Poliakoff E, Casson AJ. Machine learning algorithm validation with a limited sample size. PLoS One. Nov 7, 2019;14(11):e0224365. [FREE Full text] [CrossRef] [Medline]
- Andaur Navarro CL, Damen JAA, Takada T, Nijman SWJ, Dhiman P, Ma J, et al. Risk of bias in studies on prediction models developed using supervised machine learning techniques: systematic review. BMJ. Oct 20, 2021;375:n2281. [FREE Full text] [CrossRef] [Medline]
- Bernert RA, Hilberg AM, Melia R, Kim JP, Shah NH, Abnousi F. Artificial intelligence and suicide prevention: a systematic review of machine learning investigations. Int J Environ Res Public Health. Aug 15, 2020;17(16):5929. [FREE Full text] [CrossRef] [Medline]
- Sarwar A, Agu EO, Almadani A, Sarwar A. CovidRhythm: a deep learning model for passive prediction of COVID-19 using biobehavioral rhythms derived from wearable physiological data. IEEE Open J Eng Med Biol. 2023;4:21-30. [CrossRef]
- Xu Y, Su S, Li X, Mansuri A, McCall WV, Wang X. Blunted rest-activity circadian rhythm increases the risk of all-cause, cardiovascular disease and cancer mortality in US adults. Sci Rep. Nov 30, 2022;12(1):20665. [FREE Full text] [CrossRef] [Medline]
- Yun J, Yun YH. Health-promoting behavior to enhance perceived meaning and control of life in chronic disease patients with role limitations and depressive symptoms: a network approach. Sci Rep. Mar 24, 2023;13(1):4848. [FREE Full text] [CrossRef] [Medline]
Abbreviations
AI: artificial intelligence
CV: cross-validation
IS: interdaily stability
IV: intradaily variability
LMIC: low- and middle-income country
LOOCV: leave-one-out cross-validation
MDD: major depressive disorder
MESOR: midline estimating statistic of rhythm
ML: machine learning
MTL: multitask learning
PHQ-9: 9-item Patient Health Questionnaire
PMI: pointwise mutual information
RA: relative amplitude
STL: single-task learning
TF-IDF: term frequency–inverse document frequency
VIF: variance inflation factor |
Edited by A Mavragani; submitted 06.08.23; peer-reviewed by J Zhu, X Li; comments to author 03.11.23; revised version received 27.12.23; accepted 11.01.24; published 24.04.24.
Copyright©Md Sabbir Ahmed, Tanvir Hasan, Salekul Islam, Nova Ahmed. Originally published in JMIR Research Protocols (https://www.researchprotocols.org), 24.04.2024.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Research Protocols, is properly cited. The complete bibliographic information, a link to the original publication on https://www.researchprotocols.org, as well as this copyright and license information must be included.