Published in Vol 11, No 6 (2022): June

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/38407.
Neural Activity During Audiovisual Speech Processing: Protocol for a Functional Neuroimaging Study

Protocol

1Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland

2Hearing Research Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland

Corresponding Author:

Stefan Weder, PD, MD

Department of Otorhinolaryngology, Head and Neck Surgery

Inselspital, Bern University Hospital, University of Bern

Freiburgstrasse 18

Bern, 3010

Switzerland

Phone: 41 31 632 33 47

Email: stefan.weder@insel.ch


Related Article: This is a corrected version. See the correction statement at: https://www.researchprotocols.org/2022/6/e40527

Background: Functional near-infrared spectroscopy (fNIRS) studies have demonstrated associations between hearing outcomes after cochlear implantation and plastic brain changes. However, inconsistent results make it difficult to draw conclusions. A major problem is that many variables need to be controlled. To gain further understanding, careful preparation and planning of such a functional neuroimaging task are key.

Objective: Using fNIRS, our main objective is to develop a well-controlled audiovisual speech comprehension task to study brain activation in individuals with normal hearing and hearing impairment (including cochlear implant users). The task should be deducible from clinically established tests, induce maximal cortical activation, use optimal coverage of relevant brain regions, and be reproducible by other research groups.

Methods: The protocol will consist of a 5-minute resting state and 2 stimulation periods that are 12 minutes each. During the stimulation periods, 13-second video recordings of the clinically established Oldenburg Sentence Test (OLSA) will be presented. Stimuli will be presented in 4 different modalities: (1) speech in quiet, (2) speech in noise, (3) visual only (ie, lipreading), and (4) audiovisual speech. Each stimulus type will be repeated 10 times in a counterbalanced block design. Interactive question windows will monitor speech comprehension during the task. After the measurement, we will perform a 3D scan to digitize optode positions and verify the covered anatomical locations.

Results: This paper reports the study protocol. Enrollment for the study started in August 2021. We expect to publish our first results by the end of 2022.

Conclusions: The proposed audiovisual speech comprehension task will help elucidate the neural correlates of speech understanding. The comprehensive study has the potential to provide information beyond conventional clinical standards about the underlying plastic brain changes of a person with hearing impairment. It will facilitate more precise indication criteria for cochlear implantation and better planning of rehabilitation.

International Registered Report Identifier (IRRID): DERR1-10.2196/38407

JMIR Res Protoc 2022;11(6):e38407

doi:10.2196/38407




Background

Disabling hearing loss is a major communication and health problem that affects over 6% of the overall population and over 50% of adults above the age of 65. For adults, deafness leads to social isolation, unemployment, and reliance on social services. This problem will increase with demographic change. It is estimated that by 2050, 10% of the global population will be living with disabling hearing loss [1]. In patients with severe to profound hearing loss, a cochlear implant (CI) offers an effective treatment [2]. A CI is a neuroprosthetic device that electrically stimulates the auditory nerve in response to acoustic stimulation. CIs enable deaf patients to regain speech understanding [3,4], improve sound localization [5], and increase their quality of life [6]. However, hearing outcomes after implantation surgery vary widely in both prelingually and postlingually deafened patients. About 20%-30% of postlingually deafened patients who receive a CI do not gain the expected benefit from the implant. To date, over 75% of the variance in CI outcomes remains unexplained [7-9]. Consequently, it is not possible to predict preoperatively how well a CI candidate will perform with the implant. Therefore, there is an urgent need to better understand this variability and find ways to improve outcomes for people with poor language comprehension.

In the absence of auditory input, sensory deprivation induces a reallocation of cortical areas (so-called brain plasticity). This leads to functional reorganization within the auditory and auditory-related cortex, with new functions being assigned to these brain regions [10]. For example, visual takeover (also referred to as cross-modal reorganization) of the impaired auditory brain areas has been demonstrated: visual information, for instance during a lipreading task, can be partially processed in formerly auditory-associated brain areas [11-14]. A CI can counteract these hearing loss–induced plastic changes, and the success of rehabilitation depends on them. It has been shown that different hearing outcomes after implantation correlate with these reorganization processes [3,15-17].

We use functional imaging to study the plastic brain changes described above. However, in CI recipients, there are important considerations. Despite the efforts of CI manufacturers to allow structural magnetic resonance imaging (MRI) with a surgically implanted device, the technique has limitations. The outer speech processor cannot be worn during MRI scanning and thus cannot be used to assess evoked auditory responses with functional MRI. Furthermore, the implanted magnet and electrode array of the CI cause imaging artifacts in MRI and stimulation artifacts in electroencephalography (EEG) [18-20].

Functional near-infrared spectroscopy (fNIRS), on the other hand, is ideal for this patient population [21]. The technique uses near-infrared light to measure the blood oxygen saturation of the cerebral cortex. This allows indirect conclusions to be drawn about neuronal activation. Other advantages of fNIRS are that the measurements are not affected by electrical pulses, do not interfere with the CI, are quiet (which is important in auditory tasks), are noninvasive, are suitable for all ages, and enable the evaluation of responses to spoken words and whole sentences.

Previous fNIRS studies with implanted adults showed evidence of cortical reorganization. However, when comparing study findings, there are contradictory results. For example, some studies suggest that strong activation of the auditory cortex during lipreading tasks is a negative predictor of speech understanding with the implant [22,23]. Other publications describe an opposite effect or no effect [24,25].

According to a recent review on fNIRS measurements in CI patients, at the current stage, it is difficult to draw a general conclusion about the potential positive or negative effects of cortical reorganization. Instead, methodological aspects must first be clarified [26]. The effect of cross-modal plasticity may be more complex than suggested in previous studies. One problem with measuring functional brain activation is that many variables need to be controlled. For example, it makes a remarkable difference how patients are selected (pre- or postlingually deafened) [24], whether a study participant is actively engaged in the experiment (otherwise mind wandering might occur) [27], how the stimuli are presented, and whether the task performance is monitored [28]. Poorly controlled variables during an fNIRS experiment can lead to misinterpretations and mistakes in data analysis.

The aim of our study protocol is to develop a well-controlled and reproducible fNIRS task to evaluate brain activation in response to speech comprehension in individuals with normal hearing, those with hearing impairments, and CI users. Our hypothesis is that through such a task, we can identify cortical networks that are clearly correlated to hearing performance with the implant. Identified brain activation patterns may later be used preoperatively as biomarkers of speech understanding with the implant.

Objectives

Using fNIRS, our main objective is to develop an audiovisual speech comprehension task to measure functional brain activity related to speech understanding. The task should comply with the following criteria: it should (1) be deducible from clinically established hearing tests; (2) induce maximal cortical activation (and thus allow reproducible recognition of activation patterns); (3) align with the international 10-10 system of electrode placement, using optimally spaced optode positions with maximal coverage of the relevant brain regions and short-separation channels for noise reduction; (4) be time-efficient (to avoid fatigue due to experiment duration); (5) be suitable for listeners with normal hearing, those with hearing impairment, and CI users; and (6) be reproducible by other research groups.

We will correlate the fNIRS recordings with (1) data from patients’ history, (2) clinically validated questionnaires, and (3) performance during the fNIRS measurements (eg, speech comprehension during the fNIRS task).


Methods

Study Design

This research project is a prospective single-center study and will be conducted at the Department of Otolaryngology, Head and Neck Surgery at the Bern University Hospital, Inselspital, Bern, Switzerland.

Ethics Approval

The protocol was designed in accordance with the ethical principles of the Declaration of Helsinki. The study setup was approved by the local ethics committee (reference number 2020-02978) and fulfills all patient data regulations of Switzerland.

Participants and Eligibility Criteria

All study participants must (1) be at least 18 years old, (2) be native German speakers, and (3) preferably have light, thin hair (to ensure good optode-scalp coupling) [29,30]. Participants with a severe cardiac, psychiatric, or neurological disease (eg, epilepsy) or brain injury will be excluded from the study (refer to Multimedia Appendix 1 for details). CI users must be bilaterally and postlingually deafened, with an unaided pure-tone average (PTA) hearing threshold of at least 80 dB hearing level (HL).

The ear through which the acoustic stimulation will be presented needs to have been implanted for at least 1 year. This ensures that hearing rehabilitation after implantation is completed.

Participants will be allocated to one of 3 groups: (1) normal hearing “control” cohort, (2) CI users with good speech understanding (“overperformer”), or (3) CI users with poor speech understanding (“underperformer”). CI users with moderate speech perception (ie, between 40% and 70% aided monosyllabic word recognition score) will not be recruited because we want to investigate the functional mechanisms specifically for good and poor outcomes. Table 1 provides an overview of the categorization criteria for each subgroup.

Table 1. Overview of categorization according to participants’ hearing performance.a

Criterion                      | Normal hearing | CIb “overperformer” | CI “underperformer”
Unaided PTAc hearing threshold | ≤20 dB HLd     | ≥80 dB HL           | ≥80 dB HL
Word recognition score         | 100%           | ≥70%                | ≤40%

aWord recognition score will be measured using Freiburg monosyllabic test lists at a 65 dB sound pressure level.

bCI: cochlear implant.

cPTA: pure-tone average.

dHL: hearing level.
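For illustration, the grouping rule in Table 1 can be written as a small helper function. This is a sketch: the function name and return labels are ours, and only the thresholds are taken from the table.

```python
def classify_participant(pta_db_hl, word_score_pct):
    """Assign a study group per Table 1 (sketch; labels are illustrative).

    pta_db_hl: unaided pure-tone average threshold (dB HL)
    word_score_pct: Freiburg monosyllabic word recognition score (%)
    """
    if pta_db_hl <= 20 and word_score_pct == 100:
        return "normal hearing"
    if pta_db_hl >= 80 and word_score_pct >= 70:
        return "CI overperformer"
    if pta_db_hl >= 80 and word_score_pct <= 40:
        return "CI underperformer"
    return "not recruited"  # eg, moderate speech perception (40%-70%)
```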

Sample Size

Pilot measurements were performed on 10 participants to estimate an appropriate sample size. We compared the median relative change in the concentration of oxygenated hemoglobin in the auditory cortex. After acoustic stimulation (speech in quiet), an increase of 1.315 (SD 1.275) µMolar*cm was measured, while during the resting state, the value fluctuated close to 0. A power analysis for a 2-sided hypothesis test at the 5% significance level with 80% power showed that we need at least 15 participants with normal hearing to detect auditory activations. In addition, we considered previous findings from auditory fNIRS studies [26,28,31,32]. We compared the size of their study cohorts, the fNIRS systems used, the optode arrangements used, and the reliability of their results. To allow for a possibly larger variation, we propose including 60 individuals in this study (20 listeners with normal hearing, 20 CI overperformers, and 20 CI underperformers).
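The protocol does not state the exact test behind this estimate. The following is a minimal sketch, assuming a one-sample, 2-sided t test of the pilot HbO increase against zero; with different assumptions (eg, a paired comparison against the resting state or corrections for multiple channels), the required n will differ.

```python
from statsmodels.stats.power import TTestPower

pilot_mean = 1.315                   # mean HbO increase (µMolar*cm), speech in quiet
pilot_sd = 1.275                     # SD across the 10 pilot participants
effect_size = pilot_mean / pilot_sd  # Cohen's d against a zero baseline

# Solve for the number of participants at alpha = .05 and 80% power.
n = TTestPower().solve_power(effect_size=effect_size, alpha=0.05,
                             power=0.80, alternative="two-sided")
print(f"required sample size: {n:.1f}")
```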

Recruitment

Recruitment will be done through the CI center of our department. Potential study candidates will be screened based on their medical records and will be subsequently informed verbally or in writing about the study procedure. Candidates who are willing to participate and able to complete all tests required for the study will be asked to sign an informed consent form.

Study Procedure

Table 2 shows the time schedule for participants. The enrollment and the data collection sessions are described in more detail in the subsequent subsections.

Table 2. Overview of the study procedure.

Item                         | Enrollment session | Data collection session
Information sheet            | ✓                  |
Medical history              | ✓                  |
Questionnaires               | ✓                  |
Hearing tests                | ✓                  |
fNIRSa recording             |                    | ✓
Behavioral assessment        |                    | ✓
Optode position registration |                    | ✓
Total duration               | 30 min             | 90-120 min

afNIRS: functional near-infrared spectroscopy.

Enrollment Session

Potential study candidates will be invited to an enrollment session. First, we will hand out the information sheet and answer any questions the candidates may have. To assess full eligibility, the candidates will have to fill in questionnaires and perform additional hearing tests before data collection. Bilateral CI users will be asked to turn off and remove the audio processor of the worse hearing ear to limit acoustic stimulation exclusively to the better ear. The worse hearing ear will be covered using an ear plug. The full enrollment session will take a maximum of 30 minutes.

Questionnaires

Questions on medical history will target the candidates’ handedness (Edinburgh Handedness Inventory) and the presence of diseases that are among the exclusion criteria [33-35]. Additional questions on health status will inquire about influences that could alter the brain activity of interest, such as the use of stimulants [36]. CI users will receive questions about the duration of their hearing loss and, if they have tinnitus, about its laterality and its objective or subjective character [37]. The Hearing Ability Questionnaires will investigate lipreading experience and hearing-associated factors, including the Speech, Spatial, and Qualities (SSQ-12) questions [38]. The question sheet covers the subjective assessment of hearing ability over the last 6 months.

Hearing Tests

The audiometric measurements and the fNIRS recordings will take place in an acoustic chamber (6 m × 4 m × 2 m) with a separate ventilation system and electromagnetic shielding. The broadband reverberation time is ~200 ms.

In normal hearing participants, we will assess pure tone air-conduction hearing thresholds with a clinical audiometer (GSI 61, Grason-Stadler). The findings must confirm that participants have no hidden or undetected hearing loss (Table 1). For CI users, audiograms are available from clinical routine measurements.

In all participants, we will measure the word recognition score for Freiburg monosyllabic word lists at a sound pressure level (SPL) of 65 dB [39]. Additionally, we will perform the widely used Oldenburg Sentence Test (OLSA) [40-42]. The sentences will be played with 65 dB SPL background noise, using an adaptive version of the female OLSA test [43-45]. The OLSA sentences will also be used as a stimulus during the fNIRS measurement. Speech material will be presented from a loudspeaker (Control 1 Pro) placed in front of the participants at a distance of 1 m.

Data Collection Session
Experimental Setup

During fNIRS recording, each study participant will sit in a comfortable chair with an armrest, headrest, and lumbar support (Figure 1). A desk with the electrical equipment will be placed in front of the participant. Visual stimuli will be presented on a computer screen (P2210, Dell) placed on the table at a distance of 120 cm in front of the participant. The acoustic stimuli will be played through a loudspeaker (8040B, Genelec) placed above the monitor at a distance of 130 cm from the ears. The loudspeaker will receive input from an external ASIO sound card (Scarlett 2i2, Focusrite) connected to the control laptop (XPS 13, Dell) via USB. The system will be calibrated to 65 dB SPL with the OLSA calibration noise and an acoustic analyzer (XL2, NTi Audio).

The stimulation protocol will be controlled by a custom-written script (Python 3.8.8) using the Tkinter and python-vlc libraries. The script will send triggers via the serial interface to a trigger box (MMBT-S Interface Box, NEUROSPEC AG), which converts the signals to transistor-transistor logic (TTL) levels. The TTL-encoded signals will then be received by the fNIRS machine (FOIRE-3000, Shimadzu).
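A minimal sketch of this trigger flow is shown below, assuming the MMBT-S mirrors a written byte on its TTL outputs; the port name, trigger code, and video file are placeholders, not the authors' values.

```python
import time

import serial  # pyserial
import vlc     # python-vlc

TRIGGER_STIM_ONSET = 0x01  # hypothetical trigger code

# Placeholder serial port; the trigger box enumerates as a serial device.
ser = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)

def send_trigger(code):
    """Pulse the TTL lines: write the code byte, then reset to zero."""
    ser.write(bytes([code]))
    time.sleep(0.01)
    ser.write(bytes([0x00]))

player = vlc.MediaPlayer("olsa_sentence.mp4")  # placeholder stimulus video
send_trigger(TRIGGER_STIM_ONSET)  # marks stimulus onset in the fNIRS raw data
player.play()
```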

Participants will interact with the control laptop using the buttons of a mouse (WM527, Dell). The pointing function of the mouse will be disabled to ensure that participants control the experiment only by clicking and scrolling. During the fNIRS measurement, the participants will be able to press an alarm button (Switchbox, Delock) positioned within reach on the table.

Figure 1. Experimental setup during functional near-infrared spectroscopy (fNIRS) recording. The participant will receive the stimulation via the computer screen (1) and the loudspeaker (2). The loudspeaker will be connected to the control laptop (3) via an external soundcard (4). The fNIRS cap (5) will be fitted on the participant's head, and the subject will interact using a response mouse (6). The alarm button (7) will be positioned in front of the subject.
Optode Placement

We will select the regions of interest (ROIs) for the placement of the optodes based on previous studies. We expect responses related to audiovisual speech comprehension in the auditory and visual cortex, more specifically in the following ROIs: superior temporal gyrus (STG), primary visual cortex (V1), and visual association cortex (V2) [28,31,46-50]. Additionally, under similar conditions, the left inferior frontal gyrus (LIFG) has been associated with effortful listening [27,51], and elevated cortical responses have been reported in the middle temporal gyrus (MTG) and middle frontal gyrus (MFG) [52]. Based on the defined ROIs, we will determine the optimal selection of EEG coordinates using the fNIRS Optodes' Location Decider (fOLD) toolbox [53]. We will further consider the position of the audio processor and receiver coil in CI participants to avoid interference with the optodes.

The montage will consist of 16 sources and 16 detectors placed on the surface of the skull according to the international 10-10 system of electrode placement (Figure 2A) [54]. The source-detector pairs will result in a total of 43 channels in a multidistance setup: 3 of them are short-separation channels with a 15-mm interoptode distance, 4 are extra-long channels with a distance of 36-37 mm, and 36 are normal-length channels approximately 30 mm apart. In a multidistance approach, the shorter channels (15 mm) provide information about interfering systemic signals in the superficial, extracerebral layers, and the longer channels (36+ mm) about brain activation in deeper regions [55,56]. In practice, however, the signal-to-noise ratio may be poor at long distances, so in many cases we will not be able to use those channels. The Monte Carlo sensitivity simulation of all source-detector pairs is shown in Figure 2B and indicates a uniform sensitivity profile across the temporal, visual, and prefrontal cortical regions [57]. The sampling rate will be set to 14 Hz.

The optode holder cap will be assembled using the manufacturer's components (Holder kit, Shimadzu) and custom 3D printed parts (colored optode markers and stabilizers for different head sizes). The parts will be designed in a solid modelling software (SolidWorks 2019, Dassault Systemes) and printed using a 3D printer (Prusa i3 MK3S+, Prusa Research).

At the end of the experiment, we will digitize the position of all optodes with a depth sensing camera (Structure Sensor Pro, Occipital Inc) connected to an iPad (iPad Pro 2020, Apple Inc). The depth sensing camera will be set up for optimized scanning of dark objects with low ambient infrared light. The infrared exposure time, gain, and depth resolution will be set to the highest available settings so that the colored optode markers can be easily identified on the 3D scan.

Figure 2. Functional near-infrared spectroscopy (fNIRS) montage. (A) Optode arrangement on the head. Sixteen sources (red circles) and 16 detectors (blue and cyan circles) will be placed on the scalp, forming a total of 43 channels. Three of the detectors (cyan circles) will form short-separation channels. (B) Sensitivity map of the optode arrangement.
Functional Near-Infrared Spectroscopy

During fNIRS recordings, we will instruct the participants to concentrate on the screen, follow the instructions, and minimize head movements. If the participants feel uncomfortable, they can stop the experiment with the alarm button in front of them. We will give all instructions both verbally and in writing. Before the recordings, the participants will complete a short familiarization session with 4 example stimulations. Once the participant confirms that the task is understood, we will start the definitive recording. The functional recordings will begin with a 5-minute resting state period (Figure 3A). We will instruct the participant to sit still, close their eyes, and relax but try not to fall asleep. Then, 2 stimulation sessions of approximately 12 minutes each will follow. Between the 3 sessions (ie, the resting state and the 2 stimulation sessions), the participants can take breaks of self-chosen duration.

Figure 3. Functional near-infrared spectroscopy (fNIRS) measurement overview. (A) Following the resting state measurement, 2 × 5 counterbalanced blocks will be presented, with breaks in between. (B) A single block consists of (1) speech in quiet (audio only), (2) speech in noise (audio only), (3) speech in quiet (video only), and (4) speech in quiet (audio and video) stimulation, plus an additional question.
Stimuli

As stimulus material, we will use the video version of the female OLSA test [40,58]. A single stimulus will consist of one sentence (eg, “Nina gives twelve red flowers”), which will be repeated 3 times. The duration of one stimulus will be 13 seconds, comparable to the time course of the hemodynamic response [59].

A single stimulation block will contain 4 different stimuli, presented in one of the following modalities in a counterbalanced order (Figure 3B): (1) speech in quiet (audio only), (2) speech in noise (audio only), (3) speech in quiet (video only, ie, lipreading), and (4) speech in quiet (audio and video). Each stimulation will be followed by a nonstimulus interval of 20-25 seconds, during which a white fixation point will be presented on a black screen. During the audio-only conditions, the same black screen will be displayed, so the participant will have no cue other than the audio as to whether the stimulation has started.
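The protocol specifies a counterbalanced order without fixing the scheme; the following is a minimal sketch using a cyclic Latin square, one standard choice, not necessarily the authors' exact method.

```python
MODALITIES = ["speech in quiet", "speech in noise", "visual only", "audiovisual"]

def latin_square(conditions):
    """Cyclic Latin square: across any 4 consecutive blocks, each modality
    appears exactly once at each within-block position."""
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)] for row in range(n)]

square = latin_square(MODALITIES)
blocks = [square[i % len(square)] for i in range(10)]  # 10 blocks, as in the protocol
for i, block in enumerate(blocks, start=1):
    print(f"block {i}: {block}")
```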

At random points, participants will be asked to answer questions to ensure attention and to monitor speech comprehension during the test. The questions will be displayed in the nonstimulus epoch, for which the nonstimulus interval will be shortened to 10 seconds. Each question will ask the participant to identify the correct name or number from the last sentence among 4 possible answers. For example, if the sentence is “Nina gives twelve red flowers,” the question is either “How many red flowers?” or “Who gives twelve red flowers?” To answer the question, the participant will have to select 1 of 4 choices: 2 randomly selected numbers or names from the OLSA sentence matrix (wrong answers), an option for when the respondent is not sure of the answer (skipped answer), and the correct answer. For the previous question, a possible combination could be (1) “Britta,” (2) “Nina,” (3) “Peter,” and (4) “I cannot decide.” The participant will select an option with the scroll wheel of the computer mouse and confirm the answer with a double click. In the previous example, the participant must select the second option (“Nina”). The questions and the answers will be randomly generated, and the position of the question within the blocks will also be randomly chosen.
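A sketch of this 4-choice generation is shown below; the name pool is an illustrative excerpt standing in for the OLSA matrix, and the function name is ours.

```python
import random

NAME_POOL = ["Britta", "Nina", "Peter", "Kerstin", "Tanja"]  # illustrative excerpt

def build_choices(correct, pool, rng):
    """Two random distractors plus the correct answer, shuffled,
    followed by the fixed skip option."""
    distractors = rng.sample([x for x in pool if x != correct], k=2)
    choices = distractors + [correct]
    rng.shuffle(choices)
    return choices + ["I cannot decide"]

rng = random.Random(1)
print(build_choices("Nina", NAME_POOL, rng))
# eg, ['Peter', 'Nina', 'Britta', 'I cannot decide']
```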

The shortened nonstimulus interval of 10 seconds prior to the question window will allow us to evaluate the fNIRS responses. Therefore, the interleaved questions will not harm the overall effectiveness of the measurement. After the question is answered, the regular 20-25 second relaxation time will be applied to ensure that the brain responses return to baseline. Overall, 2 questions per modality will be asked, resulting in 8 questions throughout the entire fNIRS measurement.

Following the breaks, before the first stimulation, there will be a stimulus-free interval of 20 seconds. This will ensure homogeneity of responses, meaning that all stimuli are perceived under similar circumstances. Overall, 10 blocks will be presented, resulting in 10 responses per stimulation modality, and the total fNIRS measurement time will be around 45 minutes. At the beginning of every event (start/stop of a block, resting state, stimulation, question, answer), a trigger will be sent from the control computer to the fNIRS machine through the trigger-box and stored as an extra channel in the fNIRS raw data.

Listening Effort

After every 5 stimulation blocks, we will ask each participant to rate their listening effort for the different stimuli, their fatigue, and their level of mind wandering (Figure 3A) [60-64]. To evaluate listening effort, we will use Adaptive Categorical Listening Effort Scaling (ACALES) [65].

Data Management

All written source documents will be completed in a neat, legible manner to ensure accurate interpretation of the data. For each participant, a case report form (CRF) will be maintained, including the participant number. In CRFs and other project-specific documents, participants are identified only by a unique participant number. fNIRS measurements will be stored in a closed research environment (REDCap, Vanderbilt University, Nashville, United States). This secure web application runs on a local server maintained and backed up by the University of Bern. All documents related to the study, including the CRFs, will be considered source data and will be stored at the measurement site in accordance with the relevant standards.

Data Analysis
fNIRS Preprocessing

Data preprocessing will be performed in MATLAB (MathWorks) using the Homer2 (v2.3) [66] and NIRS [67] toolboxes. The signal quality will be checked based on the heart rate content of the signal, using a sliding window approach [68-71]. Channels and time points with insufficient signal quality will be removed. Short-channel correction will be applied to the absorbance data using short-separation regression [56,72]. Motion artifacts will be removed with the WaveletFilter module of the NIRS toolbox [67]. The signal will be bandpass filtered between 0.01 and 0.12 Hz with the BandpassFilter function from the Homer toolbox [66]. Then, the absorbance data will be converted to concentration changes of oxygenated hemoglobin (HbO) and deoxygenated hemoglobin (HbR) in mMolar*cm using the following equations, as specified by the manufacturer based on the modified Beer-Lambert law [73]:

ΔHbO = −1.4887 × Abs[780 nm] + 0.5970 × Abs[805 nm] + 1.4878 × Abs[830 nm]
ΔHbR = 1.8545 × Abs[780 nm] − 0.2394 × Abs[805 nm] − 1.0947 × Abs[830 nm]
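For reference, this conversion can be implemented directly from the coefficients above; this is a sketch, and the array and function names are ours.

```python
import numpy as np

# Rows: ΔHbO and ΔHbR; columns: Abs[780 nm], Abs[805 nm], Abs[830 nm].
COEFF = np.array([
    [-1.4887,  0.5970,  1.4878],
    [ 1.8545, -0.2394, -1.0947],
])

def absorbance_to_hb(abs_780, abs_805, abs_830):
    """Return (ΔHbO, ΔHbR) in mMolar*cm from absorbance-change time series."""
    d_hbo, d_hbr = COEFF @ np.vstack([abs_780, abs_805, abs_830])
    return d_hbo, d_hbr
```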

A further correction step will be performed to reduce noise based on the principle that the concentration changes of oxygenated and deoxygenated hemoglobin should be negatively correlated [74].
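The cited principle [74] is the correlation-based signal improvement (CBSI) method; a minimal per-channel sketch, assuming zero-mean HbO/HbR traces, follows.

```python
import numpy as np

def cbsi(hbo, hbr):
    """Correlation-based signal improvement (Cui et al [74]) for one channel:
    keep the anticorrelated component and discard the shared noise."""
    alpha = np.std(hbo) / np.std(hbr)     # amplitude ratio between the signals
    hbo_corr = 0.5 * (hbo - alpha * hbr)  # anticorrelated (neural) component
    hbr_corr = -hbo_corr / alpha          # enforce HbR = -HbO / alpha
    return hbo_corr, hbr_corr
```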

Optode Positions

We will perform the postprocessing of the scans with a 3D mesh processing tool (MeshLab) [75] and custom-written scripts (MATLAB, MathWorks).

We will manually select the coordinates of the optodes and anatomical landmarks on the obtained 3D scans with MeshLab. The list of coordinates will then be exported and projected into Montreal Neurological Institute (MNI) space. The MNI coordinates will be displayed on the preoperative MRI scan of every CI-user participant, and the exact source of the measured hemodynamic activation will be determined. Additionally, the mean and standard deviation of the optode coordinates will be calculated and reported as a quality measure for the optode fittings [55,76].

fNIRS Recordings

Data analysis will be performed in Python using the MNE toolbox [77] and the MNE-NIRS package [78]. Individual epochs will be extracted from the channel data, from t=0 seconds to t=24 seconds relative to stimulus onset. The epochs will be baseline-corrected by subtracting the mean of the signal between t=−5 seconds and t=0 seconds. Using the Glover canonical hemodynamic response function [79], a design matrix for the general linear model (GLM) will be constructed [80,81]. After GLM fitting, the regression results will be stored. Following this, temporal and spatial features will be extracted from each epoch (amplitude, area under the curve, peak latency, laterality, and power). The regression results and the extracted features will be weight-averaged over ROIs by taking the inverse of the standard error of the GLM fit for each channel [67]. The data will be averaged over the participants, and group-level statistics will be calculated using correlation analysis and linear mixed-effects models.
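A sketch of the GLM step using the public MNE-NIRS API is shown below; the file name is a placeholder, and `raw` is assumed to already contain hemoglobin-concentration channels with stimulus annotations.

```python
import mne
from mne_nirs.experimental_design import make_first_level_design_matrix
from mne_nirs.statistics import run_glm

# Placeholder file; preprocessing to HbO/HbR is assumed to have happened.
raw = mne.io.read_raw_snirf("participant01.snirf", preload=True)

# Glover HRF and 13-second stimulus duration, matching the protocol.
design_matrix = make_first_level_design_matrix(
    raw, hrf_model="glover", stim_dur=13.0, drift_model="cosine")
glm_est = run_glm(raw, design_matrix)  # per-channel regression results
print(glm_est.to_dataframe().head())   # betas, standard errors, etc.
```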

Behavioral Data

The answers from the questionnaires will be digitized, and correlation analysis will be performed to reveal relations between the measured brain activation patterns and the evaluated questionnaires. Additionally, further behavioral data will be obtained from the triggers, such as reaction time to questions across the measurement as a measure of fatigue or response accuracy for each stimulation type as a measure of speech understanding.


Results

Enrollment for the study described in this protocol started in August 2021. The first results are expected at the end of 2022.


Discussion

The postoperative adaptive or maladaptive effect of existing cross-modal reorganization in CI candidates is a complex question, and the available studies show contradictory findings. A recent review states that it is important to discuss the methodological aspects of such functional neuroimaging examinations [22-26,46,47,50]. One problem with measuring functional brain activation is that many variables must be considered. To better control these variables, we hereby present an audiovisual speech comprehension task that fulfills the 6 points outlined below.

First, the test should be deducible from clinically established hearing tests. We use the video version of a widely used clinical test (the Oldenburg Sentence Test) [58]. Functional brain activation patterns can therefore be correlated with clinical findings. Such results are easier to interpret than those obtained with custom-made speech materials [23,46,47,50]. Our stimulation design consists of complete sentences, which reflect everyday listening and real language comprehension much better than nonspeech auditory stimuli or speech snippets [13,23,25,28,46,47]. Before conducting the fNIRS experiment, we will repeat clinical speech comprehension tests (ie, the Freiburg monosyllabic test and the Oldenburg Sentence Test). This enables a clear grouping of the CI participants into good and poor performers. During the fNIRS experiment, we will continue to assess speech comprehension in 4 different situations (ie, speech in quiet, speech in noise, visual speech, and audiovisual speech) with interleaved comprehension questions. This allows us to maintain attention and monitor speech comprehension while measuring brain activity. So far, only one research group has applied this approach [22,24]. To assess listening effort during the fNIRS task, we will use a validated scaling method (ie, ACALES) [65]. Listening effort in CI users is an active topic of discussion [82]; its possible influence on the measured cortical activation has, to the best of our knowledge, never been reported before. To describe their subjective hearing perception in daily life, participants will complete validated questionnaires (ie, the SSQ-12) on the day of the test [38]. We will conduct our tests in a validated audio chamber (as used in clinically performed hearing tests).

Second, the task should induce maximal cortical activation (and thus allow reproducible recognition of activation patterns). We use an optimized counterbalanced block design. The duration of 1 stimulus will be 13 seconds, and the interstimulus break will be between 20 and 25 seconds, comparable to the time course of hemodynamic responses [49,59]. Our task requires the active participation of the participants. Previous studies have shown that this can significantly increase brain activation [63,64]. Furthermore, we assess mind wandering and fatigue with validated questionnaires [60-62]. To our knowledge, this has never been reported in the context of fNIRS measurements in persons with hearing impairment. To avoid fatigue (which can lead to decreased brain activation), we keep the fNIRS task as short as possible. Additionally, participants can take 2 breaks of self-selected duration.

Third, the montage should align with the international 10-10 system of electrode placement, using optimally spaced optode positions with maximal coverage of the relevant brain regions, and short-separation channels should allow noise reduction. Our optode placement covers the following brain regions: superior temporal gyrus (STG), primary visual cortex (V1), visual association cortex (V2), left inferior frontal gyrus (LIFG), middle temporal gyrus (MTG), and middle frontal gyrus (MFG). This allows us to study not only audiovisual activations but also speech perception in noise, the effects of fatigue, and activity related to higher-order cortical processing. Many other studies have not been able to cover such a wide range of cortical regions, mostly due to hardware limitations [22-25,50]. We use the Edinburgh Handedness Inventory to control for handedness, which might affect the laterality of brain activation [33-35]. We also perform a spatial registration of optode positions to increase reproducibility. Furthermore, these measured positions can be projected into MNI space and displayed on MRI images. In the diagnostic workup, MRI scans are routinely performed prior to CI surgery. This method will allow a more accurate localization of hemodynamic responses than atlas-based approaches [55,76].

Additionally, we use a multidistance channel setup. The optodes of the regular channels are ~30 mm apart. Additional short channels with a 15-mm interoptode distance over the auditory and visual cortex provide extracerebral information to remove confounding systemic signals. Such a systemic physiology–controlled fNIRS approach is recommended, although it has rarely been applied in previous studies [55].

Fourth, it should be time efficient to avoid fatigue due to experiment duration. The longest task the participants will be required to complete will last 12 minutes, and the total measurement time will be around 45 minutes. Regular breaks will be provided, and the total duration of the experiment is expected to be around 120-150 minutes.

Fifth, it should be suitable for participants with normal hearing, hearing impairments, and those using CIs. The audio material is presented through a loudspeaker, so the task is suitable for people with normal hearing as well as for hearing aid and CI users. Alternatively, insert earphones or direct CI audio input would be feasible. However, these 2 approaches have the disadvantage that the 3 aforementioned groups cannot be stimulated identically. Our optode placement was chosen to allow for easy attachment of the implant coil.

Sixth, it should be reproducible by other research groups. The audiovisual version of the OLSA was published in 2021 and is now accessible [58]. Moreover, we are happy to share our setup upon request.

In summary, the proposed audiovisual speech comprehension task will help us understand the neural correlates of speech understanding. In the first stage, we will perform these measurements postoperatively to better understand the corresponding neuronal networks with an activated implant. In a subsequent stage, we will perform the measurements pre- and postoperatively to make prognostic calculations. The comprehensive study will have the potential to provide prognostic information beyond the conventional clinical standards regarding the underlying plastic brain changes of a person with hearing impairment. Our study will facilitate more precise indication criteria for cochlear implantation and better planning of rehabilitation.

Acknowledgments

The study is funded by the Wonderland Foundation, the Gottfried und Julia Bangerter-Rhyner Foundation, and the UniBern Research Foundation.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Neurological, cardiac, psychiatric, or other major diseases used as exclusion criteria.

PDF File (Adobe PDF File), 50 KB

  1. World Health Organization. Addressing the rising prevalence of hearing loss. Geneva, Switzerland: World Health Organization; 2018.   URL: https://www.who.int/
  2. Bond M, Mealing S, Anderson R, Elston J, Weiner G, Taylor R, et al. The effectiveness and cost-effectiveness of cochlear implants for severe to profound deafness in children and adults: a systematic review and economic model. Health Technol Assess 2009 Sep;13(44):1-330 [FREE Full text] [CrossRef] [Medline]
  3. Strelnikov K, Marx M, Lagleyre S, Fraysse B, Deguine O, Barone P. PET-imaging of brain plasticity after cochlear implantation. Hear Res 2015 Apr;322:180-187. [CrossRef] [Medline]
  4. Wimmer W, Weder S, Caversaccio M, Kompis M. Speech intelligibility in noise with a pinna effect imitating cochlear implant processor. Otol Neurotol 2016 Jan;37(1):19-23. [CrossRef] [Medline]
  5. Fischer T, Schmid C, Kompis M, Mantokoudis G, Caversaccio M, Wimmer W. Pinna-imitating microphone directionality improves sound localization and discrimination in bilateral cochlear implant users. Ear Hear 2021;42(1):214-222 [FREE Full text] [CrossRef] [Medline]
  6. McRackan TR, Bauschard M, Hatch JL, Franko-Tobin E, Droghini HR, Nguyen SA, et al. Meta-analysis of quality-of-life improvement after cochlear implantation and associations with speech recognition abilities. Laryngoscope 2018 Apr 21;128(4):982-990 [FREE Full text] [CrossRef] [Medline]
  7. Geers AE, Nicholas J, Tobey E, Davidson L. Persistent language delay versus late language emergence in children with early cochlear implantation. J Speech Lang Hear Res 2016 Feb;59(1):155-170 [FREE Full text] [CrossRef] [Medline]
  8. Blamey P, Artieres F, Başkent D, Bergeron F, Beynon A, Burke E, et al. Factors affecting auditory performance of postlinguistically deaf adults using cochlear implants: an update with 2251 patients. Audiol Neurootol 2013;18(1):36-47. [CrossRef] [Medline]
  9. Lazard DS, Vincent C, Venail F, Van de Heyning P, Truy E, Sterkers O, et al. Pre-, per- and postoperative factors affecting performance of postlinguistically deaf adults using cochlear implants: a new conceptual model over time. PLoS One 2012 Nov 9;7(11):e48739 [FREE Full text] [CrossRef] [Medline]
  10. Bavelier D, Neville HJ. Cross-modal plasticity: where and how? Nat Rev Neurosci 2002 Jun;3(6):443-452. [CrossRef] [Medline]
  11. Lazard DS, Lee H, Truy E, Giraud A. Bilateral reorganization of posterior temporal cortices in post-lingual deafness and its relation to cochlear implant outcome. Hum Brain Mapp 2013 May 30;34(5):1208-1219 [FREE Full text] [CrossRef] [Medline]
  12. Karns CM, Dow MW, Neville HJ. Altered Cross-Modal Processing in the Primary Auditory Cortex of Congenitally Deaf Adults: A Visual-Somatosensory fMRI Study with a Double-Flash Illusion. J Neurosci 2012 Jul 11;32(28):9626-9638. [CrossRef]
  13. Stropahl M, Chen L, Debener S. Cortical reorganization in postlingually deaf cochlear implant users: Intra-modal and cross-modal considerations. Hear Res 2017 Jan;343:128-137 [FREE Full text] [CrossRef] [Medline]
  14. Lee H, Truy E, Mamou G, Sappey-Marinier D, Giraud A. Visual speech circuits in profound acquired deafness: a possible role for latent multimodal connectivity. Brain 2007 Nov 05;130(11):2929-2941. [CrossRef] [Medline]
  15. McKay CM. Brain plasticity and rehabilitation with a cochlear implant. Adv Otorhinolaryngol 2018;81:57-65. [CrossRef] [Medline]
  16. Sharma A, Campbell J, Cardon G. Developmental and cross-modal plasticity in deafness: evidence from the P1 and N1 event related potentials in cochlear implanted children. Int J Psychophysiol 2015 Feb;95(2):135-144 [FREE Full text] [CrossRef] [Medline]
  17. Rouger J, Lagleyre S, Démonet JF, Fraysse B, Deguine O, Barone P. Evolution of crossmodal reorganization of the voice area in cochlear-implanted deaf patients. Hum Brain Mapp 2012 Aug 06;33(8):1929-1940 [FREE Full text] [CrossRef] [Medline]
  18. Srinivasan R, So C, Amin N, Jaikaransingh D, D'Arco F, Nash R. A review of the safety of MRI in cochlear implant patients with retained magnets. Clin Radiol 2019 Dec;74(12):972.e9-972.e16. [CrossRef] [Medline]
  19. Shew M, Wichova H, Lin J, Ledbetter LN, Staecker H. Magnetic resonance imaging with cochlear implants and auditory brainstem implants: Are we truly practicing MRI safety? Laryngoscope 2019 Feb 09;129(2):482-489. [CrossRef] [Medline]
  20. Wagner L, Maurits N, Maat B, Baskent D, Wagner AE. The cochlear implant EEG artifact recorded from an artificial brain for complex acoustic stimuli. IEEE Trans Neural Syst Rehabil Eng 2018 Feb;26(2):392-399. [CrossRef]
  21. Saliba J, Bortfeld H, Levitin DJ, Oghalai JS. Functional near-infrared spectroscopy for neuroimaging in cochlear implant recipients. Hear Res 2016 Aug;338:64-75 [FREE Full text] [CrossRef] [Medline]
  22. Anderson CA, Wiggins IM, Kitterick PT, Hartley DEH. Pre-operative brain imaging using functional near-infrared spectroscopy helps predict cochlear implant outcome in deaf adults. J Assoc Res Otolaryngol 2019 Oct 8;20(5):511-528 [FREE Full text] [CrossRef] [Medline]
  23. Zhou X, Seghouane A, Shah A, Innes-Brown H, Cross W, Litovsky R, et al. Cortical speech processing in postlingually deaf adult cochlear implant users, as revealed by functional near-infrared spectroscopy. Trends Hear 2018 Jul 19;22:2331216518786850 [FREE Full text] [CrossRef] [Medline]
  24. Anderson CA, Wiggins IM, Kitterick PT, Hartley DEH. Adaptive benefit of cross-modal plasticity following cochlear implantation in deaf adults. Proc Natl Acad Sci U S A 2017 Sep 19;114(38):10256-10261 [FREE Full text] [CrossRef] [Medline]
  25. Mushtaq F, Wiggins IM, Kitterick PT, Anderson CA, Hartley DEH. The benefit of cross-modal reorganization on speech perception in pediatric cochlear implant recipients revealed using functional near-infrared spectroscopy. Front Hum Neurosci 2020 Aug 14;14:308 [FREE Full text] [CrossRef] [Medline]
  26. Harrison SC, Lawrence R, Hoare DJ, Wiggins IM, Hartley DEH. Use of functional near-infrared spectroscopy to predict and measure cochlear implant outcomes: a scoping review. Brain Sci 2021 Oct 28;11(11):1439 [FREE Full text] [CrossRef] [Medline]
  27. Wild CJ, Yusuf A, Wilson DE, Peelle JE, Davis MH, Johnsrude IS. Effortful listening: the processing of degraded speech depends critically on attention. J Neurosci 2012 Oct 03;32(40):14010-14021. [CrossRef]
  28. Weder S, Shoushtarian M, Olivares V, Zhou X, Innes-Brown H, McKay C. Cortical fNIRS Responses Can Be Better Explained by Loudness Percept than Sound Intensity. Ear Hear 2020;41(5):1187-1195. [CrossRef] [Medline]
  29. Orihuela-Espina F, Leff DR, James DRC, Darzi AW, Yang GZ. Quality control and assurance in functional near infrared spectroscopy (fNIRS) experimentation. Phys Med Biol 2010 Jul 07;55(13):3701-3724. [CrossRef] [Medline]
  30. Wyser DG, Kanzler CM, Salzmann L, Lambercy O, Wolf M, Scholkmann F, et al. Characterizing reproducibility of cerebral hemodynamic responses when applying short-channel regression in functional near-infrared spectroscopy. Neurophoton 2022 Jan 1;9(01):15004. [CrossRef]
  31. Shoushtarian M, Alizadehsani R, Khosravi A, Acevedo N, McKay CM, Nahavandi S, et al. Objective measurement of tinnitus using functional near-infrared spectroscopy and machine learning. PLoS One 2020 Nov 18;15(11):e0241695 [FREE Full text] [CrossRef] [Medline]
  32. Shoushtarian M, Weder S, Innes-Brown H, McKay CM. Assessing hearing by measuring heartbeat: The effect of sound level. PLoS One 2019 Feb 28;14(2):e0212940 [FREE Full text] [CrossRef] [Medline]
  33. Oldfield R. The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia 1971 Mar;9(1):97-113. [CrossRef]
  34. Khedr E, Hamed E, Said A, Basahi J. Handedness and language cerebral lateralization. Eur J Appl Physiol 2002 Aug 1;87(4-5):469-473. [CrossRef] [Medline]
  35. Watson NF, Dodrill C, Farrell D, Holmes MD, Miller JW. Determination of language dominance with near-infrared spectroscopy: comparison with the intracarotid amobarbital procedure. Seizure 2004 Sep;13(6):399-402 [FREE Full text] [CrossRef] [Medline]
  36. Sargent A, Watson J, Topoglu Y, Ye H, Suri R, Ayaz H. Impact of tea and coffee consumption on cognitive performance: An fNIRS and EDA Study. Appl Sci 2020 Apr 01;10(7):2390. [CrossRef]
  37. Hu S, Anschuetz L, Huth ME, Sznitman R, Blaser D, Kompis M, et al. Association between residual inhibition and neural activity in patients with tinnitus: protocol for a controlled within- and between-subject comparison study. JMIR Res Protoc 2019 Jan 09;8(1):e12270 [FREE Full text] [CrossRef] [Medline]
  38. Noble W, Jensen NS, Naylor G, Bhullar N, Akeroyd MA. A short form of the Speech, Spatial and Qualities of Hearing scale suitable for clinical use: the SSQ12. Int J Audiol 2013 Jun 08;52(6):409-412 [FREE Full text] [CrossRef] [Medline]
  39. Hahlbrock KH. Sprachaudiometrie: Grundlagen und Praktische Anwendung Einer Sprachaudiometrie für das deutsche Sprachgebiet; 157 Abbildungen in 305 Einzeldarstellungen 9 Tabellen. Teningen, Germany: Thieme; 1970.
  40. Wagener K, Brand T, Kollmeier B. Entwicklung und Evaluation eines Satztests für die deutsche Sprache Teil III: Evaluation des Oldenburger Satztests. Zeitschrift für Audiologie 1999:38 [FREE Full text]
  41. Ahrlich M. Optimierung und Evaluation des Oldenburger Satztests mit Weiblicher Sprecherin und Untersuchung des Effekts des Sprechers auf die Sprachverständlichkeit. Oldenburg, Germany: Carl von Ossietzky Universität Oldenburg; 2013.
  42. Wagener K, Hochmuth S, Ahrlich M, Zokoll-v. d. Laan M, Kollmeier B. Der weibliche oldenburger satztest. The female version of the Oldenburg sentence test. In: Proceedings of the 17th Jahrestagung der Deutschen Gesellschaft für Audiologie. 2014 Mar 13 Presented at: Jahrestagung der Deutschen Gesellschaft für Audiologie; 2014; Oldenburg, Germany   URL: http://www.uzh.ch/orl/dga2014/programm/wissprog/Wagener.pdf
  43. Wimmer W, Kompis M, Stieger C, Caversaccio M, Weder S. Directional microphone contralateral routing of signals in cochlear implant users. Ear Hear 2017;38(3):368-373. [CrossRef]
  44. Gawliczek T, Wimmer W, Munzinger F, Caversaccio M, Kompis M. Speech understanding and sound localization with a new nonimplantable wearing option for Baha. Biomed Res Int 2018 Sep 25;2018:5264124 [FREE Full text] [CrossRef] [Medline]
  45. Wardenga N, Batsoulis C, Wagener KC, Brand T, Lenarz T, Maier H. Do you hear the noise? The German matrix sentence test with a fixed noise level in subjects with normal hearing and hearing impairment. Int J Audiol 2015 Nov 10;54 Suppl 2(sup2):71-79. [CrossRef] [Medline]
  46. Chen L, Puschmann S, Debener S. Increased cross-modal functional connectivity in cochlear implant users. Sci Rep 2017 Aug 30;7(1):10043 [FREE Full text] [CrossRef] [Medline]
  47. Chen L, Sandmann P, Thorne JD, Bleichner MG, Debener S. Cross-modal functional reorganization of visual and auditory cortex in adult cochlear implant users identified with fNIRS. Neural Plast 2016;2016:4382656-4382613 [FREE Full text] [CrossRef] [Medline]
  48. Weder S, Zhou X, Shoushtarian M, Innes-Brown H, McKay C. Cortical processing related to intensity of a modulated noise stimulus-a functional near-infrared study. J Assoc Res Otolaryngol 2018 Jun 9;19(3):273-286 [FREE Full text] [CrossRef] [Medline]
  49. Wiggins IM, Anderson CA, Kitterick PT, Hartley DE. Speech-evoked activation in adult temporal cortex measured using functional near-infrared spectroscopy (fNIRS): Are the measurements reliable? Hear Res 2016 Sep;339:142-154 [FREE Full text] [CrossRef] [Medline]
  50. Olds C, Pollonini L, Abaya H, Larky J, Loy M, Bortfeld H, et al. Cortical activation patterns correlate with speech understanding after cochlear implantation. Ear Hear 2016;37(3):e160-e172 [FREE Full text] [CrossRef] [Medline]
  51. Lawrence RJ, Wiggins IM, Anderson CA, Davies-Thompson J, Hartley DE. Cortical correlates of speech intelligibility measured using functional near-infrared spectroscopy (fNIRS). Hear Res 2018 Dec;370:53-64 [FREE Full text] [CrossRef] [Medline]
  52. Defenderfer J, Forbes S, Wijeakumar S, Hedrick M, Plyler P, Buss AT. Frontotemporal activation differs between perception of simulated cochlear implant speech and speech in background noise: An image-based fNIRS study. Neuroimage 2021 Oct 15;240:118385 [FREE Full text] [CrossRef] [Medline]
  53. Zimeo Morais GA, Balardin JB, Sato JR. fNIRS Optodes' Location Decider (fOLD): a toolbox for probe arrangement guided by brain regions-of-interest. Sci Rep 2018 Feb 20;8(1):3341 [FREE Full text] [CrossRef] [Medline]
  54. Chatrian GE, Lettich E, Nelson PL. Ten percent electrode system for topographic studies of spontaneous and evoked EEG activities. Am J EEG Technol 2015 Feb 10;25(2):83-92. [CrossRef]
  55. Yücel MA, Lühmann AV, Scholkmann F, Gervain J, Dan I, Ayaz H, et al. Best practices for fNIRS publications. Neurophoton 2021 Jan 1;8(01):12101. [CrossRef]
  56. Scholkmann F, Kleiser S, Metz AJ, Zimmermann R, Mata Pavia J, Wolf U, et al. A review on continuous wave functional near-infrared spectroscopy and imaging instrumentation and methodology. Neuroimage 2014 Jan 15;85 Pt 1:6-27. [CrossRef] [Medline]
  57. Zhang Q, Brown EN, Strangman GE. Adaptive filtering for global interference cancellation and real-time recovery of evoked brain activity: a Monte Carlo simulation study. J Biomed Opt 2007;12(4):044014 [FREE Full text] [CrossRef] [Medline]
  58. Llorach G, Kirschner F, Grimm G, Zokoll MA, Wagener KC, Hohmann V. Development and evaluation of video recordings for the OLSA matrix sentence test. Int J Audiol 2022 Apr 10;61(4):311-321. [CrossRef] [Medline]
  59. Handwerker DA, Ollinger JM, D'Esposito M. Variation of BOLD hemodynamic responses across subjects and brain regions and their effects on statistical analyses. Neuroimage 2004 Apr;21(4):1639-1651. [CrossRef] [Medline]
  60. Micklewright D, St Clair Gibson A, Gladwell V, Al Salman A. Development and Validity of the Rating-of-Fatigue Scale. Sports Med 2017 Mar 10;47(11):2375-2393. [CrossRef]
  61. Unsworth N, McMillan BD. Similarities and differences between mind-wandering and external distraction: a latent variable analysis of lapses of attention and their relation to cognitive abilities. Acta Psychol (Amst) 2014 Jul;150:14-25. [CrossRef] [Medline]
  62. Stawarczyk D, Majerus S, Maj M, Van der Linden M, D'Argembeau A. Mind-wandering: phenomenology and function as assessed with a novel experience sampling method. Acta Psychol (Amst) 2011 Mar;136(3):370-381. [CrossRef] [Medline]
  63. Woods DL, Stecker GC, Rinne T, Herron TJ, Cate AD, Yund EW, et al. Functional maps of human auditory cortex: effects of acoustic features and attention. PLoS One 2009 Apr 13;4(4):e5183 [FREE Full text] [CrossRef] [Medline]
  64. Jäncke L, Mirzazade S, Joni Shah N. Attention modulates activity in the primary and the secondary auditory cortex: a functional magnetic resonance imaging study in human subjects. Neurosci Lett 1999 May;266(2):125-128. [CrossRef]
  65. Krueger M, Schulte M, Zokoll MA, Wagener KC, Meis M, Brand T, et al. Relation between listening effort and speech intelligibility in noise. Am J Audiol 2017 Oct 12;26(3S):378-392. [CrossRef]
  66. Huppert TJ, Diamond SG, Franceschini MA, Boas DA. HomER: a review of time-series analysis methods for near-infrared spectroscopy of the brain. Appl Opt 2009 Apr 01;48(10):D280-D298 [FREE Full text] [CrossRef] [Medline]
  67. Santosa H, Zhai X, Fishburn F, Huppert T. The NIRS Brain AnalyzIR Toolbox. Algorithms 2018 May 16;11(5):73. [CrossRef]
  68. Pollonini L, Bortfeld H, Oghalai JS. PHOEBE: a method for real time mapping of optodes-scalp coupling in functional near-infrared spectroscopy. Biomed Opt Express 2016 Nov 15;7(12):5104. [CrossRef]
  69. Wyser D, Mattille M, Wolf M, Lambercy O, Scholkmann F, Gassert R. Short-channel regression in functional near-infrared spectroscopy is more effective when considering heterogeneous scalp hemodynamics. Neurophoton 2020 Jul 1;7(03). [CrossRef]
  70. Sappia MS, Hakimi N, Colier WNJM, Horschig JM. Signal quality index: an algorithm for quantitative assessment of functional near infrared spectroscopy signal quality. Biomed Opt Express 2020 Oct 27;11(11):6732. [CrossRef]
  71. Perdue KL, Westerlund A, McCormick SA, Nelson CA. Extraction of heart rate from functional near-infrared spectroscopy in infants. J Biomed Opt 2014 Jun 01;19(6):067010. [CrossRef]
  72. Saager RB, Berger AJ. Direct characterization and removal of interfering absorption trends in two-layer turbid media. J Opt Soc Am A Opt Image Sci Vis 2005 Sep 01;22(9):1874-1882. [CrossRef] [Medline]
  73. Baker WB, Parthasarathy AB, Busch DR, Mesquita RC, Greenberg JH, Yodh AG. Modified Beer-Lambert law for blood flow. Biomed Opt Express 2014 Oct 28;5(11):4053. [CrossRef]
  74. Cui X, Bray S, Reiss AL. Functional near infrared spectroscopy (NIRS) signal improvement based on negative correlation between oxygenated and deoxygenated hemoglobin dynamics. Neuroimage 2010 Feb 15;49(4):3039-3046 [FREE Full text] [CrossRef] [Medline]
  75. Cignoni P, Callieri M, Corsini M, Dellepiane M, Ganovelli F, Ranzuglia G. MeshLab: an Open-Source Mesh Processing Tool. In: Eurographics Italian Chapter Conference. Geneva, Switzerland: The Eurographics Association; 2008:129-136.
  76. Novi SL, Forero EJ, Rubianes Silva JAI, de Souza NGSR, Martins GG, Quiroga A, et al. Integration of spatial information increases reproducibility in functional near-infrared spectroscopy. Front Neurosci 2020 Jul 28;14:746 [FREE Full text] [CrossRef] [Medline]
  77. Gramfort A, Luessi M, Larson E, Engemann DA, Strohmeier D, Brodbeck C, et al. MNE software for processing MEG and EEG data. Neuroimage 2014 Feb 01;86:446-460 [FREE Full text] [CrossRef] [Medline]
  78. Luke R, Larson E, Shader M, Innes-Brown H, Van Yper L, Lee A, et al. Analysis methods for measuring passive auditory fNIRS responses generated by a block-design paradigm. Neurophoton 2021 Apr 1;8(02):2020. [CrossRef]
  79. Glover GH. Deconvolution of impulse response in event-related BOLD fMRI. Neuroimage 1999 Apr;9(4):416-429. [CrossRef] [Medline]
  80. Abraham A, Pedregosa F, Eickenberg M, Gervais P, Mueller A, Kossaifi J, et al. Machine learning for neuroimaging with scikit-learn. Front Neuroinform 2014;8:14 [FREE Full text] [CrossRef] [Medline]
  81. Kamran MA, Jeong MY, Mannan MMN. Optimal hemodynamic response model for functional near-infrared spectroscopy. Front Behav Neurosci 2015 Jun 16;9:151 [FREE Full text] [CrossRef] [Medline]
  82. Abdel-Latif KHA, Meister H. Speech recognition and listening effort in cochlear implant recipients and normal-hearing listeners. Front Neurosci 2021 Feb 10;15:725412 [FREE Full text] [CrossRef] [Medline]


ACALES: Adaptive Categorical Listening Effort Scaling
CI: cochlear implant
CRF: case report form
EEG: electroencephalography
fNIRS: functional near-infrared spectroscopy
GLM: general linear model
HbO: oxygenated hemoglobin
HbR: deoxygenated hemoglobin
HL: hearing level
LIFG: left inferior frontal gyrus
MFG: middle frontal gyrus
MNI: Montreal Neurological Institute
MRI: magnetic resonance imaging
MTG: middle temporal gyrus
OLSA: Oldenburg Sentence Test
PTA: pure-tone average
ROI: region of interest
SPL: sound pressure level
SSQ-12: Speech, Spatial and Qualities of Hearing scale (12-item short form)
STG: superior temporal gyrus
TTL: transistor-transistor logic
V1: primary visual cortex
V2: visual association cortex


Edited by T Leung; submitted 31.03.22; peer-reviewed by CC Wu, M Shoushtarian; comments to author 28.04.22; revised version received 12.05.22; accepted 03.06.22; published 21.06.22

Copyright

©András Bálint, Wilhelm Wimmer, Marco Caversaccio, Stefan Weder. Originally published in JMIR Research Protocols (https://www.researchprotocols.org), 21.06.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Research Protocols, is properly cited. The complete bibliographic information, a link to the original publication on https://www.researchprotocols.org, as well as this copyright and license information must be included.