<?xml version="1.0" encoding="iso-8859-1" standalone="no"?>
<!DOCTYPE GmsArticle SYSTEM "http://www.egms.de/dtd/2.0.34/GmsArticle.dtd">
<GmsArticle xmlns:xlink="http://www.w3.org/1999/xlink">
  <MetaData>
    <Identifier>zaud000072</Identifier>
    <IdentifierDoi>10.3205/zaud000072</IdentifierDoi>
    <IdentifierUrn>urn:nbn:de:0183-zaud0000722</IdentifierUrn>
    <ArticleType>Short Report</ArticleType>
    <TitleGroup>
      <Title language="en">Prediction of aided speech recognition using Random Forest Regression</Title>
      <TitleTranslated language="de">Pr&#228;diktion des Sprachverstehens mit H&#246;rger&#228;t mittels Random Forest Regression</TitleTranslated>
    </TitleGroup>
    <CreatorList>
      <Creator>
        <PersonNames>
          <Lastname>Engler</Lastname>
          <LastnameHeading>Engler</LastnameHeading>
          <Firstname>Max</Firstname>
          <Initials>M</Initials>
        </PersonNames>
        <Address>HNO-Klinik des Uni-Klinikums Erlangen, Waldstra&#223;e 1, 91054 Erlangen, Germany<Affiliation>Department of Audiology, ENT Clinic, University of Erlangen-N&#252;rnberg, Erlangen, Germany</Affiliation></Address>
        <Email>max.engler&#64;uk-erlangen.de</Email>
        <Creatorrole corresponding="yes" presenting="no">author</Creatorrole>
      </Creator>
      <Creator>
        <PersonNames>
          <Lastname>Digeser</Lastname>
          <LastnameHeading>Digeser</LastnameHeading>
          <Firstname>Frank</Firstname>
          <Initials>F</Initials>
        </PersonNames>
        <Address>
          <Affiliation>Department of Audiology, ENT Clinic, University of Erlangen-N&#252;rnberg, Erlangen, Germany</Affiliation>
        </Address>
        <Creatorrole corresponding="no" presenting="no">author</Creatorrole>
      </Creator>
      <Creator>
        <PersonNames>
          <Lastname>Hoppe</Lastname>
          <LastnameHeading>Hoppe</LastnameHeading>
          <Firstname>Ulrich</Firstname>
          <Initials>U</Initials>
        </PersonNames>
        <Address>
          <Affiliation>Department of Audiology, ENT Clinic, University of Erlangen-N&#252;rnberg, Erlangen, Germany</Affiliation>
        </Address>
        <Creatorrole corresponding="no" presenting="no">author</Creatorrole>
      </Creator>
    </CreatorList>
    <PublisherList>
      <Publisher>
        <Corporation>
          <Corporatename>German Medical Science GMS Publishing House</Corporatename>
        </Corporation>
        <Address>D&#252;sseldorf</Address>
      </Publisher>
    </PublisherList>
    <SubjectGroup>
      <SubjectheadingDDB>610</SubjectheadingDDB>
      <Keyword language="en">hearing aid</Keyword>
      <Keyword language="en">real-ear-measurements</Keyword>
      <Keyword language="en">machine learning</Keyword>
      <Keyword language="de">H&#246;rger&#228;t</Keyword>
      <Keyword language="de">In-situ-Messungen</Keyword>
      <Keyword language="de">maschinelles Lernen</Keyword>
    </SubjectGroup>
    <DatePublishedList>
      <DatePublished>20250930</DatePublished>
    </DatePublishedList>
    <Language>engl</Language>
    <License license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
      <AltText language="en">This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 License.</AltText>
      <AltText language="de">Dieser Artikel ist ein Open-Access-Artikel und steht unter den Lizenzbedingungen der Creative Commons Attribution 4.0 License (Namensnennung).</AltText>
    </License>
    <SourceGroup>
      <Journal>
        <ISSN>2628-9083</ISSN>
        <Volume>7</Volume>
        <JournalTitle>GMS Zeitschrift f&#252;r Audiologie - Audiological Acoustics</JournalTitle>
        <JournalTitleAbbr>GMS Z Audiol (Audiol Acoust)</JournalTitleAbbr>
      </Journal>
    </SourceGroup>
    <ArticleNo>09</ArticleNo>
    <Erratum><DateLastErratum>20251114</DateLastErratum><Pgraph>The note &#8220;Cumulative dissertation&#8221; was added.</Pgraph></Erratum>
    <Fundings>
      <Funding fundId="IIR-2398">Cochlear Research and Development Ltd</Funding>
    </Fundings>
  </MetaData>
  <OrigData>
    <Abstract language="de" linked="yes"><Pgraph>Menschen mit H&#246;rger&#228;ten (HGs) zeigen eine bislang weitgehend unerkl&#228;rte Variabilit&#228;t im Sprachverstehen mit HG. Bisher gibt es keine klare Empfehlung zur Bewertung der HG-Versorgung, insbesondere im Hinblick auf das erreichbare Sprachverstehen mit HG. Ziel dieser Studie war es, die einflussreichsten Faktoren auf das Sprachverstehen mit HG bei einem Schalldruckpegel von 65 dB SPL (Sound Pressure Level) zu identifizieren; im Folgenden als WRS<Subscript>65</Subscript>(HA) bezeichnet. Retrospektiv wurden Daten aus klinischen Routinemessungen von 635 H&#246;rger&#228;tetr&#228;gern analysiert, wobei 18 demografische, audiologische und h&#246;rger&#228;tespezifische Merkmale ber&#252;cksichtigt wurden. Zur Vorhersage von WRS<Subscript>65</Subscript>(HA) wurde ein Random Forest Regressionsmodell (RFR) eingesetzt. Durch ein iteratives Merkmals-Auswahlverfahren wurde die Kombination von Merkmalen mit dem geringsten mittleren absoluten Fehler (MAE) ermittelt. Audiologische Merkmale wie das maximale Einsilberverstehen (WRS<Subscript>max</Subscript>), der mittlere H&#246;rverlust (PTA) und das unversorgte Einsilberverstehen bei 65 dB SPL (WRS<Subscript>65</Subscript>) wiesen die h&#246;chste individuelle Vorhersagegenauigkeit auf. Demografische Merkmale wie Alter und Geschlecht schnitten deutlich schlechter ab. Der niedrigste signifikante MAE (9,8 Prozentpunkte, pp) wurde mit einer Drei-Merkmals-Kombination erreicht: WRS<Subscript>max</Subscript>, WRS<Subscript>65</Subscript> und die Ziel-Abweichungen bei In-situ-Messungen im mittleren Frequenzbereich bei 65 dB SPL. Die Einbeziehung zus&#228;tzlicher Merkmale scheint nur einen begrenzten Nutzen zu bringen und kann das Risiko einer &#220;beranpassung erh&#246;hen. Das einfache, PTA-basierte Vorhersagemodell von Hoppe et al. (2014) erreichte einen MAE von 14,4 pp und wurde durch die ermittelte Drei-Merkmals-Kombination um 4,6 pp &#252;bertroffen. Die Ergebnisse zeigen, dass PTA allein nicht ausreicht, um WRS<Subscript>65</Subscript>(HA) zuverl&#228;ssig vorherzusagen. Eine Kombination aus audiologischen und HG-spezifischen Parametern lieferte deutlich bessere Ergebnisse.</Pgraph></Abstract>
    <Abstract language="en" linked="yes"><Pgraph>Hearing aid (HA) users show largely unexplained variability in aided speech recognition. To date, there is no clear recommendation for evaluating HA outcomes, particularly regarding the achievable speech recognition with HA. This study aimed to identify the most influential factors affecting the word recognition score with HA at 65 dB sound pressure level (SPL), referred to hereinafter as WRS<Subscript>65</Subscript>(HA). Retrospective data from clinical routine measurements of 635 HA users were analysed, including 18 demographic, audiological, and HA-related features. A Random Forest Regression (RFR) was applied to predict WRS<Subscript>65</Subscript>(HA), and an iterative feature-selection process was used to determine the feature combination with the lowest mean absolute error (MAE). Audiological features such as maximum word recognition score (WRS<Subscript>max</Subscript>), pure-tone average (PTA), and unaided word recognition score at 65 dB SPL (WRS<Subscript>65</Subscript>) showed the highest individual predictive accuracy. Demographic features such as age and sex performed considerably worse. The lowest statistically significant MAE (9.8 percentage points, pp) was achieved with a three-feature combination: WRS<Subscript>max</Subscript>, WRS<Subscript>65</Subscript>, and the fit-to-target accuracy in real-ear measurements for medium frequencies at 65 dB SPL input level. The inclusion of additional features appears to yield limited benefit and may increase the risk of overfitting. The simple prediction model by Hoppe et al. (2014) based on the PTA achieved an MAE of 14.4 pp and was outperformed by 4.6 pp when using the best three-feature combination. These findings highlight that PTA alone is insufficient for accurately predicting WRS<Subscript>65</Subscript>(HA). Combining speech audiometric data with HA-specific parameters provides substantially better results.</Pgraph></Abstract>
    <TextBlock name="Introduction" linked="yes">
      <MainHeadline>Introduction</MainHeadline><Pgraph>Speech recognition improvement through hearing aids (HAs) is a key indicator of successful hearing rehabilitation and remains a central objective in audiological care. Effective HA fitting requires a delicate balance between providing sufficient amplification and maintaining user comfort and speech clarity. In clinical practice, the word recognition score at 65 dB sound pressure level (SPL) is commonly evaluated using standardised speech tests, such as the Freiburg monosyllable test <TextLink reference="1"></TextLink>, and is referred to hereinafter as WRS<Subscript>65</Subscript>(HA). Despite certain limitations, it remains the most commonly used tool for evaluating HA benefit in German-speaking countries and is endorsed in the German health-care guidelines <TextLink reference="2"></TextLink>.</Pgraph><Pgraph>Nevertheless, considerable individual variability in speech recognition persists, even among patients with similar hearing loss <TextLink reference="3"></TextLink>, <TextLink reference="4"></TextLink>, <TextLink reference="5"></TextLink>, <TextLink reference="6"></TextLink>, <TextLink reference="7"></TextLink>, <TextLink reference="8"></TextLink>, <TextLink reference="9"></TextLink>. According to Hoppe et al. (2014), the greatest variability in WRS<Subscript>65</Subscript>(HA), ranging from 0&#37; to 95&#37;, was observed around a pure-tone average (PTA) of 60 dB hearing level <TextLink reference="3"></TextLink>. Holube and Kollmeier (1996) demonstrated that speech recognition in HA users depends not only on audiometric thresholds but also on auditory processing factors such as temporal and spectral resolution <TextLink reference="10"></TextLink>. These factors are described as the &#8220;distortion&#8221; component in Plomp&#8217;s model (1986) <TextLink reference="11"></TextLink>. While Holube and Kollmeier 
(1996) and Plomp (1986) emphasize speech recognition in noise, where processing deficits have a greater impact and performance is more strongly influenced by auditory processing abilities (&#8216;distortion&#8217; <TextLink reference="11"></TextLink>), the studies by other authors (e.g., <TextLink reference="3"></TextLink>, <TextLink reference="4"></TextLink>, <TextLink reference="5"></TextLink>, <TextLink reference="6"></TextLink>, <TextLink reference="7"></TextLink>, <TextLink reference="8"></TextLink>, <TextLink reference="9"></TextLink>) primarily focus on speech recognition in quiet, which is largely determined by audibility (&#8216;attenuation&#8217; <TextLink reference="11"></TextLink>). Although many studies have developed predictive models for speech recognition in noise, relatively few focus on speech recognition in quiet while integrating multiple influencing factors. These findings underscore the need for predictive models that extend beyond pure measures of hearing loss. Such models could also support clinical practice by enabling faster detection and interpretation of individual results in speech recognition.</Pgraph><Pgraph>Machine learning methods like Random Forest Regression (RFR <TextLink reference="12"></TextLink>) are well-suited for modelling complex and nonlinear relationships between input features. RFR is an ensemble-based algorithm that combines multiple decision tree regressors to predict continuous outcomes. In contrast to so-called &#8220;black box&#8221; models, such as deep neural networks, which offer limited insight into the underlying decision process, RFR provides access to feature importance metrics and decision pathways. These aspects are crucial for fostering trust, transparency, and practical applicability in healthcare settings. 
</Pgraph><Pgraph>In this retrospective study, data from routine clinical assessments, including audiometric, demographic, and HA-related parameters, were utilised to develop a predictive model of WRS<Subscript>65</Subscript>(HA) based on RFR. The primary objective was to identify the most influential predictors of WRS<Subscript>65</Subscript>(HA) using a forward feature selection process <TextLink reference="13"></TextLink>, and to investigate the added value of combining features across different domains. This data-driven approach enhances understanding of the factors affecting HA performance, supporting more individualised and evidence-based HA fitting in clinical practice. </Pgraph></TextBlock>
    <TextBlock name="Methods" linked="yes">
      <MainHeadline>Methods</MainHeadline><Pgraph>Clinical routine data were collected retrospectively, encompassing a broad range of variables such as demographic information, audiological measures, and HA-related parameters. These variables underwent a feature selection process to identify those with the greatest impact on WRS<Subscript>65</Subscript>(HA).</Pgraph><SubHeadline>Data preparation</SubHeadline><Pgraph>In this study, 635 HA evaluations of 374 patients (166 female, 208 male), comprising 303 bilateral and 71 unilateral HA users, aged 20&#8211;96 years (mean and standard deviation: 66.6&#177;15.0 years) were analysed. Demographic details are given in <TextLink reference="9"></TextLink>. For pure-tone and speech audiometry, a standard clinical audiometer (AT900&#47;AT1000 Auritec, Hamburg, Germany) was used. The four-frequency PTA, hereinafter referred to as PTA, was measured separately for both ears. For each speech recognition measurement, one list of 20 words of the Freiburg monosyllable test <TextLink reference="1"></TextLink> was presented. The maximum word recognition score (WRS<Subscript>max</Subscript>) was measured via headphones by stepwise increasing the presentation level, starting at 65 dB SPL. The level at which WRS<Subscript>max</Subscript> was reached is referred to as L(WRS<Subscript>max</Subscript>). Aided (WRS<Subscript>65</Subscript>(HA)) and unaided (WRS<Subscript>65</Subscript>) speech recognition were determined in quiet at 65 dB SPL in sound field, using a loudspeaker placed in front (0&#176;, 1 m). Additionally, the HA manufacturer and the HA experience in years were documented.</Pgraph><Pgraph>In order to evaluate the fitting quality of the HA, the sound pressure level in the aided ear was measured by real-ear measurements with the Aurical II (Aurical, Natus, M&#252;nster, Germany). 
The international speech test signal (ISTS <TextLink reference="14"></TextLink>) was presented at 50, 65 and 80 dB SPL to determine the long-term average speech spectrum (LTASS <TextLink reference="15"></TextLink>) for 20 third-octave frequency bands f<Subscript>n</Subscript> (f<Subscript>n</Subscript>&#61;0.125&#42;2<Superscript>(n&#8211;1)&#47;3</Superscript> kHz, n&#61;1, 2, &#8230;, 20), based on established LTASS characteristics. The corresponding target levels were derived according to the DSL v5.0 (Desired Sensation Level version 5.0) prescription rule <TextLink reference="16"></TextLink>, <TextLink reference="17"></TextLink>. To quantify the match between prescribed and measured output, the mean difference between LTASS and targets was calculated and referred to as the fit-to-target value (FtT). These FtT values were analysed across three frequency ranges &#8212; Low (0.25&#8211;0.63 kHz), Mid (0.8&#8211;2.5 kHz), and High (3.15&#8211;6 kHz) &#8212; and for each of the three input levels (50, 65, and 80 dB SPL). This resulted in nine distinct features: FtT<Subscript>50</Subscript>(Low), FtT<Subscript>50</Subscript>(Mid), FtT<Subscript>50</Subscript>(High), FtT<Subscript>65</Subscript>(Low), FtT<Subscript>65</Subscript>(Mid), FtT<Subscript>65</Subscript>(High), FtT<Subscript>80</Subscript>(Low), FtT<Subscript>80</Subscript>(Mid), and FtT<Subscript>80</Subscript>(High), which together represent the accuracy of HA fitting across the speech spectrum and varying input levels.</Pgraph><SubHeadline>Model setup</SubHeadline><Pgraph>Random forest models are widely used for classification, regression, and predictive modelling due to their robustness and ability to handle high-dimensional data. 
In this study, the predictive performance of an RFR was evaluated using 18 features (see Figure 1 <ImgLink imgNo="1" imgType="figure" />), following an iterative forward feature selection process <TextLink reference="13"></TextLink> based on mean absolute error (MAE): </Pgraph><Pgraph><Indentation><ImgLink imgNo="1" imgType="inlineFigure" /></Indentation></Pgraph><Pgraph>where n is the number of data points, <ImgLink imgNo="2" imgType="inlineFigure" /> represents the measured value and <ImgLink imgNo="3" imgType="inlineFigure" /> denotes the predicted value of speech recognition using the Freiburg monosyllable test, and &#124;<ImgLink imgNo="2" imgType="inlineFigure" />&#8211;<ImgLink imgNo="3" imgType="inlineFigure" />&#124; is the absolute error for the i-th data point.</Pgraph><Pgraph><ImgPlaceholder imgNo="1" imgType="figure"/></Pgraph><Pgraph>For each iteration of the RFR model, the dataset was randomly split into 80&#37; training and 20&#37; test data. The selected features were evaluated over 100 independent runs to account for variability in random sampling, and the resulting MAE represents the average across these runs. Initially, each of the 18 features was tested individually and ranked based on its average MAE (see Figure 1 <ImgLink imgNo="1" imgType="figure" />). The best-performing feature (lowest MAE) was selected for the second iteration. In the next step, this top-ranked feature was combined with each of the remaining 17 features to identify the optimal two-feature combination (see Figure 2 <ImgLink imgNo="2" imgType="figure" />), again based on the lowest MAE. This greedy forward-selection approach was repeated iteratively, adding one feature at a time based on performance, until all features were ranked. The optimal feature subset was identified at the iteration step with the minimum MAE. 
Additionally, statistical significance tests were used to determine the point up to which the MAE continued to decrease significantly. This step was considered the best balance between model complexity and predictive performance. For the entire feature selection process, we used fixed standard hyperparameters: </Pgraph><Pgraph><UnorderedList><ListItem level="1">Forest size (number of trees)&#61;100</ListItem><ListItem level="1">Min leaf size (minimum number of data points required in a leaf node)&#61;5</ListItem><ListItem level="1">Max splits (maximum number of splits allowed in any decision tree)&#61;25</ListItem></UnorderedList></Pgraph><Pgraph><ImgPlaceholder imgNo="2" imgType="figure"/></Pgraph><Pgraph>Fixed hyperparameters were chosen to ensure that differences in model performance could be attributed to feature selection rather than changes in model complexity. While hyperparameter optimization is known to potentially improve model performance, the focus here was on evaluating feature importance under consistent model settings. Performing hyperparameter optimization initially with the full feature set could bias the feature selection process, as optimal settings for a large feature set may not generalize well to smaller subsets. Furthermore, conducting hyperparameter tuning at every iteration of the feature selection process would drastically increase the total computational time, with likely only marginal improvements in performance. </Pgraph><SubHeadline>Data analysis</SubHeadline><Pgraph>The dataset was complete and contained no missing or erroneous values, so no data cleaning was required. The Shapiro&#8211;Wilk test was conducted to assess the normality of the data. Based on the results, either t-tests or rank-sum tests were applied for pairwise comparisons, using a significance level of &#945;&#61;0.05. Spearman&#8217;s method was used to calculate correlations. 
The statistical tests were carried out with the Statistical Package for the Social Sciences (SPSS<Superscript>&#174;</Superscript> V24, IBM Corp., Armonk&#47;NY, USA) and the RFR was performed with Matlab<Superscript>&#174;</Superscript> R2020b (Mathworks, Natick&#47;MA, USA).</Pgraph></TextBlock>
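The greedy forward selection described in the Methods can be sketched in a few lines. The following is a minimal, self-contained illustration on synthetic data using scikit-learn's RandomForestRegressor rather than the Matlab implementation used in the study; `max_leaf_nodes=26` stands in for the "max splits = 25" setting, since scikit-learn exposes no direct maximum-splits hyperparameter, and feature names, run count, and data are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

def forward_selection(X, y, names, n_runs=5, seed=0):
    """Greedy forward feature selection minimising the mean MAE
    over repeated random 80/20 train/test splits."""
    remaining = list(range(X.shape[1]))
    chosen, history = [], []
    while remaining:
        scores = {}
        for f in remaining:
            cols = chosen + [f]
            maes = []
            for run in range(n_runs):  # the study averaged over 100 runs
                Xtr, Xte, ytr, yte = train_test_split(
                    X[:, cols], y, test_size=0.2, random_state=seed + run)
                rf = RandomForestRegressor(
                    n_estimators=100,     # forest size
                    min_samples_leaf=5,   # min leaf size
                    max_leaf_nodes=26,    # approximates max splits = 25
                    random_state=run)
                rf.fit(Xtr, ytr)
                maes.append(mean_absolute_error(yte, rf.predict(Xte)))
            scores[f] = float(np.mean(maes))
        best = min(scores, key=scores.get)   # lowest average MAE wins
        chosen.append(best)
        remaining.remove(best)
        history.append((names[best], scores[best]))
    return history

# Synthetic stand-in data: one dominant predictor plus pure-noise features
rng = np.random.default_rng(42)
n = 400
x0 = rng.uniform(0, 100, n)                    # a WRSmax-like score
X = np.column_stack([x0, rng.normal(size=(n, 3))])
y = np.clip(x0 + rng.normal(0, 8, n), 0, 100)  # target in percentage points
names = ["WRSmax_like", "noise1", "noise2", "noise3"]
history = forward_selection(X, y, names)
print(history[0])  # the dominant feature is selected first
```

With the study's 18 real features and 100 runs per evaluation, the same loop would correspond to the procedure ranked in Figure 1 and Figure 2.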
    <TextBlock name="Results" linked="yes">
      <MainHeadline>Results</MainHeadline><Pgraph>Figure 1 <ImgLink imgNo="1" imgType="figure" /> presents the MAE as a result of the RFR for each feature used individually as an input parameter. The features are ranked in descending order of MAE, starting with the highest (worst) and ending with the lowest (best). Side, sex, HA manufacturer, and age yielded the highest MAEs (27.7&#8211;28 percentage points, pp). Including HA experience reduced the MAE to 25 pp, with further reductions down to 21.1 pp across the FtT values and L(WRS<Subscript>max</Subscript>). Among all features, the largest MAE reduction was found between FtT<Subscript>80</Subscript>(Low) and WRS<Subscript>65</Subscript>, where the MAE dropped from 21.1 pp to 15.1 pp. Ultimately, PTA emerged as the second-best feature (14.4 pp), while WRS<Subscript>max</Subscript> achieved the lowest MAE (13.7 pp) in the first iteration, making it the best-performing feature.</Pgraph><Pgraph>In Figure 2 <ImgLink imgNo="2" imgType="figure" />, each iteration represents a stepwise feature selection process, starting with the single best-performing feature WRS<Subscript>max</Subscript> with the lowest MAE of 13.7 pp from the first iteration (see Figure 1 <ImgLink imgNo="1" imgType="figure" />). Iteration 2 shows the MAE of the best combination of two features: the best feature from Iteration 1 paired with the remaining feature that resulted in the largest additional performance gain (WRS<Subscript>max</Subscript> and WRS<Subscript>65</Subscript>). This process continues iteratively, where each step selects the feature that, when combined with the previously chosen set, yields the lowest MAE. 
The MAE reaches its lowest point with the optimal feature set in the fifth iteration, including WRS<Subscript>max</Subscript>, WRS<Subscript>65</Subscript>, FtT<Subscript>65</Subscript>(Mid), FtT<Subscript>65</Subscript>(High) and age, yielding an MAE (MAE<Subscript>min</Subscript>) of 9.6 pp with a standard deviation of &#177;0.7 pp. Beyond this point, adding more features provided no further improvements and might have even slightly increased the MAE. Finally, t-tests or rank-sum tests with Bonferroni correction were conducted to determine up to which iteration the MAE continued to decrease significantly. The last iteration showing a significant improvement in MAE was defined as the best trade-off between model complexity and predictive accuracy, which occurred in the third iteration including WRS<Subscript>max</Subscript>, WRS<Subscript>65</Subscript> and FtT<Subscript>65</Subscript>(Mid) (MAE<Subscript>sig</Subscript>&#61;9.8&#177;0.7 pp).</Pgraph><Pgraph>The predicted WRS<Subscript>65</Subscript>(HA) is plotted against the measured WRS<Subscript>65</Subscript>(HA) in Figure 3 <ImgLink imgNo="3" imgType="figure" /> for a randomly selected subset of test data from the third iteration, which represents the significantly best-performing feature combination. The correlation analysis revealed a strong correlation (r&#61;0.93, p&#60;0.001), indicating a high degree of alignment between the model&#39;s predictions and the actual measured scores.</Pgraph></TextBlock>
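The stopping rule used in the Results (the last iteration whose MAE still decreased significantly, with a Shapiro-Wilk-guided choice between t-test and rank-sum test and Bonferroni correction across comparisons) can be sketched as follows. The helper name `last_significant_iteration` and the synthetic per-iteration MAE samples are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy import stats

def last_significant_iteration(mae_runs, alpha=0.05):
    """Index of the last iteration whose per-run MAEs are significantly
    lower than the previous iteration's (one-sided, Bonferroni-corrected;
    t-test if both samples pass Shapiro-Wilk, rank-sum test otherwise)."""
    n_comp = max(len(mae_runs) - 1, 1)
    last_sig = 0
    for i in range(1, len(mae_runs)):
        a, b = np.asarray(mae_runs[i - 1]), np.asarray(mae_runs[i])
        _, p_a = stats.shapiro(a)
        _, p_b = stats.shapiro(b)
        if p_a > alpha and p_b > alpha:
            # one-sided: previous iteration's MAE greater than current
            p = stats.ttest_ind(a, b, alternative='greater').pvalue
        else:
            p = stats.mannwhitneyu(a, b, alternative='greater').pvalue
        if p < alpha / n_comp:     # Bonferroni-corrected threshold
            last_sig = i
        else:
            break                  # MAE no longer decreases significantly
    return last_sig

# Illustrative MAE samples per iteration (100 runs each): clear gains in
# the second and third iterations, then a plateau, mirroring the pattern
# reported in the Results
rng = np.random.default_rng(1)
means = (13.7, 10.9, 9.8, 9.8, 9.8)
runs = [rng.normal(m, 0.7, 100) for m in means]
last = last_significant_iteration(runs)
print(last)
```

Applied to the study's 100 MAE values per iteration, this rule identifies the third iteration as the best trade-off between complexity and accuracy.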
    <TextBlock name="Discussion and conclusion" linked="yes">
      <MainHeadline>Discussion and conclusion</MainHeadline><Pgraph>Demographic, audiological, and HA-related data from a large cohort of HA users were analysed to identify key factors influencing aided speech recognition. A Random Forest Regression with iterative feature selection was applied to determine the most predictive features.</Pgraph><Pgraph>When used as individual input features for the RFR, demographic data such as sex, side, and age, as well as HA manufacturer, resulted in the poorest performance, with an MAE of approximately 28 pp (see Figure 1 <ImgLink imgNo="1" imgType="figure" />). For reference, an MAE of <ImgLink imgNo="4" imgType="inlineFigure" /> pp corresponds to completely random predictions, for example, when both measured (<ImgLink imgNo="5" imgType="inlineFigure" />) and predicted (<ImgLink imgNo="6" imgType="inlineFigure" />) values are uniformly distributed from 0 to 100. To improve the interpretability of these features, an auxiliary analysis was performed for each of the four features: the original values were replaced with random samples drawn from the same empirical distribution (i.e., probability density function, PDF) as the respective feature. Assuming statistical independence between <ImgLink imgNo="5" imgType="inlineFigure" /> and <ImgLink imgNo="6" imgType="inlineFigure" />, the resulting MAEs were again close to 28 pp. This suggests that the observed performance likely reflects a statistical lower bound rather than meaningful predictive power.</Pgraph><Pgraph>However, the authors were surprised that age did not emerge as a more significant factor and led to similarly poor performance as the categorical features such as sex, side, and HA manufacturer. 
This finding contrasts with previous studies suggesting that age influences WRS<Subscript>65</Subscript>(HA), with HA users aged 70 and older demonstrating significantly poorer outcomes compared to younger users <TextLink reference="3"></TextLink>, <TextLink reference="4"></TextLink>. On the other hand, Kronlachner et al. (2018) reported no significant effect of age on WRS<Subscript>65</Subscript>(HA) among a cohort of seniors aged 65 to 88 years <TextLink reference="5"></TextLink>. While the HA manufacturer performed similarly to the demographic features, other HA-related features, such as HA experience and the nine fit-to-target values, showed a continuous improvement in MAE (25&#8211;21.1 pp). The analysis of the audiological features yielded the best performance, with WRS<Subscript>65</Subscript> (15.1 pp), PTA (14.3 pp) and WRS<Subscript>max</Subscript> (13.7 pp) achieving nearly half the MAE compared to the demographic features. </Pgraph><Pgraph>In comparison, the generalised formula proposed by Hoppe et al. (2014) offers a simple predictive approach based solely on PTA measurements. This formula, derived using logistic regression, was applied to our dataset (n&#61;635) and yielded an MAE of 14.4 pp <TextLink reference="3"></TextLink>. This result demonstrates that, despite the simplicity of the formula, it achieves a prediction accuracy comparable to our audiological feature-based model and closely matches the MAE observed when using only PTA as the input feature for the RFR in this study. </Pgraph><Pgraph>In order to reduce the MAE even further, the best-performing feature of the first iteration was combined with each of the remaining ones, to evaluate the best-performing two-feature combination. 
Subsequently, the best-performing three-feature combination was evaluated, and this process was repeated iteratively until the feature combination yielding the lowest MAE across all 18 iterations was identified (see Figure 2 <ImgLink imgNo="2" imgType="figure" />). The minimum MAE (MAE<Subscript>min</Subscript>&#61;9.6 pp) occurred in the fifth iteration and included WRS<Subscript>max</Subscript>, WRS<Subscript>65</Subscript>, FtT<Subscript>65</Subscript>(Mid), FtT<Subscript>65</Subscript>(High) and age. </Pgraph><Pgraph>The statistical analysis revealed a statistically significant improvement in MAE compared to the previous iteration only up to the third iteration (MAE<Subscript>sig</Subscript>&#61;9.8 pp), which excluded FtT<Subscript>65</Subscript>(High) and age. In general, including more than five features did not lead to further improvements in performance and even caused a slight decline due to potential overfitting. However, fine-tuning the hyperparameters in each iteration could still enhance the performance of the RFR, but it would significantly increase the computational effort. Notably, the three-feature combination from the third iteration resulted in a 4.6 pp improvement over the model reported by Hoppe et al. (2014), which only used PTA as an input feature <TextLink reference="3"></TextLink>. Despite PTA being the second-best single-performing feature, it showed no significant influence on WRS<Subscript>65</Subscript>(HA) when combined with other features. 
A possible explanation for this could be the high correlation between PTA and both WRS<Subscript>max</Subscript> and WRS<Subscript>65</Subscript>, as reported in previous studies <TextLink reference="3"></TextLink>, <TextLink reference="4"></TextLink>, <TextLink reference="5"></TextLink>, <TextLink reference="6"></TextLink>, <TextLink reference="7"></TextLink>, <TextLink reference="8"></TextLink>, <TextLink reference="9"></TextLink>. Consequently, much of the information provided by PTA may already be accounted for by WRS<Subscript>max</Subscript> and WRS<Subscript>65</Subscript>. </Pgraph><Pgraph>For the fit-to-target values, only those at 65 dB SPL input level in the mid- and high-frequency ranges showed an influence on WRS<Subscript>65</Subscript>(HA). Digeser et al. (2020) highlighted that adequate amplification in these frequency ranges is crucial for speech recognition, particularly for high-frequency speech cues <TextLink reference="18"></TextLink>. To derive these fit-to-target values, the mean differences between LTASS and prescriptive target values of DSL v5.0 were used. While alternative targets could be considered, a recently published study demonstrated that, with a focus on 65 dB SPL input levels, HA users with a close match to the DSL-v5.0 targets exhibited consistently good speech recognition across all degrees of hearing loss <TextLink reference="9"></TextLink>. </Pgraph><Pgraph>This study demonstrated that parameters from audiometry, particularly WRS<Subscript>max</Subscript> and WRS<Subscript>65</Subscript>, are the most influential predictors of WRS<Subscript>65</Subscript>(HA). 
While HA-related features such as fitting accuracy in the 0.8&#8211;2.5 kHz frequency range at 65 dB SPL input level performed poorly on their own, their combination with audiological features significantly improved model accuracy. This underscores not only the relevance of feature interactions, but also the important role of optimal HA fitting in achieving successful HA outcomes. </Pgraph><SubHeadline>Limitations of the study</SubHeadline><Pgraph>The influence of fit-to-target accuracy at low and high input levels was not examined in further detail. However, the results of this study suggest that the fit-to-target values for these input levels did not influence WRS<Subscript>65</Subscript>(HA), likely due to redundancy with the fit-to-target values established for the 65 dB SPL input level. Furthermore, many of the features used in this RFR are likely strongly correlated, leading to a certain degree of redundancy within the overall feature set. A detailed correlation analysis was not performed in this study. </Pgraph><Pgraph>Principal Component Analysis (PCA) was not applied in this study, as the number of features was limited and model interpretability was prioritised. However, future work may include PCA or other dimensionality reduction techniques to evaluate their effect on model performance.</Pgraph></TextBlock>
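The random-prediction floor discussed above can be checked numerically: for independent measured and predicted values uniformly distributed over 0 to 100, the expected absolute error is 100/3, roughly 33.3 pp, while skewed score distributions, closer in shape to empirical WRS distributions, produce a lower floor. A Monte-Carlo sketch, in which the Beta(4, 2) shape is an illustrative stand-in for an empirical PDF rather than the study's actual distribution:

```python
import numpy as np

# Monte-Carlo check of the random-prediction MAE floor:
# for independent X, Y ~ Uniform(0, 100), E|X - Y| = 100/3 (about 33.3 pp)
rng = np.random.default_rng(0)
n = 1_000_000
x = rng.uniform(0, 100, n)   # "measured" scores
y = rng.uniform(0, 100, n)   # "predicted" scores, independent of x
mae_uniform = float(np.mean(np.abs(x - y)))

# Skewed score distributions lower this floor: drawing both values from
# the same Beta-shaped distribution (an illustrative stand-in for an
# empirical WRS score PDF) yields a smaller expected absolute difference
s = 100 * rng.beta(4, 2, n)
t = 100 * rng.beta(4, 2, n)
mae_skewed = float(np.mean(np.abs(s - t)))
print(round(mae_uniform, 1), round(mae_skewed, 1))
```

This is consistent with the auxiliary analysis: feeding the model noise drawn from a feature's empirical, non-uniform distribution yields MAEs near a statistical lower bound somewhat below the uniform-case value.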
    <TextBlock name="Notes" linked="yes">
      <MainHeadline>Notes</MainHeadline><SubHeadline>Cumulative dissertation</SubHeadline><Pgraph>The present work was performed in partial fulfillment of the requirements for obtaining the degree &#8220;Dr. rer. biol. hum.&#8221; at the Friedrich-Alexander-Universit&#228;t Erlangen-N&#252;rnberg (FAU).</Pgraph><SubHeadline>Conference presentation</SubHeadline><Pgraph>This contribution was presented at the 27<Superscript>th</Superscript> Annual Conference of the German Society of Audiology and published as an abstract <TextLink reference="19"></TextLink>.</Pgraph><SubHeadline>Data availability</SubHeadline><Pgraph>Raw data supporting the findings of this study are available from the corresponding author upon reasonable request.</Pgraph><SubHeadline>Funding</SubHeadline><Pgraph>This work was supported by Cochlear Research and Development Ltd &#91;IIR-2398&#93;.</Pgraph><SubHeadline>Competing interests</SubHeadline><Pgraph>The authors declare that they have no competing interests.</Pgraph></TextBlock>
    <References linked="yes">
      <Reference refNo="1">
        <RefAuthor>Hahlbrock KH</RefAuthor>
        <RefTitle></RefTitle>
        <RefYear>1957</RefYear>
        <RefBookTitle>Sprachaudiometrie: Grundlagen und Praktische Anwendung einer Sprachaudiometrie f&#252;r das Deutsche Sprachgebiet</RefBookTitle>
        <RefPage></RefPage>
        <RefTotal>Hahlbrock KH. Sprachaudiometrie: Grundlagen und Praktische Anwendung einer Sprachaudiometrie f&#252;r das Deutsche Sprachgebiet. Stuttgart: Thieme; 1957.</RefTotal>
      </Reference>
      <Reference refNo="2">
        <RefAuthor>Gemeinsamer Bundesausschuss</RefAuthor>
        <RefTitle></RefTitle>
        <RefYear></RefYear>
        <RefBookTitle>Richtlinie des Gemeinsamen Bundesausschusses &#252;ber die Verordnung von Hilfsmitteln in der vertrags&#228;rztlichen Versorgung (Hilfsmittel-Richtlinie &#47; HilfsM-RL) in der Neufassung vom 01.04.2021. BAnz AT 15.04.2021 B3</RefBookTitle>
        <RefPage></RefPage>
        <RefTotal>Gemeinsamer Bundesausschuss. Richtlinie des Gemeinsamen Bundesausschusses &#252;ber die Verordnung von Hilfsmitteln in der vertrags&#228;rztlichen Versorgung (Hilfsmittel-Richtlinie &#47; HilfsM-RL) in der Neufassung vom 01.04.2021. BAnz AT 15.04.2021 B3. Berlin: Gemeinsamer Bundesausschuss.</RefTotal>
      </Reference>
      <Reference refNo="3">
        <RefAuthor>Hoppe U</RefAuthor>
        <RefAuthor>Hast A</RefAuthor>
        <RefAuthor>Hocke T</RefAuthor>
        <RefTitle>Sprachverstehen mit H&#246;rger&#228;ten in Abh&#228;ngigkeit vom Tongeh&#246;r</RefTitle>
        <RefYear>2014</RefYear>
        <RefJournal>HNO</RefJournal>
        <RefPage>443-8</RefPage>
        <RefTotal>Hoppe U, Hast A, Hocke T. Sprachverstehen mit H&#246;rger&#228;ten in Abh&#228;ngigkeit vom Tongeh&#246;r &#91;Speech perception with hearing aids in comparison to pure-tone hearing loss&#93;. HNO. 2014 Jun;62(6):443-8. DOI: 10.1007&#47;s00106-013-2813-1</RefTotal>
        <RefLink>https:&#47;&#47;doi.org&#47;10.1007&#47;s00106-013-2813-1</RefLink>
      </Reference>
      <Reference refNo="4">
        <RefAuthor>M&#252;ller A</RefAuthor>
        <RefAuthor>Hocke T</RefAuthor>
        <RefAuthor>Hoppe U</RefAuthor>
        <RefAuthor>Mir-Salim P</RefAuthor>
        <RefTitle>Der Einfluss des Alters bei der Evaluierung des funktionellen H&#246;rger&#228;tenutzens mittels Sprachaudiometrie</RefTitle>
        <RefYear>2016</RefYear>
        <RefJournal>HNO</RefJournal>
        <RefPage>143-8</RefPage>
        <RefTotal>M&#252;ller A, Hocke T, Hoppe U, Mir-Salim P. Der Einfluss des Alters bei der Evaluierung des funktionellen H&#246;rger&#228;tenutzens mittels Sprachaudiometrie &#91;The age effect in evaluation of hearing aid benefits by speech audiometry&#93;. HNO. 2016 Mar;64(3):143-8. DOI: 10.1007&#47;s00106-015-0115-5</RefTotal>
        <RefLink>https:&#47;&#47;doi.org&#47;10.1007&#47;s00106-015-0115-5</RefLink>
      </Reference>
      <Reference refNo="5">
        <RefAuthor>Kronlachner M</RefAuthor>
        <RefAuthor>Baumann U</RefAuthor>
        <RefAuthor>St&#246;ver T</RefAuthor>
        <RefAuthor>Wei&#223;gerber T</RefAuthor>
        <RefTitle>Untersuchung der Qualit&#228;t der H&#246;rger&#228;teversorgung bei Senioren unter Ber&#252;cksichtigung kognitiver Einflussfaktoren</RefTitle>
        <RefYear>2018</RefYear>
        <RefJournal>Laryngorhinootologie</RefJournal>
        <RefPage>852-9</RefPage>
        <RefTotal>Kronlachner M, Baumann U, St&#246;ver T, Wei&#223;gerber T. Untersuchung der Qualit&#228;t der H&#246;rger&#228;teversorgung bei Senioren unter Ber&#252;cksichtigung kognitiver Einflussfaktoren &#91;Investigation of the quality of hearing aid provision in seniors considering cognitive functions&#93;. Laryngorhinootologie. 2018 Dec;97(12):852-9. DOI: 10.1055&#47;a-0671-2295</RefTotal>
        <RefLink>https:&#47;&#47;doi.org&#47;10.1055&#47;a-0671-2295</RefLink>
      </Reference>
      <Reference refNo="6">
        <RefAuthor>D&#246;rfler C</RefAuthor>
        <RefAuthor>Hocke T</RefAuthor>
        <RefAuthor>Hast A</RefAuthor>
        <RefAuthor>Hoppe U</RefAuthor>
        <RefTitle>Sprachverstehen mit H&#246;rger&#228;ten f&#252;r 10 Standardaudiogramme</RefTitle>
        <RefYear>2020</RefYear>
        <RefJournal>HNO</RefJournal>
        <RefPage>93-9</RefPage>
        <RefTotal>D&#246;rfler C, Hocke T, Hast A, Hoppe U. Sprachverstehen mit H&#246;rger&#228;ten f&#252;r 10 Standardaudiogramme &#91;Speech recognition with hearing aids for 10 standard audiograms: English version&#93;. HNO. 2020 Aug;68(Suppl 2):93-9. DOI: 10.1007&#47;s00106-020-00843-y</RefTotal>
        <RefLink>https:&#47;&#47;doi.org&#47;10.1007&#47;s00106-020-00843-y</RefLink>
      </Reference>
      <Reference refNo="7">
        <RefAuthor>Engler M</RefAuthor>
        <RefAuthor>Digeser F</RefAuthor>
        <RefAuthor>Hoppe U</RefAuthor>
        <RefTitle>Wirksamkeit der H&#246;rger&#228;teversorgung bei hochgradigem H&#246;rverlust</RefTitle>
        <RefYear>2022</RefYear>
        <RefJournal>HNO</RefJournal>
        <RefPage>520-32</RefPage>
        <RefTotal>Engler M, Digeser F, Hoppe U. Wirksamkeit der H&#246;rger&#228;teversorgung bei hochgradigem H&#246;rverlust &#91;Effectiveness of hearing aid provision for severe hearing loss&#93;. HNO. 2022 Jul;70(7):520-32. DOI: 10.1007&#47;s00106-021-01139-5</RefTotal>
        <RefLink>https:&#47;&#47;doi.org&#47;10.1007&#47;s00106-021-01139-5</RefLink>
      </Reference>
      <Reference refNo="8">
        <RefAuthor>Hoppe U</RefAuthor>
        <RefAuthor>Hast A</RefAuthor>
        <RefAuthor>Hocke T</RefAuthor>
        <RefTitle>Disproportional hoher Verlust an Sprachverstehen</RefTitle>
        <RefYear>2024</RefYear>
        <RefJournal>HNO</RefJournal>
        <RefPage>885-92</RefPage>
        <RefTotal>Hoppe U, Hast A, Hocke T. Disproportional hoher Verlust an Sprachverstehen &#91;Disproportionately high loss in speech intelligibility&#93;. HNO. 2024 Dec;72(12):885-92. 
DOI: 10.1007&#47;s00106-024-01518-8</RefTotal>
        <RefLink>https:&#47;&#47;doi.org&#47;10.1007&#47;s00106-024-01518-8</RefLink>
      </Reference>
      <Reference refNo="9">
        <RefAuthor>Engler M</RefAuthor>
        <RefAuthor>Digeser F</RefAuthor>
        <RefAuthor>Hoppe U</RefAuthor>
        <RefTitle>Speech recognition and real-ear-measured amplification in hearing-aid users with various grades of hearing loss</RefTitle>
        <RefYear>2024</RefYear>
        <RefJournal>Int J Audiol</RefJournal>
        <RefPage>1-12</RefPage>
        <RefTotal>Engler M, Digeser F, Hoppe U. Speech recognition and real-ear-measured amplification in hearing-aid users with various grades of hearing loss. Int J Audiol. 2024 Dec;63:1-12. 
DOI: 10.1080&#47;14992027.2024.2426009</RefTotal>
        <RefLink>https:&#47;&#47;doi.org&#47;10.1080&#47;14992027.2024.2426009</RefLink>
      </Reference>
      <Reference refNo="10">
        <RefAuthor>Holube I</RefAuthor>
        <RefAuthor>Kollmeier B</RefAuthor>
        <RefTitle>Speech intelligibility prediction in hearing-impaired listeners based on a psychoacoustically motivated perception model</RefTitle>
        <RefYear>1996</RefYear>
        <RefJournal>J Acoust Soc Am</RefJournal>
        <RefPage>1703-16</RefPage>
        <RefTotal>Holube I, Kollmeier B. Speech intelligibility prediction in hearing-impaired listeners based on a psychoacoustically motivated perception model. J Acoust Soc Am. 1996 Sep;100(3):1703-16. DOI: 10.1121&#47;1.417354</RefTotal>
        <RefLink>https:&#47;&#47;doi.org&#47;10.1121&#47;1.417354</RefLink>
      </Reference>
      <Reference refNo="11">
        <RefAuthor>Plomp R</RefAuthor>
        <RefTitle>A signal-to-noise ratio model for the speech-reception threshold of the hearing impaired</RefTitle>
        <RefYear>1986</RefYear>
        <RefJournal>J Speech Hear Res</RefJournal>
        <RefPage>146-54</RefPage>
        <RefTotal>Plomp R. A signal-to-noise ratio model for the speech-reception threshold of the hearing impaired. J Speech Hear Res. 1986 Jun;29(2):146-54. DOI: 10.1044&#47;jshr.2902.146 </RefTotal>
        <RefLink>https:&#47;&#47;doi.org&#47;10.1044&#47;jshr.2902.146</RefLink>
      </Reference>
      <Reference refNo="12">
        <RefAuthor>Breiman L</RefAuthor>
        <RefTitle>Random Forests</RefTitle>
        <RefYear>2001</RefYear>
        <RefJournal>Machine Learning</RefJournal>
        <RefPage>5-32</RefPage>
        <RefTotal>Breiman L. Random Forests. Machine Learning. 2001;45:5-32. DOI: 10.1023&#47;A:1010933404324</RefTotal>
        <RefLink>https:&#47;&#47;doi.org&#47;10.1023&#47;A:1010933404324</RefLink>
      </Reference>
      <Reference refNo="13">
        <RefAuthor>Pudil P</RefAuthor>
        <RefAuthor>Novovi&#269;ov&#225; J</RefAuthor>
        <RefAuthor>Kittler J</RefAuthor>
        <RefTitle>Floating search methods in feature selection</RefTitle>
        <RefYear>1994</RefYear>
        <RefJournal>Pattern Recognit Lett</RefJournal>
        <RefPage>1119-25</RefPage>
        <RefTotal>Pudil P, Novovi&#269;ov&#225; J, Kittler J. Floating search methods in feature selection. Pattern Recognit Lett. 1994;15(11):1119-25. 
DOI: 10.1016&#47;0167-8655(94)90127-9</RefTotal>
        <RefLink>https:&#47;&#47;doi.org&#47;10.1016&#47;0167-8655(94)90127-9</RefLink>
      </Reference>
      <Reference refNo="14">
        <RefAuthor>Holube I</RefAuthor>
        <RefAuthor>Fredelake S</RefAuthor>
        <RefAuthor>Vlaming M</RefAuthor>
        <RefAuthor>Kollmeier B</RefAuthor>
        <RefTitle>Development and analysis of an International Speech Test Signal (ISTS)</RefTitle>
        <RefYear>2010</RefYear>
        <RefJournal>Int J Audiol</RefJournal>
        <RefPage>891-903</RefPage>
        <RefTotal>Holube I, Fredelake S, Vlaming M, Kollmeier B. Development and analysis of an International Speech Test Signal (ISTS). Int J Audiol. 2010 Dec;49(12):891-903. 
DOI: 10.3109&#47;14992027.2010.506889</RefTotal>
        <RefLink>https:&#47;&#47;doi.org&#47;10.3109&#47;14992027.2010.506889</RefLink>
      </Reference>
      <Reference refNo="15">
        <RefAuthor>Byrne D</RefAuthor>
        <RefAuthor>Dillon H</RefAuthor>
        <RefAuthor>Tran K</RefAuthor>
        <RefAuthor>Arlinger S</RefAuthor>
        <RefAuthor>Wilbraham K</RefAuthor>
        <RefAuthor>Cox R</RefAuthor>
        <RefAuthor>Hagerman B</RefAuthor>
        <RefAuthor>Hetu R</RefAuthor>
        <RefAuthor>Kei J</RefAuthor>
        <RefAuthor>Lui C</RefAuthor>
        <RefAuthor>Kiessling J</RefAuthor>
        <RefAuthor>Nasser Kotby M</RefAuthor>
        <RefAuthor>Nasse NHA</RefAuthor>
        <RefAuthor>El Kholy WAH</RefAuthor>
        <RefAuthor>Nakanishi Y</RefAuthor>
        <RefAuthor>Oyer H</RefAuthor>
        <RefAuthor>Powell R</RefAuthor>
        <RefAuthor>Stephens D</RefAuthor>
        <RefAuthor>Meredith R</RefAuthor>
        <RefAuthor>Sirimanna T</RefAuthor>
        <RefAuthor>Tavartkiladze G</RefAuthor>
        <RefAuthor>Frolenkov GI</RefAuthor>
        <RefAuthor>Westerman S</RefAuthor>
        <RefAuthor>Ludvigsen C</RefAuthor>
        <RefTitle>An international comparison of long-term average speech spectra</RefTitle>
        <RefYear>1994</RefYear>
        <RefJournal>J Acoust Soc Am</RefJournal>
        <RefPage>2108-20</RefPage>
        <RefTotal>Byrne D, Dillon H, Tran K, Arlinger S, Wilbraham K, Cox R, Hagerman B, Hetu R, Kei J, Lui C, Kiessling J, Nasser Kotby M, Nasse NHA, El Kholy WAH, Nakanishi Y, Oyer H, Powell R, Stephens D, Meredith R, Sirimanna T, Tavartkiladze G, Frolenkov GI, Westerman S, Ludvigsen C. An international comparison of long-term average speech spectra. J Acoust Soc Am. 1994;96(4):2108-20. DOI: 10.1121&#47;1.410152</RefTotal>
        <RefLink>https:&#47;&#47;doi.org&#47;10.1121&#47;1.410152</RefLink>
      </Reference>
      <Reference refNo="16">
        <RefAuthor>Scollie S</RefAuthor>
        <RefAuthor>Seewald R</RefAuthor>
        <RefAuthor>Cornelisse L</RefAuthor>
        <RefAuthor>Moodie S</RefAuthor>
        <RefAuthor>Bagatto M</RefAuthor>
        <RefAuthor>Laurnagaray D</RefAuthor>
        <RefAuthor>Beaulac S</RefAuthor>
        <RefAuthor>Pumford J</RefAuthor>
        <RefTitle>The Desired Sensation Level multistage input&#47;output algorithm</RefTitle>
        <RefYear>2005</RefYear>
        <RefJournal>Trends Amplif</RefJournal>
        <RefPage>159-97</RefPage>
        <RefTotal>Scollie S, Seewald R, Cornelisse L, Moodie S, Bagatto M, Laurnagaray D, Beaulac S, Pumford J. The Desired Sensation Level multistage input&#47;output algorithm. Trends Amplif. 2005;9(4):159-97. DOI: 10.1177&#47;108471380500900403</RefTotal>
        <RefLink>https:&#47;&#47;doi.org&#47;10.1177&#47;108471380500900403</RefLink>
      </Reference>
      <Reference refNo="17">
        <RefAuthor>Keidser G</RefAuthor>
        <RefAuthor>Dillon H</RefAuthor>
        <RefAuthor>Flax M</RefAuthor>
        <RefAuthor>Ching T</RefAuthor>
        <RefAuthor>Brewer S</RefAuthor>
        <RefTitle>The NAL-NL2 Prescription Procedure</RefTitle>
        <RefYear>2011</RefYear>
        <RefJournal>Audiol Res</RefJournal>
        <RefPage>e24</RefPage>
        <RefTotal>Keidser G, Dillon H, Flax M, Ching T, Brewer S. The NAL-NL2 Prescription Procedure. Audiol Res. 2011 May;1(1):e24. 
DOI: 10.4081&#47;audiores.2011.e24</RefTotal>
        <RefLink>https:&#47;&#47;doi.org&#47;10.4081&#47;audiores.2011.e24</RefLink>
      </Reference>
      <Reference refNo="18">
        <RefAuthor>Digeser FM</RefAuthor>
        <RefAuthor>Engler M</RefAuthor>
        <RefAuthor>Hoppe U</RefAuthor>
        <RefTitle>Comparison of bimodal benefit for the use of DSL v5.0 and NAL-NL2 in cochlear implant listeners</RefTitle>
        <RefYear>2020</RefYear>
        <RefJournal>Int J Audiol</RefJournal>
        <RefPage>383-91</RefPage>
        <RefTotal>Digeser FM, Engler M, Hoppe U. Comparison of bimodal benefit for the use of DSL v5.0 and NAL-NL2 in cochlear implant listeners. Int J Audiol. 2020 May;59(5):383-91. 
DOI: 10.1080&#47;14992027.2019.1697902</RefTotal>
        <RefLink>https:&#47;&#47;doi.org&#47;10.1080&#47;14992027.2019.1697902</RefLink>
      </Reference>
      <Reference refNo="19">
        <RefAuthor>Engler M</RefAuthor>
        <RefAuthor>Digeser F</RefAuthor>
        <RefAuthor>Hoppe U</RefAuthor>
        <RefTitle>Pr&#228;diktion des Sprachverstehens mit H&#246;rger&#228;t: Neue Erkenntnisse durch Machine-Learning-Modelle</RefTitle>
        <RefYear>2025</RefYear>
        <RefBookTitle>27. Jahrestagung der Deutschen Gesellschaft f&#252;r Audiologie und Arbeitstagung der Arbeitsgemeinschaft Deutschsprachiger Audiologen, Neurootologen und Otologen. G&#246;ttingen, 19.-21.03.2025</RefBookTitle>
        <RefPage>Doc078</RefPage>
        <RefTotal>Engler M, Digeser F, Hoppe U. Pr&#228;diktion des Sprachverstehens mit H&#246;rger&#228;t: Neue Erkenntnisse durch Machine-Learning-Modelle. In: Deutsche Gesellschaft f&#252;r Audiologie e. V.; ADANO, editors. 27. Jahrestagung der Deutschen Gesellschaft f&#252;r Audiologie und Arbeitstagung der Arbeitsgemeinschaft Deutschsprachiger Audiologen, Neurootologen und Otologen. G&#246;ttingen, 19.-21.03.2025. D&#252;sseldorf: German Medical Science GMS Publishing House; 2025. Doc078. DOI: 10.3205&#47;25dga078</RefTotal>
        <RefLink>https:&#47;&#47;doi.org&#47;10.3205&#47;25dga078</RefLink>
      </Reference>
    </References>
    <Media>
      <Tables>
        <NoOfTables>0</NoOfTables>
      </Tables>
      <Figures>
        <Figure width="782" height="626" format="png">
          <MediaNo>1</MediaNo>
          <MediaID>1</MediaID>
          <Caption><Pgraph><Mark1>Figure 1: Mean absolute error (MAE) for 18 features, sorted by their MAE (highest to lowest). Each feature was individually fed into a Random Forest Regression and evaluated 100 times. </Mark1></Pgraph></Caption>
        </Figure>
        <Figure width="781" height="626" format="png">
          <MediaNo>2</MediaNo>
          <MediaID>2</MediaID>
          <Caption><Pgraph><Mark1>Figure 2: Mean absolute error (MAE) across 18 iterations. Each iteration illustrates the lowest MAE by using the selected feature with the previous ones. The red rectangle highlights the iteration that led to the overall lowest MAE (MAE</Mark1><Mark1><Subscript>min</Subscript></Mark1><Mark1>). The blue rectangle marks the iteration where the reduction in MAE was still statistically significant compared to the previous iteration (MAE</Mark1><Mark1><Subscript>sig</Subscript></Mark1><Mark1>). (&#42;&#42;&#42; p&#60;0.001, Iteration 1&#47;2; &#42;&#42;&#42; p&#60;0.001, Iteration 2&#47;3)</Mark1></Pgraph></Caption>
        </Figure>
        <Figure width="633" height="500" format="png">
          <MediaNo>3</MediaNo>
          <MediaID>3</MediaID>
          <Caption><Pgraph><Mark1>Figure 3: Scatter plot and correlation analysis between measured and predicted speech recognition with hearing aid (WRS</Mark1><Mark1><Subscript>65</Subscript></Mark1><Mark1>(HA)) for a randomly selected subset of test data from the third iteration (n&#61;127, 20&#37; test data) </Mark1></Pgraph></Caption>
        </Figure>
        <NoOfPictures>3</NoOfPictures>
      </Figures>
      <InlineFigures>
        <Figure width="202" height="50" format="png">
          <MediaNo>1</MediaNo>
          <MediaID>1</MediaID>
          <AltText>Equation 1</AltText>
        </Figure>
        <Figure width="12" height="12" format="png">
          <MediaNo>2</MediaNo>
          <MediaID>2</MediaID>
          <AltText>Equation 2</AltText>
        </Figure>
        <Figure width="11" height="14" format="png">
          <MediaNo>3</MediaNo>
          <MediaID>3</MediaID>
          <AltText>Equation 3</AltText>
        </Figure>
        <Figure width="30" height="15" format="png">
          <MediaNo>4</MediaNo>
          <MediaID>4</MediaID>
          <AltText>Equation 4</AltText>
        </Figure>
        <Figure width="8" height="11" format="png">
          <MediaNo>5</MediaNo>
          <MediaID>5</MediaID>
          <AltText>Equation 5</AltText>
        </Figure>
        <Figure width="8" height="15" format="png">
          <MediaNo>6</MediaNo>
          <MediaID>6</MediaID>
          <AltText>Equation 6</AltText>
        </Figure>
        <NoOfPictures>6</NoOfPictures>
      </InlineFigures>
      <Attachments>
        <NoOfAttachments>0</NoOfAttachments>
      </Attachments>
    </Media>
  </OrigData>
</GmsArticle>